Delayed PDC Output Stream From Archive

I am new to openPDC, but I wondered if there is a way to easily output a user-definable delayed stream of C37.118 synchrophasor data to another PDC (i.e., user-definable time shifting of the data)?

I would say the easiest way would be to create an output stream with a large lag time and turn off preemptive publishing. However, there are a few problems with this approach, since the system will rely on RAM to hold the real-time data in the data frame buffer. First, the delayed stream won’t start right away; it has to wait for real-time data to accumulate in the buffer, which will also be a problem after restarts and configuration changes. Second, there will be hardware limitations, since you will have significantly less space in RAM than you would on your hard drive. Third, this approach could severely hurt performance of the entire system because of the way the .NET garbage collector handles large amounts of long-lived data in memory.
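
For the sake of illustration, that first approach amounts to settings on the output stream along the lines of the following (e.g., a one-hour delay) - these map to the standard concentrator lag time (in seconds) and preemptive publishing settings, but double-check the exact parameter names and whether they belong in the connection string or the UI fields for your version:

    lagTime=3600; allowPreemptivePublishing=false

With preemptive publishing disabled, each frame is held until the full lag time has passed before it is published, which is what produces the delay.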

I’m guessing the first issue would be a deal-breaker for that approach. Ideally, you could set the system up to retrieve the delayed-stream data from the archive. However, that is typically done through temporal sessions, and temporal sessions are generally used for on-demand data streams over fixed time ranges, not for ongoing data streams like this one. Unfortunately, I don’t think it’s going to be possible with a temporal session without some changes to the source code.

That brings me to my final suggestion, which would be custom code. Theoretically, you could design an adapter to query temporal data, buffer it, and send it into a C37.118 concentrator using source code and/or objects from GSF.TimeSeries.Transport.DataPublisher and the GSF.PhasorProtocols namespace. In answer to your original question, I wouldn’t really call this an easy solution, but it seems doable.
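
Just to give a sense of the shape, a bare skeleton of such an adapter might look something like the sketch below, built on the general GSF ActionAdapterBase pattern. The class name is made up, and the real work - querying the archive, buffering, and feeding a GSF.PhasorProtocols concentrator - is only indicated in comments, so treat this as an outline rather than a working implementation:

    using GSF.TimeSeries;
    using GSF.TimeSeries.Adapters;

    // Hypothetical name - an outline of the custom adapter idea only.
    public class DelayedReplayAdapter : ActionAdapterBase
    {
        // Advertise that this adapter can participate in temporal (historical) sessions.
        public override bool SupportsTemporalProcessing
        {
            get { return true; }
        }

        // The concentrator calls this once per time-aligned frame of measurements.
        protected override void PublishFrame(IFrame frame, int index)
        {
            // frame.Measurements holds the time-aligned values for this frame.
            // A real implementation would buffer these and hand them to a C37.118
            // concentrator built from GSF.PhasorProtocols objects.
        }

        // Short status text reported by the adapter.
        public override string GetShortStatus(int maxLength)
        {
            string status = "Delayed replay adapter outline";
            return status.Length <= maxLength ? status : status.Substring(0, maxLength);
        }
    }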

We have a client wanting to buffer data for up to as much as 24 hours. I would assume that would be far too much for internal RAM. Based on your last point, it sounds like the best option may be to stream the data down to a database where it could then be read back via custom scripting and streamed back out to a secondary PDC using a selected offset from the clock. This would allow you to “play back” the queried data on an output stream with whatever delay you select. Am I understanding you correctly?

Well, 24 hours of data at 30 fps would be about 2.6 million frames that you’d have to store in memory. That works out to roughly 2.6 MB of memory for every byte in each frame; in other words, if a single frame of data takes 100 bytes of memory, the buffer would take about 260 MB. That may be an acceptable amount, but it does depend on the size of your frames.
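
Spelled out, using the 100-byte frame size assumed above:

    24 hr × 3,600 s/hr × 30 frames/s = 2,592,000 frames ≈ 2.6 million frames
    2,592,000 frames × 100 bytes/frame ≈ 259 MB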

Although it’s hard for me to say how this approach would affect system performance, I should think that the biggest problem would be that the system would need to buffer 24 hours of real-time data before the delayed stream can start. In other words, the stream wouldn’t start until 24 hours after the last time you restarted the service or initialized the output stream.

In answer to your question, it does sound like you are understanding me correctly. The openPDC already has the capability of streaming data from a historic archive, but it isn’t designed for a continuous delayed stream. So the custom script would really just need to periodically query historic data and then provide that buffer to a C37.118 concentrator.

Thank you for your insights. I’ll head down this path and see where it leads! Appreciate the help.

If you use the openHistorian as a “buffer” where your real-time data flows into the openHistorian, you can create a temporal subscription from a stand-alone openPDC where startTimeConstraint=*-1D (i.e., now minus one day) and endTimeConstraint is some far-off future time (e.g., 2030-01-01). Then, so long as the playback speed is set as close to the input speed as possible (e.g., 30 frames per second), you should maintain a delayed playback. There may be some drift since replay is based on a timer, but the drift should be slow.

FYI - all of the data flow in this setup between openHistorian and openPDC assumes use of the GEP protocol. The GEP protocol supports both real-time and historical data transfers.

Also, if you already have an archive with the openPDC, you can simply use another openPDC with a temporal subscription to the first without having to install openHistorian.

Thanks for the great responses, really appreciate it! I’ll be giving this a try.

I am in the process of trying to implement the method described by Ritchie above, where openHistorian is used as a buffer to pass delayed data to openPDC. I have openHistorian set up and archiving data, but I’m having trouble figuring out the specifics of getting a subscription set up between openHistorian and openPDC. Does this require use of SIEGate, or can it be done directly between the two? Any direction you can provide would be appreciated!

Basically, you just need an STTP (or GEP) connection between openPDC and openHistorian where openPDC would be the “subscriber” and openHistorian would be the “publisher”. The publisher components for STTP and GEP are pre-configured, so all you would need to do is set up a “subscription” in openPDC to the openHistorian. I suggest using STTP since this protocol is newer than GEP and has a few improvements, plus it is becoming a new standard, i.e., IEEE 2664.

To create a subscription, open the openPDC Manager UI (not the web UI) and navigate to “Inputs > Subscription Based Inputs > Create Internal Subscription”. From here, enter an acronym like “OH” and provide the host name for the openHistorian; use “localhost” if it is installed on the same system. The default port number for the STTP internal publisher on openHistorian is 7175. I typically uncheck the “Use Source Prefix” checkbox so that all data in openHistorian will be replicated “as-is” to openPDC, i.e., without an “OH!” prefix. Also, verify that STTP is the selected protocol. Now click Next.
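
For reference, the core of the connection string this screen builds should end up looking something like the line below - the exact set of fields the wizard generates can vary by version, so treat this as an approximation rather than a definitive value:

    server=localhost:7175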

Since you want this to be a historical replay subscription, I would add the following to the end of the “Connection String” field:

;startTimeConstraint=*-1D; endTimeConstraint=*+3650D; processingInterval=33

This tells the subscription to start one day in the past and end about ten years from now, with data being published every 33 milliseconds.

The openHistorian publisher will see this as a temporal subscription and start publishing data from the archive instead of publishing real-time measurements.

From the perspective of the openPDC, you will now be processing a continuous stream of day-old data.

Note that if you are going to be trying to get this data into a C37.118 stream, you need to tell it to ignore old timestamps (and maybe set some other flags) so that it will concentrate very old data…

Ritchie

Note that the replay interval is critical - it needs to match the real-time data rate. If you replay faster than real-time, your replay will eventually catch up to real-time. At that point, data will only be published as fast as it can be archived and then read back from disk, i.e., pseudo-real-time.
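
For example, at 30 frames per second the true frame spacing is 1000 / 30 ≈ 33.33 ms, so a 33 ms processing interval runs roughly 1% fast and will slowly close the gap to real time.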

Thank you for this detailed explanation - it’s been a huge help. I think I may be close to getting this working, but although openPDC says “Subscriber connected = true”, it is not yet receiving any measurements from openHistorian:
[Screenshot: openPDC Capture]
Is there something I need to do to authenticate the subscriber, or is that only for remote connections? In my case I have openHistorian and openPDC running on the same machine.
I also wasn’t sure what value to put down for the Access ID on the input parameters. I have it at the default of 0 for now.

There should be no place to input an Access ID when you are using STTP.

Can you take a screenshot of the “View Iaon Tree” screen by navigating to “System > View Iaon Tree”?

Also, you might want to pull up the console on the openHistorian and watch for messages while you re-initialize the STTP subscription in the openPDC, to verify that the temporal subscription is working properly.

Thanks,
Ritchie

Was getting ready to post an update when I saw you had replied. I just got openPDC successfully receiving data via the subscription this morning. I think my problem may have been related to the way I’d previously upgraded openHistorian and tried to migrate the database from the old version to the new. I did a fresh install of both openHistorian & openPDC with clean configuration databases, and the subscription seems to be working now.

I did have to play with the processingInterval value a little bit. I am receiving the synchrophasor data at 10 frames/sec, so I initially tried a value of 100, since 100 milliseconds seems like the appropriate value for that frame rate. However, that value was streaming roughly 10 times faster than the original input, so I changed it to 1000. That change got me fairly close to an accurate replay rate, but it seems to be drifting by about 15 seconds per hour (the delay increases by about 15 seconds each hour). I’m a little confused as to what unit the interval timer is using, since 1000 doesn’t seem to align with milliseconds for 10 frames/sec.

My next challenge is getting that delayed data back into a C37.118 stream as you described. If you have any pointers related to ignoring timestamps or other flags, I would appreciate any input you can offer!

Thanks,
Mark

If the original frame rate is 10 frames per second, I would think a processing interval of 100 would be right - but getting the output rate perfect may be a challenge, simply because the system was never designed to match a real-time replay rate; it’s generally designed to just provide a “base speed” for data replay. Things like missing samples will never be properly accounted for in the existing algorithm implementation. That said, there may be some things we could do in the code (e.g., new settings) to better match your target use case. However, my suggestion for now, using the existing code, is just to play around with the processing interval, slowly reducing it from 1000 ms (perhaps in 100 ms increments) until you have a reasonable approximation of real-time that floats +/- as little as possible.

To create a historical IEEE C37.118 output that will properly process historical timestamps, try the following from the output stream page:

  1. Add the following to the “Connection String” field:
        processByReceivedTimestamp=true
    
  2. Under “Advanced Properties”:
    a) Uncheck the “Allow Sorts By Arrival” checkbox.
    b) Uncheck the “Perform Timestamp Reasonability Check” checkbox.
    c) Check the “Use Local Clock As Real-time” checkbox.
    d) Check the “Ignore Bad Timestamps” checkbox.
    e) Check the “Round to Nearest Timestamp” checkbox.
    f) For 10 samples per second, set the “Time Resolution” field to “10000”.

Hopefully this will negate the importance of the “Lead Time” field, but I would set it to match “Lag Time”. Although the “Lag Time” field should technically be set to a value that would allow enough time to read all the timestamps for a given 1/10 of a second interval, I would just start with “5” - but you can adjust up or down as needed.

Click “Save” then “Initialize”. Hopefully this will allow a concentrated stream of historical data.
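
If you would rather keep everything in the “Connection String” field instead of the Advanced Properties dialog, the equivalent settings should look roughly like the line below - I am assuming here that the parameter names mirror the checkbox names, so verify them against your openPDC version before depending on them:

    processByReceivedTimestamp=true; allowSortsByArrival=false; performTimestampReasonabilityCheck=false; useLocalClockAsRealTime=true; ignoreBadTimestamps=true; roundToNearestTimestamp=true; timeResolution=10000; lagTime=5; leadTime=5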

I think I may need some guidance on just the basics of configuring a C37.118 output stream for an external connection.

I’ve created an output stream following the openPDC documentation on GitHub, using the SHELBY sample as a guide. I can connect to the stream locally using the PMU Connection Tester, but after connecting, no data is received by the tester. It shows the correct frame rate, and it shows the available phasors as expected, but no data populates the graph. When I connect to the SHELBY sample output stream locally, it exhibits the same behavior (it connects, but receives no data).

My second problem is that I haven’t figured out how to connect to the stream from a remote machine, which is my end goal. I thought perhaps it was as simple as changing interface=0.0.0.0 to my target IP under the TCP Channel, but I wasn’t able to get that to work.

Any ideas would be appreciated!

Thanks,
Mark

Hello Mark,

If the documentation you are referring to is the openPDC Manager Configuration guide, note that the screenshots in this guide provide an example that uses a TCP command channel and a separate UDP data channel. Any computer on the network that isn’t blocked by a firewall will be able to connect to that TCP command channel to request a configuration frame, but it will not receive data over the UDP channel because the example is configured to broadcast to localhost:8800 over UDP. You can fix this by changing the broadcast address for the UDP channel, but unless you have strong opinions about separating your command channel and your data channel, it would be easier to simply not use a UDP channel at all.
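
If you do want to keep the separate UDP data channel, the change would be to point the client list in the UDP Channel at the machine running the Connection Tester instead of localhost, something along the lines of the line below - the exact default string in your configuration may differ, and 192.0.2.10 is just a placeholder for the remote machine’s address:

    port=-1; clients=192.0.2.10:8800; interface=0.0.0.0

Otherwise, to drop the UDP channel entirely: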

  1. Leave UDP Channel blank when configuring the Output Stream in openPDC. When making changes, don’t forget to Save and then Initialize.
  2. In PMU Connection Tester, click on Configure Alternate Command Channel, check the Not defined checkbox, and click the Save button.
  3. Also in PMU Connection Tester, switch from the UDP tab to the TCP tab and configure the IP and port the same way you had previously configured the alternate command channel.

Thanks,
Stephen