Query PMUs to only show PMUs with Bad Data or other flags

Hi,

Can someone help me with how to query PMU data that has flags asserted? I know how to ‘exclude’, but not how to ‘exclusively include’. Effectively, I would like a query that runs over a large sample of data and shows ONLY the data that has the bad data flag asserted.

We have over 30 PMUs coming into a hardware PDC at 200 frames/sec (50 Hz). The hardware PDC is serving our OH at 200 frames/sec. We have had some PMUs with an intermittent timestamp issue: instead of stamping on 0.0050000, 0.0100000, 0.0150000 as expected, they are stamping 0.0049998, 0.0099970, 0.0149998. Our hardware PDC has a time variance setting that determines how much deviation it will accept in a PMU timestamp. It is currently set to 2 µs, so when we receive a PMU timestamp of, e.g., 0.0099970, the hardware PDC is expecting a timestamp in the range 0.0099980 to 0.0100020 and therefore ignores the 0.0099970 stamp. The hardware PDC then passes the concentrated message to our OH server, but flags ‘bad data, NaN’ for the PMU that didn’t meet the timing requirement set by the time variance setting.
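To make the failure mode concrete, here is a rough T-SQL sketch of the acceptance test the PDC appears to apply - the 5 ms frame spacing and 2 µs window come from our settings, but the logic itself is just my assumption of how the PDC evaluates a stamp:

-- Assumed logic: frames are expected on 5 ms boundaries, and a stamp more
-- than 2 µs away from the nearest boundary is rejected.
SELECT Stamp,
       CASE WHEN ABS(Stamp - ROUND(Stamp / 0.005, 0) * 0.005) <= 0.000002
            THEN 'accepted'
            ELSE 'flagged bad data / NaN'
       END AS Result
FROM (VALUES (0.0049998), (0.0099970), (0.0149998)) AS Frames(Stamp);

The middle stamp (0.0099970) is 3 µs off the 0.0100000 boundary, so it is the only one of the three that gets rejected.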

Hence my question about how to query OH data to only show PMUs that have the Bad Data flag asserted, so I can use it as a fault-finding mechanism to identify which PMUs are causing issues with their timestamps. I would use this query to see how effective adjusting the time variance setting in our hardware PDC would be.

Any suggestions?

Are you using Grafana? If so, select the “Query Options” to filter out (or include) whichever qualities you want:

For example - here you can exclude “Normal” and only show bad data

Thanks Ritchie. Yes, I have tried that, but it doesn’t seem to do what I want (i.e., if I exclude Normal it just wipes everything out). It appears OH is still seeing the data as normal (maybe because it is being served by a PDC rather than us querying the PMU directly).

I think I may have a solution though. If I query PMU:FLAG, the normal flag value should be 192 decimal (i.e., data is good, time accuracy is <10 µs). If I ignore all the data where the flag is 192, that should do it for me.
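To show what I mean by ‘exclusively include’, in rough T-SQL terms the query I’m after would look something like this (ExportedFrames and its columns are just placeholder names for data pulled out of OH):

-- Placeholder sketch: keep only frames whose flag word is not the
-- "normal" value of 192 (0xC0).
SELECT PointTag, Timestamp, FlagValue
FROM ExportedFrames
WHERE FlagValue <> 192;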

Going back to the Grafana attempt: here you can see that when I exclude bad data, it isn’t excluding anything, and if I exclude normal data, it wipes out the whole trace.

Another option is to query the statistics of the PMU, i.e., the OHSTAT data source, possibly with a text filter like FILTER ActiveMeasurements WHERE PointTag LIKE '%!PMU:ST2'.

For that matter, you could just run the “Statistics Trending Tool” from the Windows Start menu under the openHistorian program shortcuts. In there you could search for PMU:ST2 and click the “Select All” button. Then, perhaps for the last 5 to 10 minutes, trend the data or export it to CSV.

Each exported value at a given timestamp represents the number of times the time quality bit was encountered over the last 10 minutes. If the bit was consistently set at 50 frames per second, the value would be 50 × 600 = 30,000.
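As a quick sanity check on that arithmetic (this assumes the 10-minute statistic interval):

-- 50 frames/sec over a 10-minute (600 second) statistic window:
SELECT 50 * 600 AS ExpectedCountWhenBitAlwaysSet;  -- 30,000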

Oh, I see - in your graph you are selecting the FLAG measurement. This is the source measurement for determining quality, so this particular measurement’s quality is always GOOD, i.e., “Normal”.

You need to pick something like the FREQ measurement for a PMU; its bad time quality will be derived from the FLAG measurement.
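For example, using the same text-filter syntax as the OHSTAT query above, something like this should list the frequency measurements (SignalType is the column I would expect ActiveMeasurements to expose for this):

FILTER ActiveMeasurements WHERE SignalType = 'FREQ'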

Here you can see the frequency measurement over the same time range. I’ve circled the parts where there is a time sync issue.

When applying the “exclude bad data” option, it removes these points.

Although unfortunately, when applying the inverse (remove all normal data), everything is wiped out

I’ll have a look in the OHSTAT database and see if that tells me anything. Appreciate the help, mate.

And Grafana doesn’t appear to like me querying the OHSTAT database.

Although when I change it to an element list, it can definitely see the data source, but it gives me the same error.

I’ll have a play around with the Statistics Trending Tool and see if that helps out.

Ugh, lots of trouble today. For your last attempt with the default data source - try changing your query type to Text Editor and entering an expression like the following:

Eval(0.02, {(cast(_v0, int) AND 48) = 0}, GPA_WESTPOINT_1:FLAG)

Change the tag name to one of your FLAG tags. No filtering is needed. This will just be zero or one based on the time quality bits.
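For reference, 48 decimal is 0x30, i.e., the two time-quality bits. A rough T-SQL equivalent of the bit test inside the expression (the 208 sample value is made up for illustration) is:

-- 192 (0xC0) is the "normal" flag value, so its time-quality bits (0x30)
-- are clear; 208 (0xD0) has a time-quality bit set.
SELECT FlagValue,
       CASE WHEN (FlagValue & 48) = 0 THEN 1 ELSE 0 END AS TimeQualityOK
FROM (VALUES (192), (208)) AS Samples(FlagValue);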

Also, I guess I never asked: what version of openHistorian are you using?

I think the historian stats filter will be one of the better solutions for quickly identifying an issue. As you can appreciate, with 200 frames/sec of data, wanting to check for 1 frame of bad data is a challenge. If I want to check every frame of data for a selected PMU over an entire day, would I have to make the chart resolution 17,280,000 (200 frames/sec × 86,400 seconds) to check every frame?

We’re running server and manager version 2.8.157.0 (as reported in the openHistorian Manager software, anyway).

Appreciate the help - you’ve given me two solutions that I can work with now. I’m slowly coming up to speed on the software; I’ve been using it for a few months now and it has been great!

Another option you can consider is the DeviceStats adapter, which was designed for exactly this activity, i.e., finding the needles in the haystack of missing data points and/or bad time qualities:

This adapter uses SQL Server to track “minute” stats and “daily” stats for each PMU using the following SQL script:

You can then use Grafana to create graphs and tables against the SQL Server database results.

The details tracked per minute are as follows (see the sketch after this list):

  • received point count
  • data error count
  • time error count
  • min latency
  • max latency
  • avg latency
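
To make the shape of the results concrete, the per-minute table will look something along these lines - note the table and column names here are illustrative only; the actual schema comes from the SQL script that ships with the adapter:

-- Illustrative sketch only; the real table/column names are defined by the
-- adapter's SQL script.
CREATE TABLE MinuteStats (
    DeviceName NVARCHAR(200) NOT NULL,
    Timestamp DATETIME2 NOT NULL,
    ReceivedCount INT NOT NULL,   -- received point count
    DataErrorCount INT NOT NULL,  -- data error count
    TimeErrorCount INT NOT NULL,  -- time error count
    MinLatency FLOAT NOT NULL,
    MaxLatency FLOAT NOT NULL,
    AvgLatency FLOAT NOT NULL
);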

Oh, that looks excellent - I’ll definitely check it out! It is very much the needle-in-a-haystack scenario.

Cheers Ritchie.

Hi Ritchie, I’ve added the DeviceStats adapter as a custom action adapter. I’d just like it to use the default database for storing the device statistics.

A few questions if I may, sorry if they are noob questions, I’m still wrapping my head around it all.

  • At the bottom of the screenshot: when I save/initialise the adapter, why is it giving me that error in the log?
  • Would the adapter save statistics for each of my PMUs? Considering that I have only one input (a hardware PDC configured in OH as a concentrator) with multiple PMUs, would it store the device stats only for the PDC connection, or for each PMU that it parses from that stream?
  • When I do a list -A in the console, it shows that the adapter is running but storing no statistics. Is there something I’ve done wrong in configuring the connection string (i.e., leaving it blank / as default)?
  • When I do get the device stats up and running, how would I query that data in Grafana? I assume it would be against the default database, but will it generate some new point IDs?

Cheers,
Scott.

Hi Scott,

Were you able to add the SQL tables to your database using the SQL script? The “results” for this tool are stored in those database tables, not as standard statistics.

Sorry for the confusion.

Thanks,
Ritchie

Thanks Ritchie,

I assume we would have to run the SQL script via the database management tools rather than directly from the OH management interface? I wasn’t sure whether the management interface was smart enough to run the script automatically when adding the action adapter.

Thanks for getting back to me. Using the console window and looking at the status and statistics recorded there was very helpful just to see which PMUs were causing intermittent issues over a 24-hour period. I made a few tweaks to our PDCs and all has been running sweet since.

Regards,
Scott

Good deal. Yes, you have to run the SQL script from SQL Server Management Studio (SSMS). Once the tables exist, the summary results will start writing to the database. Then you can add a SQL Server data source to Grafana so the results can be visualized. Sorry - for this adapter, “some assembly is required”.
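Once the tables exist, a Grafana panel query against them can be as simple as the following - again using the illustrative table and column names from the sketch above, so adjust to whatever the script actually creates:

-- Trend the per-minute time error counts for one device.
SELECT Timestamp AS time, TimeErrorCount
FROM MinuteStats
WHERE DeviceName = 'GPA_WESTPOINT_1'  -- example device name reused from earlier
ORDER BY Timestamp;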

Thanks,
Ritchie

Thanks for your help, much appreciated.

If I get around to installing the script, I’ll let you know how it went.

Cheers