Hello, Team,
I hope you’re all doing well.
We’re currently working on a Python API that includes a function which receives a list of PMUs, organized by PPAs, along with a specified time range. The function retrieves the corresponding data and saves it locally in a pandas DataFrame.
Here’s an example of how this part of the code looks:
time_filter = timestampSeekFilter.CreateFromRange(start_time, end_time)
point_filter = pointIDMatchFilter.CreateFromList(point_ids)
reader = instance.Read(time_filter, point_filter)
key = historianKey()
value = historianValue()

rows = []
while reader.Read(key, value):
    match = re.search(r'@ (.+)', key.ToString(metadata))
    horario = match.group(1)
    # print(value.AsQuality)
    rows.append({'Hora UTC': horario,
                 'Valor': value.ToString(),
                 'PPA': key.ToString().split('@')[0].strip()})

# Building the DataFrame once avoids the quadratic cost of
# concatenating a new DataFrame on every iteration.
df = pd.DataFrame(rows, columns=['Hora UTC', 'Valor', 'PPA'])
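For clarity, the string handling inside the loop can be illustrated on its own. The sample string below is hypothetical; the exact text produced by key.ToString(metadata) depends on the metadata configured in your openHistorian instance, but the loop assumes the general shape "<point name> @ <timestamp>":

```python
import re

# Hypothetical example of what key.ToString(metadata) might return;
# the real format depends on your openHistorian metadata.
sample = "PPA:1234 @ 01/01/2024 12:00:00.000"

# Timestamp: everything after "@ ", as in the loop's regex.
horario = re.search(r'@ (.+)', sample).group(1)

# Point name / PPA: everything before "@", trimmed.
ppa = sample.split('@')[0].strip()

print(horario)  # 01/01/2024 12:00:00.000
print(ppa)      # PPA:1234
```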
As you can see, we are using openHistorian.historianKey and openHistorian.historianValue to retrieve data from the SNAPdb engine.
I have two requests for support:

- Decimal precision: Is there a way to have the returned values displayed with the maximum number of decimal places? As a workaround, I modified openHistorian.historianValue >> ToString() to return more decimal places, and made the same change to openHistorian.historianKey >> ToString(). However, I suspect there is a cleaner, less intrusive way to achieve this.
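One less intrusive alternative, sketched here under assumptions: instead of patching ToString(), read the raw numeric sample (some versions of the Python API expose As* accessors such as value.AsSingle alongside AsQuality; that accessor name is an assumption, so check your API) and do the formatting in Python. A 32-bit float carries at most 9 significant decimal digits, so formatting with '.9g' shows everything that is actually stored:

```python
import struct

def format_full_precision(sample: float) -> str:
    """Render a float32 sample with up to 9 significant digits,
    the most a 32-bit IEEE 754 value meaningfully carries."""
    # Round-trip through float32 so we don't print digits that
    # were never stored in the historian.
    f32 = struct.unpack('f', struct.pack('f', sample))[0]
    return format(f32, '.9g')

# float32 cannot represent 0.1 exactly; all stored digits appear:
print(format_full_precision(0.1))  # 0.100000001
print(format_full_precision(1.5))  # 1.5
```

In the loop, that would mean appending format_full_precision(value.AsSingle) (or whichever numeric accessor your API version provides) instead of value.ToString(), leaving the library untouched.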
- Data acquisition rate: I would like to acquire data at lower rates, ideally as an additional parameter of the function described above. For example, for PMUs that store data at 120 fps, I would prefer to retrieve only 10 fps; some of our applications do not need all 120 frames per second to produce reliable results.
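In case server-side downsampling is not available through the Read filters (worth checking whether your openHistorian version exposes a resolution option there, which would avoid transferring the full stream), a simple client-side fallback is to decimate, keeping every Nth sample. A minimal sketch, assuming the source rate divides evenly by the target rate:

```python
def decimate(samples, source_fps=120, target_fps=10):
    """Keep every Nth sample to reduce an evenly-spaced stream
    from source_fps to target_fps."""
    if source_fps % target_fps != 0:
        raise ValueError("target_fps must evenly divide source_fps")
    step = source_fps // target_fps  # e.g. 120 // 10 = 12
    return samples[::step]

# One second of 120 fps samples reduced to 10 fps:
second = list(range(120))
reduced = decimate(second)
print(len(reduced))   # 10
print(reduced[:3])    # [0, 12, 24]
```

Note that plain decimation discards the intermediate samples; if those applications are sensitive to aliasing, averaging each 12-sample window (or a proper low-pass filter) would be the safer choice.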
I would appreciate any suggestions you may have on both points mentioned above.