OpenHistorian - Max Number of Tags

Hello all,

I’m looking into the limits of your system: I would like to store a lot of values (around 10 000 points/sec) and I’m hitting a wall well below that. What I’m trying to figure out is what is causing it. Here are a few words about my configuration (which I can easily provide as an SQLite .db file if you wish).

  • openHistorian version: 2.7.265

  • Input via Modbus. To keep the configuration readable and to ensure a proper sampling interval, I chose to read registers in blocks of 120, one block every second (750 ms polling rate, 250 ms inter-sequence delay). If there is only one sequence, is the total read time 1 s or 750 ms? I assumed it would be 1 s based on another discussion (Maximum number of registers per second), but I’m unsure.
    Also: is it ‘easier’ for the historian server to have 2 sequences (500 ms polling rate, 2 × 250 ms inter-sequence), or even more, rather than having multiple devices with only one sequence each?

  • Type of database: I chose an SQLite configuration database because it is lightweight and because I’m more familiar with it. Could it be linked to lower efficiency? I wouldn’t see why, as the number of accesses to this database is rather low.

  • Configuration of the devices: to simplify the configuration, I have developed a very small script that adds devices/measurements to the .db file (a simplified sketch of what it does follows the list below). Basically, I do a clean install of openHistorian, add all my devices and measurements to the configuration database, and simply restart the service. Everything seems to work fine when I add 1 to 1000 measurements, but when I go up to 5000 I can’t access anything anymore, and I don’t really understand why: the CPU on my machine seems to be OK and the read and write rates are tolerable, but the web manager can’t reach the login screen and the ‘normal’ manager doesn’t display any data. Even stopping the service takes at least a minute.
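For reference, here is a minimal sketch of the kind of script I mean. It is simplified: I’m assuming the stock Device and Measurement tables of the GSF configuration schema, and the real schema has more required columns (GUIDs, signal types, node IDs, ...) than shown here.

    # Minimal sketch: bulk-add one Modbus device and its measurements straight
    # into the SQLite configuration database, then restart the service.
    # ASSUMPTION: table and column names follow the stock GSF configuration
    # schema (Device, Measurement); the real schema has more required columns
    # (NodeID, UniqueID/SignalID GUIDs, SignalTypeID, ...) than shown here.
    import sqlite3

    DB_PATH = "openHistorian.db"        # path to the configuration database
    DEVICE_ACRONYM = "MODBUS_PLC_01"    # hypothetical device name
    REGISTER_COUNT = 120                # one block = 120 registers

    con = sqlite3.connect(DB_PATH)
    cur = con.cursor()

    # Add the device (simplified column list).
    cur.execute(
        "INSERT INTO Device (Acronym, Name, Enabled) VALUES (?, ?, 1)",
        (DEVICE_ACRONYM, "Modbus PLC 01"),
    )
    device_id = cur.lastrowid

    # Add one measurement per register read by this device.
    for reg in range(REGISTER_COUNT):
        cur.execute(
            "INSERT INTO Measurement (DeviceID, PointTag, SignalReference, Enabled) "
            "VALUES (?, ?, ?, 1)",
            (device_id, f"{DEVICE_ACRONYM}:REG{reg:03d}", f"{DEVICE_ACRONYM}-AV{reg + 1}"),
        )

    con.commit()
    con.close()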

I’ve tried with 1000, 2000, 3000 and 4000 points: up to 2000 everything seems perfect; above that I can’t even be sure it’s logging any data at all.
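For scale, here is the rough arithmetic I’m working from, assuming each device reads one 120-register block per 1 s cycle:

    # Rough throughput arithmetic (assumption: one 120-register block per
    # device per 1 s cycle, i.e. 750 ms polling + 250 ms inter-sequence).
    registers_per_block = 120
    blocks_per_second = 1

    points_per_device = registers_per_block * blocks_per_second  # 120 points/s
    target = 10_000                                              # desired points/s

    devices_needed = -(-target // points_per_device)             # ceiling division
    print(f"{points_per_device} points/s per device -> {devices_needed} devices needed")

If that assumption holds, the ~2000-point wall I’m seeing corresponds to only about 17 such devices.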

Can you help me troubleshoot this issue?

Thanks in advance,

Best regards,

KB

There are no issues with points per second on the openHistorian; we have synchrophasor systems running hundreds of thousands of points per second. Also, while the SQLite database will be “slower” than a database like SQL Server, that should not be a big issue - depending on what screen in the manager you are trying to access.

That said, maybe there are some limitations on the Modbus adapter - not sure I’ve ever tested it at the scale you are mentioning.

Can we get a copy of one or two of your recent log files? Look in the openHistorian installation folder, Logs subfolder - the log files have a .logz extension.

Thanks,
Ritchie


Hello!

Thanks once more for your quick answer!

Here are a few .logz files I found; it seems there’s absolutely nothing with ‘above normal’ severity:
Logs.zip (1.9 MB)
This first archive has the logs for all the tests I’ve run since reinstalling (this afternoon).

Logs - 1.zip (620.8 KB)
This one contains only the logs for the 5000-point Modbus test, which doesn’t work.

Thanks in advance,

Best regards,

KB

Looks like there may be some configuration issues with the openHistorian based on a few errors I am seeing. Specifically:

Time: 11/16/2020 11:22:32 AM - Framework - Error - None - None
Event Name: Unexpected Error when rolling over a file
Exception: 
System.Exception: Out of free space

This might have caused a memory build-up and made the system unresponsive.

Can you provide the connection string for the openHistorian output adapter (i.e., commonly the PPA adapter), plus the disk space available on the target drives?

Also, there are several OSI-PI errors related to a missing ‘OSIsoft.AFSDK’ assembly - you might want to disable this adapter to reduce noise if the PI output is not currently active.
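If the PI output was registered as a custom output adapter, one quick way to silence those errors is to disable the adapter row directly in the configuration database and restart the service. Here is a sketch assuming the stock CustomOutputAdapter table and an SQLite configuration database - the Manager’s adapter screens are the supported route:

    # Sketch: disable any PI output adapter rows so the missing OSIsoft.AFSDK
    # assembly stops generating errors on startup.
    # ASSUMPTION: the PI output is registered in the stock GSF
    # CustomOutputAdapter table of an SQLite configuration database.
    import sqlite3

    con = sqlite3.connect("openHistorian.db")  # path to your configuration database
    con.execute(
        "UPDATE CustomOutputAdapter SET Enabled = 0 "
        "WHERE TypeName LIKE '%PIOutputAdapter%' OR AdapterName LIKE '%PI%'"
    )
    con.commit()
    con.close()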

Thanks,
Ritchie

Hello Ritchie,

OK, thanks a lot - I’ll look into deactivating the OSI-PI adapter on my own (if you have documentation, that would be greatly appreciated).

Here is the available disk space; it should be enough, shouldn’t it?
[screenshot of available drive space]

Where can I find the openHistorian’s connection string? I found the following info:

and this:

[screenshot]

Do you have more information on the file that could be causing the trouble and the build-up?

Thanks a lot!

Edit: I found something that looks like what you’re asking for:

[screenshot]

Keep me posted if you need more 🙂

Here are a few things we can try to adjust:

  1. ArchiveCurtailmentInterval, units in seconds, defines how often the system attempts to delete old files. You might want to run this timer four times per day, i.e., a setting of 21600, since your drive is fairly small.
  2. DesiredRemainingSpace, units in gigabytes, defines how much remaining space to “leave” on a drive, i.e., when to consider the drive full. The default value here is 500, which is already larger than your entire drive. This is likely the primary issue: openHistorian considers your archive folder already full, so it won’t roll over into a final .D file. Try setting the value to something more reasonable for your drive size, e.g., 20.
  3. MaximumArchiveDays, units in days (which you already specify in your connection string), defines the maximum time span for archived data. You will need to adjust this parameter based on how much data you are archiving per day; make sure the maximum archive days still leave enough room on the drive per the DesiredRemainingSpace setting (a rough sizing sketch follows the connection string example below).

Values in the connection string are formatted like key1=value1; key2=value2, for example:

MaximumArchiveDays=31; DesiredRemainingSpace=20; ArchiveCurtailmentInterval=21600
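To pick reasonable values, a quick back-of-the-envelope estimate like the following can help. Note the bytes-per-point figure is just an assumption - the openHistorian compresses archived data, so measuring a real day of archives will give you an accurate number:

    # Back-of-the-envelope sizing to pick MaximumArchiveDays and
    # DesiredRemainingSpace for a given drive.
    # ASSUMPTION: ~16 bytes stored per archived value (pre-compression upper
    # bound); actual usage depends on openHistorian compression and your data.
    points_per_second = 10_000
    bytes_per_point = 16

    gb_per_day = points_per_second * bytes_per_point * 86_400 / 1e9
    drive_size_gb = 100            # replace with your archive drive size
    desired_remaining_gb = 20      # DesiredRemainingSpace

    max_days = (drive_size_gb - desired_remaining_gb) / gb_per_day
    print(f"{gb_per_day:.1f} GB/day -> about {max_days:.0f} days of archive fit")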

Also, it looks like your D: drive has more space available, so you might consider moving your archive to that drive; see this WiKi page for more info:

Hope that helps!

Thanks,
Ritchie

FYI - I updated the following openHistorian WiKi page to capture information about these other important settings:

Thanks,
Ritchie

Thanks a lot, I have tried those tweaks and they seem to be working fine. I’ll try pushing the Modbus adapter to its limit to see how much of a stroll in the park my usual plant is for it 🙂

Thanks again !

Best regards,

KB