openHistorian On Linux

There is a possibility that the architecture does not match your execution environment. If you type the following:

file grafana-server

Does this match the architecture of your operating system?
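
For reference, the output includes the target architecture; something like this (illustrative - exact details vary by build):

    # x86_64 build (the stock download)
    grafana-server: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, stripped

    # ARM64 build (what a 64-bit Raspberry Pi OS needs)
    grafana-server: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, stripped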

That must be it, aarch64 for my OS (Raspberry Pi) and x86_64 for Grafana.

Ah, yeah, I had no idea you’re running this on a Raspberry Pi. Yup, that’d do it. We need to replace those Grafana binaries with a version that will work on the RPi.

Download the standalone Linux binaries that match your ARM architecture from this address:

Technically, I think once you unzip the file you will only need the files from the bin folder; everything else “should” be the same. So just replace the bin files with the ARM versions and mark them as executable.
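
If it helps, the swap looks roughly like this (a sketch, assuming Grafana lives under /opt/openHistorian/Grafana and the ARM download was unpacked to ~/grafana-arm - adjust the paths to your install):

    # copy the ARM binaries over the x86_64 ones (paths are assumptions)
    cp ~/grafana-arm/bin/* /opt/openHistorian/Grafana/bin/
    # make sure the replacements are executable
    chmod +x /opt/openHistorian/Grafana/bin/grafana-server /opt/openHistorian/Grafana/bin/grafana-cli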

Any luck? Need any help with the last step?

Sorry Ritchie, I had to work on another project today, but I got some time to do it tonight.
I copied the files into the bin folder, marked them as executable, reinitialized the Grafana instance in the openHistorian console, and Grafana started!!

That’s great news, thanks very much for the help.

One thing I noticed: after Grafana had been working for a few minutes, I was looking at the system dashboard with CPU and memory graphs when I suddenly lost the connection to Grafana. openHistorian was apparently still running, and I had to enter init 3 in the console to reinitialize it.

I saw a lot of “connection refused” errors in the console:

    Exception Type: System.Net.Http.HttpRequestException
    Exception Message: Connection refused

I do not know if it’s a one-time thing; I guess I will see in the next few days/weeks. I have attached the ErrorLog.txt if you want to have a look.

The other thing I saw, and it took me a while to figure out: you are going to see a few authentication errors like the one below in the ErrorLog.txt file:

    The provided anti-forgery token was meant for user “.\ie_admin”, but the current user is “”.

with the type:

    System.Web.Mvc.HttpAntiForgeryException

Apparently, it happens if I log in to openHistorian once and then just close the browser tab without logging out. When I want to log back in and, for some reason, click on the long URL and enter my password, I get the error message.

Since I’m already logged in, entering my username and password again triggers that anti-forgery token error.
I just need to go to the URL, remove everything after 8180, press Enter, and I’m in openHistorian directly.

Finally, I’m going to monitor the RAM usage over the next few days/weeks.
Hopefully I do not see an increase to the point where it uses all the RAM and then crashes.

Thanks again for the help, I appreciate it.
StatusErrorLog.zip (97.5 KB)

I saw that anti-forgery error also; just hit refresh in the browser and the issue will go away. This seems to be a Linux-only issue, and I’ll need to figure out what causes it before we release the final version.

OK, thanks Ritchie.

This morning, I had a Mono memory crash. I do not see any errors in the ErrorLog.txt file at 7:10 AM GMT, but I got a bunch of memory crash Blob files.

I’m attaching a zip file with the ErrorLog.txt, StatusLog.txt and all the Blob files.

Jerome
mono_crash.mem.zip (188.6 KB)

I never saw memory utilization go above about 570 MB. How much memory does the device have? Are there any other activities going on consuming lots of RAM? There are some memory control settings we can apply.

It’s on our test bench and there is no other application running. When I checked the Grafana dashboard last night, openHistorian was using about 500 MB and Grafana about 130 MB. We have 4 GB of RAM on that RPi.

I don’t think we ran out of memory; I do not see “out of memory” in the journalctl output.
So I’m not sure what happened. I’m going to reboot the device and see if it happens again.

With openPDC, I see a memory increase every day (usage grows by about 800 MB per day, so after about 5 days openPDC crashes).
I do not see the same behavior with openHistorian; as you said, after a couple of days I only saw it at 550 MB, but I want to monitor it over a longer period of time.

However, that Mono memory crash error is kind of new.

Is there a way to disable Grafana at startup? In case it happens again, I might want to run just openHistorian without Grafana and see if I still witness this Mono memory crash.

Jerome

Ritchie,

On the RAM subject: I mentioned openPDC, which I have running on one of our x86_64 Ubuntu Server 22.04 machines.
It’s in a container to which we have assigned 4 GB of RAM.
It was taking an additional 800 MB per day, so after 5 days we reached 3.95 GB of RAM (98.74%), and it has stayed like that for more than 24 hours.
I would have expected it to crash by now (like it did on the RPi), but it now seems stable at 3.95 GB of RAM.
It’s too soon to say how long it will run like that before the container crashes and is restarted.

I’m not sure I understand how the memory is handled with openPDC.
Is there a setting somewhere in a conf file where we can specify the amount of memory openPDC should use and have it dump older data after a certain time?

You were telling me about some memory control settings that we could apply. I assume they would be similar for both openPDC and openHistorian; I’d like to know more about those when you get a chance.

Thanks,

Jerome

To recap what I was saying about openPDC on x86_64: we assigned 4 GB of RAM to this Docker container, the RAM increased by 800 MB per day, and it reached 3.9 GB after about 5 days. I was expecting it to crash, but it’s been stable at 3.9 GB for a couple of days.

[Image: openPDC memory usage over time]

Jerome

Several ways to disable Grafana:

  1. The simple thing is just to rename the executable so OH can’t find it at startup
  2. If you have access to SQLite Browser, you can disable the CustomActionAdapter record that starts Grafana
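
For example (a sketch - the Grafana path and the GRAFANA name filter are assumptions, so check your install and the CustomActionAdapter table for the actual values):

    # Option 1: rename the binary so openHistorian can't launch it
    mv /opt/openHistorian/Grafana/bin/grafana-server /opt/openHistorian/Grafana/bin/grafana-server.disabled

    # Option 2: disable the Grafana custom action adapter in the SQLite configuration database
    sqlite3 /opt/openHistorian/ConfigurationCache/openHistorian.db \
      "UPDATE CustomActionAdapter SET Enabled = 0 WHERE AdapterName LIKE '%GRAFANA%';"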

For openPDC, open the Status.txt file and search for any “Queued” items - large queues are the primary culprit for memory backups.
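
A quick way to scan for those from the shell (the openPDC install path here is an assumption):

    grep -n "Queued" /opt/openPDC/Status.txt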

For the associated adapter, network output may be slow - it could be the destination not accepting data quickly enough, a busy network, or a lack of CPU on the local machine.

openPDC socket settings include an option to set a maximum queue threshold; in this case, however, it just dumps data on the ground once the queue size is hit - so it’s better to figure out where the bottleneck is.
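
If you do decide to set that threshold, it goes in the output adapter’s connection string, roughly like this (the address is a placeholder and the maxSendQueueSize key name is an assumption on my part - verify the exact key against the adapter’s documented settings):

    server=192.168.1.10:6165; maxSendQueueSize=500000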

Thanks,
Ritchie

Hello Ritchie,

After using openHistorian this summer, I have been asked to dockerize it, like I did with openPDC.
I have succeeded except for one thing: it is not recording the d2i files in the Archive folder.
I’m not sure why.

Would you have any suggestions for that?

I’m attaching the ErrorLog and StatusLog files: Logs.zip (25.4 KB)
I do see a couple of weird things:

In the StatusLog:

    [PPA] Connection attempt failed: IPv6 stack is not available for socket creation on ::0:38402

In the ErrorLog:

    Timeout waiting to save data to /opt/openHistorian/ConfigurationCache/AdoSecurityCache.bin

I have all of these ports open while running the Docker container: 8900, 8500, 8280, 6051, 6052:6052, 5018, 6057, 6165, 6151, 6152, 6166, 6167, 8180, 6352, 4712/udp, 8009, 8883
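
For reference, the container is launched roughly like this (abbreviated - the image name is just a placeholder and only a few of the -p flags are shown):

    docker run -d --name openhistorian \
      -p 8180:8180 -p 8280:8280 -p 8900:8900 -p 6165:6165 -p 4712:4712/udp \
      openhistorian-image  # placeholder image name; remaining ports omitted here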

Thanks,

Jerome Patrizi

Additional info: to dockerize openHistorian, I started with the mono:6.12.0.182-slim image, then mono:6.12.0.182, and finally ubuntu:22.04.
Same result for all: it’s not recording the d2i files in the Archive folder, for some reason.

FYI - for openPDC, I used mono:6.12.0.182-slim and it worked. I was able to add an entry in the CustomOutputAdapter table, and it was saving the data in the Archive folder as CSV files.

Update about the ports: I also have these open in the container: 38402, 7177, 7175, 6155, 6156, 6175, 6177, 6176, 6055, 6056, 6356, 5020, 8510, on top of the other ports mentioned in the previous post. Same errors though: the AdoSecurityCache.bin timeout, IPv6 is not available on 38402, etc. … and of course, no d2i files recorded in the Archive folder.

Not sure what port 38402 is used for, but it could be that Docker is not set up to use IPv6; it might need to be enabled somewhere!!

Update #2: after enabling IPv6 in the Docker daemon, it’s now recording the d2i files in the Archive folder!!
I still have that error in the ErrorLog file, but I’m not sure how critical it is (Timeout waiting to save data to /opt/openHistorian/ConfigurationCache/AdoSecurityCache.bin).
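
For anyone hitting the same thing: enabling IPv6 in the Docker daemon amounted to adding something like this to /etc/docker/daemon.json and restarting the daemon (the fixed-cidr-v6 subnet here is just an example):

    {
      "ipv6": true,
      "fixed-cidr-v6": "fd00:abcd::/64"
    }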

Jerome

Great to hear you’ve had some success after enabling IPv6. openHistorian uses IPv6 by default, so I can see why that was needed. You can force full use of IPv4, but this would need to be done for all ports, including 38402 - which is the primary openHistorian API port.

FYI, to force use of IPv4, server and client connection strings would need to be modified to include ; interface=0.0.0.0, which basically says to bind to the default network interface and use IPv4 (inferred from the IP format, i.e., x.x.x.x). For comparison, the default - used when the interface is not otherwise specified - is ; interface=::0, which is an IPv6 address.

The timeout for writing to the AdoSecurityCache.bin may not be critical as long as the file exists - this could simply be a startup access issue where two threads are trying to write to the file. I would need more log info to be sure.

Thanks,
Ritchie

Thank you Ritchie.

We would like to force IPv4 for all ports, as IPv6 support is still experimental with Docker.
You mentioned that the client and server connection strings need to be modified, but which file exactly are they in?

I checked the openHistorian.exe.config file, but I do not see anything related to port 38402.

Thanks,

Jerome

Or is it in the database that I can specify IPv4 on port 38402 for the API?
If that’s the case, I can run a query and update the connection string during installation, but I would need to know which table’s ConnectionString I need to update.

Whether it’s a change in a config file or in the database, do you mind telling me where it is?

Thanks,

Jerome

For most ports the connection strings are in the config file. For openHistorian this would be /opt/openHistorian/openHistorian.exe.config.

See the XML configuration entries named ConfigurationString, for example:

    <remotingServer>
      <add name="ConfigurationString" value="Port=8510" description="Data required by the server to initialize." encrypted="false" />
    </remotingServer>

Here you would want to ensure the value was: "Port=8510; interface=0.0.0.0" to force use of IPv4.
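
In other words, the adjusted entry would read:

    <remotingServer>
      <add name="ConfigurationString" value="Port=8510; interface=0.0.0.0" description="Data required by the server to initialize." encrypted="false" />
    </remotingServer>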

For the 38402 port, this is the openHistorian API instance port. To force this to use IPv4 you need to adjust the value in the database as the API instance is a time-series library (TSL) adapter whose configuration is loaded from the database.

Normally, the easiest way to change this is to use the openHistorian Manager application, but since this is a WPF application it only runs on Windows. On Linux, you’ll need to use database tools to update the database table record. Specifically, you will need to modify the ConnectionString field for the Historian table where the TypeName field is equal to openHistorian.Adapters.LocalOutputAdapter. The ConnectionString field needs to include ; dataChannel={port=38402; interface=0.0.0.0} to force IPv4.

Note that the default when the DataChannel key is not included in the ConnectionString is: ; dataChannel={port=38402; interface=::0} which specifically uses IPv6 on Linux. On Windows the same expression actually uses a dual-stack socket which means it will listen on both IPv6 and IPv4 simultaneously.

Database tools will need to be command line based if your Linux instance does not include a UI. For example, if your openHistorian instance is configured using SQLite, you could use something like the following command to update the database record to force IPv4 for the openHistorian API adapter instance:

    sqlite3 -line /opt/openHistorian/ConfigurationCache/openHistorian.db "UPDATE Historian SET ConnectionString = ConnectionString || '; dataChannel={port=38402; interface=0.0.0.0}' WHERE TypeName = 'openHistorian.Adapters.LocalOutputAdapter';"
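
To confirm the change, something like this should show the updated value (same table and field names as above):

    sqlite3 -line /opt/openHistorian/ConfigurationCache/openHistorian.db "SELECT ConnectionString FROM Historian WHERE TypeName = 'openHistorian.Adapters.LocalOutputAdapter';"

The adapter will pick up the new connection string the next time openHistorian is restarted or the adapter is reinitialized.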