I have openPDC 126.96.36.199 running on Ubuntu 22.04 LTS on a Raspberry Pi.
I have noticed that there is a memory buildup until the service crashes (when it reaches 100% memory, usually after about 5 days).
I only have 1 PMU connection, and I have 1 custom output adapter that archives the PPA PMU data as CSV files in the Archive folder (rolling over to a new file every 30 minutes).
I’m attaching the journalctl output (out-of-memory error on June 25th at 10:15:32, "Out of memory: Killed process 289491 (mono) total-vm:5564172kB") as well as ErrorLog.txt, Health.txt, Status.txt, StatusLog.txt and the config file openPDC.exe.config.
Note that I can run openPDC either installed manually on Ubuntu or inside a Docker container, and I see the same memory buildup either way.
Also note that I tried updating MaxSendQueueSize in the openPDC.exe.config file for RemotingServer, internaldatapublisher, externaldatapublisher and tlsdatapublisher from -1 to 500000, hoping that it might help with memory, but it did not.
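For reference, the change I made looks roughly like this in openPDC.exe.config (a sketch only; the exact section names and description text in your config may differ, so treat this as illustrative):

```xml
<categorizedSettings>
  <remotingServer>
    <!-- was value="-1" (unlimited); changed to cap the send queue -->
    <add name="MaxSendQueueSize" value="500000"
         description="Maximum size of the send queue before payloads are dumped." />
  </remotingServer>
  <!-- the same MaxSendQueueSize setting was changed under
       internaldatapublisher, externaldatapublisher and tlsdatapublisher -->
</categorizedSettings>
```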
So there must be something else. Are you able to help me figure out why I get these memory issues?
Thanks for your help; please let me know if you see a parameter that needs to be changed in order to avoid this memory buildup.
openPDC memory buildup and crash.zip (34.7 KB)
I do see a gradual climb in memory usage when looking at StatusLog.txt – about 130 MB over the course of 2 hours. That log cuts off at 10:09:32, probably indicating the point where the process could no longer allocate memory. Unfortunately, the Status.txt file (captured at 10:00) does not seem to indicate that any particular adapter is to blame for the memory usage. The CSV adapter does report some mildly suspicious stats indicating a slight buildup, but it’s nowhere near the scale I would expect for the 3.3 GB of memory used by the process.
Below you can see that it’s in the middle of processing a collection of over 22k measurements, and it has already queued up 130 more, despite the fact that the average items processed per function call is significantly lower than both of those figures. 20k measurements at 100 KB per measurement would use about 2 GB of memory, but that is far more memory than a measurement should typically use. I suspect this is more an indication of the strain that the high memory usage is putting on the CSV adapter than the cause of the memory usage itself.
Queued items to process: 130
Items being processed: 22,256
Total items processed: 703,904,511
Total function calls: 25,778,952 to ProcessMeasurements
Average items processed: 27.31 per function call
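Those figures can be sanity-checked directly; the numbers below come from the status output above, and the 100 KB per measurement is only the back-of-the-envelope assumption used in the discussion, not a real measurement size:

```python
# Average items processed per ProcessMeasurements call, from the stats above.
total_items = 703_904_511
total_calls = 25_778_952
print(round(total_items / total_calls, 2))  # 27.31, matching the reported average

# Back-of-the-envelope estimate: 20k queued measurements at an
# (assumed, unrealistically large) 100 KB each.
measurements = 20_000
kb_each = 100
print(measurements * kb_each / 1_000_000)  # 2.0 (GB)
```

The point of the second calculation is that even an absurdly large per-measurement size only accounts for about 2 GB, so the queued measurements alone cannot explain the 3.3 GB the process was using.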
All this suggests to me that the cause is going to be difficult to discover. It could be a memory leak in the Mono runtime or perhaps a resource we aren’t disposing properly that would only cause buildup on Mono. You might try disabling all the adapters you think you don’t need to see if the memory buildup persists. If it does, maybe try running a test without the CSV adapter to see if that makes a difference. These types of activities could help to narrow down the culprit.
Thanks Stephen for the reply.
I restarted the openPDC service on Sunday afternoon (not as a container), and I changed all the MaxSendQueueSize settings to 500000. I’m still archiving the PMU data in CSV files.
The memory is already at 52% for the openPDC service. I ran the pmap command for the openPDC process and have attached the output in a zip file. We can see how much memory is used by the libraries and other files required to run openPDC. It’s already using 4.2 GB after 3 days.
I see 13,426 libraries/files used by the openPDC process in the file I attached.
I just ran the same command a few minutes later; I now see 13,463 libraries/files.
I don’t see any single entry that uses a lot of memory; it’s just the accumulation of more mapped libraries/files over time that maxes out the memory and then causes the crash.
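The growing mapping count can also be tracked without pmap. This is a minimal sketch (the PID shown is the current shell, purely for illustration; substitute the actual mono/openPDC PID), counting entries in /proc/&lt;pid&gt;/maps, which is essentially the data pmap reads:

```shell
# Count memory mappings for a process; a count that only ever grows
# (13,426 -> 13,463 in a few minutes here) suggests a mapping leak
# rather than ordinary heap growth.
PID=$$                        # demo: current shell; use the openPDC/mono PID
wc -l < "/proc/$PID/maps"     # number of mappings; sample this periodically
```

Sampling this once an hour and comparing the counts would show whether the mapping growth tracks the memory growth.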
Once it crashes again, I will try running without archiving the PMU data to CSV files, to see if I still get the memory buildup.
pmap.zip (58.3 KB)
I have one more suggestion for you to try, based on your feedback. There is a configuration file that sits alongside the openPDC.exe executable called openPDC.exe.config. In that config file, there is a setting called WebHostingEnabled. Try changing the value of that setting to False, and then restart openPDC.
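As a sketch, the change would look something like this in openPDC.exe.config (the surrounding section name is an assumption here, so verify against your own file):

```xml
<systemSettings>
  <!-- Disable the self-hosted web services to rule them out as the leak source -->
  <add name="WebHostingEnabled" value="False"
       description="Determines if the self-hosted web interface is enabled." />
</systemSettings>
```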
Thanks Stephen, I disabled WebHostingEnabled but it did not help.
I also disabled the export to CSV, but it did not help; the memory buildup is the same.
At this point, I only have a PMU feed with about 26 data points, and memory keeps increasing by about 20% per day.
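For what it’s worth, that growth rate lines up with the crash cadence reported at the start of the thread:

```python
# Observed growth is roughly 20% of total memory per day; at that rate
# the process exhausts memory in about five days, matching the
# "crashes after about 5 days" observation from the first post.
growth_pct_per_day = 20
print(100 / growth_pct_per_day)  # 5.0 days
```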
I also disabled that device, to see what happens with no devices connected. Now the memory increases more slowly, I assume due to the STAT data.
So there must be something about that PMU connection that creates this buildup.
Do you see anything else that I could disable?
I think you can close this topic for now.
I do not know why openPDC is using so much memory, but since there is a POSIX release of openHistorian, I’m using it instead of openPDC.
I do not see the memory increase with openHistorian; it works better for us and allows us to graph/export data through the web interface.