I’m trying to read data using the CSVInputAdapter. At the beginning it starts reading from the first row of the input CSV file properly, but after some rows it suddenly stops and, because of ‘AutoRepeat=True’, it returns to the first row again. This happens many times without ever reading all the rows from beginning to end. I am not sure what the problem is and would appreciate your help. Here are screenshots that might help:
(Custom Input Adapter)
Are you saying that the CsvInputAdapter doesn’t read to the end of the file? Here is the logic it uses to determine whether it has reached the end of the file.
Perhaps there is a line in your CSV file that has fewer columns than the input adapter expects.
Thank you so much for your reply,
All rows have the same columns, but the CSV file is very large. Could that be the cause of this problem?
Yes, if the number of fields in the CSV row it’s processing is less than the number of columns defined in the connection string, the adapter will end the stream early and start over as if it had reached the end of the stream.
Thank you so much,
The CSV file has 15000 rows and 5200 columns, and all columns are defined in the connection string. I am not sure where the error is. Do you think the number of rows or columns could be the cause of this problem?
I’m pretty sure that error from ErrorLog.txt is unrelated to the issue you described. It likely comes about as a race condition between the input adapter’s processing function and the adapter’s Dispose() function. The adapter is disposed when you attempt to initialize it via the openPDC Manager or when shutting down the openPDC, so I’d only expect it to happen at those times.
If any single line among the 15000 lines in your CSV file has fewer than 5199 commas, the CSV input adapter will start over when it attempts to process that line. Assuming I am understanding your problem correctly, that is the only thing I know of that would cause the behavior you described.
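For what it’s worth, you can check for this condition yourself before feeding the file to the adapter. Here is a minimal sketch (not part of openPDC; the function name is just illustrative) that reports every row with fewer fields than expected:

```python
import csv

def find_short_rows(path, expected_columns):
    """Return (line_number, field_count) for every CSV row that has
    fewer fields than expected_columns."""
    short = []
    with open(path, newline="") as f:
        for line_number, row in enumerate(csv.reader(f), start=1):
            if len(row) < expected_columns:
                short.append((line_number, len(row)))
    return short

# Example: find_short_rows("Measurements.csv", 5200)
```

If this reports nothing for your 5200-column file, then short rows are not the cause and the problem lies elsewhere.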
Thank you very much,
The break point is not at any particular row; it changes dynamically, so it is not predictable. I have tested it on two computers and the result is the same.
The error log is:
Exception Source: mscorlib
Exception Type: System.ArgumentException
Exception Message: The output char buffer is too small to contain the decoded characters, encoding ‘Unicode (UTF-8)’ fallback ‘System.Text.DecoderReplacementFallback’.
Parameter name: chars
Exception Target Site: ThrowCharsOverflow
---- Stack Trace ----
openPDC.exe: N 00158
System.Text.Encoding.ThrowCharsOverflow(decoder As DecoderNLS, nothingDecoded As Boolean)
openPDC.exe: N 14174907
System.Text.UTF8Encoding.GetChars(bytes As Byte*, byteCount As Int32, chars As Char*, charCount As Int32, baseDecoder As DecoderNLS)
openPDC.exe: N 01187
System.Text.DecoderNLS.GetChars(bytes As Byte, byteIndex As Int32, byteCount As Int32, chars As Char, charIndex As Int32, flush As Boolean)
openPDC.exe: N 00284
System.Text.DecoderNLS.GetChars(bytes As Byte, byteIndex As Int32, byteCount As Int32, chars As Char, charIndex As Int32)
openPDC.exe: N 00039
openPDC.exe: N 00154
openPDC.exe: N 00218
CsvAdapters.CsvInputAdapter.ReadNextRecord(currentTime As Int64)
openPDC.exe: N 00126
Is this error log unrelated, too?
Hm, no, that one is probably relevant. This one also looks like it’s caused by a race condition, but this time between simultaneous calls to the ReadNextRecord() function itself. Your CSV adapter instance’s processing function may not be able to keep up with the timer interval it’s using. You might want to try setting the UseHighResolutionInputTimer setting to true; doing so should avoid the race condition there.
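For reference, the setting goes in the adapter’s connection string alongside the other parameters. A sketch of what that might look like (the file name and other values here are illustrative, not taken from your configuration):

```
fileName=Measurements.csv; autoRepeat=true; useHighResolutionInputTimer=true
```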
It worked and that was the main problem. Thank you very much for the efficient support.