I have (large) files that seem valid to the naked eye but fail to validate against my schema. I’ve tried different things to find out what the issue is, but no luck. For instance, I get Error: Expected @totalColumns of 13 and found 11 on line 1346, but if I open the file in a text editor and remove the first 1000 lines, I then get Error: Expected @totalColumns of 13 and found 11 on line 1350. I’m guessing the error is not actually where I’m told, but I have no way to see what value the validator is seeing and trying to match against each column.
I got a bit of hope when I saw the -t option, but that only seems to give me errors (a lot of them!) about the schema, not about the content of the CSV.
So either there’s a method for debugging and I missed it, or one should be implemented.
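One way to see what a standards-compliant CSV parser actually makes of each record, independently of the validator, is to re-parse the file and dump the fields of any record whose width is off. A minimal sketch in Python (the file name and expected column count are placeholders):

```python
import csv

EXPECTED_COLUMNS = 13   # matches the schema's @totalColumns
PATH = "data.csv"       # placeholder file name

with open(PATH, newline="", encoding="utf-8") as f:
    for record_number, row in enumerate(csv.reader(f), start=1):
        if len(row) != EXPECTED_COLUMNS:
            print(f"record {record_number}: {len(row)} fields")
            for column, value in enumerate(row, start=1):
                print(f"  column {column}: {value!r}")
```

Note that csv.reader counts records rather than physical lines: a quoted field may legally contain newlines, which is one reason a validator's reported line number can drift from what a text editor shows.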
-t is a trace of how the schema is parsed; what you're seeing there aren't errors as such. It's possible there's something in it that might shed some light on this case, though, which does otherwise seem a bit odd.
If you can open the file in a text editor, can't you just go straight to the reported line number and inspect it directly?
What do you think such a debug report should include, specifically?
Something like this:
Line 1345:
Value "123456" matches rule for column 1 ("id").
Value "John" matches rule for column 2 ("name").
Value "abcde" doesn’t match rule for column 3 ("age"): is not positiveInteger
Line 1346:
…
Feel free to play with the format; that’s just a general idea. What matters here is that I should be able to see what the validator considered the value of each cell to be, so I can check it against my expectations.
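To make the idea concrete, here is a rough sketch of such a report in Python; the column names and rules are invented stand-ins for whatever a real schema would define, not csv-validator’s actual rule syntax:

```python
import csv

# Invented per-column rules standing in for a real schema.
RULES = {
    "id":   lambda v: v.isdigit(),
    "name": lambda v: len(v) > 0,
    "age":  lambda v: v.isdigit() and int(v) > 0,   # positiveInteger
}

def debug_report(path):
    with open(path, newline="", encoding="utf-8") as f:
        for line_number, row in enumerate(csv.reader(f), start=1):
            print(f"Line {line_number}:")
            for column, (name, check) in enumerate(RULES.items(), start=1):
                if column > len(row):
                    print(f'  No value found for column {column} ("{name}").')
                elif check(row[column - 1]):
                    print(f'  Value "{row[column - 1]}" matches rule for column {column} ("{name}").')
                else:
                    print(f'  Value "{row[column - 1]}" doesn\'t match rule for column {column} ("{name}").')
```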
For the record, I solved the above issue in the meantime: some values had an unescaped quote in them (more specifically, they contained "> in the middle), and I guess that was triggering an offset in which value was considered for each column. I suppose the error was thus not exactly at the specified line because the line termination at the end of the row was seen as part of the value of a given field.
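That explanation fits how RFC 4180-style parsers behave: once a quote opens and is never closed, the parser keeps consuming text, including the newline at the end of the row, as part of that field, so records and physical lines drift apart. A small illustration with Python's csv module (which won't behave identically to the validator's parser, and the data here is invented):

```python
import csv
import io

# Three physical lines, but the stray unescaped quote on line 2 never
# closes, so the parser absorbs the newline and all of line 3 into one field.
data = 'a,b,c\n1,"broken > value,3\n4,5,6\n'

for record_number, row in enumerate(csv.reader(io.StringIO(data)), start=1):
    print(record_number, len(row), row)
```

Three physical lines yield only two records, and the second has the wrong field count: exactly the @totalColumns symptom, with a line number that no longer matches the text editor's.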