
Delimiter Discovery #400

Open
cawoodm opened this issue Aug 31, 2023 · 13 comments

Comments

@cawoodm

cawoodm commented Aug 31, 2023

Summary

The library should detect common delimiters such as the comma (,) or semicolon (;).

Motivation

Many international users work with commas or semicolons on a daily basis, depending on who generated their CSV. Currently we need to manually parse the first line to see which delimiter is used.

Alternative

let delimiter = csvString.match(/[,;]/)?.[0];
let data = csv.parse(csvString, {delimiter});

Draft

let data = csv.parse(csvString, {
  detect_delimiters: [',', ';'],
});

Additional context

Since it may be difficult to detect "just any" delimiter, the developer can pass an array of common delimiters they expect.
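
For reference, the workaround above can be fleshed out into a small, self-contained helper. This is only a sketch, assuming the whole input fits in memory and the first line contains no quoted fields; detectDelimiter is a hypothetical name, while csv-parse/sync is the library's real synchronous entry point.

import { parse } from 'csv-parse/sync';

// Hypothetical helper: pick the candidate delimiter that occurs most
// often in the first line of the input.
function detectDelimiter(csvString, candidates = [',', ';']) {
  const newline = csvString.indexOf('\n');
  const firstLine = newline === -1 ? csvString : csvString.slice(0, newline);
  let best = candidates[0];
  let bestCount = -1;
  for (const c of candidates) {
    const count = firstLine.split(c).length - 1;
    if (count > bestCount) { best = c; bestCount = count; }
  }
  return best;
}

const csvString = 'a;b;c\n1;2;3\n';
const delimiter = detectDelimiter(csvString);
const data = parse(csvString, { delimiter });
// data: [ [ 'a', 'b', 'c' ], [ '1', '2', '3' ] ]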

@hadyrashwan

Hey @cawoodm,
Is anyone working on this one? I can take it on.

@wdavidw
Member

wdavidw commented Jan 12, 2024

While I am not against the idea, I can't say that I fully support it either. However, if you come up with a clean delimiter_auto option, I'll probably merge it.

@hadyrashwan

@wdavidw I looked on GitHub to see what other people were doing.

node-csv-string has a detect function, which basically looks for the first occurrence of one of the delimiters; I guess that is fine for most cases.

Another, more advanced implementation is the detect-csv determineMost function, which looks at a sample and returns the delimiter with the highest occurrence count.

What do you think?
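
For illustration, here are minimal sketches of the two strategies (simplified re-implementations, not the linked libraries' actual code; the candidate list is an assumption):

// Strategy 1 (node-csv-string style): return whichever candidate
// delimiter appears first in the sample.
const detectFirst = (sample, candidates = [',', ';', '\t', '|']) => {
  for (const ch of sample) {
    if (candidates.includes(ch)) return ch;
  }
  return undefined;
};

// Strategy 2 (detect-csv style): return the candidate with the
// highest occurrence count in the sample.
const detectMost = (sample, candidates = [',', ';', '\t', '|']) =>
  candidates.reduce((best, c) =>
    sample.split(c).length > sample.split(best).length ? c : best
  );

detectFirst('a;b,c\n');       // ';' (first delimiter encountered)
detectMost('a;b;c\n1;2;3\n'); // ';' (most frequent candidate)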

@wdavidw
Member

wdavidw commented Jan 15, 2024

I would tend to discover the character, as in the second method, after filtering out any character already used in the options (e.g. quotes, record delimiters, ...) as well as alphanumeric characters ([a-zA-Z0-9], including accented characters).
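
A sketch of that filtering approach (the function name, the option defaults, and the tie-breaking are assumptions; only the idea of excluding option characters and letters/digits comes from the comment above):

// Count every character in the sample except those already claimed by
// options and any letter or digit (\p{L}/\p{N} also covers accented
// letters), then return the most frequent remaining character.
function discoverDelimiter(sample, { quote = '"', escape = '"', record_delimiter = '\n' } = {}) {
  const excluded = new Set([quote, escape, ...record_delimiter]);
  const counts = new Map();
  for (const ch of sample) {
    if (excluded.has(ch) || /\p{L}|\p{N}/u.test(ch)) continue;
    counts.set(ch, (counts.get(ch) ?? 0) + 1);
  }
  let best;
  for (const [ch, n] of counts) {
    if (best === undefined || n > counts.get(best)) best = ch;
  }
  return best;
}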

@hadyrashwan

hadyrashwan commented Jan 29, 2024

@wdavidw
I created a small proof of concept for the auto_delimiter option.

master...hadyrashwan:node-csv:patch-1

When running the tests, the following happens, and I am not sure why:

  • All tests pass when I run the test script, except the encoding-with-BOM option.
  • When I run the encoding tests (packages/csv-parse/test/option.encoding.coffee) on their own, they pass.
  • I added a small test for \t based on the delimiter tests to see how the logic runs; it did detect the delimiter successfully, but the test still did not pass.

Question:

  • We are committing the dist files; is this expected?

Missing parts:

  • Handling of characters coming from the escape, quote, and record_delimiter options.
  • Add more tests.
  • Add a reference in the TS definitions.
  • Add a new page about auto_delimiter to the docs.

Appreciate your feedback :)

@wdavidw
Member

wdavidw commented Jan 29, 2024

I'll take some time to review later. In the meantime, what do you mean by "We are committing the dist files"?

@wdavidw
Member

wdavidw commented Jan 29, 2024

A few notes for now:

  1. delimiter_auto and not auto_delimiter, __discoverDelimiterAuto and not __autoDiscoverDelimiter
  2. Disabled by default, the default value is false
  3. When normalizing the option, add consistency checks, for example it cannot equal the value of record_delimiter (all those rules require tests)
  4. Don't convert to a string, you should compare values directly inside the buffer
  5. Write more unit tests, in particular one that writes data one byte at a time (see https://github.com/adaltas/node-csv/blob/master/packages/csv-parse/test/api.stream.events.coffee#L53 as an example)
  6. My strategy would be to discover the delimiter before any parsing is done; here is how I would start my experiment (see the sketch after this list):
    1. Work around __needMoreData: if delimiter_auto is activated, and only on the first line, you should allocate a safe buffer size dedicated to discovery
    2. Start discovery (your __autoDiscoverDelimiter function) just after BOM handling and before actual parsing (https://github.com/adaltas/node-csv/blob/master/packages/csv-parse/lib/api/index.js#L109)
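
A conceptual sketch of point 6; this is not the actual csv-parse internals, and the buffer cap, the onChunk shape, and discoverDelimiterFromBuffer are all assumptions:

// Buffer incoming bytes until the first record delimiter (or a size cap),
// run discovery once on the buffered bytes, then replay them through the
// normal parsing path. Per note 4, discovery works on the raw buffer
// rather than on a decoded string.
const DISCOVERY_CAP = 64 * 1024; // assumed safe buffer size

let discoveryBuf = Buffer.alloc(0);
let delimiter = null;

function onChunk(chunk, parseFn) {
  if (delimiter !== null) return parseFn(chunk, delimiter);
  discoveryBuf = Buffer.concat([discoveryBuf, chunk]);
  const nl = discoveryBuf.indexOf(0x0a); // '\n'
  if (nl === -1 && discoveryBuf.length < DISCOVERY_CAP) return; // need more data
  const sample = discoveryBuf.subarray(0, nl === -1 ? discoveryBuf.length : nl);
  delimiter = discoverDelimiterFromBuffer(sample); // hypothetical byte-level helper
  parseFn(discoveryBuf, delimiter); // replay everything buffered so far
  discoveryBuf = Buffer.alloc(0);
}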

@hadyrashwan

hadyrashwan commented Jan 30, 2024

I'll take some time to review later. In the meantime, what do you mean by "We are committing the dist files"?

When I'm working, I always see the build files in the dist folders added to git rather than ignored.

Some projects add those build files to their .gitignore file.

I just want to make sure that I'm not adding those files by mistake.

@NoahCzelusta

NoahCzelusta commented Mar 7, 2024

A couple of comments on the method of detecting delimiters:

  • We cannot safely assume that the most common character is THE CSV delimiter. The CSV delimiter is the character that consistently splits every row into the same number of columns.
  • CSVs can safely store strings that contain the delimiter, so parsing has to be a little more intelligent (either considering quotes or allowing for a small degree of inconsistency in column count per row).

Python has a great example of handling these cases in its implementation. The pandas library uses this implementation but only reads from the first line of the file (here).
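
A minimal sketch of that consistency check, ignoring quoted fields for brevity (the function name and candidate list are assumptions):

// For each candidate, split a handful of rows and prefer the delimiter
// that yields the same column count (greater than 1) on every row.
function detectByConsistency(sample, candidates = [',', ';', '\t', '|']) {
  const rows = sample.split(/\r?\n/).filter((r) => r.length > 0).slice(0, 10);
  for (const c of candidates) {
    const counts = rows.map((r) => r.split(c).length);
    if (counts[0] > 1 && counts.every((n) => n === counts[0])) return c;
  }
  return undefined;
}

// ';' appears in the data but not on every row, so it fails the
// consistency test and ',' is returned.
detectByConsistency('name,city\nJohn,"New York; NY"\n'); // ','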

@carlbleick

Are there any plans to open a PR for this? As far as I can see, the current changes are only present on a branch.

I would definitely love to see that feature.

@vincerubinetti

Here's another algorithm for detecting the delimiter that seems like a good idea:
https://stackoverflow.com/a/19070276/2180570

@wdavidw
Member

wdavidw commented May 22, 2024

I haven't had the time yet. The solution needs to deal with the streaming nature of the parser: extract a limited number of bytes from the stream, apply a detection algorithm such as the one proposed above, then replay the bytes stored on the side with the detected delimiter. I need some time to do it correctly.
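
From user land, the replay idea can be approximated with Node's readable.unshift(); a sketch, where discoverDelimiter stands in for any of the detection helpers discussed above:

import { parse } from 'csv-parse';

async function parseWithDetection(readable) {
  // Wait for the first chunk. Note that read() may return null if the
  // stream ends first; a real implementation would have to handle that.
  const chunk = await new Promise((resolve) =>
    readable.once('readable', () => resolve(readable.read()))
  );
  const delimiter = discoverDelimiter(chunk.toString());
  readable.unshift(chunk); // put the consumed bytes back on the stream
  return readable.pipe(parse({ delimiter }));
}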

@schibrikov

schibrikov commented Oct 31, 2024

Also highly interested in this. My current workaround is to tee the original stream, read the first chunk on the duplicate, detect the delimiter, then go back and handle the main stream.

So basically:

// Tee the web stream so one branch can be consumed for detection while
// the other stays intact for parsing.
const [mainStream, delimiterStream] = stream.tee();
const reader = delimiterStream.getReader();
const { value } = await reader.read(); // first chunk only
const delimiter = discoverDelimiter(value);
reader.cancel(); // discard the rest of the detection branch

return { delimiter, stream: mainStream };

It looks somewhat messy, but it works. I have to handle different types of files dynamically, avoiding static configuration as much as possible.
