readCsv
Reads a CSV file and processes and/or returns the data.
Performance-optimized for large files, it offers a range of options via the Apache Commons CSV library.
By default the method returns an object to which further optional calls can be chained before finally calling execute()
to process the file and (if required) return the data.
readCsv( filepath )[...optional configuration calls].execute();
Chainable? No.
filepath = "c:/temp/my.csv";
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.execute();
The result is an ordered struct with 2 keys:
- columns: array of header/column names (empty if none specified)
- data: array of row value (native Java) arrays
The format of the struct makes it simple to convert the result to a query should you wish:
filepath = "c:/temp/my.csv";
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.withFirstRowIsHeader()
.execute();
resultAsQuery = DeserializeJson( SerializeJson( result ), false );
Important: For performance reasons, each row is returned as a native Java array rather than a CFML array. This means that you may not be able to use "member functions" such as row.Find( "string" ) or other CFML operations on them, depending on your engine. Generally speaking, using standard CFML array functions should be ok, e.g. ArrayFind( row, "string" ).
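For example, here is a minimal sketch (not taken from the library docs) of looping over the returned data, sticking to standard CFML array functions when touching the native Java row arrays:
filepath = "c:/temp/my.csv";
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.withFirstRowIsHeader()
.execute();
for( row in result.data ){
	if( ArrayFind( row, "string" ) ){
		// this row contains the value "string"
	}
}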
Predefined Formats are preset Commons CSV configuration combinations optimized for different contexts, such as tab-delimited data.
filepath = "c:/temp/myTabDelimited.csv";
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.withPredefinedFormat( "TDF" )
.execute();
If not specified, the DEFAULT predefined format will be used.
The following additional configuration options are available:
- Whether to allow missing column names in the header line. Default: true. Commons CSV documentation
- Whether to flush on close. Default: true. Commons CSV documentation
- Sets the comment start marker to the specified character. Commons CSV documentation
- Sets the delimiter character. Commons CSV documentation. To set tab as the delimiter you can use any of the following values as parameters: "#Chr( 9 )#", "\t", "tab", "TAB" (see the combined sketch after this list).
- Sets the duplicate header names behavior. Possible values: "ALLOW_ALL", "ALLOW_EMPTY", "DISALLOW". Commons CSV documentation
- Sets the escape character. Commons CSV documentation
- Whether to use the first row as the header and exclude it from being processed as part of the data. Default: true.
- Manually sets the row values which will be detected as the header. To auto-detect the header from the first row, use withFirstRowIsHeader() (see above). Commons CSV documentation
- Sets the empty line skipping behavior: true to ignore empty lines between records, false to translate empty lines to empty records. Default: true. Commons CSV documentation
- Sets the parser case mapping behavior: true to access name/values, false to leave the mapping as is. Default: true. Commons CSV documentation
- Sets the parser trimming behavior: true to remove the surrounding spaces, false to leave the spaces as is. Default: true. Commons CSV documentation
- Converts strings equal to the given nullString to null when reading records. Commons CSV documentation
- Sets the quote character. Commons CSV documentation
- Ignores the specified number of rows at the start of the file. Should be a positive integer.
- Sets whether to skip the header record. Default: true. Commons CSV documentation
- Sets whether to add a trailing delimiter. Default: true. Commons CSV documentation
- Sets whether to trim leading and trailing blanks. Default: true. Commons CSV documentation
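To illustrate how these settings are applied via chained calls, here is a minimal combined sketch. The call names withDelimiter() and withNullString() are assumed for illustration only (they mirror the Commons CSV setting names above); check the library's method reference for the exact names your version supports:
filepath = "c:/temp/myTabDelimited.csv";
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.withFirstRowIsHeader()
.withDelimiter( "tab" )// assumed call name: any of the tab values listed above may be passed
.withNullString( "NULL" )// assumed call name: treat the string "NULL" as a null value
.execute();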
If you would like to exclude certain rows in your CSV file from being processed, or to include only certain rows, you can use withRowFilter()
to supply a User Defined Function (UDF) which accepts the array of row values and returns true if the row should be included. For example, to skip any row that contains columns with the word "tobacco" in them:
filter = function( row ){
return !ArrayFindNoCase( row, "tobacco" );
};
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.withRowFilter( filter )
.execute();
Important: As described above, rows are native Java arrays rather than CFML arrays, so your UDF should avoid using "member functions". For example, instead of:
return !row.FindNoCase( "tobacco" );
use
return !ArrayFindNoCase( row, "tobacco" );
Rather than return the CSV data into an array for subsequent processing, you may wish to process each row directly as it is read from the file. This is especially suited to very large files as it avoids the need to load the data into memory.
You can do this by using withRowProcessor()
to pass in a UDF which will be executed on each row and accepts the current row values and row number.
processor = function( row, rowNumber ){
//insert into DB or whatever
};
spreadsheet.readCsv( filepath )
.withRowProcessor( processor )
.execute();
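As a minimal illustrative sketch (not from the library docs), the processor below simply collects each row as a comma-separated string, using a standard CFML array function which, as noted above, should generally work on the native Java row arrays:
processedLines = [];
processor = function( row, rowNumber ){
	// row is a native Java array, so use ArrayToList( row ) rather than the member function row.ToList()
	ArrayAppend( processedLines, rowNumber & ": " & ArrayToList( row ) );
};
spreadsheet.readCsv( filepath )
.withRowProcessor( processor )
.execute();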
You can perform row processing and return the processed data if you wish.
processor = function( row, rowNumber ){
//insert into DB or whatever
};
result = spreadsheet.readCsv( filepath )
.intoAnArray()
.withRowProcessor( processor )
.execute();