This repository has been archived by the owner on Sep 9, 2020. It is now read-only.
Releases · datarobot/batch-scoring
v1.14.2
Bugfixes
- Added check to detect and warn about quoted delimiters during --fast mode with --keep_cols.
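The pitfall this check guards against is a delimiter appearing inside a quoted field, which naive line splitting miscounts. A minimal sketch of such a detection (the function name and heuristic are illustrative, not the actual implementation):

```python
import csv
import io

def has_quoted_delimiter(line, delimiter=","):
    """Return True if naive splitting on the delimiter would disagree
    with a proper CSV parse, i.e. a delimiter sits inside quotes."""
    naive_fields = line.count(delimiter) + 1
    parsed = next(csv.reader(io.StringIO(line), delimiter=delimiter))
    return naive_fields != len(parsed)
```

A caller could warn and fall back to the slower, quote-aware path whenever this returns True.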
Security fixes
- Updated the `requests` dependency due to https://nvd.nist.gov/vuln/detail/CVE-2018-18074
v1.14.1
v1.14.0
Bugfixes
- Fixed setting proxy with env vars (HTTP_PROXY, HTTPS_PROXY, NO_PROXY)
- Enforced shelve to use the dbm.dumb/dumbdbm modules on Python 3.x/2.7 respectively, preventing failures when a large number of checkpoints is generated. This was observed on macOS (ndbm backend).
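Forcing `shelve` onto the `dbm.dumb` backend on Python 3 can be sketched as follows (a minimal illustration, not the project's actual code; the checkpoint key and record are made up):

```python
import dbm.dumb
import shelve

# Open a shelf explicitly backed by dbm.dumb, bypassing whichever
# backend (e.g. ndbm on macOS) the shelve module would otherwise pick.
db = dbm.dumb.open("checkpoints", "c")
shelf = shelve.Shelf(db)

# Store a made-up checkpoint record, then flush and close.
shelf["batch_0"] = {"rows": 100, "status": "done"}
shelf.sync()
shelf.close()
```

`dbm.dumb` is slower than the native backends but portable and free of their size and corruption quirks.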
Enhancements
- New command batch_scoring_deployment_aware for scoring with new deployment aware routes.
v1.13.3
v1.13.2: Merge pull request #120 from datarobot/fix-wheel
Build different wheels for Python 2 and Python 3
v1.13.0
1.13.0 (2017 November 10)
Enhancements
- Brings back support for legacy predictions (api/v1) and adds a new parameter for specifying the API version (`--api_version`). Check `batch_scoring --help` for a list of valid options and the default value.
- Adds a `--no_verify_ssl` argument for disabling SSL verification and `--ca_bundle` for specifying certificate(s) of trusted Certificate Authorities.
- The default timeout is now None, meaning that the code does not enforce a timeout for operations to the server. This allows completion of runs with higher numbers of threads, particularly on macOS. The value remains modifiable, and 30 seconds is a reasonable value in most cases.
Bugfixes
- An issue which caused exit codes to not be set correctly from executables installed via the standalone installer has been addressed. The exit codes will now be set correctly.
- An issue which caused script crashes when one or more boolean options were specified in the config file has been fixed.
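The boolean-option crash points at a general problem: config-file values arrive as strings and must be coerced to booleans. A hypothetical sketch of tolerant parsing (this helper is illustrative, not the fix that shipped):

```python
def parse_bool(value):
    """Coerce a config value to bool, accepting common string spellings."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("1", "true", "yes", "on")
```

Anything outside the accepted spellings falls through to False, so a typo disables rather than crashes.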
v1.12.1
v1.12.0
1.12.0 (2017 August 9)
Enhancements
- Batch scoring now works with Python 3.6 on Windows (offline installs require 3.5 though)
- Logs now include version, retry attempts and whether output file was removed.
- New argument
no-resume
that allows you to start new batch-scoring run from scratch without being questioned about previous runs. - The version of the dependency
trafaret
has been pinned to0.10.0
to deal with a breaking change in the interface
of that package.
Documentation
- A new "Version Compatibility" section has been added to the README to help surface to users any incompatibilities between versions of `batch_scoring` and versions of DataRobot.
v1.11.0
1.11.0 (2017 May 30)
New Features
- New parameter `field_size_limit` allows users to specify a larger maximum field size than the Python `csv` module normally allows. Users can use a larger number for this value if they encounter issues with very large text fields, for example. Please note that using larger values for this parameter may cause issues with memory consumption.
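The underlying `csv` behavior can be shown with the standard library directly (a sketch of the limit that `field_size_limit` presumably raises; the data below is made up):

```python
import csv
import io

# The csv module rejects any field longer than its current limit.
default_limit = csv.field_size_limit()  # query without changing it

big_field = "x" * (default_limit + 1)
data = io.StringIO(f"id,text\n1,{big_field}\n")

try:
    rows = list(csv.reader(data))
except csv.Error:
    rows = None  # "field larger than field limit"

# Raising the limit lets the same row parse successfully.
csv.field_size_limit(default_limit * 2)
data.seek(0)
rows = list(csv.reader(data))
```

Since the limit is process-global, raising it for one huge text column affects every subsequent parse, which is where the memory caveat comes from.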
Bugfixes
- Previously, files whose first few lines did not fit within 512KB would error during the auto-sampler (which finds a reasonable number of rows to send with each batch). This issue has been fixed by adding a fallback to a default of 10 lines per batch in these cases. This value can still be overridden by using the `n_samples` parameter.
- Fixed an issue where a client error message wasn't logged properly.