- Fix CloudFetch "row number N is not contained in any arrow batch" error (#234)
- Security: Resolve HIGH vulnerability in x/net (CVE-2023-39325) (#233 by @anthonycrobinson)
- Expose `dbsql.ConnOption` type (#202 by @shelldandy)
- Fix a connection leak in PingContext (#240 by @jackyhu-db)
- Added connection option `WithSkipTLSHostVerify` (#225 by @jackyhu-db)
- Fix: handle `nil` values passed as query parameter (#199 by @esdrasbeleza)
- Fix: provide content length on staging file put (#217 by @candiduslynx)
- Fix formatting of *float64 parameters (#215 by @esdrasbeleza)
- Fix: use correct tenant ID for different Azure domains (#210 by @tubiskasaroos)
- Added OAuth support for GCP (#189 by @rcypher-databricks)
- Staging operations: stream files instead of loading into memory (#197 by @mdibaiee)
- Staging operations: don't panic on REMOVE (#205 by @candiduslynx)
- Fix formatting of Date/Time query parameters (#207 by @candiduslynx)
- Bug fix for ArrowBatchIterator.HasNext(), which incorrectly returned true for result sets with zero rows
- Added .us domain to inference list for AWS OAuth
- Bug fix for OAuth m2m scopes, updated m2m authenticator to use "all-apis" scope.
- Logging improvements
- Added handling for staging remove
- Named parameter support
- Better handling of bad connection errors and specifying server protocol
- OAuth implementation
- Expose Arrow batches to users
- Add support for staging operations
- Improve error information when query terminates in unexpected state
- Do not override global logger time format
- Enable Transport configuration for http client
- Fix: update arrow to v12
- Updated doc.go for retrieving query id and connection id
- Bug fix issue 147: BUG with reading table that contains copied map
- Allow WithServerHostname to specify protocol
- Bug fix for panic when executing non-record-producing statements using DB.Query()/DB.Exec()
- Allow client-provided authenticator
- More robust retry behaviour
- Bug fix for null values in complex types
- Improved error types and info
- Feat: Support ability to retry on specific failures
- Fetch results in arrow format
- Improve error message and retry behaviour
- Fix cancel race condition
- Package doc (doc.go)
- Handle FLOAT values as float32
- Fix for result.AffectedRows
- Use new ctx when closing operation after cancel
- Set default port to 443
- Add or edit documentation above methods
- Tweaks to readme
- Handle parsing negative years in dates
- Fix thread safety issue
- Don't ignore error in InitThriftClient
- Close optimization for Rows
- Close operation after executing statement
- Minor change to examples
- P&R improvements
- Fix thread safety issue in connector
- Support for DirectResults
- Support for context cancellation and timeout
- Session parameters (e.g.: timezone)
- Thrift Protocol update
- Several logging improvements
- Added better examples (see workflow)
- Added dbsql.NewConnector() function to help initialize DB
- Many other small improvements and bug fixes
- Removed support for client-side query parameterization
- Removed need to start DSN with "databricks://"
- Fix: Could not fetch rowsets greater than the value of `maxRows` (#18)
- Updated default user agent
- Updated README and CONTRIBUTING
- Add escaping of string parameters.
- Fix timeout units to be milliseconds instead of nanoseconds
- Fix module name
- Initial release