All notable changes to this project will be documented in this file. Changes are grouped as follows:
- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for soon-to-be-removed functionality.
- `Removed` for removed functionality.
- `Fixed` for any bug fixes.
- `Security` in case of vulnerabilities.
- Annotations
- Extraction pipelines configuration
- Data models
- Added support for returning data model destination type when retrieving transformation jobs.
- When trying to retrieve rows from CDF RAW, the SDK will no longer retry the request indefinitely. Instead, it will throw an exception.
- Fine-grained control over authentication and headers via `CogniteClient.ofAuthHeader()`. This method is intended for advanced use cases; most usage scenarios should use `.ofClientCredentials()` or `.ofTokenSupplier()`.
- Custom HTTP requests via `client.experimental().cdfHttpRequest(requestURI)`. See Documentation for more information.
- Java 17
- `UploadQueue.awaitUploads()` waits for all data in the queue to be uploaded before returning.
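  A minimal sketch of the queue lifecycle under these semantics. Only `awaitUploads()` and `stop()` are named in this file; the factory method, `start()`, and `put()` below are assumed API shapes based on the SDK's documented queue pattern:
  ```java
  // Sketch, not authoritative: the factory and start()/put() calls are assumptions.
  UploadQueue<Event, Event> queue = client.events().uploadQueue();
  queue.start();                                                // assumed: begin background uploads
  queue.put(Event.newBuilder().setExternalId("evt-1").build()); // assumed: enqueue an item
  queue.awaitUploads();                                         // blocks until all queued data is uploaded
  queue.stop();                                                 // waits for in-flight uploads to finish
  ```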
- Engineering diagrams: `Annotation`, collected from a `DiagramResponse`, has been refactored to `DiagramResponse.Annotation`. This is to prepare for the release of the stand-alone `Annotation` resource type.
- `Login`. This is API-key-specific functionality and has been removed, as API keys will be fully removed from Cognite Data Fusion in the near future.
- `UploadQueue.stop()` now waits until all uploads have finished before returning.
- Thread timeout for the upload queue. The threads in the upload queue may prevent a client from shutting down properly.
- The new cursor-based `data points` iterator is enabled by default.
- Support for `geoLocation` on `assets`.
- New versions of `CogniteClient.ofClientCredentials()` and `CogniteClient.ofToken()` which include the `cdfProject` parameter. The CDF project must be specified for the client to work correctly.
- Deprecated `CogniteClient.ofClientCredentials()` and `CogniteClient.ofToken()` which do not include the `cdfProject` parameter. Please migrate to the new versions of these methods which include the `cdfProject` parameter (see the sketch below).
- Deprecated `CogniteClient.ofKey()` as the API key authentication method is soon to be removed from Cognite Data Fusion.
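  For reference, a minimal sketch of constructing a client via the new `cdfProject`-aware method. The parameter order and the Azure AD token URL are illustrative assumptions; consult the SDK README for the authoritative signature:
  ```java
  import java.net.URL;

  // Sketch: assumes cdfProject comes first; the token URL is an illustrative
  // Azure AD endpoint. new URL(...) may throw MalformedURLException.
  CogniteClient client = CogniteClient.ofClientCredentials(
          "my-cdf-project",      // cdfProject: now required for the client to work correctly
          "my-client-id",
          "my-client-secret",
          new URL("https://login.microsoftonline.com/my-tenant/oauth2/v2.0/token"));
  ```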
- Issue a warning if the CDF project is not configured for the client.
- Improved performance when reading time series data points. When reading 2 - 100 time series in a single request you should see up to a 10x improvement in throughput.
- A new cursor-based iterator for time series data points. This is pre-release functionality which can be enabled by setting a feature flag in the `ClientConfig`:
  ```java
  ClientConfig config = ClientConfig.create()
          .withExperimental(FeatureFlag.create()
                  .enableDataPointsCursor(true)); // Enable the new cursor-based iterator
  CogniteClient.withClientConfig(config);         // Pass the config to the client
  ```
- The default auth scope breaking when using certain combinations of `CogniteClient.withBaseUrl()` and `CogniteClient.withScopes()`.
- Configurable timeout for async API jobs (i.e. entity matching and engineering diagram parsing). Use `ClientConfig.withAsyncApiJobTimeout(Duration timeout)` to specify a custom timeout; the default is 20 minutes. See the sketch after this list.
- Support for configuring a proxy server: Documentation.
- `UploadQueue` for various resource types to optimize data upload to Cognite Data Fusion: Documentation.
- `State store` for storing watermark progress states for data applications (extractors, data pipelines, etc.): Documentation.
- `Extraction pipeline heartbeat` for sending regular `SEEN` status runs to Cognite Data Fusion: Documentation.
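  A short sketch of overriding the default job timeout. The wiring mirrors the feature-flag example earlier in this file; the instance-method form of `withClientConfig` is assumed:
  ```java
  import java.time.Duration;

  // Allow long-running diagram parsing jobs up to 40 minutes (illustrative value)
  // instead of the 20 minute default.
  ClientConfig config = ClientConfig.create()
          .withAsyncApiJobTimeout(Duration.ofMinutes(40));
  CogniteClient tunedClient = client.withClientConfig(config); // assumes an existing `client` instance
  ```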
- Improve javadoc: `Assets`
- Improve javadoc: `Contextualization`
- Improve javadoc: `DataPoints`
- Improve javadoc: `Datasets`
- Improve javadoc: `EngineeringDiagrams`
- Improve javadoc: `EntityMatching`
- Improve javadoc: `Events`
- Improve javadoc: `Experimental`
- Improve javadoc: `ExtractionPipelineRuns`
- Improve javadoc: `ExtractionPipelines`
- Improve javadoc: `Files`
- Improve javadoc: `Labels`
- Improve javadoc: `Login`
- Improve javadoc: `Raw`
- Improve javadoc: `RawDatabases`
- Improve javadoc: `RawRows`
- Improve javadoc: `RawTables`
- Improve javadoc: `Relationships`
- Improve javadoc: `Request`
- Improve javadoc: `SecurityCategories`
- Improve javadoc: `SequenceRows`
- Improve javadoc: `Sequences`
- Improve javadoc: `ThreeD`
- Improve javadoc: `ThreeDAssetMappings`
- Improve javadoc: `ThreeDFiles`
- Improve javadoc: `ThreeDModels`
- Improve javadoc: `ThreeDModelsRevisions`
- Improve javadoc: `ThreeDNodes`
- Improve javadoc: `ThreeDOutputs`
- Improve javadoc: `ThreeDRevisionLogs`
- Improve javadoc: `Timeseries`
- Improve javadoc: `TransformationJobMetrics`
- Improve javadoc: `TransformationJobs`
- Improve javadoc: `TransformationNotifications`
- Improve javadoc: `Transformation`
- Improve javadoc: `TransformationSchedules`
- `extractionPipelineRuns.list()` takes `external id` as a required filter parameter. Also fixed the CDF API URL used.
- Added `Transformation Notifications`
- Geo-location attribute on the `files` resource type is supported.
- Added `Transformations`
- Added `Transformation Jobs`
- Added `Transformation Schedules`
- File metadata updates: Fix CDF API payload format
- Experimental: Geo-location attribute on the files resource type (the geo-location proto structure is subject to future changes)
- `Sequences` upsert support, including a modified column schema. The `upsert` functionality covers both modified sequence headers / `SequenceMetadata` and sequence rows / `SequenceBody`. For more information, please refer to the documentation: https://github.com/cognitedata/cdf-sdk-java/blob/main/docs/sequence.md#update-sequences and https://github.com/cognitedata/cdf-sdk-java/blob/main/docs/sequence.md#insert-rows
- File binary upload null pointer exception when running on Android devices.
- Fix shaded dependencies. Some of the shaded Kotlin libraries caused conflicts when using the SDK from a Kotlin environment.
- Fix duplicated protobuf class files.
- Fix dependency vulnerability: bump `jackson-databind` to `v2.13.2.1`.
- File binary download retrying on `SSLException` and `UnknownHostException`. Both may indicate a saturated link for a long-running job (which file binary downloads often are).
- Writing `sequence` columns representing integers could cause exceptions.
- The Files API supports S3 buckets as an intermediate store, both for reads and writes.
- `Files.download()` took a `Path` argument instead of a `URI`.
- Write string data points. Write requests will chunk strings at 1M UTF-8 bytes per request to respect API limits.
- File binary download: expired URLs were not retried properly.
- Add utility class `com.cognite.client.util.RawRows` for working with `RawRow` objects. Please refer to the documentation for more information.
- Added `3D Models Revisions`
- Added `3D File Download`
- Added `3D Asset Mapping`
- `EngineeringDiagrams` promoted from experimental to stable. It has the same signature and behavior as before and is located under the `contextualization` family: `CogniteClient.contextualization().engineeringDiagrams()`.
- Added convenience methods to the `Request` object for easier handling of items (by `externalId` or `id`). You can use `Request.withItemExternalIds(String... externalId)` and `Request.withItemInternalIds(long... internalId)` to add multiple items to the request.
- Added convenience methods for retrieving items by `externalId` and `id`: `client.<resourceType>().retrieve(String... externalId)` and `client.<resourceType>().retrieve(long... internalId)`. This is implemented by the resource types `Assets`, `DataPoints`, `Datasets`, `Events`, `ExtractionPipelines`, `Files`, `Relationships`, `Sequences` and `SequencesRows`. See the sketch below.
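  A brief sketch of the two retrieval paths; the `Request.create()` factory and the `List<Asset>` return type are assumptions here:
  ```java
  import java.util.List;

  // Retrieve assets by external id or internal id with the new convenience methods.
  List<Asset> byExternalId = client.assets().retrieve("pump-01", "pump-02");
  List<Asset> byInternalId = client.assets().retrieve(123L, 456L);

  // Equivalent multi-item request construction with the new helpers.
  Request request = Request.create()
          .withItemExternalIds("pump-01", "pump-02");
  ```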
- The experimental version of `EngineeringDiagrams` is deprecated given the new, stable version.
- The single-item methods `Request.withItemExternalId(String externalId)` and `Request.withItemInternalId(long internalId)` have been deprecated in favour of the new multi-item versions.
- The old, experimental `pnid` API. This API has been replaced by the `EngineeringDiagrams` API.
- Added `3D Models`
- Increased read and write timeouts to match server-side values
- Upsert of `sequenceMetadata` not identifying duplicate entries correctly.
- Experimental streaming support for `events` and `assets`.
- Added `login status` by API key
- Upsert of `sequenceMetadata` not respecting the max number of cells/columns per batch.
- Request retries did not work properly in 1.7.0.
- File binary upload uses PUT instead of POST
- Improved robustness for file binary upload. Add batch-level retries on a broader set of exceptions.
- Improved robustness for file binary upload. Add batch-level retries when the http2 stream is reset.
- Parsing error on file binary download using internal `id`.
- Retry requests on `UnknownHostException`.
- Retry requests on Google Cloud Storage timeouts.
- Support for `dataSetId` in the `Labels` resource type.
- File binary upload robustness.
- `Labels` using the API v1 endpoint.
- Support for `ExtractionPipeline` and `ExtractionPipelineRun` so you can send extractor/pipeline observations and heartbeats to CDF.
- Improved performance of `Relationships.list()` with added support for partitions.
- Support for including the `source` and `target` object of a `relationship` when using `list()` or `retrieve()`.
- Improved performance of `Sequences.list()` with added support for partitions.
- More stability improvements to file binary downloads, in particular in situations with limited bandwidth.
- When trying to upload a `File` without a binary, the SDK could throw an exception if the binary was set to be a `URI`.
- `Relationship.sourceType` was wrongly set equal to `Relationship.targetType`.
- Support for interacting with a non-encrypted API endpoint (via `CogniteClient.enableHttp(true)`). This feature targets testing scenarios. The client defaults to secure communication via https, which is the only way to interact with Cognite Data Fusion.
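  A sketch of pointing a client at a plain-http test target; the localhost URL is illustrative:
  ```java
  // Testing only: direct the client at a local, non-encrypted endpoint.
  CogniteClient testClient = client
          .withBaseUrl("http://localhost:8080") // illustrative local test endpoint
          .enableHttp(true);                    // opt in to plain http; https stays the default
  ```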
- In some cases `baseUrl` would not be respected when using OIDC authentication.
- Data set upsert fails on duplicate `externalId` / `id`.
- Refactor the experimental `interactive P&ID` to the new `engineering diagram` API endpoint. Basically, `client.experimental().pnid().detectAnnotationsPnid()` changes to `client.experimental().engineeringDiagrams().detectAnnotations()`. Please refer to the documentation for more information.
- A performance regression introduced in v1.0.0. Performance should be back now :).
- Streaming reads may not start for very large/high end times.
- Streaming support for reading rows from raw tables. More information in the documentation.
- Support for recursively deleting raw databases and tables.
- `ensureParent` when creating a raw table.
- Lingering threads could keep the client from shutting down in a timely manner.
- More robust file binary download when running very large jobs.
- Improved guard against illegal characters in file names when downloading file binaries.
- Utility methods for converting `Value` to various types. This can be useful when working with CDF.Raw, which represents its columns as `Struct` and `Value`.
- Ability to synchronize multiple hierarchies via `Assets.synchronizeMultipleHierarchies(Collection<Asset> assetHierarchies)` (see the sketch after this list).
- Utility methods for parsing nested `Struct` and `Value` objects. This can be useful when working with CDF.Raw.
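  A compact sketch of the multi-hierarchy call; the helper methods that assemble each hierarchy are hypothetical:
  ```java
  import java.util.ArrayList;
  import java.util.Collection;

  // Assemble the full target state of each hierarchy, then synchronize them in one call.
  Collection<Asset> hierarchies = new ArrayList<>();
  hierarchies.addAll(buildPlantHierarchy()); // hypothetical helper returning a full hierarchy
  hierarchies.addAll(buildShipHierarchy());  // hypothetical helper returning a full hierarchy
  client.assets().synchronizeMultipleHierarchies(hierarchies);
  ```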
- Synchronize asset-hierarchy capability.
- `list()` convenience method that returns all objects for a given resource type.
- User documentation.
- Breaking change: Remove the use of wrapper objects from the data transfer objects (`Asset`, `Event`, etc.). Please refer to the documentation for more information.
- Improved handling of updates for `Relationships`.
- Logback config conflict (issue #37).
- Increased dependency versions to support Java 11
- Fully automated CD pipeline
- Labels are properly replaced when running upsert replace for `assets` and `files`.
- Support for recursive delete for `assets`.
- Repeated annotations when generating interactive P&IDs.
- Populate auth headers for custom CDF hosts.
- Null values in raw rows should not raise exceptions.
- Error when using api key auth in combination with custom host.
- Support for native token authentication (OpenID Connect).
- Error when trying to download a file binary that only has a file header object in CDF.
- Error when creating new `relationship` objects into data sets.
- Error when uploading file binaries with >1k asset links.
- Added support for updating / patching `relationship`.
- Fixed duplicates when listing `files`. The list files partition support has been fixed so that you no longer risk duplicates when manually handling partitions.
- Error when iterating over Raw rows with streams.
- The initial release of the Java SDK.