Releases: lensesio/stream-reactor
1.2.0
1.1.0
Release Notes
- Upgrade to Kafka 1.1.0
- Added SSL, subscription, partitioning, batching and key selection to the Pulsar source and sink
- Added Elastic6 connector thanks @caiooliveiraeti!
- Added HTTP Basic Auth for the Elasticsearch HTTP client thanks @justinsoong!
- Added a polling timeout on the JMS source connector to avoid high CPU in the source connector poll #373 thanks @matthedude!
- Fixes on the elastic primary key separator thanks @caiooliveiraeti!
- Fix on the MQTT class loader
- Fix on the JMS class loader
- Fix on JMS to close down connections cleanly #363 thanks @matthedude!
- Fix on MQTT to correctly handle authentication
- Moved MongoDB batch size to KCQL; `connect.mongodb.batch.size` is deprecated
- Added `connect.mapping.collection.to.json` to treat maps, lists and sets as JSON when inserting into Cassandra
- Added support for Elastic Pipelines thanks @caiooliveiraeti!
- Moved ReThinkDB batch size to KCQL; `connect.rethink.batch.size` is deprecated
- MQTT source allows full control of matching the topic:
INSERT INTO targetTopic SELECT * FROM mqttTopic ... WITHREGEX=`$THE_REGEX`
- Upgrade Kudu Client to 0.7
- Upgrade Azure documentDB client to 1.16.0
- Upgrade Elastic5 to elastic4s 5.6.5
- Upgrade Elastic6 to elastic4s 6.2.5
- Upgrade Hazelcast client to 3.10
- Upgrade InfluxDB client to 2.9
- Upgrade MongoDB client to 3.6.3
- Upgrade Redis client to 2.9
- Kudu connector now accepts a comma separated list of master addresses
- Added missing `connect.elastic.retry.interval` to elastic5 and elastic6
- Added a default value set property to Cassandra to allow DEFAULT UNSET to be added on insert. Columns omitted from maps default to null; alternatively, if set to UNSET, the pre-existing value is preserved
- Cassandra source batch size is now in KCQL; `connect.cassandra.batch.size` is deprecated
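Several items above move batch sizing out of standalone properties and into the KCQL statement itself. A before/after sketch for a MongoDB sink is shown below; the topic and collection names are hypothetical, and the exact `BATCH` clause spelling should be checked against the KCQL reference:

```properties
# before (deprecated standalone property)
# connect.mongodb.batch.size=100

# after: batch size expressed inside the KCQL statement (names are illustrative)
connect.mongo.kcql=INSERT INTO myCollection SELECT * FROM myTopic BATCH=100
```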
Release 1.0.0
Kafka 1.0.0 support
0.4.0
Documentation available at http://lenses.stream
This release is for Kafka 0.11
- Apache Pulsar - New Apache Pulsar source and sink Kafka connectors!
- FTP - Add FTPS support via configuration `ftp.protocol`, either `ftp` (default) or `ftps`
- MQTT source - Fix High CPU, thanks @masahirom
- InfluxDB sink - Supports Dates and BigDecimals
- Redis sink - Allow multiple primary keys
- Kudu sink - Improve logging
- JMS source - Support transacted queues #285 thanks @matthedude
- Cassandra - Upgrade to Cassandra driver 3.3.0 and refactor tests
- Cassandra sink - Added DELETE functionality on null payloads, thanks @sandonjacobs
- Cassandra sink - Fix writing multiple topics to the same table in Cassandra #284
- Cassandra source - Configurable timespan queries. You can now control the timespan the Connector will query for
- Cassandra source - Allow setting start poll timestamp
- Cassandra source - Allow setting initial query timestamp
- kafka-connect-common - Support handling primary keys with doc strings, thanks @medvekoma
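The FTPS switch noted above is a single configuration property. A minimal sketch, assuming the property name as printed in the note (host and credentials omitted):

```properties
# select the transport protocol: ftp (default) or ftps
ftp.protocol=ftps
```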
MQTT enhancements
Records pulled from MQTT can be sent to Kafka as JSON.
One or more fields can also be picked from the incoming MQTT message and used as the key for the Kafka message.
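Assuming KCQL's `WITHKEY` clause is the mechanism referred to, a key-selection statement might be sketched as follows (topic and field names are hypothetical):

```sql
INSERT INTO kafkaTopic SELECT * FROM /sensors/+/temperature WITHKEY(deviceId)
```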
kudu-fix
test release for kudu fix
Release 0.3.0
September-2017 release of (stream-reactor) for Kafka 0.11 and Confluent 3.3
- Added MQTT Sink and wildcard support
- JMS and MQTT connectors new KCQL support for WITHCONVERTERS and WITHTYPE
- Added FLUSH MODE to Kudu. Thanks! @patsak
- Upgrade CoAP to 2.0.0-M4.
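A JMS or MQTT source KCQL statement combining the new clauses might look like the following sketch; the queue name and converter class are purely illustrative, and the exact keyword spellings should be checked against the KCQL reference:

```sql
INSERT INTO kafkaTopic SELECT * FROM myQueue WITHTYPE QUEUE WITHCONVERTER=`com.example.MyConverter`
```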
Check 0.2.6 release notes below to find the complete CHANGELOG
Release 0.2.6
September-2017 release of (stream-reactor) for Kafka 0.10.2.0 and Confluent 3.2.2
Features
- Added MQTT Sink
- Upgrade to Confluent 3.2.2
- Upgrade to KCQL 2x
- Add CQL generator to Cassandra source
- Add KCQL INCREMENTALMODE support to the Cassandra source; bulk mode and the timestamp column type are now taken from KCQL
- Support for setting key and truststore type on Cassandra connectors
- Added token based paging support for Cassandra source
- Added default bytes converter to JMS Source
- Added default connection factory to JMS Source
- Added support for SharedDurableConsumers to JMS Connectors
- Upgraded JMS Connector to JMS 2.0
- Moved to Elastic4s 2.4
- Added Elastic5s with TCP, TCP+XPACK and HTTP client support
- Upgrade Azure Documentdb to 1.11.0
- Added optional progress counter to all connectors; it can be enabled with `connect.progress.enabled` and periodically logs the number of messages processed
- Added authentication and TLS to ReThink Connectors
- Added TLS support for ReThinkDB, and a batch size option to the source for draining the internal queues
- Upgrade Kudu Client to 1.4.0
- Support for dates in Elastic Indexes and custom document types
- Upgrade Connect CLI to 1.0.2 (Renamed to connect-cli)
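The progress counter is opt-in per connector. A sketch of enabling it in any connector's properties file (off by default):

```properties
# periodically log the number of records processed
connect.progress.enabled=true
```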
Bug Fixes
- Fixes for high CPU on CoAP source
- Fixes for high CPU on Cassandra source
- Fixed Avro double fields mapping to Kudu columns
- Fixed the JMS properties converter producing an invalid schema when extracting properties
Misc
- Refactored Cassandra Tests to use only one embedded instance
- Removed unused batch size and bucket size options from Kudu; they are taken from KCQL
- Removed unused batch size option from DocumentDb
- Rename Azure DocumentDb `connect.documentdb.database` to `connect.documentdb.db`
- Rename Azure DocumentDb `connect.documentdb.database.create` to `connect.documentdb.db.create`
- Rename Cassandra Source `connect.cassandra.source.kcql` to `connect.cassandra.kcql`
- Rename Cassandra Source `connect.cassandra.source.timestamp.type` to `connect.cassandra.timestamp.type`
- Rename Cassandra Source `connect.cassandra.source.import.poll.interval` to `connect.cassandra.import.poll.interval`
- Rename Cassandra Source `connect.cassandra.source.error.policy` to `connect.cassandra.error.policy`
- Rename Cassandra Source `connect.cassandra.source.max.retries` to `connect.cassandra.max.retries`
- Rename Cassandra Source `connect.cassandra.source.retry.interval` to `connect.cassandra.retry.interval`
- Rename Cassandra Sink `connect.cassandra.sink.kcql` to `connect.cassandra.kcql`
- Rename Cassandra Sink `connect.cassandra.sink.error.policy` to `connect.cassandra.error.policy`
- Rename Cassandra Sink `connect.cassandra.sink.max.retries` to `connect.cassandra.max.retries`
- Rename Cassandra Sink `connect.cassandra.sink.retry.interval` to `connect.cassandra.retry.interval`
- Rename Coap Source `connect.coap.bind.port` to `connect.coap.port`
- Rename Coap Sink `connect.coap.bind.port` to `connect.coap.port`
- Rename Coap Source `connect.coap.bind.host` to `connect.coap.host`
- Rename Coap Sink `connect.coap.bind.host` to `connect.coap.host`
- Rename MongoDb `connect.mongo.database` to `connect.mongo.db`
- Rename MongoDb `connect.mongo.sink.batch.size` to `connect.mongo.batch.size`
- Rename Druid `connect.druid.sink.kcql` to `connect.druid.kcql`
- Rename Druid `connect.druid.sink.conf.file` to `connect.druid.kcql`
- Rename Druid `connect.druid.sink.write.timeout` to `connect.druid.write.timeout`
- Rename Elastic `connect.elastic.sink.kcql` to `connect.elastic.kcql`
- Rename HBase `connect.hbase.sink.column.family` to `connect.hbase.column.family`
- Rename HBase `connect.hbase.sink.kcql` to `connect.hbase.kcql`
- Rename HBase `connect.hbase.sink.error.policy` to `connect.hbase.error.policy`
- Rename HBase `connect.hbase.sink.max.retries` to `connect.hbase.max.retries`
- Rename HBase `connect.hbase.sink.retry.interval` to `connect.hbase.retry.interval`
- Rename Influx `connect.influx.sink.kcql` to `connect.influx.kcql`
- Rename Influx `connect.influx.connection.user` to `connect.influx.username`
- Rename Influx `connect.influx.connection.password` to `connect.influx.password`
- Rename Influx `connect.influx.connection.database` to `connect.influx.db`
- Rename Influx `connect.influx.connection.url` to `connect.influx.url`
- Rename Kudu `connect.kudu.sink.kcql` to `connect.kudu.kcql`
- Rename Kudu `connect.kudu.sink.error.policy` to `connect.kudu.error.policy`
- Rename Kudu `connect.kudu.sink.retry.interval` to `connect.kudu.retry.interval`
- Rename Kudu `connect.kudu.sink.max.retries` to `connect.kudu.max.retries`
- Rename Kudu `connect.kudu.sink.schema.registry.url` to `connect.kudu.schema.registry.url`
- Rename Redis `connect.redis.connection.password` to `connect.redis.password`
- Rename Redis `connect.redis.sink.kcql` to `connect.redis.kcql`
- Rename Redis `connect.redis.connection.host` to `connect.redis.host`
- Rename Redis `connect.redis.connection.port` to `connect.redis.port`
- Rename ReThink `connect.rethink.source.host` to `connect.rethink.host`
- Rename ReThink `connect.rethink.source.port` to `connect.rethink.port`
- Rename ReThink `connect.rethink.source.db` to `connect.rethink.db`
- Rename ReThink `connect.rethink.source.kcql` to `connect.rethink.kcql`
- Rename ReThink Sink `connect.rethink.sink.host` to `connect.rethink.host`
- Rename ReThink Sink `connect.rethink.sink.port` to `connect.rethink.port`
- Rename ReThink Sink `connect.rethink.sink.db` to `connect.rethink.db`
- Rename ReThink Sink `connect.rethink.sink.kcql` to `connect.rethink.kcql`
- Rename JMS `connect.jms.user` to `connect.jms.username`
- Rename JMS `connect.jms.source.converters` to `connect.jms.converters`
- Remove JMS `connect.jms.converters` and replace with KCQL `WITHCONVERTERS`
- Remove JMS `connect.jms.queues` and replace with KCQL `WITHTYPE QUEUE`
- Remove JMS `connect.jms.topics` and replace with KCQL `WITHTYPE TOPIC`
- Rename Mqtt `connect.mqtt.source.kcql` to `connect.mqtt.kcql`
- Rename Mqtt `connect.mqtt.user` to `connect.mqtt.username`
- Rename Mqtt `connect.mqtt.hosts` to `connect.mqtt.connection.hosts`
- Remove Mqtt `connect.mqtt.converters` and replace with KCQL `WITHCONVERTERS`
- Remove Mqtt `connect.mqtt.queues` and replace with KCQL `WITHTYPE=QUEUE`
- Remove Mqtt `connect.mqtt.topics` and replace with KCQL `WITHTYPE=TOPIC`
- Rename Hazelcast `connect.hazelcast.sink.kcql` to `connect.hazelcast.kcql`
- Rename Hazelcast `connect.hazelcast.sink.group.name` to `connect.hazelcast.group.name`
- Rename Hazelcast `connect.hazelcast.sink.group.password` to `connect.hazelcast.group.password`
- Rename Hazelcast `connect.hazelcast.sink.cluster.members` to `connect.hazelcast.cluster.members`
- Rename Hazelcast `connect.hazelcast.sink.batch.size` to `connect.hazelcast.batch.size`
- Rename Hazelcast `connect.hazelcast.sink.error.policy` to `connect.hazelcast.error.policy`
- Rename Hazelcast `connect.hazelcast.sink.max.retries` to `connect.hazelcast.max.retries`
- Rename Hazelcast `connect.hazelcast.sink.retry.interval` to `connect.hazelcast.retry.interval`
- Rename VoltDB `connect.volt.sink.kcql` to `connect.volt.kcql`
- Rename VoltDB `connect.volt.sink.connection.servers` to `connect.volt.servers`
- Rename VoltDB `connect.volt.sink.connection.user` to `connect.volt.username`
- Rename VoltDB `connect.volt.sink.connection.password` to `connect.volt.password`
- Rename VoltDB `connect.volt.sink.error.policy` to `connect.volt.error.policy`
- Rename VoltDB `connect.volt.sink.max.retries` to `connect.volt.max.retries`
- Rename VoltDB `connect.volt.sink.retry.interval` to `connect.volt.retry.interval`
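The renames above are mechanical: the `source`/`sink` (or `connection`/`bind`) segment is dropped from the property key while the value is unchanged. A before/after sketch for a Cassandra sink (the KCQL statement itself is illustrative):

```properties
# before (0.2.5 and earlier)
# connect.cassandra.sink.kcql=INSERT INTO orders SELECT * FROM orders-topic

# after (this release)
connect.cassandra.kcql=INSERT INTO orders SELECT * FROM orders-topic
```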
Pre-release 0.2.5-alpha - Hot fixes for HBase
Release of the HBase sink connector only, addressing ByteBuffer conversions for #204
Stream-Reactor v0.2.5
- Added Azure DocumentDB Sink Connector
- Added JMS Source Connector.
- Added UPSERT to Elastic Search
- Support Confluent 3.2 and Kafka 0.10.2.
- Cassandra improvements: `withunwrap`
- Upgrade to Kudu 1.0 and CLI 1.0
- Add ingest_time to CoAP Source
- InfluxDB bug fixes for tags and field selection.
- Added Schemaless Json and Json with schema support to JMS Sink.
- Support for Cassandra data type of timestamp in the Cassandra Source for timestamp tracking.
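For the timestamp tracking item above, a Cassandra source KCQL statement might be sketched as follows; the table, topic and column names are hypothetical, and the `INCREMENTALMODE` clause is the one introduced in the 0.2.6 notes:

```sql
INSERT INTO events-topic SELECT * FROM events PK created_at INCREMENTALMODE=TIMESTAMP
```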