Showing 7 changed files with 165 additions and 165 deletions.

@@ -0,0 +1,60 @@
:name: Keys Source Connector
[[_source_keys]]
= {name}

The {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
The data structure key is mapped to the record key, and the value is mapped to the record value.

**Make sure the Redis database has keyspace notifications enabled** using `notify-keyspace-events = KEA` in `redis.conf` or via `CONFIG SET`.
For more details see {link_redis_notif}.

[[_source_keys_class]]
== Class Name

The {name} class name is `com.redis.kafka.connect.RedisKeysSourceConnector`.

The corresponding configuration property would be:

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
----

[[_source_keys_delivery]]
== Delivery Guarantees

The {name} does not guarantee data consistency because it relies on Redis keyspace notifications, which have no delivery guarantees.
It is possible for some notifications to be missed, for example in case of network failures.

Also, depending on the type, size, and rate of change of data structures on the source, the connector may not be able to keep up with the change stream.
For example, if a big set is repeatedly updated, the connector has to read the whole set on each update and transfer it to Kafka.
With a big enough set the connector could fall behind, and the internal queue could fill up, leading to updates being dropped.
Some preliminary sizing using Redis statistics and `bigkeys`/`memkeys` is recommended.
If you need assistance, please contact your Redis account team.

[[_source_keys_tasks]]
== Tasks

The {name} should only be configured with one task, as keyspace notifications are broadcast to all listeners and cannot be consumed in a round-robin fashion.
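
For example, a minimal sketch that pins the connector to a single task using the standard Kafka Connect `tasks.max` property (the connector-specific properties are described in the Configuration section below):

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
tasks.max = 1
----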

[[_source_keys_redis_client]]
include::{includedir}/_redis_client.adoc[leveloffset=+1]

[[_source_keys_config]]
== Configuration

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
redis.keys.pattern = <glob> <1>
redis.keys.timeout = <millis> <2>
topic = <name> <3>
----
<1> Key pattern to subscribe to.
This is the key portion of the pattern used to listen for keyspace events.
For example, `foo:*` translates to pub/sub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
See {link_redis_keys} for pattern details.
<2> Idle timeout in milliseconds.
Duration after which the connector stops if no activity is encountered.
<3> Name of the destination topic.
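
As an illustration, here is a complete configuration sketch with example values (the pattern, timeout, and topic name are placeholders to adapt to your environment):

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
redis.keys.pattern = foo:*
redis.keys.timeout = 30000
topic = redis-keys
----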

@@ -0,0 +1,96 @@
:name: Stream Source Connector
[[_source_stream]]
= {name}

The {name} reads messages from a Redis stream and publishes them to a Kafka topic.

[[_source_stream_class]]
== Class Name

The {name} class name is `com.redis.kafka.connect.RedisStreamSourceConnector`.

The corresponding configuration property would be:

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
----

[[_source_stream_delivery]]
== Delivery Guarantees

The {name} can be configured to acknowledge stream messages either automatically (at-most-once delivery) or explicitly (at-least-once delivery).
The default is at-least-once delivery.

=== At-Least-Once

In this mode, each stream message is acknowledged after it has been written to the corresponding topic.

[source,properties]
----
redis.stream.delivery = at-least-once
----

=== At-Most-Once

In this mode, stream messages are acknowledged as soon as they are read.

[source,properties]
----
redis.stream.delivery = at-most-once
----

[[_source_stream_tasks]]
== Tasks

Reading from the stream is done through a consumer group, so that multiple connector tasks, configured via `tasks.max`, can consume messages in a round-robin fashion.
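
For example, a sketch of a configuration that runs three tasks against the same stream (the stream name `orders` is illustrative; with the defaults described in the Configuration section below, each task joins the same consumer group under its own `consumer-${task}` name):

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
tasks.max = 3
redis.stream.name = orders
----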

[[_source_stream_redis_client]]
include::{includedir}/_redis_client.adoc[leveloffset=+1]

[[_source_stream_schema]]
== Message Schema

=== Key Schema

Keys are of type String and contain the stream message ID.

=== Value Schema

The value schema defines the following fields:

[options="header"]
|====
|Name  |Schema        |Description
|id    |STRING        |Stream message ID
|stream|STRING        |Stream key
|body  |Map of STRING |Stream message body
|====

[[_source_stream_config]]
== Configuration

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
redis.stream.name = <name> <1>
redis.stream.offset = <offset> <2>
redis.stream.block = <millis> <3>
redis.stream.consumer.group = <group> <4>
redis.stream.consumer.name = <name> <5>
redis.stream.delivery = <mode> <6>
topic = <name> <7>
----
<1> Name of the stream to read from.
<2> {link_stream_msg_id} to start reading from (default: `0-0`).
<3> Maximum {link_xread} wait duration in milliseconds (default: `100`).
<4> Name of the stream consumer group (default: `kafka-consumer-group`).
<5> Name of the stream consumer (default: `consumer-${task}`).
May contain `${task}` as a placeholder for the task id.
For example, `foo${task}` with task `123` yields consumer `foo123`.
<6> Delivery mode: `at-least-once` or `at-most-once` (default: `at-least-once`).
<7> Destination topic (default: `${stream}`).
May contain `${stream}` as a placeholder for the originating stream name.
For example, `redis_${stream}` with stream `orders` yields topic `redis_orders`.
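
Putting it together, a configuration sketch with illustrative values: the stream name `orders` is a placeholder, the offset, block, consumer group, consumer name, and delivery settings repeat the documented defaults, and the topic reuses the `redis_${stream}` example above.

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
redis.stream.name = orders
redis.stream.offset = 0-0
redis.stream.block = 100
redis.stream.consumer.group = kafka-consumer-group
redis.stream.consumer.name = consumer-${task}
redis.stream.delivery = at-least-once
topic = redis_${stream}
----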

This file was deleted.