Commit
docs: Fixed formatting issues
jruaux committed May 7, 2024
1 parent 8d263dc commit c6e8c63
Showing 7 changed files with 165 additions and 165 deletions.
3 changes: 2 additions & 1 deletion docs/guide/src/docs/asciidoc/index.adoc
@@ -11,5 +11,6 @@ include::{includedir}/overview.adoc[]
 include::{includedir}/quickstart.adoc[]
 include::{includedir}/install.adoc[]
 include::{includedir}/sink.adoc[]
-include::{includedir}/source.adoc[]
+include::{includedir}/source-stream.adoc[]
+include::{includedir}/source-keys.adoc[]
 include::{includedir}/resources.adoc[]
3 changes: 2 additions & 1 deletion docs/guide/src/docs/asciidoc/install.adoc
@@ -14,4 +14,5 @@ Download the latest release archive: {link_releases}.
 
 == Manually
 
-Follow the instructions in {link_manual_install}
+Follow the instructions in {link_manual_install}.
+
3 changes: 2 additions & 1 deletion docs/guide/src/docs/asciidoc/overview.adoc
@@ -11,6 +11,7 @@ This guide provides documentation and usage information across the following top
 * <<_docker,Docker Example>>
 * <<_install,Install>>
 * <<_sink,Sink Connector>>
-* <<_source,Source Connector>>
+* <<_source_stream,Stream Source Connector>>
+* <<_source_keys,Keys Source Connector>>
 * <<_resources,Resources>>

6 changes: 3 additions & 3 deletions docs/guide/src/docs/asciidoc/sink.adoc
@@ -1,6 +1,6 @@
-:name: Sink Connector
 [[_sink]]
-= Sink Connector Guide
+:name: Redis Kafka Sink Connector
+= {name}
 
 The {name} consumes records from a Kafka topic and writes the data to Redis.
 
@@ -20,7 +20,7 @@ connector.class = com.redis.kafka.connect.RedisSinkConnector
 The {name} guarantees that records from the Kafka topic are delivered at least once.
 
 [[_sink_tasks]]
-== Multiple tasks
+== Tasks
 
 The {name} supports running one or more tasks.
 You can specify the number of tasks with the `tasks.max` configuration property.
60 changes: 60 additions & 0 deletions docs/guide/src/docs/asciidoc/source-keys.adoc
@@ -0,0 +1,60 @@
:name: Keys Source Connector
[[_source_keys]]
= {name}

The {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
The data structure key will be mapped to the record key, and the value will be mapped to the record value.

**Make sure the Redis database has keyspace notifications enabled** using `notify-keyspace-events = KEA` in `redis.conf` or via `CONFIG SET`.
For more details see {link_redis_notif}.
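
For example, assuming `redis-cli` access to the database, notifications can be enabled at runtime (`CONFIG SET` changes are lost on restart unless persisted with `CONFIG REWRITE`):

[source,console]
----
redis-cli CONFIG SET notify-keyspace-events KEA
----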

[[_source_keys_class]]
== Class Name

The {name} class name is `com.redis.kafka.connect.RedisKeysSourceConnector`.

The corresponding configuration property would be:

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
----

[[_source_keys_delivery]]
== Delivery Guarantees

The {name} does not guarantee data consistency because it relies on Redis keyspace notifications, which have no delivery guarantees.
It is possible for some notifications to be missed, for example in case of network failures.

Also, depending on the type, size, and rate of change of data structures on the source, the connector may not be able to keep up with the change stream.
For example, if a big set is repeatedly updated, the connector needs to read the whole set on each update and transfer it to the destination Kafka topic.
With a big enough set, the connector could fall behind and its internal queue could fill up, leading to updates being dropped.
Some preliminary sizing using Redis statistics and `bigkeys`/`memkeys` is recommended.
If you need assistance, please contact your Redis account team.
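
For example, the `redis-cli --bigkeys` and `--memkeys` scans report the largest keys per data type and are one way to do this preliminary sizing (run against a production-like dataset):

[source,console]
----
redis-cli --bigkeys
redis-cli --memkeys
----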

[[_source_keys_tasks]]
== Tasks

The {name} should only be configured with one task, as keyspace notifications are broadcast to all listeners and cannot be consumed in a round-robin fashion.
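
In practice this means setting `tasks.max` to `1` (its Kafka Connect default) in the connector configuration:

[source,properties]
----
tasks.max = 1
----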

[[_source_keys_redis_client]]
include::{includedir}/_redis_client.adoc[leveloffset=+1]

[[_source_keys_config]]
== Configuration

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
redis.keys.pattern = <glob> <1>
redis.keys.timeout = <millis> <2>
topic = <name> <3>
----
<1> Key pattern to subscribe to.
This is the key portion of the pattern that will be used to listen to keyspace events.
For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
See {link_redis_keys} for pattern details.
<2> Idle timeout in milliseconds.
Duration after which the connector will stop if no activity is encountered.
<3> Name of the destination topic.
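
As a worked example, the following hypothetical configuration (pattern, timeout, and topic values chosen purely for illustration) captures changes to keys matching `foo:*` and publishes them to a `redis-keys` topic:

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisKeysSourceConnector
redis.keys.pattern = foo:*
redis.keys.timeout = 30000
topic = redis-keys
----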

96 changes: 96 additions & 0 deletions docs/guide/src/docs/asciidoc/source-stream.adoc
@@ -0,0 +1,96 @@
:name: Stream Source Connector
[[_source_stream]]
= {name}

The {name} reads from a Redis stream and publishes messages to a Kafka topic.

[[_source_stream_class]]
== Class Name

The {name} class name is `com.redis.kafka.connect.RedisStreamSourceConnector`.

The corresponding configuration property would be:

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
----

[[_source_stream_delivery]]
== Delivery Guarantees

The {name} can be configured to acknowledge stream messages either automatically (at-most-once delivery) or explicitly (at-least-once delivery).
The default is at-least-once delivery.

=== At-Least-Once

In this mode, each stream message is acknowledged after it has been written to the corresponding topic.

[source,properties]
----
redis.stream.delivery = at-least-once
----

=== At-Most-Once

In this mode, stream messages are acknowledged as soon as they are read.

[source,properties]
----
redis.stream.delivery = at-most-once
----

[[_source_stream_tasks]]
== Tasks

Reading from the stream is done through a consumer group, allowing multiple connector tasks, configured via the `tasks.max` property, to consume messages in a round-robin fashion.
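
For example, assuming the stream traffic warrants it, three tasks could share the consumer group as follows; the `${task}` placeholder described below gives each task a distinct consumer name:

[source,properties]
----
tasks.max = 3
redis.stream.consumer.name = consumer-${task}
----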

[[_source_stream_redis_client]]
include::{includedir}/_redis_client.adoc[leveloffset=+1]

[[_source_stream_schema]]
== Message Schema

=== Key Schema

Keys are of type String and contain the stream message ID.

=== Value Schema

The value schema defines the following fields:

[options="header"]
|====
|Name|Schema|Description
|id |STRING |Stream message ID
|stream|STRING |Stream key
|body |Map of STRING|Stream message body
|====
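
As a hypothetical illustration, a message appended with `XADD mystream * field1 value1 field2 value2` would yield a record value along these lines (the message ID here is made up):

[source,json]
----
{
  "id": "1715097600000-0",
  "stream": "mystream",
  "body": {
    "field1": "value1",
    "field2": "value2"
  }
}
----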

[[_source_stream_config]]
== Configuration

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
redis.stream.name = <name> <1>
redis.stream.offset = <offset> <2>
redis.stream.block = <millis> <3>
redis.stream.consumer.group = <group> <4>
redis.stream.consumer.name = <name> <5>
redis.stream.delivery = <mode> <6>
topic = <name> <7>
----

<1> Name of the stream to read from.
<2> {link_stream_msg_id} to start reading from (default: `0-0`).
<3> Maximum {link_xread} wait duration in milliseconds (default: `100`).
<4> Name of the stream consumer group (default: `kafka-consumer-group`).
<5> Name of the stream consumer (default: `consumer-${task}`).
May contain `${task}` as a placeholder for the task id.
For example, `foo${task}` and task `123` => consumer `foo123`.
<6> Delivery mode: `at-least-once`, `at-most-once` (default: `at-least-once`).
<7> Destination topic (default: `${stream}`).
May contain `${stream}` as a placeholder for the originating stream name.
For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
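
Putting it together, a hypothetical configuration that reads the `orders` stream and publishes to a `redis_orders` topic could look like this (all values chosen for illustration):

[source,properties]
----
connector.class = com.redis.kafka.connect.RedisStreamSourceConnector
redis.stream.name = orders
redis.stream.offset = 0-0
redis.stream.block = 100
redis.stream.consumer.group = kafka-consumer-group
redis.stream.consumer.name = consumer-${task}
redis.stream.delivery = at-least-once
topic = redis_${stream}
----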

159 changes: 0 additions & 159 deletions docs/guide/src/docs/asciidoc/source.adoc

This file was deleted.
