This repo contains the source code needed for Confluent Operations Training for Apache Kafka.
Start the cluster:

```
docker-compose up -d
```

Stop the cluster:

```
docker-compose down
```

Or destroy your cluster completely (losing all topic data):

```
docker-compose down -v
```
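As a sanity check after starting the cluster (an added step, not part of the original lab), you can list the running containers and their states:

```shell
# Show the state of every service defined in docker-compose.yml
docker-compose ps
```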
Enter the `tools` container to run commands against the Kafka cluster:

```
docker-compose exec tools /bin/bash
```

Each command can be run from within the `tools` container.
Create a topic `my-topic`:

```
kafka-topics \
  --bootstrap-server kafka-1:9092 \
  --create \
  --topic my-topic \
  --replication-factor 3 \
  --partitions 2
```
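To confirm the topic came out as intended, you can describe it (an added verification step, using the same `kafka-topics` tool and bootstrap server as above):

```shell
# Show partition count, replication factor, and replica assignments
kafka-topics \
  --bootstrap-server kafka-1:9092 \
  --describe \
  --topic my-topic
```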
Create a consumer with `group.id=my-group`:

```
kafka-console-consumer \
  --bootstrap-server kafka-3:9092 \
  --group my-group \
  --topic my-topic \
  --from-beginning
```
Create a producer that produces to `my-topic`:

```
kafka-console-producer \
  --broker-list kafka-1:9092,kafka-2:9092 \
  --topic my-topic
```
Run a producer performance test against topic `my-topic`:

```
kafka-producer-perf-test \
  --topic my-topic \
  --num-records 1000000 \
  --throughput 10000 \
  --record-size 1000 \
  --producer-props \
    bootstrap.servers=kafka-2:9092
```
How might you do a consumer performance test, I wonder?
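One possible answer: Kafka ships a companion tool, `kafka-consumer-perf-test`. A sketch, with flags assumed by analogy to the producer example above (run `kafka-consumer-perf-test --help` inside the `tools` container to confirm the options in your version):

```shell
# Consume 1,000,000 messages from my-topic and report throughput
kafka-consumer-perf-test \
  --bootstrap-server kafka-1:9092 \
  --topic my-topic \
  --messages 1000000
```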
- Enter the broker host `kafka-1`:

```
docker-compose exec kafka-1 /bin/bash
```

- Take a look at `server.properties`:

```
root@kafka-1: less /etc/kafka/server.properties
```
This is just an example broker configuration file. For complicated reasons, the configuration file the container actually uses is called `kafka.properties` and is generated from environment variables in `docker-compose.yml`.
- Take a look at the `docker-compose.yml` environment variables and compare them to `kafka.properties`:

```
root@kafka-1: less /etc/kafka/kafka.properties
```
- Other components of the cluster have similar configuration files. Explore them, too! Look up what the configuration properties do in more detail in the Confluent docs.
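To illustrate how that generation works: Confluent's Kafka Docker images translate environment variables prefixed with `KAFKA_` into broker properties by lower-casing the name and turning underscores into dots (a hypothetical snippet below; the exact rules are in Confluent's Docker image documentation):

```yaml
# docker-compose.yml (illustrative snippet)
environment:
  KAFKA_BROKER_ID: 1
  KAFKA_LOG_RETENTION_HOURS: 168
```

would appear in `/etc/kafka/kafka.properties` as:

```properties
broker.id=1
log.retention.hours=168
```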
Open up Google Chrome and go to `localhost:9021` to monitor your cluster with Confluent Control Center!
This repo contains a `./data` folder, which is mapped to the `/data` folder inside the `tools` container. This means you can create projects inside the `./data` folder on your local machine with your favorite IDE and then run that code from within the `tools` container to interact with the Kafka brokers. For example, you can write a Python producer that uses the C-based `librdkafka` library rather than the native Java library. Create your own `producer.py` file in `./data`, then run your app from within the `tools` container:
```
pip install confluent-kafka
python /data/producer.py
```
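A minimal `producer.py` sketch using the `confluent-kafka` package (the broker address and topic name are assumed from the console examples above; adjust for your cluster):

```python
# producer.py -- minimal sketch using confluent-kafka (librdkafka underneath)
from confluent_kafka import Producer

# Broker address assumed from the earlier console examples
producer = Producer({"bootstrap.servers": "kafka-1:9092"})

def on_delivery(err, msg):
    """Report whether each message reached the broker."""
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

for i in range(10):
    # produce() is asynchronous; messages are queued locally and batched
    producer.produce("my-topic", value=f"message {i}".encode(), callback=on_delivery)
    producer.poll(0)  # serve any pending delivery callbacks

producer.flush()  # block until all queued messages are delivered
```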
- Training Page - Start with the free introductory course for a great conceptual foundation of Kafka!
- Confluent Platform quickstart - This is a great hands-on way to get started with the basics!
- Kafka Connect example - Kafka Connect is the best way to get data in and out of Kafka!
- Many other awesome examples
- Ansible playbook - Automate configuration!
- Configurations! - So many configurations! Become friends with the configurations. Brokers. Consumers. Producers. Topics. Oh my!