Kafka Streams
http://kafka.apache.org/documentation/streams
http://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
http://docs.confluent.io/3.0.0/streams/
Getting started
wget http://packages.confluent.io/archive/3.0/confluent-3.0.0-2.11.zip
unzip confluent-3.0.0-2.11.zip

# *** IMPORTANT STEP ***
# The subsequent paths and commands used throughout this quickstart assume that
# you are in the following working directory:
cd confluent-3.0.0/
# Start ZooKeeper. Run this command in its own terminal.
cd confluent-3.0.0/
./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties
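If you want to confirm that ZooKeeper actually came up before moving on, one optional sanity check (not part of the quickstart itself, and assuming a netcat binary is available) is to send it the standard ruok four-letter command; a healthy server answers imok:

# Optional: ask ZooKeeper whether it is running; it replies "imok" when healthy.
echo ruok | nc localhost 2181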
Next we launch the Kafka broker, which will listen on localhost:9092 and connect to the ZooKeeper instance we just started. Since this is also a long-running service, run it in its own terminal.
# Start Kafka. Run this command in its own terminal.
cd confluent-3.0.0/
./bin/kafka-server-start ./etc/kafka/server.properties
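The port and ZooKeeper address mentioned above are not magic; they come from the broker configuration file passed on the command line. As an optional check (my own addition, not part of the quickstart), you can grep the relevant settings:

# Optional: show where the broker's port and ZooKeeper connection are configured.
grep -nE 'zookeeper\.connect|9092' ./etc/kafka/server.properties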
First, we need to create the input topic, named streams-file-input:
./bin/kafka-topics --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 1 \
    --topic streams-file-input
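It is worth verifying that the topic was created with the requested settings; the same kafka-topics tool can describe it (an optional step, not in the original quickstart):

# Optional: confirm the topic exists with one partition and replication factor 1.
./bin/kafka-topics --describe \
    --zookeeper localhost:2181 \
    --topic streams-file-input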
Next, we generate some input data and store it in a local file at /tmp/file-input.txt:
echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > /tmp/file-input.txt
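This file only becomes useful once its lines are published to the streams-file-input topic, which is what a Kafka Streams application (for example the WordCount demo that ships with Kafka) would then consume. A minimal sketch of that step, using the console producer from the same bin/ directory and the flags as they exist in this Kafka version, looks like this:

# Publish the generated lines to the input topic so a Kafka Streams
# application can read them.
cat /tmp/file-input.txt | ./bin/kafka-console-producer --broker-list localhost:9092 --topic streams-file-input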