A tool for distributed, high-volume replication between Apache Kafka clusters based on Kafka Connect. Designed for easy operation in a high-throughput, multi-cluster environment.
- Dynamic Configuration: Uses the Kafka Connect REST API for dynamic API-driven configuration
- Precise Replication: Supports a regex whitelist and an explicit topic whitelist
- Simple Management for Multiple Kafka Clusters: Supports multiple source clusters with one Worker process
- Continuous Ingestion: Continues consuming from the source cluster while flushing and committing offsets
- Built for Dynamic Kafka Clusters: Able to handle topics and partitions being created and deleted in source and destination clusters
- Scalable: Creates a configurable set of worker tasks that are distributed across a Kafka Connect cluster for high performance, even when pushing data over the internet
- Fault tolerant: Includes a monitor thread that looks for task failures and optionally auto-restarts
- Monitoring: Includes custom JMX metrics for production-ready monitoring and alerting
Mirus is built around Apache Kafka Connect, providing SourceConnector and SourceTask implementations optimized for reading data from Kafka source clusters. The MirusSourceConnector runs a KafkaMonitor thread, which monitors the source and destination Apache Kafka cluster partition allocations, looking for changes and applying a configurable topic whitelist. Each task is responsible for a subset of the matching partitions, and runs an independent KafkaConsumer and KafkaProducer client pair to do the work of replicating those partitions.
Tasks can be restarted independently without otherwise affecting a running cluster, are monitored continuously for failure, and are optionally automatically restarted.
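If a task does fail and auto-restart is not enabled, it can be restarted by hand through the standard Kafka Connect REST API without touching the rest of the connector. A minimal sketch, assuming a worker listening on localhost:8083 and a connector named mirus-quickstart-source with a failed task id of 0 (both names are illustrative):

  # Check connector and task state, then restart only the failed task
  > curl localhost:8083/connectors/mirus-quickstart-source/status
  > curl -X POST localhost:8083/connectors/mirus-quickstart-source/tasks/0/restart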
To understand how Mirus distributes work across a cluster of machines, please read the Kafka Connect documentation.
To build a package containing the Mirus jar file and all dependencies, run `mvn package -P all`:
- target/mirus-${project.version}-all.zip
This package can be unzipped for use (see Quick Start).
Maven also builds the following artifacts when you run `mvn package`. These are useful if you need customized packaging for your own environment:
- target/mirus-${project.version}.jar: Primary Mirus jar (dependencies not included)
- target/mirus-${project.version}-run.zip: A package containing the Mirus run control scripts
These instructions assume you have expanded the mirus-${project.version}-all.zip package.
A single Mirus Worker can be started using this helper script.
> bin/mirus-worker-start.sh [worker-properties-file]
- worker-properties-file: Path to the Mirus worker properties file, which configures the Kafka Connect framework. See quickstart-worker.properties for an example.
- --override property=value: Optional command-line override for any item in the Mirus worker properties file. Multiple --override options are supported (similar to the equivalent flag in Kafka); see the example below.
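For example, a worker could be started with ad-hoc overrides instead of editing the properties file. A sketch, assuming the quickstart properties file and two standard Kafka Connect worker settings (rest.port and offset.flush.interval.ms):

  > bin/mirus-worker-start.sh config/quickstart-worker.properties \
      --override rest.port=8084 \
      --override offset.flush.interval.ms=10000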
Mirus includes a simple tool for reading and writing offsets. This can be useful for migration from other replication tools, for debugging, and for offset monitoring in production. The tool supports CSV and JSON input and output.
For detailed usage:
> bin/mirus-offset-tool.sh --help
To run the Quick Start example you will need running Kafka and Zookeeper clusters to work with. We will assume you have a standard Apache Kafka Quickstart test cluster running on localhost. Follow the Kafka Quick Start instructions.
For this tutorial we will set up a Mirus worker instance to mirror the topic test in loop-back mode to another topic in the same cluster. To avoid a conflict, the destination topic name will be set to test.mirror using the destination.topic.name.suffix configuration option.
- Build the full Mirus project using Maven:
  > mvn package -P all
- Unpack the Mirus "all" package:
  > mkdir quickstart; cd quickstart; unzip ../target/mirus-*-all.zip
- Start the quickstart worker using the sample worker properties file:
  > bin/mirus-worker-start.sh config/quickstart-worker.properties
- In another terminal, confirm the Mirus Kafka Connect REST API is running:
  > curl localhost:8083
  {"version":"1.1.0","commit":"fdcf75ea326b8e07","kafka_cluster_id":"xdxNfx84TU-ennOs7EznZQ"}
- Submit a new MirusSourceConnector configuration to the REST API with the name mirus-quickstart-source:
  > curl localhost:8083/connectors/mirus-quickstart-source/config \
      -X PUT \
      -H 'Content-Type: application/json' \
      -d '{
            "name": "mirus-quickstart-source",
            "connector.class": "com.salesforce.mirus.MirusSourceConnector",
            "tasks.max": "5",
            "topics.whitelist": "test",
            "destination.topic.name.suffix": ".mirror",
            "destination.bootstrap.servers": "localhost:9092",
            "consumer.bootstrap.servers": "localhost:9092",
            "consumer.client.id": "mirus-quickstart",
            "consumer.key.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer",
            "consumer.value.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer"
          }'
- Confirm the new connector is running:
  > curl localhost:8083/connectors
  ["mirus-quickstart-source"]
  > curl localhost:8083/connectors/mirus-quickstart-source/status
  {"name":"mirus-quickstart-source","connector":{"state":"RUNNING","worker_id":"1.2.3.4:8083"},"tasks":[],"type":"source"}
- Create source and destination topics test and test.mirror in your Kafka cluster:
  > cd ${KAFKA_HOME}
  > bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic 'test' --partitions 1 --replication-factor 1
  Created topic "test".
  > bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic 'test.mirror' --partitions 1 --replication-factor 1
  Created topic "test.mirror".
- Mirus should detect that the new source and destination topics are available and create a new Mirus Source Task:
  > curl localhost:8083/connectors/mirus-quickstart-source/status
  {"name":"mirus-quickstart-source","connector":{"state":"RUNNING","worker_id":"10.126.22.44:8083"},"tasks":[{"state":"RUNNING","id":0,"worker_id":"10.126.22.44:8083"}],"type":"source"}
Any message you write to the topic test will now be mirrored to test.mirror.
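To see the mirroring in action, you can produce a message and read it back from the mirror topic with the standard Kafka console tools. A sketch, assuming the same single-broker localhost cluster used above:

  > cd ${KAFKA_HOME}
  > echo 'hello mirus' | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
  > bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test.mirror --from-beginning
  hello mirus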
See the documentation for Kafka Connect REST API.
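A few standard Kafka Connect REST calls are useful when operating Mirus; they are shown here against the quickstart connector name from the example above:

  > curl localhost:8083/connectors                                          # list all connectors
  > curl localhost:8083/connectors/mirus-quickstart-source/config           # show current configuration
  > curl -X POST localhost:8083/connectors/mirus-quickstart-source/restart  # restart the connector
  > curl -X DELETE localhost:8083/connectors/mirus-quickstart-source        # remove the connector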
Mirus shares most Worker and Source configuration with the Kafka Connect framework. For general information on configuring the framework, see the Kafka Connect documentation.
Mirus-specific configuration properties are documented in these files:
- Mirus Source Properties: these can be added to the JSON config object posted to the REST API /config endpoint to configure a new MirusSourceConnector instance. In addition, the Kafka Consumer instances created by Mirus Tasks can be configured using a consumer. prefix on the standard Kafka Consumer properties (see the sketch after this list). The equivalent KafkaProducer options are configured in the Mirus Worker Properties file (see below).
- Mirus Worker Properties: these are Mirus extensions to the Kafka Connect configuration, and should be applied to the Worker Properties file provided at startup. The Kafka Producer instances created by Kafka Connect can also be configured using a producer. prefix on the standard Kafka Producer properties.
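As a sketch of the two prefixes: a consumer option goes into the connector JSON, while a producer option goes into the worker properties file. The specific properties and values below are arbitrary examples of standard Kafka client settings:

  In the MirusSourceConnector JSON config:
    "consumer.bootstrap.servers": "localhost:9092",
    "consumer.max.poll.records": "500"

  In the worker properties file:
    producer.compression.type=lz4
    producer.linger.ms=5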
By default, Mirus checks that the destination topic exists in the destination Kafka cluster before starting to replicate data to it. This feature can be disabled by setting the enable.destination.topic.checking config option to false.
As of version 0.2.0, destination topic checking also supports topic re-routing performed by the RegexRouter Single Message Transformation. No other Router transformations are supported, so destination topic checking must be disabled in order to use them.
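For example, a RegexRouter transform that appends a suffix to every topic name can be combined with destination topic checking. A sketch of the relevant fragment of the connector JSON config (the transform alias route and the .mirror suffix are arbitrary):

    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "(.*)",
    "transforms.route.replacement": "$1.mirror"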
To perform a release: mvn release:prepare release:perform
Questions or comments can also be posted on the Mirus GitHub issues page.
Paul Davidson and contributors.
This project uses the Google Java Format.