
If you have a question that isn't listed below, please create an issue; we will respond and add it to this document if necessary.

Q: How to get current MongoShake version?

A: Run ./collector -version.

Q: Which logs need to be processed by the users?

A: Error logs [EROR] and critical logs [CRIT] must be resolved by users, while [DEBG], [INFO] and [WARN] logs can be ignored.

Q: What account permissions does MongoShake require?

A: For full sync, MongoShake needs read permission on every source database. For incremental sync, it needs read permission on the local database and write permission on the mongoshake database.

Q: How to solve the "Oplog Tailer initialize failed" error?

A: If the error mentions the syncer, please check whether the source database can be connected with the mongo command. If the error mentions a worker, please check your tunnel configuration.

Q: How to solve the "Oplog Tailer initialize failed: no reachable servers" error?

A: First, check that your MongoDB is reachable. This error also happens if you configure only a single node in mongo_urls. We highly recommend configuring the complete MongoDB address list, including primary, secondary and hidden nodes, in mongo_urls for both replicaSet and sharding; but if you insist on using a single node, please set mongo_connect_mode = standalone, which has been supported since v2.0.6.
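For example, a minimal single-node setup might look like this sketch in collector.conf (host, port and credentials are placeholders):

mongo_urls = mongodb://user:[email protected]:20011
mongo_connect_mode = standalone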

Q: How to solve the "error type[*mgo.QueryError] error[no such cmd: applyOps]" error?

A: The applyOps command used to replay DDL is not supported for sharding.

Q: How to solve the "Oplog Tailer initialize failed: no oplog ns in mongo" error?

A: This is usually a problem of insufficient account permissions, so please check your permission on the oplog collection. If the source is sharding, the account should be added to every shard because there is no local database in mongos. When the source is sharding, mongo_urls should list the shard addresses separated by semicolons (;), e.g., mongo_urls: mongodb://user1:[email protected]:20011,10.1.1.2:20112;mongodb://user2:[email protected]:20011,10.1.2.2:20112. Since v2.0.6, MongoShake doesn't throw this error when the sync mode is full sync (sync_mode = document).

Q: How to solve the "target mongo server connect failed: no reachable servers" error?

A: First, users should check whether the target MongoDB can be connected. Second, the target MongoDB role must be primary; hidden, secondary or other roles will not work. Please note: the password shouldn't contain '@' and the username shouldn't contain ':'.

Q: How to solve the "oplog syncer internal error: current starting point[6672853605600460800] is bigger than the newest" error?

A: It means the newest oplog timestamp on the source is smaller than the given starting point. The starting point is a uint64 whose high 32 bits hold the Unix timestamp in seconds, so use 6672853605600460800 >> 32 to convert it and compare with the newest oplog timestamp. Generally speaking, the value of context.start_position in your configuration is too big.
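As a quick sanity check, the shift can be done in a few lines of Go (a standalone sketch using the value from the error message above):

package main

import (
	"fmt"
	"time"
)

func main() {
	// The starting point from the error message: the high 32 bits hold
	// the Unix timestamp in seconds, the low 32 bits hold a counter.
	const startPoint uint64 = 6672853605600460800
	seconds := int64(startPoint >> 32)
	fmt.Println(seconds)                     // 1553644800
	fmt.Println(time.Unix(seconds, 0).UTC()) // 2019-03-27 00:00:00 +0000 UTC
}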

Q: How to solve the duplicate key error?

A: Set replayer.executor.insert_on_dup_update and replayer.executor.upsert to true. When the target MongoDB is sharding, we do not suggest enabling these two parameters, which may raise an error.
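For example, in collector.conf (as noted above, not recommended when the target is sharding):

replayer.executor.insert_on_dup_update = true
replayer.executor.upsert = true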

Q: How to solve the "ns {xx yy} to be synced already exists in dest mongodb" error?

A: This error is raised when the given collection already exists on the target MongoDB. Users should check and remove the collection, or enable the replayer.collection_drop option to drop the target collection before full sync. If a checkpoint already exists, which normally means MongoShake will run incremental sync only, users should also check whether the oldest oplog is newer than the checkpoint; if so, MongoShake will still run full sync first, so this error may be raised.

Q: How to solve the "An upsert on a sharded collection must contain the shard key"

A: Please refer to the duplicate key question above.

Q: How to solve the "Reserved characters such as ':' must be escaped according RFC 2396. An IPv6 address literal must be enclosed in '[' and ']' according to RFC 2732" error when running the comparison script?

A: The MongoDB address (--src or --dst) should start with mongodb://, e.g., --src=mongodb://username:password@primaryA,secondaryB,secondaryC.

Q: Does MongoShake support MongoDB version 2.4?

A: No. MongoDB versions below 3.0 are not supported. MongoDB 4.0 without transactions is already supported; in the next MongoShake version (v1.6.0), we will support syncing transactions.

Q: Does MongoShake support syncing data between different versions and different MongoDB types like replicaSet and sharding?

A: Yes to both. But the shard key should be added on the target side when the target MongoDB type is sharding.

Q: Does MongoShake support syncing views?

A: No, the system.views collection will be filtered out.

Q: If sync.mode is all, will MongoShake run full sync again after a restart?

A: If a checkpoint exists and is valid, which means the oldest oplog is older than the checkpoint, MongoShake will only run incremental sync. If not, MongoShake will run full sync again, followed by incremental sync. So if users still want to run full sync while a checkpoint exists, the checkpoint collection (default is mongoshake.ckpt_default) should be deleted manually.
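If you need to do that, a minimal mongo shell session on the checkpoint MongoDB might look like this (assuming the default database and collection names):

use mongoshake
db.ckpt_default.drop()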

Q: What does "CheckpointOperation updated is not suitable. lowest [0]. current [xxxx]. reason : no candidates ack values found" log means?

A: This usually happens when MongoShake just starts and there is no Oplog generated on the source MongoDB.

Q: What does "Conf.Options check failed: replication worker should be equal to count of mongo_urls while multi-sources (shard)" mean?

A: The number of shards must equal the worker number. Since v2.0.4 this is only a warning, so users can ignore it.

Q: What does "Conf.Options check failed: storage server should be configured while using mongo shard servers" means?

A: context.storage.url should be set to the config server (mongodb-cs) address when the source type is sharding.

Q: How to solve the "oplog syncer internal error: get next oplog failed. release oplogsIterator, invalid cursor" error?

A: It usually means the cursor used to scan the source oplog collection has timed out, so MongoShake will release this cursor and try to rebuild it. Make sure the source MongoDB and the network are OK, then restart MongoShake.

Q: Will MongoShake synchronize the admin database?

A: No. Both the "admin" and "local" will not be synchronized.

Q: If MongoShake encounters an error oplog, will it skip this oplog and continue to write the following oplogs?

A: No. This oplog will be retried, and the error thrown, until it succeeds.

Q: Does MongoShake support syncing sharding?

A: Yes. But the balancer must be closed on the source database before syncing to prevent data from moving between different shards.

Q: Does MongoShake support syncing DDL?

A: Yes, see the replayer.dml_only option. But DDL is not an idempotent operation, and an oplog may be replayed again after a failure, so enabling DDL may cause problems in recent versions. We will improve this later.

Q: Does MongoShake support resuming from a breakpoint? For example, if MongoShake exits abnormally, will some data be lost after a restart?

A: Yes, MongoShake supports resuming from a breakpoint based on its checkpoint mechanism. Every time it starts, it reads the checkpoint, which is a timestamp marking how much data has already been replayed, and then pulls data from the source beginning with this timestamp. So it won't lose data after a restart.

Q: Why is data synchronization slow?

A: In previous versions, MongoShake sent the data once fetcher.buffer_capacity was full. In v1.4, MongoShake added a flush mechanism so that if no data is fetched within syncer.reader.buffer_time seconds, the sending buffer is flushed. In this way, MongoShake solves the problem that data inserted into the source MongoDB is received on the target only later. However, there still exists a problem: if data continues to be written but the write rate is very slow, for example only 2 or 3 documents within syncer.reader.buffer_time seconds, the synchronization speed is still slow. There are two ways to solve this problem (see the configuration sketch after the list):

  1. Decrease fetcher.buffer_capacity.
  2. Decrease syncer.reader.buffer_time to 1.
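For example, in collector.conf (the values below are illustrative starting points, not universal recommendations; the default fetcher.buffer_capacity is 256, as listed in the memory section later on this page):

fetcher.buffer_capacity = 64
syncer.reader.buffer_time = 1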

Q: Where does MongoShake fetch the oplog from? Master or slave?

A: MongoShake fetches oplogs from a slave by default, so it's better to add all nodes, including master and slaves, into mongo_urls.

Q: Should I enable upsert when the target mongodb is mongos?

A: If upsert is enabled, the oplog must include the shard key; otherwise, mongos will broadcast the oplog among different shards when the shard key doesn't exist but _id does.

Q: I find the single-oplog delay is about 5 to 6 seconds, which is a bit large; how to solve this problem?

A: We released the flush strategy in v1.4.2 so that users can set the flush interval via the syncer.reader.buffer_time configuration. The problem is that there is a bug in the mgo driver that makes the timeout setting useless, so we added an external timeout strategy. One thing should be mentioned: if you set syncer.reader.buffer_time bigger than mgo's default timeout, you won't get what you expect, because the final timeout is min{syncer.reader.buffer_time, mgo's default timeout}.

Q: Does MongoShake support full backup?

A: No. MongoShake only fetches oplogs and syncs them, which is incremental replication. So if some early oplogs have already been lost and users want to replicate all data, they need to make a full backup first and then start MongoShake to run incremental replication. Here are the example steps (see the configuration sketch after the list):

  • Check whether early oplogs have already been lost. If not, set context.start_position to an early enough time like 1970-01-01T00:00:01Z, which is the earliest oplog fetching time expressed in UTC, then start MongoShake.
  • If early oplogs have been lost, do a full backup with tools like the official mongodump and mongorestore commands. As an example, assuming the full backup runs from 2018-09-04T12:13:14Z to 2018-09-04T18:00:00Z, users need to start MongoShake with context.start_position equal to or less than 2018-09-04T12:13:14Z (it's acceptable to set start_position to any earlier time because DML is idempotent).
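For the second case above, the relevant collector.conf line would look like this (using the example backup start time):

context.start_position = 2018-09-04T12:13:14Z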

Q: What does the "oplog collection capped error, users should fix it manually" error mean?

A: It means a collection capped error happened during syncing, so MongoShake stops syncing to guarantee data correctness; users must solve it themselves. Generally speaking, it happens when the oplog collection size is too small or the MongoShake reading speed is slower than the oplog generating speed.

Q: What does "[CRIT] [oplog.Hash:84] Hash object is UNKNOWN. use default value 0" log means?

A: In MongoShake, oplogs are hashed by _id or namespace. If the type is not bson.ObjectId, string or int, MongoShake uses the default value 0 to hash the oplog, and this message is printed. This does not affect the data synchronization result, but all these unknown oplogs go to the same worker when shard_key is ObjectId, which may decrease performance.

Q: What does "smallest candidates is zero" log means?

A: It means one of the workers returned ack 0, which only happens when no data is being synced or MongoShake has just entered incremental synchronization. If this message persists for a long time, something must be wrong and users should check the MongoShake status.

Q: What does "syncer default-0 load checkpoint queryTs[Timestamp(1582304516, 305)] is less than oldTs[Timestamp(1582304698, 152)], this error means user's oplog collection size is too small or document replication continues too long]" means?

A: This error usually happens when MongoShake has just finished full sync and begins incremental sync, but the needed oplog has been lost on the source MongoDB. For example, if the full-sync start time is A and the finish time is B, incremental sync then tries to fetch oplogs starting from A. So once A has been purged from the source MongoDB oplog collection (local.oplog.rs), this error happens. Users can run rs.printReplicationInfo() in the mongo shell to check. The way to solve this problem is to increase the oplog collection size. Since v2.4, users can also enable full_sync.oplog_store_disk to store oplogs on local disk during the full-sync stage.
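The shell output shows the oplog window, i.e., whether the full-sync start time is still covered; it looks roughly like this (all numbers are illustrative):

> rs.printReplicationInfo()
configured oplog size:   10240MB
log length start to end: 94400secs (26.22hrs)
oplog first event time:  Mon Feb 24 2020 10:00:00 GMT+0000
oplog last event time:   Tue Feb 25 2020 12:13:20 GMT+0000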

Q: Does mongoshake support strict consistency of oplog?

A: No. When shard_key is auto/collection, MongoShake provides sequential consistency, which means the order is guaranteed within the same namespace (ns). If shard_key is id, MongoShake provides eventual consistency.

Q: How can I configure checkpoint?

A: There are several variables in the configuration file (collector.conf) starting with context:

  • context.storage: the location type of the checkpoint position. We offer two types: database and api. database means MongoShake stores the checkpoint in a database, while api means MongoShake stores and fetches the checkpoint through a given HTTP interface which should be offered by users.
  • context.storage.url: if the source MongoDB type is sharding, the checkpoint will be stored at this MongoDB address. For replicaSet, this variable is unused.
  • context.address: the collection name of the checkpoint when context.storage is database; the database name is mongoshake.
  • context.start_position: when starting for the first time, MongoShake fetches the checkpoint from the given address. If no checkpoint is found, MongoShake will fetch oplogs starting from this value.

Let me give an example based on the default configuration to make this clearer. Here is the default configuration:

context.storage = database
context.address = ckpt_default
context.start_position = 2000-01-01T00:00:01Z

When starting for the first time, MongoShake checks the checkpoint in the mongoshake.ckpt_default collection, which is empty at that point, so MongoShake starts syncing from the time 2000-01-01T00:00:01Z. After 3 minutes, MongoShake writes a new checkpoint into the mongoshake.ckpt_default collection; assume the time is 2018-09-01T00:00:01Z. Once MongoShake restarts, it checks the checkpoint again, and this time it starts syncing data from 2018-09-01T00:00:01Z.
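You can inspect the stored checkpoint from the mongo shell at any time (again assuming the default database and collection names):

use mongoshake
db.ckpt_default.find()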

Q: I find the ack (lsn_ack.time) keeps advancing while the checkpoint (lsn_ckpt.time) doesn't; how to solve this problem?

A: The lsn_ack.time increasing means writing to the target is OK. The checkpoint is updated periodically, so if it hasn't been updated for a long time, there must be some problem. Users should go through the log to find the cause; e.g., this happens if the user does not have write permission on the checkpoint collection:

[WARN] [collector.(*OplogSyncer).checkpoint:64] CheckpointOperation updated is not suitable. lowest [6769220508375318558]. current [6769212060174647296]. reason : not authorized on mongoshake to execute command { update: "ckpt_default_singapore_12_12", writeConcern: { getLastError: 1 }, ordered: true, $db: "mongoshake" }

Q: If I have both a checkpoint (stored in mongoshake.ckpt_default by default) and context.start_position, which one will be used?

A: context.start_position only works when the checkpoint doesn't exist.

Q: How to connect to the hidden node in MongoDB directly?

A: In MongoShake, we use the secondaryPreferred option to read from the source database. When users want to read from a hidden node only, please check out this issue.

Q: How to monitor MongoShake?

A: MongoShake supplies a RESTful API (the default port is 9100, configured by http_profile) to monitor internal status in several aspects:

  • worker: displays internal worker status, including worker_id, jobs_in_queue, jobs_unack_buffer, last_unack, last_ack, count.
  • sentinel: displays sentinel configuration: OplogDump (dump oplog journal: "0" means no journal, "1" means sampling, "2" means dump all), DuplicatedDump (write duplicate oplogs into the log if enabled), Pause (the whole MongoShake synchronization will be paused if enabled), TPS (controls the speed of data synchronization).
  • repl: displays overall status: logs_get (how many oplogs we get), logs_repl (how many oplogs we replay), logs_success (how many oplogs we replay successfully), lsn (last sent), lsn_ack (the minimum ack value in all worker queues except 0), lsn_ckpt (checkpoint), now, replset, tag, who.
  • conf: displays the configuration.
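For example, fetching the overall replication status might look like this (assuming the default port 9100 and that each aspect name maps to a URL path, as the sentinel example later on this page shows):

curl 127.0.0.1:9100/repl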

Users can use the curl command to visit this port. Besides, we offer the mongoshake-stat script to monitor MongoShake through the RESTful API in the following real-time way:

vinllen@ ~/code/mongo-shake-github/mongo-shake/bin$ ./mongoshake-stat --port=9100
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|        logs_get/sec |       logs_repl/sec |    logs_success/sec |            lsn.time |        lsn_ack.time |       lsn_ckpt.time |            now.time |             replset |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|                none |                none |                none | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-31 11:55:03 |zz-mgset-source-20180711 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|                   0 |                   0 |                   0 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-31 11:55:04 |zz-mgset-source-20180711 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|                   1 |                   0 |                   0 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-31 11:55:05 |zz-mgset-source-20180711 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|                   0 |                   0 |                   0 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-31 11:55:06 |zz-mgset-source-20180711 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|                   0 |                   0 |                   0 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-31 11:55:07 |zz-mgset-source-20180711 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|                   0 |                   0 |                   0 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-29 23:10:41 | 2018-07-31 11:55:08 |zz-mgset-source-20180711 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|

The counters are reset every second:

  • logs_get: how many oplogs we get in one second.
  • logs_repl: how many oplogs we replay in one second.
  • logs_success: how many oplogs we replay successfully in one second, which is the TPS.

Q: How to debug MongoShake?

A: MongoShake enables a pprof port that can be used for debugging; the default port is 9200. Check out this post. Here are some common commands:

curl http://127.0.0.1:9200/debug/pprof/goroutine?debug=2 # use curl to fetch the routine status
go tool pprof http://127.0.0.1:9200/debug/pprof/profile # use go tool command to fetch the profile
go tool pprof -top http://127.0.0.1:9200/debug/pprof/heap # use go tool command to fetch the heap status

Q: How to build active-active replication in the current open-source version without gid support?

A: Users can use the filters (filter.namespace.white and filter.namespace.black) to achieve this. Currently, the granularity of the filter is the collection.
For example, say I have three databases named a, b and c in one MongoDB replicaSet. Assume the source replicaSet is source-mongo and the target replicaSet is target-mongo, so we build two MongoShakes to fetch oplogs from source-mongo and target-mongo respectively. In the first MongoShake we only pass databases a and b, while in the second MongoShake we only pass c. The figure below makes the explanation clearer; a configuration sketch follows.
[Figure: active_active] Users can use their own proxy program to distribute writes so that writes to a and b go to the source database while writes to c go to the target.
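A sketch of the corresponding whitelist settings (assuming semicolons separate entries, as in the mongo_urls examples above). In the first MongoShake's collector.conf:

filter.namespace.white = a;b

In the second MongoShake's collector.conf:

filter.namespace.white = c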

Q: How to verify that the replication result is correct?

A: Users can use the comparision.py script to do verification; it fetches and compares data from the source database and the target database. But pay attention: it only compares outline information like the database number, collection number, document number, and whether the same "_id" exists on both sides. So if there is an entry {"_id":1, "a":1} in the source database and another entry {"_id":1, "a":2} in the target database, this comparison code is unable to detect the difference.
In the coming version, maybe v1.6.0, we will offer an exact-match program that solves the above problem and also does incremental comparison. Here is the basic dataflow:
[Figure: incremental_comparison]

Q: How to connect to a different tunnel besides direct?

A: Since v1.4.0, we offer a receiver program (located in bin/receiver after running the build script) to connect to the other tunnels: rpc, tcp, file, mock and kafka. Before using it, users should modify the receiver configuration (located in conf/receiver.conf) based on their needs. The dataflow is mongoshake(collector) => tunnel => receiver => user's platform. Users can start the receiver just like the collector: ./receiver -conf=../conf/receiver.conf -verbose. Here is a brief introduction to the receiver configuration:

  • The replayer number must equal the worker number in collector.conf in order to keep the same concurrency.
  • rpc tunnel: the address is the receiver socket address.
  • tcp tunnel: the address is the receiver socket address.
  • file tunnel: the address is the filename of the file the collector writes.
  • mock tunnel: the address is unused. MongoShake will generate random data, including "i", "d", "u" and "n" operations, as if reading from MongoDB.
  • kafka tunnel: the address format should be topic@broker1,broker2,...; the default topic is mongoshake and we only use one partition, which is 0 by default. The default kafka reading strategy is to read from the oldest offset, which means that if the program crashes and restarts later, the receiver will read from the beginning, so some data is read more than once, which may not be expected. A better way to solve this is to move the kafka offset forward once the ack is received from the receiver, but we don't offer this code in the current open-source version. An address example follows this list.
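For example, following the format above, a kafka tunnel address with the default topic might look like this (broker hosts and ports are placeholders):

mongoshake@broker1:9092,broker2:9092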

All the above tunnel addresses in the receiver should equal those in the collector. Users can add logic in the handler function in the receiver/replayer.go file to process data after receiving it. For a better explanation, I will analyze this function:

func (er *ExampleReplayer) handler() {
	for msg := range er.pendingQueue {
		count := uint64(len(msg.message.RawLogs))
		if count == 0 {
			// may be probe request
			continue
		}

		// parse batched message
		oplogs := make([]*oplog.PartialLog, len(msg.message.RawLogs), len(msg.message.RawLogs))
		for i, raw := range msg.message.RawLogs {
			oplogs[i] = &oplog.PartialLog{}
			bson.Unmarshal(raw, &oplogs[i])
			oplogs[i].RawSize = len(raw)
			LOG.Info(oplogs[i]) // just print for test
		}

		if callback := msg.completion; callback != nil {
			callback() // exec callback if exist
		}

		// get the newest timestamp
		n := len(oplogs)
		lastTs := utils.TimestampToInt64(oplogs[n - 1].Timestamp)
		er.Ack = lastTs

		// add logical code below
	}
}

pendingQueue is the receiver queue, so we fetch data from it and process it in the following steps. First, we check whether the length equals 0, which indicates a probe request. After that, we parse the batched message into an array named oplogs; we do this because several oplogs are gathered together before sending. As an example, we just print each message with LOG.Info(oplogs[i]) for testing. Then we execute the callback function if it exists; the callback function is set by the different tunnel readers. The next step is calculating the newest ack so that the collector knows the receiver has received and replayed this data successfully and can send newer oplogs. Finally, users can add their own logic, reading the oplogs array and doing whatever they want.

Q: How to improve QPS?

A: There are several ways to improve QPS like:

  • Deploy MongoShake close to the target MongoDB. The mgo driver's writing performance is not as good as its reading performance, so reducing the write I/O latency is necessary.
  • Increase the worker number. As we said in the detailed document, increasing the worker number increases concurrency.
  • Increase the host performance, e.g., add more CPU and memory.
  • Distribute collections evenly. The performance won't be good if some collections are quite big while others are small.

Q: I found the synchronization is so fast that it affects normal requests on the source or target database; how to solve this problem?

A: Users can limit the MongoShake pulling speed (the TPS field) through the RESTful API:

  • set TPS to 1000: curl -X POST --data '{"TPS": 1000}' 127.0.0.1:9100/sentinel/options.
  • check TPS: curl 127.0.0.1:9100/sentinel.
  • pause the link: curl -X POST --data '{"Pause": true}' 127.0.0.1:9100/sentinel/options

Q: MongoShake uses a lot of memory, how to reduce it?

A: Decrease the worker number or reduce worker.batch_queue_size. This reduces memory usage but also reduces synchronization performance.
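For example, in collector.conf (values are illustrative; the worker-count key is assumed to match the Worker default listed in the next answer):

worker = 4
worker.batch_queue_size = 32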

Q: MongoShake crashed because of OOM(Out Of Memory), how can I estimate memory usage?

A: The picture below shows the partial inner modules of MongoShake, which can be used to estimate the maximum memory usage. [Figure: memory_usage] MongoShake has several queues inside; the memory hits the maximum when all queues are full, which usually happens when the writing speed is lower than the reading speed. Here is the default configuration in the v1.4.4 version:

  • FetcherBufferCapacity = 256
  • AdaptiveBatchingMaxSize = 16384. Since v2.0.7, the default value is 1024 to lower memory usage (the calculations below use 1024). If the tunnel is direct, choosing a small value won't decrease performance much, but for other tunnels like tcp, rpc and kafka, a bigger value improves transmission performance.
  • WorkerBatchQueueSize = 64
  • Worker = 8

My estimation is divided into the following two parts, assuming the average oplog size is about 300 bytes:

  • For replica set, the calculation formula is:
    PendingQueue: FetcherBufferCapacity * PipelineQueueLen * 4 = 256 * 64 * 4 = 65536
    LogsQueue: equal to PendingQueue: 65536
    MergeBatch: AdaptiveBatchingMaxSize * FetcherBufferCapacity = 1024 * 256 = 262144
    BatchGroup: Worker * FetcherBufferCapacity = 8 * 256 = 2048
    WorkerQueue: WorkerBatchQueueSize * FetcherBufferCapacity * Worker = 64 * 256 * 8 = 131072
    Total memory: (PendingQueue + LogsQueue + MergeBatch + BatchGroup + WorkerQueue) * 300 bytes ≈ 0.15 GB

  • For sharding, assume we have 3 shards/workers:
    PendingQueue: FetcherBufferCapacity * PipelineQueueLen * 1 * 3 = 256 * 64 * 3 = 49152
    LogsQueue: equal to PendingQueue: 49152
    MergeBatch: AdaptiveBatchingMaxSize * FetcherBufferCapacity * 3 = 1024 * 256 * 3 = 786432
    BatchGroup: Worker * FetcherBufferCapacity = 3 * 256 = 768
    WorkerQueue: WorkerBatchQueueSize * FetcherBufferCapacity * Worker = 64 * 256 * 3 = 49152
    Total memory: (PendingQueue + LogsQueue + MergeBatch + BatchGroup + WorkerQueue) * 300 bytes ≈ 0.26 GB

Based on the above numbers, users can adjust the configuration to their needs. It should be emphasized that users also need to consider golang garbage collection, which increases memory usage by about 1.5 times.
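The replica-set estimate above can be written as a small Go sketch (the constants mirror the listed defaults, with AdaptiveBatchingMaxSize at its newer default of 1024; the factor 4 and PipelineQueueLen = 64 are taken from the formula as written):

package main

import "fmt"

func main() {
	const (
		fetcherBufferCapacity   = 256
		pipelineQueueLen        = 64
		adaptiveBatchingMaxSize = 1024
		workerBatchQueueSize    = 64
		worker                  = 8
		avgOplogBytes           = 300
		gcFactor                = 1.5 // golang GC overhead, see the note above
	)

	// Oplog counts held by each queue when everything is full.
	pendingQueue := fetcherBufferCapacity * pipelineQueueLen * 4
	logsQueue := pendingQueue
	mergeBatch := adaptiveBatchingMaxSize * fetcherBufferCapacity
	batchGroup := worker * fetcherBufferCapacity
	workerQueue := workerBatchQueueSize * fetcherBufferCapacity * worker

	totalOplogs := pendingQueue + logsQueue + mergeBatch + batchGroup + workerQueue
	raw := float64(totalOplogs * avgOplogBytes)
	fmt.Printf("queues full:      %.2f GB\n", raw/(1<<30))          // ≈ 0.15 GB
	fmt.Printf("with GC overhead: %.2f GB\n", raw*gcFactor/(1<<30)) // ≈ 0.22 GB
}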

Q: How to sync data from one MongoDB to several target MongoDBs?

A: You can use the kafka tunnel to receive data from Kafka and then send it to several targets. Alternatively, you can start several MongoShakes fetching from the source DB, which costs more resources; in this case, each instance's checkpoint position should be distinct to prevent one from overwriting another's.
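For example, when running two MongoShakes against the same source, give each instance its own checkpoint collection via context.address in its collector.conf (the names below are placeholders). First instance:

context.address = ckpt_instance_1

Second instance:

context.address = ckpt_instance_2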

Q: For some special reasons, I write data into the "admin" database; however, the "admin" database can't be synced. How to sync a collection under "admin" on the source MongoDB to another collection on the target MongoDB?

A: Since v2.0.7, MongoShake has the filter.pass.special.db option so that the "admin" database can also be synced, but users should be very careful. Here is our recommendation, assuming the user wants to sync "admin.source" to "users.target" (a combined configuration sketch follows the steps):

  1. set filter.pass.special.db = admin to let the "admin" database pass.
  2. set filter.namespace.white = admin.source to let "admin.source" pass while other namespaces are filtered. For the current version v2.0.7, "users" should also be added to the white list.
  3. set transform.namespace = admin.source:users.target to map "admin.source" on the source to "users.target" on the target.
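Putting the three steps together, the relevant collector.conf fragment would look like this sketch (assuming semicolons separate whitelist entries):

filter.pass.special.db = admin
filter.namespace.white = admin.source;users
transform.namespace = admin.source:users.target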