The Memory Broker API is a superset of the Broker HTTP API. It includes the following additional APIs.
GET /api/v3/version
HTTP 200
0.3.0
This is not a stable API and should only be used for debugging.
GET /api/v3/metadata

HTTP 200
{
    "version": "mem-broker-0.1",
    "global_epoch": 0,
    "clusters": {},
    "all_proxies": {},
    "failed_proxies": [],
    "failures": {}
}
Restore all the metadata.
PUT /api/v3/metadata
{
    "version": "mem-broker-0.1",
    "global_epoch": 0,
    "clusters": {},
    "all_proxies": {},
    "failed_proxies": [],
    "failures": {}
}
HTTP 200
HTTP 409 { "error": "INVALID_META_VERSION" }
HTTP 409 { "error": "RETRY" }
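A client restoring a backup needs to treat the two 409 responses differently: RETRY is a transient conflict, while INVALID_META_VERSION means the backup is incompatible. A minimal sketch of that dispatch, using a hypothetical `classify_restore_response` helper (not part of the broker itself):

```python
import json

def classify_restore_response(status, body):
    """Map a PUT /api/v3/metadata response to a client action."""
    if status == 200:
        return "done"
    if status == 409:
        err = json.loads(body).get("error")
        if err == "RETRY":
            return "retry"  # transient conflict: resend the same payload
        if err == "INVALID_META_VERSION":
            return "abort"  # backup taken with an incompatible metadata version
    return "abort"
```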
GET /api/v3/clusters/info/<cluster_name>
HTTP 200
{
    "name": "cluster_name",
    "node_number": 8,
    "node_number_with_slots": 8,
    "is_migrating": false
}
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
POST /api/v3/clusters/meta/<cluster_name>
{
    "node_number": 8
}

cluster_name:
- 0 < length <= 30
- contains only alphanumeric ASCII characters or '@', '-', '_'

node_number:
- must be a multiple of 4
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 400 { "error": "INVALID_NODE_NUMBER" }
HTTP 409 { "error": "ALREADY_EXISTED" }
HTTP 409 { "error": "NO_AVAILABLE_RESOURCE" }
HTTP 409 { "error": "RETRY" }
DELETE /api/v3/clusters/meta/<cluster_name>
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "RETRY" }
PATCH /api/v3/clusters/nodes/<cluster_name>
{
    "node_number": 8
}

node_number must be a multiple of 4.
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 400 { "error": "INVALID_NODE_NUMBER" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "ALREADY_EXISTED" }
HTTP 409 { "error": "NO_AVAILABLE_RESOURCE" }
HTTP 409 { "error": "MIGRATION_RUNNING" }
HTTP 409 { "error": "NODE_NUMBER_CHANGING" }
HTTP 409 { "error": "RETRY" }
Unlike the previous API, this one is idempotent.
PUT /api/v3/clusters/nodes/<cluster_name>
{
    "cluster_node_number": 8
}

cluster_node_number must be a multiple of 4.
HTTP 200
HTTP 409 { "error": "NODE_NUM_ALREADY_ENOUGH" }
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 400 { "error": "INVALID_NODE_NUMBER" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "ALREADY_EXISTED" }
HTTP 409 { "error": "NO_AVAILABLE_RESOURCE" }
HTTP 409 { "error": "MIGRATION_RUNNING" }
HTTP 409 { "error": "NODE_NUMBER_CHANGING" }
HTTP 409 { "error": "RETRY" }
DELETE /api/v3/clusters/free_nodes/<cluster_name>
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "FREE_NODE_NOT_FOUND" }
HTTP 409 { "error": "MIGRATION_RUNNING" }
HTTP 409 { "error": "NODE_NUMBER_CHANGING" }
HTTP 409 { "error": "RETRY" }
POST /api/v3/clusters/migrations/auto/<cluster_name>/<node_number>

For scaling out, this API first adds nodes, waits until all the newly added proxies have their metadata synced, and finally starts the migration.
For scaling down, this API only shrinks the slots and does NOT remove the nodes.
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "MIGRATION_RUNNING" }
HTTP 400 { "error": "INVALID_NODE_NUMBER" }
HTTP 409 { "error": "NO_AVAILABLE_RESOURCE" }
HTTP 409 { "error": "NODE_NUMBER_CHANGING" }
HTTP 409 { "error": "RETRY" }
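The asymmetry between the two directions matters when writing automation around this endpoint. A sketch summarizing it with a hypothetical `auto_migration_effect` helper:

```python
def auto_migration_effect(current_nodes, target_nodes):
    """Describe what POST /api/v3/clusters/migrations/auto/<name>/<n>
    does for a given target, per the notes above (illustrative only)."""
    if target_nodes > current_nodes:
        return "add nodes, wait for proxy metadata sync, then migrate slots"
    if target_nodes < current_nodes:
        return "shrink slots only; nodes are NOT removed"
    return "no-op"
```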
Note that you need to call the Add nodes to cluster API beforehand.

POST /api/v3/clusters/migrations/expand/<cluster_name>
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "FREE_NODE_NOT_FOUND" }
HTTP 409 { "error": "MIGRATION_RUNNING" }
HTTP 409 { "error": "NODE_NUMBER_CHANGING" }
HTTP 409 { "error": "RETRY" }
Note that this will not delete the nodes. You still need to call the Delete unused nodes in a cluster API after the migration is done.

POST /api/v3/clusters/migrations/shrink/<cluster_name>/<new_cluster_nodes_number>
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 400 { "error": "INVALID_NODE_NUMBER" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "FREE_NODE_FOUND" }
HTTP 409 { "error": "MIGRATION_RUNNING" }
HTTP 409 { "error": "NODE_NUMBER_CHANGING" }
HTTP 409 { "error": "RETRY" }
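Since shrinking leaves the freed nodes in place, a full shrink is a two-step sequence. A sketch that returns the ordered broker calls, using a hypothetical `shrink_cluster_plan` helper (the second call must wait until the migration has finished):

```python
def shrink_cluster_plan(cluster_name, new_node_number):
    """Ordered (method, path) calls for shrinking a cluster."""
    return [
        # start the shrinking migration
        ("POST", f"/api/v3/clusters/migrations/shrink/{cluster_name}/{new_node_number}"),
        # after migration is done, remove the now-unused free nodes
        ("DELETE", f"/api/v3/clusters/free_nodes/{cluster_name}"),
    ]
```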
PATCH /api/v3/clusters/config/<cluster_name>
{
    "compression_strategy": "disabled" | "set_get_only" | "allow_all"
}
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 {
    "error": "INVALID_CONFIG",
    "key": "compression_strategy",
    "value": "xxxx",
    "message": "xxxx"
}
HTTP 409 { "error": "RETRY" }
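A client can reject an unknown strategy before the broker answers INVALID_CONFIG. A sketch with a hypothetical `compression_patch_body` helper:

```python
# The three values the endpoint above accepts.
ALLOWED_COMPRESSION = {"disabled", "set_get_only", "allow_all"}

def compression_patch_body(strategy):
    """Build the PATCH /api/v3/clusters/config/<name> body, refusing
    values the broker would reject with INVALID_CONFIG."""
    if strategy not in ALLOWED_COMPRESSION:
        raise ValueError(f"invalid compression_strategy: {strategy}")
    return {"compression_strategy": strategy}
```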
POST /api/v3/proxies/meta
{
    "proxy_address": "127.0.0.1:7000",
    "nodes": ["127.0.0.1:6000", "127.0.0.1:6001"],
    "host": "127.0.0.1" | null
}
HTTP 200
HTTP 400 { "error": "INVALID_PROXY_ADDRESS" }
HTTP 409 { "error": "ALREADY_EXISTED" }
HTTP 409 { "error": "RETRY" }
DELETE /api/v3/proxies/meta/<proxy_address>
HTTP 200
HTTP 404 { "error": "PROXY_NOT_FOUND" }
HTTP 409 { "error": "IN_USE" }
HTTP 409 { "error": "RETRY" }
PUT /api/v3/clusters/balance/<cluster_name>
HTTP 200
HTTP 400 { "error": "INVALID_CLUSTER_NAME" }
HTTP 404 { "error": "CLUSTER_NOT_FOUND" }
HTTP 409 { "error": "RETRY" }
GET /api/v3/epoch
HTTP 200
<integer>
Update the epoch of all metadata to the specified new epoch. This should only be used when the metadata is stale after a failover, so that the metadata can be synchronized to the server proxies again.

PUT /api/v3/epoch/<new_epoch>
HTTP 200
HTTP 409 { "error": "EPOCH_SMALLER_THAN_CURRENT" }
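Because the broker rejects epochs that are not strictly greater than the current one, a client typically reads GET /api/v3/epoch first and then bumps it. A sketch with a hypothetical `next_epoch_call` helper:

```python
def next_epoch_call(current_epoch):
    """Build the PUT call for bumping the global epoch; the new epoch must
    be strictly greater than current_epoch, or the broker answers
    EPOCH_SMALLER_THAN_CURRENT."""
    return ("PUT", f"/api/v3/epoch/{current_epoch + 1}")
```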
POST /api/v3/resources/failures/check

An empty hosts_cannot_fail means we still have enough resources for handling failures. If hosts_cannot_fail is not empty, we should add more server proxies.

HTTP 200
{
    "hosts_cannot_fail": ["host1", "host2", ...]
}
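Automation usually reduces this response to a single yes/no signal. A sketch with a hypothetical `needs_more_proxies` helper:

```python
def needs_more_proxies(check_response):
    """True when the failure check reports hosts whose failure could not
    be covered by the remaining resources."""
    return bool(check_response.get("hosts_cannot_fail"))
```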
PUT /api/v3/config
{
    "replica_addresses": ["127.0.0.1:17799", "127.0.0.1:27799"]
}
HTTP 200
GET /api/v3/config

HTTP 200
{
    "replica_addresses": ["127.0.0.1:17799", "127.0.0.1:27799"]
}