Author: Guanzhou Hu @ MIT
Date: Oct 6, 2019
RustyNet is a network emulator written in Rust that runs emulated nodes as Docker containers. It is motivated by these two network emulators:
- Mininet: a great SDN emulator, but only supports process-level isolation and cannot scale beyond a single host.
- Yans: a very basic Python prototype of network emulation over Docker containers.
RustyNet has three main goals:
- Verify the feasibility of using the Rust language to conduct network experiments.
- Verify the feasibility of doing network emulation over Docker containers.
- (future) Scale this kind of emulation onto large cloud container orchestration platforms (e.g., Kubernetes), and explore the advantages and drawbacks of doing so.
[Demonstration of how RustyNet works]
RustyNet requires the following prerequisites:
- Rust toolchain - latest stable version
- (optional in the future) Vagrant & VirtualBox
- (currently not needed) Google Cloud CLI & Kubernetes access
RustyNet only supports execution on Linux and OS X platforms. Clone this repo and you are ready to use it.
The following is a temporary usage guide based on a local Vagrant environment. RustyNet's ultimate goal is to deploy the emulation onto cloud-hosted Docker containers in the future.
Under the project folder, do the following:
# Bring the Vagrant VM up from the provided Vagrantfile
$ vagrant up
$ vagrant ssh
# Go into the synced working directory.
$ cd /rustynet
# Compile in 'release' mode.
$ cargo build --release
# Build the Docker image for RustyNet nodes, from the provided Dockerfile.
$ cd docker-env
$ sudo docker build -t rustynet/node .
$ cd ..
The initial topology is generated from a YAML config file <topo-name>.yml. Put it under the topolib/ folder. (Currently this prototype always uses the minimal.yml example topology.)
Run RustyNet with root privileges:
$ sudo ./target/release/rustynet
Using the RustyNet CLI proceeds in basically three phases:
- Docker containers & networks are created based on the specification in the given topology config file.
- Then, you enter the RustyNet shell, where you can interactively manage your network components and do experiments among them, e.g.:
  - RustyNet> docker ps: shows all living nodes;
  - RustyNet> h1 ping h2: commands starting with a node's name are automatically interpreted as docker exec -it [CMD], i.e., run within that node's shell;
  - RustyNet> exit: leaves the shell and enters the final phase.
- Cleaning-up phase.
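The setup phase could be sketched as shelling out to the Docker CLI from Rust via std::process::Command. Note that the NodeSpec struct, the network name rustynet-br0, and the sleep infinity entrypoint below are illustrative assumptions for this sketch, not RustyNet's actual internals:

```rust
// Hypothetical sketch of the setup phase: translating one topology node
// into a `docker run` invocation. Names and fields are assumptions.
use std::process::Command;

/// Minimal description of one emulated node (hypothetical).
struct NodeSpec {
    name: String,
    network: String,
}

/// Build the argument vector for `docker run` from a node spec.
/// Kept separate from execution so it can be inspected and tested.
fn docker_run_args(node: &NodeSpec) -> Vec<String> {
    vec![
        "run".into(),
        "-d".into(), // detached: the container keeps running in the background
        "--name".into(),
        node.name.clone(),
        "--network".into(),
        node.network.clone(),
        "rustynet/node".into(), // image built from docker-env/Dockerfile
        "sleep".into(),
        "infinity".into(), // keep the container alive for later `docker exec`
    ]
}

/// Setup phase: spawn one container per node (requires Docker and root).
#[allow(dead_code)]
fn create_node(node: &NodeSpec) -> std::io::Result<std::process::ExitStatus> {
    Command::new("docker").args(docker_run_args(node)).status()
}

fn main() {
    let h1 = NodeSpec { name: "h1".into(), network: "rustynet-br0".into() };
    println!("docker {}", docker_run_args(&h1).join(" "));
}
```

Building the argument vector as a plain function (rather than calling Docker directly) keeps the container-spawning logic easy to unit-test without a running Docker daemon.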
TODO list:
- Prototyping on local Docker containers:
  - Tweak resource & bandwidth limits in Docker
  - Publish the Docker image online & pull it instead of building in real time
  - Verify the effect of using host bridges as links
  - Find a better way of emulating "routers" using containers
  - Add more example topologies
  - Take in command-line arguments to set the logging level / clean up
- Scale it up onto cloud platforms