Motivation
This is a short(ish) note summarising the key technical aspects of Vanadium that I believe set it apart from other systems. Another way of putting it is that these are the specific goals I set for each component to justify its existence to myself - i.e. things I really wanted to have, or things that had really annoyed me in the past and that I wanted to avoid. Everything I describe below is implemented; you can look at and use the code to verify for yourself whether the goals have been met!
The goals for the serialization system (vom) were:

- Be self-describing at no cost compared to a shared-stub system. This is achieved via ‘type dictionaries’ that can be compiled, communicated out of band, or sent in-band. The resulting system is now faster than the equivalent protobufs in Go. I learned the importance of self-description when working on Google’s n-th year birthday project to relaunch an index from as early in Google’s history as possible. Binary protobufs without stubs are pretty hard to decode - just integers, no names etc.
- Idiomatic and boilerplate-free support in each target language. We worked hard to design the encode/decode APIs to allow for idiomatic support in languages such as JavaScript and Java in addition to Go. No more thousands of lines of has_x calls.
- Support for reflection/introspection to allow shared services (e.g. storage) to access the full type information and thus allow for querying of stored data. Type dictionaries are an essential aspect of this, since the type information does not need to be compiled into the storage code and yet need only be stored once, either in memory or on disk.
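To illustrate the idea behind type dictionaries, here is a minimal sketch in Go. The `TypeDef`/`Value` layout below is entirely made up for illustration - it is not the real VOM wire format - but it shows how a dictionary entry sent once lets a decoder with no compiled stubs recover field names from values that carry only a type ID.

```go
package main

import "fmt"

// TypeDef describes a struct type - its name and field names - and is
// sent once (out of band, or in-band ahead of the first value).
type TypeDef struct {
	ID     int
	Name   string
	Fields []string
}

// Value carries only a type ID plus field payloads; no names on the wire.
type Value struct {
	TypeID int
	Fields []interface{}
}

// Decode rejoins a value with its dictionary entry, recovering names -
// this is what makes the stream self-describing without shared stubs.
func Decode(dict map[int]TypeDef, v Value) map[string]interface{} {
	def := dict[v.TypeID]
	out := map[string]interface{}{}
	for i, f := range def.Fields {
		out[f] = v.Fields[i]
	}
	return out
}

func main() {
	// The dictionary travels once...
	dict := map[int]TypeDef{1: {1, "Person", []string{"Name", "Age"}}}
	// ...after which every value is just a type ID and payloads.
	v := Value{TypeID: 1, Fields: []interface{}{"Ada", 36}}
	fmt.Println(Decode(dict, v))
}
```

Note how a storage service holding only the dictionary can answer queries by name over data it was never compiled against - the reflection/introspection goal above.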
The goals for the RPC system (vrpc) were:

- Use state-of-the-art authentication and encryption by default. It’s secure, but also an annoyance, since if the user does nothing then clients and servers can’t talk to each other. The auth protocol is designed to use the minimum number of round trips possible and has been formally verified.
- The wire format supports proxying, but with end-to-end authentication. A third party can run a proxy without having any involvement in the auth handshakes between the clients and servers using that proxy, though both client and server will authenticate with the proxy itself. Proxy support allows for easy and transparent deployment behind firewalls.
- Support lame-duck mode, since experience tells us that it’s an essential feature for any production service that needs to be updated - i.e. everything!
- Versioning is built into the wire protocols, and service evolution is built into the vdl service description.
- Support ‘Dapper-like’ functionality but with Dapper’s central weakness addressed: namely, vrpc will store the last n RPCs’ worth of tracing and allow it to be sent to a client based on a server-side decision. Dapper always suffered from the limitation that clients initiate traces, or that traces are sampled, which makes it hard to capture ‘interesting events’. In vrpc, interesting events are easy to trace.
- RPCs are multiplexed onto one or more underlying transport streams, with flow control to prevent one large RPC blocking all others.
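The server-side tracing decision is worth making concrete. Below is a minimal sketch, assuming a hypothetical `Trace` record and ring buffer (not the real vtrace API): a server retains the last n traces and can decide after the fact that an RPC was interesting, with no client-side sampling decision required.

```go
package main

import "fmt"

// Trace is a hypothetical per-RPC trace record for illustration only.
type Trace struct {
	Method string
	Micros int64 // elapsed time in microseconds
}

// TraceRing keeps the last n traces so the server can decide,
// retroactively, which RPCs are worth shipping to a collector.
type TraceRing struct {
	buf  []Trace
	next int
	full bool
}

func NewTraceRing(n int) *TraceRing { return &TraceRing{buf: make([]Trace, n)} }

// Add records a trace, overwriting the oldest once the ring is full.
func (r *TraceRing) Add(t Trace) {
	r.buf[r.next] = t
	r.next = (r.next + 1) % len(r.buf)
	if r.next == 0 {
		r.full = true
	}
}

// Interesting returns retained traces matching a server-side predicate,
// e.g. "slower than 100ms" - the 'interesting event' selection above.
func (r *TraceRing) Interesting(pred func(Trace) bool) []Trace {
	n := r.next
	if r.full {
		n = len(r.buf)
	}
	var out []Trace
	for i := 0; i < n; i++ {
		if pred(r.buf[i]) {
			out = append(out, r.buf[i])
		}
	}
	return out
}

func main() {
	ring := NewTraceRing(4)
	ring.Add(Trace{Method: "Play", Micros: 900})
	ring.Add(Trace{Method: "Stop", Micros: 150000})
	slow := ring.Interesting(func(t Trace) bool { return t.Micros > 100000 })
	fmt.Println(len(slow), slow[0].Method)
}
```

Contrast this with client-initiated or sampled tracing: a rare slow RPC is likely to be missed by a sampler, but is trivially caught by a predicate over the retained ring.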
The goals for the security model were:

- Fine-grained delegation (blessings with caveats) is the defining feature of this work - it can express requests of the form ‘give joe access to the play method on the devices in my living room for the next 2 days’, or ‘give fred access to the screen of my desktop so long as we are both within 6 feet of each other’. No other security implementation that I am aware of offers this degree of flexibility.
- Fine-grained permissioning and ACLs that are easy to manage. Using blessings and caveats for ephemeral permissions, and ACLs for permanent ones, is a much easier model to maintain.
- Support for offline and federated operation. The security model’s operation and correctness, except for revocation, does not rely on being online. This also means that federated ‘islands’ can be easily built, either in enterprises or in embedded systems/applications. These properties follow from the cryptographic properties of the certificate chains we use to represent actors in the system.
- Dynamic discovery of services is fraught with privacy and authentication concerns, which are inherently addressed. A choice of policies is available, including ‘mutual private authentication’, whereby two parties can determine whether they are willing to communicate with each other without either disclosing its identity to the other first (this requires a third party to mediate).

The combination of these features allows for supporting a variety of sharing models in a principled and easy (well, maybe not easy, but at least possible) to reason about manner.
I’ve always described the computation model as ‘method invocations on names’ - i.e. myTv.Play(movieX). Indeed, the code looks very much like this. There is a hierarchical naming model and a distributed implementation that supports it. It uses the same security model as the rest of the system and can similarly be used in a federated manner - i.e. there are libraries that implement the naming service so it can be embedded in any service or server. This enables rendezvous and discovery as outlined below. The only essential difference between Vanadium names and URLs is that Vanadium names support multiple levels of name-to-address resolution, whereas URLs inherently only support one (DNS). This was motivated by the observation that just about every web site now supports multiple levels of resolution via some front-end that rewrites and redirects URLs based on a complex set of rules; this is true both for large sites and for small ones when they are hosted rather than run standalone. In Vanadium there is simply a hierarchy of name servers, with resolution (and client caching) of addresses taking place incrementally. The net effect is the same, but the implementation complexity of the Vanadium model is massively lower than the current state of URL redirection.
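The multi-level resolution described above can be sketched as follows. This is a toy mount table in Go - the names, addresses, and the `ns:`/`addr:` convention are all invented for illustration, and the real mounttable protocol is richer - but it shows a name resolving incrementally through a hierarchy of name servers until a final address is reached.

```go
package main

import (
	"fmt"
	"strings"
)

// MountTable maps a name prefix either to a final address ("addr:...")
// or to the next name server in the hierarchy ("ns:<table>").
type MountTable map[string]string

// A toy two-level hierarchy: the root delegates "home" to another table.
var tables = map[string]MountTable{
	"root": {"home": "ns:home"},
	"home": {"livingroom/tv": "addr:192.168.1.20:8101"},
}

// Resolve walks the hierarchy one level at a time, the way a client
// would (client-side caching of intermediate results omitted).
func Resolve(table, name string) (string, bool) {
	for prefix, target := range tables[table] {
		if name == prefix || strings.HasPrefix(name, prefix+"/") {
			rest := strings.TrimPrefix(strings.TrimPrefix(name, prefix), "/")
			if strings.HasPrefix(target, "addr:") {
				return strings.TrimPrefix(target, "addr:"), true
			}
			return Resolve(strings.TrimPrefix(target, "ns:"), rest)
		}
	}
	return "", false
}

func main() {
	addr, ok := Resolve("root", "home/livingroom/tv")
	fmt.Println(addr, ok)
}
```

Each level is just another instance of the same name-server library, which is what makes federation and embedding straightforward: any server can serve a subtree.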
I believe that clients need to discover available services dynamically based on their current environment. If you walk into a room, your phone or laptop should be able to discover all of the devices available to you - TV, projector, thermostat etc. The naming service above is a key enabler for this, since it can present itself as a directory of all available services. In a local or personal area network, offline operation, low power and multiple networking technologies come together to create a very complicated and inherently unreliable environment. We’ve hidden much of this complexity behind Vanadium security, RPC and naming so that application developers need not deal with the details.
Storage is hard. Distributed storage is harder. Storage synchronised across all of your devices and computers is harder yet! However, if it can be made to work, then application developers enjoy a massively simplified environment and oftentimes will no longer need to run dedicated backends or use application hosting or cloud services. Syncbase provides a peer-to-peer synchronised store that can be run on all devices: iOS, Android, laptops and servers. The peer-to-peer aspect enables both offline operation and massively faster synchronisation times than always routing through a central server. It leverages the discovery mechanisms and naming services to do so. Syncbase provides a query language that relies on vom’s inherent self-description and type dictionaries to query over any data stored in it. Syncbase provides authentication and authorisation at an appropriate level - not super fine-grained, but also not too coarse. With more work, a finer level of authorisation may be achievable if developers find they require it.
Open source
We decided to open source it from the start of the project, and it’s available and documented on vanadium.github.io. The documentation is about to be overhauled.
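Returning to Syncbase’s peer-to-peer synchronisation: a minimal sketch of the idea in Go, assuming a simple per-key version counter with last-writer-wins merging. This is only an illustration of why no central server is needed - Syncbase’s actual conflict detection and resolution are more sophisticated.

```go
package main

import "fmt"

// Entry is a stored value with a version used to order concurrent edits.
// (Simplified: a real system would use version vectors per key.)
type Entry struct {
	Value   string
	Version int
}

// Store is one device's local copy of the data.
type Store map[string]Entry

// SyncFrom pulls any entries the peer has newer versions of. Any two
// devices can sync directly - no central server in the path.
func (s Store) SyncFrom(peer Store) {
	for k, e := range peer {
		if cur, ok := s[k]; !ok || e.Version > cur.Version {
			s[k] = e
		}
	}
}

func main() {
	phone := Store{"todo/1": {Value: "buy milk", Version: 1}}
	laptop := Store{"todo/1": {Value: "buy milk and eggs", Version: 2}}
	phone.SyncFrom(laptop) // phone picks up the newer edit
	laptop.SyncFrom(phone) // the reverse direction changes nothing
	fmt.Println(phone["todo/1"].Value)
}
```

Because any pair of devices can exchange updates directly, two phones in the same room converge without a round trip to a data centre - the source of the faster synchronisation times claimed above.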
This work started later than the infrastructure work, but I was fortunate enough to be able to hire some excellent people to get it started, namely Elizabeth Churchill and Jeff Nichols. There are two projects: one focused on UI toolkits and IDEs for multi-device scenarios (Baku), and one on machine learning and generation from/to UIs (Luma).