
Shared context for co-located/interdependent service instances #6445
Open · eeyun opened this issue Apr 17, 2019 · 5 comments
Labels: Focus:Plan-Build, Focus:Supervisor, Priority:Low, Stale, Type:Additional Discussion, Type:Feature

Comments

eeyun (Contributor) commented Apr 17, 2019

A problem we've seen in Habitat for a while (and attempted to solve once with the "composites" feature) revolves around the use case of co-located (and interdependent) services. Some of the most common examples are static site deployments and PHP apps, though Rails apps and others have similar needs.

The pain revolves around attempting to deploy an app that expects separate processes to share content/data. One clear example is an app that requires deployment of static content, a webserver (perhaps nginx), and an app server (Node.js or PHP) that must all be co-located on a filesystem. Due to the way our services are isolated, giving three services the ability to access the same files on disk is complicated and less than delightful. It also makes it impossible for those three services to choreograph config changes amongst themselves. This problem has unfortunately bitten us several times over the last couple of years, and just today a community member ran into it again.

In the past we attempted to solve this problem by creating a sort of metaplan we called composites, but it didn't get to the heart of the problem and we have since deprecated and removed the feature.

In discussion with @smacfarlane and @fnichol I think we touched on a different pattern that could solve this issue (and a couple of others) while requiring few (potentially zero) format changes to the planfile. Effectively, the idea is a sort of shared /var filesystem under the hab system path.

In concept, the idea would be a new directory under /hab that co-located service instances could "bind" against (perhaps via a new kind of binding or export; that could be the only planfile change necessary) and write data to, so the data can be shared across service instances. I think we would need the same safety nets and guarantees on this shared filesystem as we promise for service bindings, in order to avoid creating a veritable use-all heap for every service running on a system.

What I imagine is:

  • A /hab/var fs
  • A way for a package to expect to need to write there
  • A way for a package to expect to need to read from there
  • A way to scope consumption of this new primitive into some kind of namespace (could be as simple as naming a var fs binding, which creates the head of a directory structure within that var fs)
  • A way for services that bind to the namespaced var fs to trigger restarts when content in the directory changes.
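
As a rough illustration of how small the planfile change might be, here is a minimal sketch; pkg_shared_exports and pkg_shared_binds are invented names for this proposal, not existing Habitat syntax:

```bash
# Hypothetical plan.sh surface for the proposal. Neither variable exists in
# Habitat today; the names are made up to show the shape of the idea.

# Producer side: declare a named shared-filesystem namespace this package
# will write into, by analogy with pkg_exports.
pkg_shared_exports=(assets)

# Consumer side (in another plan): bind to that namespace by analogy with
# pkg_binds. The Supervisor would expose /hab/var/assets to both services
# and restart consumers when its contents change.
pkg_shared_binds=(assets)
```

The namespace name ("assets" here) would become the head of the directory structure under the shared var fs, per the namespace bullet above.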

I could be missing things here so hopefully people will add some commentary in short order.

@eeyun changed the title Shared FS for co-located service instances Shared context for co-located/interdependent service instances Apr 17, 2019
@eeyun changed the title Shared context for co-located/interdependent service instances Shared context for co-located/interdependant service instances Apr 17, 2019
@christophermaier changed the title Shared context for co-located/interdependant service instances Shared context for co-located/interdependent service instances Apr 17, 2019
st-h commented Apr 17, 2019

Just adding my specific use case here, as a real-world example might be helpful in figuring out the details, and it might be a good example of how this feature would be used.

We have a single-page app that is built using Ember. The files that are served to clients are bundled within a Habitat package with core/nginx as a dependency. We have been using this for quite some time without issues.

Ember FastBoot is a Node.js server that uses the same files (which are also served by Nginx) and essentially renders a static page by fetching the needed data from the backend. In the browser, this static page gets rehydrated into the full single-page app. This dramatically decreases the initial loading time of an SPA (the browser does not need to load, compile, and execute all the JavaScript in order to render the initial page) and allows crawlers to parse the page.

I tried to pass {{pkg.path}} as a binding to the FastBoot service, so it would have the location of the static files and get notified about a new deployment. As long as it has access to those files, no new deployment of the FastBoot service is needed. However, I found that this only passes {{pkg.path}} as a literal string without resolving it. One would not necessarily need Nginx to serve the static files, as they are actually served by the FastBoot server, but doing so would allow implementing a reasonable fallback for when the FastBoot server is unavailable.
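
To make the failure mode concrete, here is a reconstruction of what I attempted, assuming the path was wired through default.toml and a standard export/bind pair (the static-path and assets names are made up for illustration):

```bash
# default.toml of the asset package. default.toml is not rendered as a
# template, which is exactly the problem: the value travels downstream as
# a literal string.
#
#   static_path = "{{pkg.path}}"

# plan.sh of the asset package: export that config value.
pkg_exports=(
  [static-path]=static_path
)

# plan.sh of the FastBoot service: consume the export via a bind.
pkg_binds=(
  [assets]="static-path"
)

# FastBoot's config template then sees
#   {{bind.assets.first.cfg.static-path}}
# which renders as the unresolved string "{{pkg.path}}".
```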

smacfarlane (Contributor) commented
For context, the original idea for this came from PostgreSQL, which has a build-time requirement of a known, mutable location at runtime for plugins to drop libraries into. This isn't the service directory, as plans configuring Postgres wouldn't necessarily share the same service path. Additionally, the package that needs a mutable component may not necessarily be a service.
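
A minimal sketch of what that could look like, assuming the proposed shared namespace existed; dynamic_library_path is a real PostgreSQL setting, but the /hab/var/postgresql path is hypothetical:

```bash
# Hypothetical fragment of a PostgreSQL config template in a Habitat plan.
# '$libdir' resolves inside the immutable package; the second path entry is
# the proposed shared, mutable namespace, which does not exist today.
#
#   dynamic_library_path = '$libdir:/hab/var/postgresql/lib'

# A separately built plugin plan could then publish its libraries into the
# shared location at install/run time:
mkdir -p /hab/var/postgresql/lib
cp "$pkg_prefix/lib/"*.so /hab/var/postgresql/lib/
```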

I've also got a gut feeling that there's a related concept of "Application" for these co-located services. In my head, it feels like there's a subtle but distinct difference between "this package has a mutable location that other packages can write into (in the case of plugins)" and "these packages are related to each other and have a requirement of shared data (that is potentially mutable)"

I'm in agreement that it certainly feels like there's a missing primitive or component in our package descriptions and installations, and I think we should explore it more. However we approach this, though, we'd want to be careful that usage of this shared context has a well-defined contract/API for hab, plan-build, etc. to use, so we can continue to provide reasonable safety guarantees.

st-h commented Apr 19, 2019

Just remembered that I had another use case for such a feature quite a while ago:

I was packaging Let's Encrypt certs into a Habitat package (actually using a GitLab scheduler to issue new certs when they are due). I just wanted to be able to deploy this package to the frontend servers, where the server should take notice of the new package and reload with the new location of the certificate. I really did not want to have to create a new server/application package whenever the certs were updated or expired (as this could lead to other problems; deploying an older version could deploy an expired cert, etc.).

In this case I was able to work around the limitations by using Chef to install the latest package and pass the package path down to the server configuration. This was quite OK, as it really does not matter if we have to wait for the next Chef run to deploy a new cert (unlike a new release of an application, where one usually does not want to wait).
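
With the shared namespace proposed above, this workaround might reduce to something like the following sketch (the /hab/var/certs path and file names are assumptions; hooks, unlike default.toml, are rendered as templates, so {{pkg.path}} resolves there):

```bash
# Hypothetical hook of the cert package: publish renewed certs into the
# proposed shared namespace. A web server bound to the "certs" namespace
# would then be reloaded by the Supervisor when the contents change.
mkdir -p /hab/var/certs/example.com
cp "{{pkg.path}}/fullchain.pem" /hab/var/certs/example.com/
cp "{{pkg.path}}/privkey.pem" /hab/var/certs/example.com/
```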

stale bot commented Jun 4, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. We value your input and contribution. Please leave a comment if this issue still affects you.

stale bot added the Stale label Jun 4, 2020
@christophermaier added the Focus:Supervisor label and removed the A-supervisor label Jul 24, 2020
stale bot removed the Stale label Jul 24, 2020
@christophermaier added the Type:Feature label and removed the C-feature label Jul 24, 2020
@christophermaier removed their assignment Jan 11, 2021
stale bot commented Aug 13, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. We value your input and contribution. Please leave a comment if this issue still affects you.

stale bot added the Stale label Aug 13, 2022