The Twelve Factor App (henceforth 12fa) pattern, described and advocated by Adam Wiggins, Founding CTO of Heroku, has been hugely influential in providing a philosophical underpinning for the "git push", Platform-as-a-Service deployment pattern that Heroku spearheaded.
Resin.io applies the "git push" pattern to embedded Linux devices like the BeagleBone and the Raspberry Pi. Over the last 9 months we've been working with developers all over the world, getting their apps working on Resin and improving our alpha release. One thing that has become clear is that there is still a need for a mental framework, a set of best practices: essentially what the Twelve Factor App manifesto offers to Heroku users.
12 factor apps, IoT style
An embedded Linux app differs from a cloud app in several ways:
- Each individual device matters. Whereas a cloud can treat each server as a disposable cog, failure of a device in the embedded world is a serious event that needs attention.
- The device's characteristics also matter. A device comes loaded with sensors, interfaces, connectivity peripherals, and provisioning processes that are deeply app-specific and hard to reproduce on a developer's machine. It is also quite often based on a different CPU architecture from the developer's machine.
- A device can also be offline or have spotty connectivity, and the end user often expects it to work just as well, or at least gracefully degrade.
Considering the above, we need to re-examine the 12 factors in light of the specific needs of an embedded application.
1. Codebase

The main thrust of this principle remains solid in the embedded world. As a result of the differences listed above, each device is best thought of as hosting a deploy of its own, and each app deploy is responsible for the continued functioning of that device. This means that each app instance is tied to its device, as opposed to the ephemeral relationships that exist in the cloud. These differences require us to reinterpret the 12 factors if we are to carry them across to the embedded world. With this caveat, the 1st factor carries over very well.
2. Dependencies

On its face, this principle should fly straight through. Of course dependencies like npm packages are isolated. But what about operating system packages? An embedded 12fa should have the packages themselves declared fairly strictly and, if possible, pin down the userspace itself.
Going deeper, however, embedded applications often depend on kernel modules, usually to manage some fairly obscure piece of hardware attached to the device. This means that in order to defend full isolation of dependencies, we should allow the user to define which kernel modules to load via configuration stored with their project.
However, kernel modules need to be compiled against the exact kernel source that runs on the device. This is not a sustainable requirement if we're to keep the device running, so something like DKMS may indicate the way forward.
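To make the idea concrete, here is a minimal sketch of the "kernel modules as declared dependencies" approach. It assumes a hypothetical plain-text file shipped with the project that lists the modules to load; the file format and the modprobe invocation are illustrative, not any platform's actual contract.

```python
# Hypothetical sketch: the project ships a plain-text file listing the
# kernel modules it needs, and the platform loads each one at boot.
# The file format and the modprobe call are assumptions for illustration.
def modprobe_commands(module_list_text):
    """Turn the contents of a module list file into modprobe invocations."""
    modules = [
        line.strip()
        for line in module_list_text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
    return [["modprobe", name] for name in modules]

# Example: a project that drives 1-Wire temperature sensors.
commands = modprobe_commands("# modules for this app\nw1-gpio\nw1-therm\n")
```

A supervisor could then pass each command to `subprocess.run` at boot, keeping the module list in version control alongside the rest of the app's dependencies.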
3. Config

Not only is this third factor meaningful in the embedded world, it's arguably even more meaningful than in the cloud. An embedded 12fa should not only allow users to define environment variables for configuration purposes, but go further and allow those variables to vary per device. For instance, a developer may want to let each device know its location or its unique ID, or to configure its runtime behaviour in some sense (think feature flags). This is once more a result of the fact that each device can be thought of as a separate deploy in 12-factor terms.
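A minimal sketch of per-device configuration read from the environment. The variable names (`DEVICE_ID`, `DEVICE_LOCATION`, `FEATURE_FAST_POLL`) are made up for illustration, not part of any platform's contract:

```python
import os

# Per-device configuration arrives as environment variables.
# Names here are illustrative; a real app defines its own.
def load_device_config(environ=os.environ):
    return {
        "device_id": environ.get("DEVICE_ID", "unknown"),
        "location": environ.get("DEVICE_LOCATION", "unset"),
        # Feature flags arrive as strings; normalise to booleans.
        "fast_poll": environ.get("FEATURE_FAST_POLL", "0") == "1",
    }

# Example with an explicit dict standing in for the real environment:
config = load_device_config({"DEVICE_ID": "dev-42", "FEATURE_FAST_POLL": "1"})
```

Because the environment is injected per device, the same image can behave differently on each deploy without any code change.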
4. Backing Services
Here, things get very interesting. Once again, there is a completely innocent but also uninteresting reading of the 4th factor: if your database is in the cloud, your device app should access it like any other 12-factor app. However, things get more intriguing if we think about backing services hosted on the device. The page dedicated to this factor reads:
The code for a twelve-factor app makes no distinction between local and third party services.
Here, the cloud bias of the 12fa pattern shows. Whereas in the cloud what is "local" doesn't really matter, on a device anything "remote" can become unavailable at any time, so what is local matters a great deal: it makes the difference between "can be depended on" and "cannot be depended on". If we envision a device as hosting multiple containers, e.g. one container with a database, then we could also have device-local, but third-party, backing services. Once more, however, the substance of the principle survives, adjusted for the context.
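One way to sketch this "remote may be unavailable" reality: buffer writes locally when the remote backing service is down and flush them when it returns. `send_remote` below is a stand-in for a real network call, and the whole class is an illustration rather than a prescribed pattern:

```python
# Sketch of graceful degradation against a remote backing service:
# when it is unreachable, readings queue on the device and flush later.
class BufferedSender:
    def __init__(self, send_remote):
        self.send_remote = send_remote  # stand-in for a real network call
        self.backlog = []

    def send(self, reading):
        try:
            self.send_remote(reading)
        except ConnectionError:
            # Remote is down: fall back to device-local storage.
            self.backlog.append(reading)

    def flush(self):
        pending, self.backlog = self.backlog, []
        for reading in pending:
            self.send(reading)  # still-failing sends re-enter the backlog
```

In a real app the backlog would live on persistent storage rather than in memory, so it survives a restart.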
5. Build, release, run
This factor remains an iron-clad characteristic of the "git push" workflow, and as such carries over cleanly to the embedded world. Because devices can have a different architecture than both the developer's machine and the build machine, 12fa platforms will need to employ some kind of compensation mechanism. Whichever that is, users still need to be aware that the environment the code gets built in may not have access to the same devices or kernel modules that the device has.
6. Processes

The sixth factor shows one of the biggest divergences in the embedded world. Since the device can be offline, and the app is responsible for the continued functioning of the device, some state must exist locally. A cloud-backed database just won't cut it in all cases.
In the embedded world we can cut that particular Gordian knot by bind-mounting a default data volume into each app container. As a result, successive versions of the app can expect that volume to be present and to carry data from previous versions. The embedded app simply doesn't have to deal with the ephemerality of the data centre: the file system can be counted on to exist every single time and not lose state. This still doesn't truly violate the spirit of the factor, especially if we consider the data volume a "default backing service" accessed through the filesystem interface. This definitely bends, but in our view does not break, the sixth factor.
7. Port Binding
Here embedded apps work exactly as expected, but once more with a twist. Apps are indeed expected to be self-contained, so we don't inject any components. A web server hosted on a device can listen on its port of choice, and the platform will pass requests on that port to the app. Since, however, devices matter, embedded 12fa platforms can go a step further and optionally offer a unique URL for each device. In this way we can allow the user to access the device not only locally but also from the web, putting it in reach of the tablets and mobile phones that are so important for interfacing with users in the Internet of Things.
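The self-contained web server described above can be sketched with nothing but the standard library. Binding the port from a `PORT` environment variable is an assumption borrowed from common PaaS convention, not a requirement of any particular platform:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# A self-contained web server that exports HTTP by binding a port,
# as the port-binding factor prescribes.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from this device\n")

    def log_message(self, *args):
        pass  # keep request noise out of stderr for this sketch

def make_server(port=None):
    # PORT from the environment, with an assumed default of 80.
    if port is None:
        port = int(os.environ.get("PORT", 80))
    return HTTPServer(("0.0.0.0", port), Hello)
```

A real app would call `make_server().serve_forever()`; the platform then routes requests (and, optionally, a per-device public URL) to that port.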
8. Concurrency

The eighth factor is the hardest to conceptualise and implement in the embedded world, or at least we haven't seen it done yet. Scaling doesn't exist in the embedded world the way it does in the cloud; the only real analogue is the one-deploy-per-device approach. In that sense, embedded apps scale by adding deploys, which carry their own hardware with them, rather than by adding processes. If scaling is needed to better take advantage of the local resources offered by the device, this is left solely to the app.
It is fun to think about a platform that automatically scales an IoT installation by automatically provisioning new devices, or even decommissioning them when they're no longer needed. Depending on the application, this could be easy or impossible, but nonetheless may point to the next step of evolution, where production/deployment/decommissioning of devices is simply another automated process.
9. Disposability

Disposability is just as valuable in the embedded world, and for all the same reasons except scaling. But disposability is important for another reason: where a user is involved, they may expect fast startup, but they may also power the device off at unexpected times. Apps should be built with this eventuality in mind. Even if the user doesn't shut the device down, power can't be taken for granted the way it can in a data centre.
It is important to note here that while apps should expect to be shut down at any time, the platform itself may allow the app to veto an update or termination if the time is not opportune. For instance, updating or reconfiguring a drone app mid-flight is probably not advisable. This veto should of course be overridable if the developer demands it, e.g. in case bad code has been pushed that prevents future updates.
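The shutdown-with-veto idea can be sketched as a SIGTERM handler that defers exit while a critical section (the drone's flight, say) is in progress. The `busy` flag and the deferred-exit logic are illustrative, not any platform's API:

```python
import signal
import sys

# Sketch: the app defers termination while critical work is active,
# then exits as soon as that work completes.
class Lifecycle:
    def __init__(self):
        self.busy = False            # e.g. mid-flight
        self.stop_requested = False

    def handle_term(self, signum=None, frame=None):
        if self.busy:
            self.stop_requested = True  # veto for now, honour it later
        else:
            sys.exit(0)

    def finish_critical_work(self):
        self.busy = False
        if self.stop_requested:
            sys.exit(0)  # honour the deferred shutdown request

if __name__ == "__main__":
    lifecycle = Lifecycle()
    signal.signal(signal.SIGTERM, lifecycle.handle_term)
```

The platform-side override the text mentions would simply escalate to SIGKILL after a grace period, so the veto can never wedge a device permanently.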
10. Dev/prod Parity
This principle poses a real problem for embedded development. Devices don't naturally lend themselves to parity with a developer's machine. The ideal solution here is allowing a developer to work on a device during development in the same way a web developer works with a browser. Files the developer works with should be instantly available to the execution context, and the developer should have the equivalent of a "refresh" button. Realising this pattern should bring us close to development nirvana: feedback cycles measured in milliseconds as opposed to today's situation where embedded development feedback cycles can be measured in hours or even days.
11. Logs

Here, embedded apps return to 12fa orthodoxy. The app does not need to concern itself with managing the log stream, which is collected on the device and streamed to the platform, from where it is made available to the user.
12. Admin processes
How this factor applies to embedded devices isn't quite clear. Once more, the fact that each device is effectively a deploy of its own makes things harder, so the applicable pattern changes depending on the specific admin process. For one, an embedded 12fa platform can allow users to spawn a terminal inside their running app. Using it, users can run one-off experiments or scripts, or invoke the REPL of the language their app is written in.
However, when things like migrations come up, given that each device may have local services and/or special state on the data volume, these migrations need to run on each and every device. As a result, quite possibly the best way to handle them is for the app to evaluate the existing state and attempt to migrate it to the desired structure at the start of each run. An app structured this way will keep functioning even if a device gets updated after missing several intermediate versions (e.g. due to being offline), and the migration of each and every device is guaranteed as and when the device updates.
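The run-on-every-start migration pattern can be sketched against a device-local SQLite database: the app records which schema version it has reached and applies any missing steps in order, so a device that skipped several releases still converges. The table and migration steps are illustrative:

```python
import sqlite3

# Ordered, append-only migration steps; names are illustrative.
MIGRATIONS = [
    "CREATE TABLE readings (ts REAL, value REAL)",
    "ALTER TABLE readings ADD COLUMN sensor TEXT",
]

def migrate(conn):
    """Bring the local database up to the latest schema. Safe to run
    at every app start; already-applied steps are skipped."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = conn.execute("SELECT v FROM schema_version").fetchone()
    current = row[0] if row else 0
    for v, step in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(step)
        conn.execute("DELETE FROM schema_version")
        conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    conn.commit()
    return current, len(MIGRATIONS)
```

Because the function is idempotent, it costs nothing on devices that are already up to date, and a long-offline device simply replays whatever steps it missed.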
A possible additional improvement is to give the user the ability to run a command on multiple devices (perhaps those that are online at the time) and present the results to the user in some sort of aggregation format.
It is fascinating how many insights have come up while trying to implement a "git push"-driven workflow for embedded devices. It is also gratifying to be able to document them as comments on the original 12 factor app manifesto, putting all the variations in a unified context. We hope this document can act as a succinct description of the mental model behind resin.io and why certain choices were made, allowing our users and partners to understand not only what we are building, but also how and why.
Any questions? Or would you just like to say hi? Come find us on our community chat.