You know, truth be told, there’s more than a couple of IoT geeks on our team here at Monterail. In late 2015, we set up a Raspberry Pi-based smart office system at our Wrocław offices, a process that one of my co-workers described in detail in a separate blog post. Since nearly three years have passed since the installation of the system, and given that we’ve grown a lot in that time, we decided it was time to introduce a couple of upgrades and make our office even smarter.
Although the system implemented back in 2015 has been working quite well up to now, we’ve nevertheless been introducing small revisions and modifications over time to make it more user-friendly and smoother to operate. One particularly critical change was reducing the number of write operations to zero, because the lifespan of an SD card depends mainly on how many writes are performed on it. The system now runs entirely in RAM on each Raspberry Pi, while the SD card works more or less like a Live Boot CD.
A small recap—our smart office suite comprises the following features:
- lighting control,
- conference and call room occupancy signalling,
- handling access codes for the main doors,
- audio system with a wireless music streaming option,
- separate audio experience in the restroom,
- managing the content displayed on five separate TV screens,
- kitchen LED lamp color management.
And on top of that, the entire suite can be controlled via a dedicated Web app.
Before we dove into implementing new features, we decided to first document what we already have. And let me tell you, it wasn’t as easy as it may have seemed at first glance—mostly because the smart office team has changed in the meantime, and the code structure we were left with wasn’t all that clear to all of us.
The suite is hosted on seven machines (Raspberry Pis) sitting in our office. The suite’s feature set is naturally divided into a couple of modules (e.g. music control in the restroom, TV display control, lighting control, etc.), with some machines hosting more than one module. In the original implementation, each machine in this small distributed system was running one main process that combined multiple modules. The arrangement, however, produced some real problems: we couldn’t switch off or restart individual modules without disabling others, and it was hard to test new features or debug old ones. Sometimes, I even found it difficult to identify which machine did what exactly.
As I noted before, each Raspberry Pi runs its operating system from a live image, so we needed to bake the production code into that image. That caused us yet another problem. While development and testing were safe (each reboot restores the original code), deployment was extremely slow, which was quite annoying, at least in my opinion.
So what did we decide to do about it?
Smart Office 2.0
We decided to abandon Python and pick a framework that was more familiar and a better fit for us. In the end, we settled on rewriting the existing features in Node.js.
We didn’t want to give up the live systems, but we decided to improve deployment a bit. So we set aside a persistent fragment of memory (a cache) on each machine to keep our smart office repository there, along with the latest build and the installed dependencies.
The update script works as follows: if the master branch has changed, a new build is created with npm run build. If, in addition, the dependencies have changed, npm install is launched first. The entire installation takes place in RAM, and once it is done, the script synchronizes the new version of the application with the persistent cache.
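The decision logic of that update flow can be sketched in a few lines. This is a hypothetical illustration, not the production script: the function name, the step labels, and the assumption that dependency changes show up in package.json or package-lock.json are all ours.

```javascript
// Hypothetical sketch of the update script's decision logic:
// rebuild on any change to master, reinstall dependencies only
// when the package manifest itself changed.
function planUpdate(changedFiles) {
  const steps = [];
  if (changedFiles.length === 0) return steps; // master unchanged: no-op

  // Dependencies changed only if a package manifest is among the diffs
  const depsChanged = changedFiles.some(
    (f) => f === 'package.json' || f === 'package-lock.json'
  );
  if (depsChanged) steps.push('npm install');  // reinstall first
  steps.push('npm run build');                 // build happens in RAM
  steps.push('sync build to persistent cache'); // persist the result
  return steps;
}

console.log(planUpdate(['package.json', 'src/index.js']));
```

Keeping the "what changed" check separate from the commands it triggers makes the script cheap to test without touching npm at all.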
Modularity and module flow
To provide modularity, encapsulation, and a clear division of tasks, each feature group is now enclosed in its own module, and each module runs as a separate process. It’s worth pointing out that multiple modules can run on one machine, but the new system is designed to treat every module the same way.
A distributed connected system
Each module implements its own set of functionalities, but the main purpose of the system is to serve these functionalities to the user. To achieve this, the user can run a Web app that serves as the interface (an HTTP API) for the main controller module. The controller is a special module that maintains a WebSocket connection with all the remaining modules. This allows the controller (the invoker) to dispatch commands registered by the modules (the executors). The controller is also responsible for tracking the online status of each module.
To limit the number of writes to each machine’s storage, every module keeps its state in a cloud database. This not only allows a module to restore its state after a crash, but also gives the Web app the ability to update the state in real time.
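The crash-safe state idea can be sketched as a small wrapper around an external store. The interface shape is an assumption, and the in-memory store below is only a stand-in for the real cloud database:

```javascript
// Sketch: module state lives in a remote store, never on the SD card,
// so a module can reload its last known state after a crash or reboot.
class ModuleState {
  constructor(name, store) {
    this.name = name;
    this.store = store;
  }
  async save(state) {
    // No local writes: state goes straight to the remote store
    await this.store.set(`state:${this.name}`, JSON.stringify(state));
  }
  async restore() {
    const raw = await this.store.get(`state:${this.name}`);
    return raw === undefined ? null : JSON.parse(raw);
  }
}

// In-memory stand-in for the cloud database (illustrative only)
const memoryStore = {
  data: new Map(),
  async set(key, value) { this.data.set(key, value); },
  async get(key) { return this.data.get(key); },
};

const lighting = new ModuleState('kitchen-led', memoryStore);
```

Because the Web app reads from the same store, any state a module saves can be reflected in the interface without asking the module directly.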
Controller and Modules
Each module in this distributed system (except the controller) is called an executor. Executors maintain a connection with one special module, the invoker, at all times. They register commands that can be invoked by the controller module, the only invoker in the system. It’s nothing more than the command pattern adapted for a distributed system.
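The invoker/executor relationship can be sketched in memory. In the real system these messages travel over the WebSocket connection; the class and method names below are illustrative, not the production API.

```javascript
// In-memory sketch of the command pattern described above: executors
// register commands with the single invoker, which dispatches them.
class Invoker {
  constructor() {
    this.executors = new Map(); // module name -> its registered commands
  }
  register(moduleName, commands) {
    // An executor announces its commands when it connects
    this.executors.set(moduleName, commands);
  }
  disconnect(moduleName) {
    this.executors.delete(moduleName);
  }
  isOnline(moduleName) {
    // Online status falls out of the registry for free
    return this.executors.has(moduleName);
  }
  dispatch(moduleName, command, ...args) {
    const commands = this.executors.get(moduleName);
    if (!commands || typeof commands[command] !== 'function') {
      throw new Error(`unknown command: ${moduleName}.${command}`);
    }
    return commands[command](...args);
  }
}

// An executor registers what it can do; the controller dispatches to it
const controller = new Invoker();
controller.register('lighting', {
  setColor: (color) => `kitchen LED set to ${color}`,
});
```

The controller never needs to know what a command does, only which module registered it, which is what lets new modules join the system without touching the controller’s code.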
The whole effort took us around three weeks. In the course of those three weeks, we also managed to hammer out a unique programming style. Even though we wanted to code with our favorite text editors and tools, the reality was that we could run and test particular modules only on the Raspberry Pi computers. So we decided to code on our own laptops and configured automatic code synchronization over SSH, with the module restarted automatically whenever new code landed on the target machine. Thanks to that, it felt just like running the code locally on a laptop.
As a result, we now have a fully controllable, easy-to-monitor, expandable smart office system. We are now able to review system logs remotely using the Web application or locally, on each machine. Replacing one of the Raspberry Pis after system or hardware failure is no longer painful and no longer requires long and arduous configuration. The system’s modularity allows us to shift responsibility for a given function onto a different machine or to add new features as new modules. Plus, adding a new machine to the system requires simply installing the right modules.