April 30, 2020
"I have an app in Ruby on Rails dating back to 2011 that hasn't gotten any new features in the past five years. It's slow and barely able to serve our growing user base. Can you help us deal with it?"
This is one of the more common scenarios I hear from clients at Monterail. Legacy code that’s hard to maintain and that has security vulnerabilities can be a nightmare both for businesses that have to use it and for developers (like us) who have to deal with it. In my decade or so as a software engineer, I’ve had the opportunity to observe successful—and unsuccessful—technology shifts undertaken by developers seeking to update legacy code in web apps. This could mean, for instance, changing from version two to version six of a framework, or from Ruby to Python, or from a monolith app to a microservices architecture, or from manual builds to continuous delivery. To pull off a painless (or, at least, far less painful) update, you have to determine if change is really warranted, decide on the approach most appropriate for you, and commit for the long term.
This article was originally published in Increment magazine.
Poor performance is one reason to make a change in technology. The gradual—or sudden—drop-off in popularity of the technology you use is another. After all, if fewer developers on the market can support your work, then you risk your technology becoming hermetic. Someone who built their app with Backbone in 2010 is today struggling to solve model-view-controller problems, while everyone else is working with component-based frameworks like React or Vue. The stakes are even higher if your framework of choice is losing active support. Remember AngularJS? In July 2018, it entered long-term support, meaning that Google is no longer merging new features or fixes that would require even a minor breaking change.
And when your technology is too inefficient and too expensive in terms of human or machine costs, it’s not only a good reason to initiate change—it’s probably your last chance to fix the technology before it becomes irreparably broken. You don’t ever want to reach a point where creating a new feature is totally infeasible.
Struggling to scale quickly with their monolithic PHP app, the European e-commerce company Zalando wasn’t able to deliver new features in a fast or efficient way. This pushed them to switch from a monolith to microservices in 2016, enabling separate teams to deliver features with much greater speed.
Once you’re confident that the product you’re working on needs an upgrade, it’s time to make an informed decision about the direction of the change. Code becomes legacy as soon as it’s written, and there’s no guarantee that your current technology won’t lose support or become obsolete. (Rest in peace, AngularJS.) Change should therefore support future flexibility. Here are some options:
The first and most obvious option is a Big Bang rewrite: rebuilding the codebase from scratch and cutting over all users in a single conversion. But total rewrites are extremely time-consuming and bound to incur considerable cost. You may also end up with an app that's unfit for release for months or even years, and you won't see the final result until the process is finished. Plus, the larger an application is, the harder it may be for developers to provide maintenance and add new features while the rewrite is in progress. With documentation and know-how, adding new features to a large codebase can be a walk in the park; without them, it's really hard.
A second option is to add new features built with new technology to the existing codebase. Ideally, you shouldn’t touch the old technology and should instead keep all new features separate. Unfortunately, such a pristine outcome is relatively rare: new features almost always need to be integrated with old ones. This also requires a detailed plan, as the matter is complicated to get right. Creating new components within a complex architecture requires loads of information about what is and isn’t working in the legacy app. You need a wide perspective to see the technology from both old and new angles.
With a monolithic application, you can create new features in a new codebase that is deployed separately, while using a single database that interacts with both new and legacy code for storing data. This solution appears easy, but its long-term success depends on your commitment to your monolith. (Something to consider especially if your monolith suffers from machine performance or concurrency issues.) If, for example, your monolithic app starts gaining more users, a single database can become a bottleneck. (Databases in the cloud, on the other hand, can be scaled up.)
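To make the shared-database arrangement concrete, here is a minimal Ruby sketch. All names are hypothetical, and a `PStore` file stands in for the single production database (in reality, one Postgres or MySQL instance that both deployments connect to): the legacy monolith writes a record, and the separately deployed new codebase reads the very same record, with no API between them.

```ruby
require "pstore"
require "tmpdir"

# A PStore file stands in for the single shared database.
DB_PATH = File.join(Dir.tmpdir, "shared_app_db.pstore")

# Hypothetical legacy-side writer: the old monolith persists users here.
class LegacyUserRepo
  def initialize(path = DB_PATH)
    @store = PStore.new(path)
  end

  def create(id, name)
    @store.transaction { @store[id] = { name: name, source: "legacy" } }
  end
end

# Hypothetical new-side reader: the separately deployed codebase reads the
# same records, so both apps stay consistent without talking to each other.
class NewUserService
  def initialize(path = DB_PATH)
    @store = PStore.new(path)
  end

  def find(id)
    @store.transaction(true) { @store[id] }
  end
end

LegacyUserRepo.new.create(1, "Ada")
NewUserService.new.find(1) # both deployments see the same record
```

The convenience is also the bottleneck: every write from either app contends on the same store, which is exactly why a growing user base can outpace a single database.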
Because of the long-term fragility of the original codebase, especially with regard to potential security vulnerabilities or bugs, this architecture isn’t built to last. In practice, leaving outdated code as-is means you’re waiting for it to eventually fail for good.
Rewriting your entire codebase is a drastic and often ill-considered idea. Tacking new features onto an old codebase is more workable, but can bring with it serious side effects, such as security issues if your legacy code is based on an older framework version. So, is there anything else you can do that isn't so costly or so risky? What are your other choices?
I recommend a hybrid approach. This option, like the Big Bang rewrite, requires changing out the whole of your old code. But unlike the first option, the rewrite should be spread out over a period of time—say a couple of years—to minimize technical debt and financial cost. This incremental approach hinges on a clear vision. You need to know what to change first: some features are core and business-critical, while others play more of a supporting role. Working on multiple layers will be easier with a clear goal in mind. For instance, are you still planning to ship a new feature in the midst of this changeover? If so, you need to account for that in your planning. Without a clear roadmap, your code will end up really messy, which will make it harder for developers to decipher down the line.
For me, this approach is all about the future and scalability—and it’s most easily accomplished using microservices. Assume that your legacy codebase is an element in a new microservice ecosystem. Naturally, it’s too big and complex to be a true microservice, but it can communicate with the new features just like a microservice does. To handle this arrangement, you’ll need to create interfaces like APIs or bridges to allow your legacy code to communicate with your new technology.
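One way such a bridge can look in Ruby (a sketch with hypothetical names, not a prescription): a thin adapter exposes legacy business logic behind a stable, JSON-shaped contract—the same shape an HTTP API in front of the monolith would return—so new microservices depend on the contract, never on the legacy internals.

```ruby
require "json"

# Hypothetical legacy module: business logic buried in the monolith.
module LegacyBilling
  def self.invoice_total(cents_per_item, quantity)
    cents_per_item * quantity
  end
end

# The bridge wraps that logic behind a stable contract. When the billing
# microservice eventually replaces LegacyBilling, only this class changes;
# its callers keep working against the same JSON shape.
class BillingBridge
  def invoice(cents_per_item:, quantity:)
    total = LegacyBilling.invoice_total(cents_per_item, quantity)
    JSON.generate({ "total_cents" => total, "currency" => "EUR" })
  end
end

BillingBridge.new.invoice(cents_per_item: 250, quantity: 4)
# => '{"total_cents":1000,"currency":"EUR"}'
```

In production the bridge would typically sit behind an HTTP endpoint or a message queue; the point is the stable seam, not the transport.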
As you add new features in the form of separate microservices, they’ll eat the business logic of your legacy code piece by piece. While you could achieve something similar by adding new features to a legacy monolith app, you risk creating technical debt and losing flexibility.
Almost every solution has side effects, and this approach is no exception; the goal is to minimize them. For the frontend of a web app, a reverse proxy can smooth over changes. Using this method, you can even replace the logic serving a single URL in your web app without touching legacy software. But this technique has its own requirements. For example, if users are logged in, you have to maintain state between pages. We always need to store state, but here it must be moved or shared between two apps. That's harder to maintain, but in exchange you get a more elastic infrastructure.
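Sharing login state between the two apps behind the proxy can be sketched like this. In production the store would be Redis or a sessions table; an in-memory hash stands in here, and all names are hypothetical.

```ruby
# Session state shared between the legacy app and a new app that sit
# behind the same reverse proxy. Both look sessions up by the cookie's
# session id, so a user logged in on one app is logged in on both.
class SharedSessionStore
  def initialize
    @sessions = {}
  end

  def write(session_id, key, value)
    (@sessions[session_id] ||= {})[key] = value
  end

  def read(session_id, key)
    @sessions.dig(session_id, key)
  end
end

store = SharedSessionStore.new

# The legacy app logs the user in under the cookie's session id...
store.write("abc123", :user_id, 42)

# ...and the new app, serving a different URL, sees the same user.
store.read("abc123", :user_id) # => 42
```

The cost the text mentions is visible even in the sketch: both apps must agree on the session id, the key names, and the serialization format, and that contract has to be kept in sync for as long as both apps exist.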
More complex changes require improvements in the infrastructure, such as creating a layer of frontend servers that can render fragments of your application from different sources, like the microservices example above. The XML-based markup language ESI (Edge Side Includes) might be appropriate for this task, while Varnish or Nginx can implement it. Then, to ensure your app remains performant as your user base grows, create load balancers and separate databases based on context, which can be used separately in micro or macroservices.
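A page assembled this way might look like the fragment below, with Varnish (or an ESI-capable Nginx setup) resolving each include from a different upstream. The URLs are purely illustrative:

```xml
<html>
  <body>
    <!-- fragment rendered by the legacy monolith -->
    <esi:include src="/legacy/header" />

    <!-- fragment rendered by a new microservice -->
    <esi:include src="/services/recommendations/widget" />
  </body>
</html>
```

The browser receives one assembled page, so a fragment can move from the monolith to a microservice just by changing its `src`, without the frontend noticing.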
Creating a flexible environment capable of supporting this kind of arrangement can also be a challenge. You’ll only need to invest in your infrastructure once when you move to microservices, but you will need to support the maintenance of this architecture further down the line. Still, it will likely remain much cheaper than rewriting all your code.
If your primary objective is to create an easy-to-maintain ecosystem (rather than focusing on performance first and maintenance second), you’ll also need to identify the key elements of your system in the development process and create a roadmap for making changes to or around them. Introduce some continuous integration and deployment magic into the process—where CI and CD processes can take place automatically, without the help of QAs or developers—and you’ll wind up with a mature piece of software with a clear architecture that’s easy to modify and adjust.
Of course, this hybrid approach isn't the only viable option available or in use in the world. But incremental change to your codebase, culminating in a complete rewrite while leaving you with working code at every step so your business stays safe, combined with microservices that enable different teams to deliver features independently, offers both a process and an architecture designed for longevity.
You might look at my preferred solution and think, “Well, that’s a little overengineered,” or, “I don’t have a team in place for this kind of job,” or, “It’s too complex for my platform,” or even, “This isn’t pure microservices architecture!” I don’t disagree with any of these statements, but I do think that upgrading your technology should force you to think about the long term.
What I offer isn’t a quick fix. Instead, the hybrid approach provides you with new technologies on top of a working base. Incremental change via gradual shifting to microservices allows you to easily update your application and take advantage of up-and-coming frameworks, all without forcing you to compromise on reliability. So, are you ready to rearchitect your software?