Legacy Apps on Kubernetes
Why use Kubernetes to modernize Legacy Apps?
Although legacy applications and on-premises platforms are the foundation of many companies, they are difficult to update and expensive to scale. Businesses inevitably need to move toward cloud-based technologies to create modern experiences for customers and partners. More importantly, such technologies provide the flexibility and cost efficiency businesses need to grow and expand.
Containers have been widely adopted and supported by developers in cloud computing because they are easy to package and ship. Containers are lightweight, portable, and can run nearly anywhere. Kubernetes is a powerful workload and service management tool that, used in conjunction with containers, can modernize legacy apps into a cloud-based form that is more serviceable and manageable.
To fully understand the advantages of converting to Kubernetes, let's take a look at Great Pizza LLC, a growing pizza company that started with a single location. In the beginning they just needed to solve technical problems, like their point-of-sale system, in an affordable way. Now they are expanding into additional locations and running into issues scaling.
Looking at the following diagram, you can see they have a point-of-sale server. As it stands, they need a server at each location they want to open, and each register connects to that location's local server, which houses all of their system requirements.
As you can see, the server controls everything: the database, backups, logs, security, Active Directory, performance details, notifications, and the application itself.
So first, let's take a look at the challenges of the current system and discuss which of them something like Kubernetes can solve.
Storage
Great Pizza LLC has been running for three years, and their logs and database have continued to grow. One day a manager logs into the server and sees an alert on the screen that storage is running out.
Your first thought might be to install a larger hard drive, but that requires backing up the data on the existing drive and restoring it onto the new one. This means downtime, and the work cannot be done after hours because Great Pizza LLC just started running 24 hours a day. The owner is understandably not satisfied with this solution.
Kubernetes is a solution the owner will be happy with. Let's discuss how Kubernetes would prevent this from being an issue.
- The overall architecture of Kubernetes encourages independent services. This decouples the many individual components that caused storage to fill much faster than the owner anticipated.
- Changing storage on the fly becomes very easy with Kubernetes, often requiring only a single value to be updated.
- Session management data (which older apps often keep on local disk) can be moved out of the local server entirely.
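As a sketch of how small that storage change can be, consider the PersistentVolumeClaim below. The claim name `pos-data` and the sizes are hypothetical, and online expansion assumes the underlying StorageClass sets `allowVolumeExpansion: true`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pos-data          # hypothetical claim backing the POS database
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi      # was 50Gi: raising this one value requests a volume expansion
```

Re-applying the edited manifest with kubectl apply grows the volume in place, with no backup-and-restore cycle and, for most CSI drivers, no downtime.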
Logging
In a legacy application there is no guarantee of good or sufficient logging throughout the application.
One day the registers start reporting an error connecting to the server, so one of the employees calls the company that set up the system. The company tells them they need access to the server to check the logs.
When something doesn't look right or things have gone wrong in a system, logging can be crucial for troubleshooting. It plays a critical role in diagnostics and debugging.
Kubernetes writes container logs under each node's /var/log directory by default. Logs can be viewed live on the server, fetched, parsed, and shipped to a centralized location by a logging agent, or collected by a sidecar container.
Basic log viewing
- Write output to the standard output (stdout) and standard error (stderr) streams:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod5
  spec:
    containers:
    - name: pod5
      image: busybox
      args: [/bin/sh, -c, 'while true; do echo "$(date)"; sleep 1; done']

Run the pod: kubectl apply -f pod5.yml
Fetch live logs for a single pod: kubectl logs pod5
Live-tail logs across many pods: kubetail my-app -s 15m
There might be times when a container crashed while everyone was gone for the day. The live logs mentioned earlier would not help; we have to look at historical logs to figure out what went wrong, so we preserve them independently of a container's lifecycle and ship them to one or more destinations.

By using a logging agent like Fluentd, we can collect, parse, and ship logs to backends such as Stackdriver Logging and Elasticsearch. There are also third-party providers like Elastic, Splunk, and Sumo Logic, some of which are free at low usage levels.

Running a sidecar container is another way to collect logs. With this option each pod runs an extra container, so it may require more resources. However, log storage is more flexible with a sidecar and can be configured separately for each pod, which may be a better fit for businesses with larger Kubernetes clusters.
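A minimal sketch of the sidecar pattern is below. The pod name, log path, and app container are hypothetical, the Fluentd image tag is illustrative, and a real deployment would also mount a Fluentd configuration telling the sidecar where to ship the logs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pos-app
spec:
  containers:
  - name: app                   # the legacy app, writing logs to a file
    image: busybox
    args: [/bin/sh, -c, 'while true; do date >> /var/log/app/app.log; sleep 1; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-shipper           # sidecar that tails and ships the same files
    image: fluent/fluentd:v1.16-1
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs              # emptyDir shared by both containers
    emptyDir: {}
```

Because the sidecar is declared per pod, each application can get its own log pipeline without touching the others.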
Encryption of Traffic
When the POS system was being built, the owner wanted to move quickly and affordably, and protecting data is less of a priority when there isn't much data to worry about yet. With the current architecture it's a common misconception that because the server is local and the traffic never touches the internet, it doesn't need to be encrypted. With proper network isolation this can be mostly true, but skilled attackers are still a concern, and once they get onto an unencrypted network they can see everything.
The Kubernetes solution, explained simply, is to encrypt everything! That probably sounds complicated, but existing tools make it much easier: you essentially get a reverse proxy at every connection making sure traffic is encrypted.
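For example, assuming a service mesh such as Istio is installed in the cluster, a single PeerAuthentication resource can require mutual TLS for all workload-to-workload traffic:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying in Istio's root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT            # reject any plaintext traffic between workloads
```

The mesh's sidecar proxies then handle certificate issuance and rotation automatically, so the legacy application's code never has to know encryption is happening.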
Scaling
Scaling is one of the largest challenges with the current legacy application. If a single location begins to overtax its local server, it is probably just a matter of adding resources to that server. But Great Pizza LLC already has multiple locations and wants to expand further, and data isn't currently shared between locations at all. The company's rewards program has started to cause headaches: customers order from different locations, the rewards aren't shared, and a customer who wants to redeem a free pizza and can't is a frustrated customer.
Scaling may be one of the largest challenges in our scenario, but it is also one of the problems Kubernetes is designed to solve.
Kubernetes takes a minimalist approach where you run your applications as lightweight units called containers, which can be scaled up and down with demand. What runs outside of peak business hours can be extremely small, and if those containers are in the cloud you only pay for what is used. In other words, you pay peak prices only during peak business hours instead of keeping capacity available just in case.
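As a sketch, a HorizontalPodAutoscaler is all Kubernetes needs to grow and shrink a service with demand. The Deployment name `pos-api` and the thresholds here are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pos-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pos-api              # hypothetical ordering/POS backend
  minReplicas: 2               # quiet overnight hours
  maxReplicas: 20              # the Friday dinner rush
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Kubernetes continuously compares observed CPU usage to the target and adjusts the replica count, so capacity follows the business day automatically.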
If you think of the single first location of the pizza restaurant it probably won’t make a large difference in cost, but when you have 1000 locations the cost differences will reach the stratosphere.
Monitoring
The legacy application has monitoring; Windows has pretty decent options for monitoring its own resources. The complication comes with scale. How do you make sure that 1,000 locations are all operating with similar performance? Or say you get random calls from your stores reporting that the system isn't working as expected: how do you track down which locations are having issues, and at what times?
In Kubernetes you have a great option to scale your monitoring with your applications by using what is called a sidecar. Think of a motorcycle with a sidecar attached: the container is the motorcycle. The container is still responsible for the core functionality of your application and in no way depends on a sidecar being present, while a sidecar requires the motorcycle or it doesn't serve much purpose. To continue the analogy, if you go on a ride with a group of friends and want consistency, each of you has the option to attach a sidecar or not.
Monitoring tools are similar. You can choose to use the sidecar or not on a per-container basis, which gives you high flexibility. If you mount a monitoring sidecar on every container, you get high-level information, because you have visibility across the entire environment, and you can also dig into specific containers for specific details. In this case the pizza shop owner can monitor all the pizza shops and even be alerted when specific thresholds are met.
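A minimal sketch of the pattern, assuming a Prometheus-style setup: an nginx web container paired with an exporter sidecar that publishes metrics about it. The pod name and image tags are illustrative, and nginx would additionally need its stub_status endpoint enabled for the exporter to scrape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pos-web
  annotations:
    prometheus.io/scrape: "true"   # convention honored by many Prometheus configurations
    prometheus.io/port: "9113"
spec:
  containers:
  - name: web
    image: nginx:1.25
  - name: metrics                  # monitoring sidecar exposing Prometheus metrics
    image: nginx/nginx-prometheus-exporter:1.1
    args: ["--nginx.scrape-uri=http://localhost/stub_status"]
    ports:
    - containerPort: 9113
```

Because every pod carries the same sidecar, the central monitoring system sees each location the same way, which is exactly the consistency problem described above.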
Scheduled Jobs / Tasks
Every week the manager of Great Pizza LLC exports sales transaction data as CSV files from the current software and imports the data into many spreadsheets to generate financial reports evaluating sales performance. This repetitive manual spreadsheet process is time-consuming and error-prone.
The Kubernetes CronJob is a useful tool for creating periodic, recurring tasks. Repetitive work like retrieving data from Great Pizza LLC's database can be scheduled on a regular basis with cron jobs. With the combination of fresh data, batch jobs, and business intelligence tools, financial reports can be generated and presented conveniently and professionally.
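A sketch of such a CronJob follows; the container image and its flags are hypothetical stand-ins for whatever export tooling the business uses:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-sales-export
spec:
  schedule: "0 6 * * 1"              # 06:00 every Monday, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # retry the container if the export fails
          containers:
          - name: export
            image: greatpizza/sales-export:latest   # hypothetical export image
            args: ["--format=csv", "--since=7d"]    # hypothetical flags
```

Once applied, the cluster runs the export on schedule with no manual steps, and the Job history makes failed runs easy to spot.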
Contact us for a free one-hour consultation to discuss opportunities to layer compliance, security, and scalability onto legacy applications, as well as how to leverage modern architectural patterns to incrementally rewrite or replace legacy systems while still delivering incremental value to your business.