While almost every business has at least some workloads in the cloud, many still maintain an on-site data center to support their legacy applications. These legacy data centers struggle to keep up as the business grows, and many organizations are now turning to a provider's cloud to clear that roadblock. The cloud has become a highly trusted environment, even for mission-critical applications, and many business leaders are looking to migrate and modernize the last of their legacy applications so they can finally shutter their on-prem data centers.
How do businesses typically migrate those legacy applications?
Before we jump into the pitfalls to avoid, let’s do a quick level-set on migration methodologies when it comes to legacy applications themselves. We can break these down into three different categories:
‘Lift and shift’
Lifting and shifting involves taking the legacy application “as is” and physically moving it to the cloud. Since a lot of legacy applications aren’t built for the cloud, this methodology should be deployed with extreme caution. However, if you have an application that can’t be rearchitected and you aren’t ready to replace it, lift and shift may allow you to get out of your legacy data center sooner rather than later. (An important consideration if you have hardware contracts that are set to expire.) We’ll touch on this again when we get into the pitfalls to avoid.
Refactoring
Refactoring means rearchitecting the legacy application to make it more suitable for the cloud and able to take advantage of the functionality of the platform. Refactoring is not a binary option. There are levels of refactoring that can be done to an application, and the amount of effort you need to put into it will be dictated by the application, the destination platform, and your organizational requirements, e.g., security and performance. Refactoring also assumes that you have some control over the architecture of the application. For many older packaged applications, refactoring may not be an option.
Replacing your application
Replacing your legacy application with one that is architected for the cloud is generally the “safest” option, and it makes the migration easier. This might be a newer version of the same application, or it could be an entirely different application altogether.
Unfortunately, it’s not always easy on the bank account. Replacement can also delay the closure of your legacy data center while you do your due diligence on replacement options. You should also be aware that although many legacy business application vendors offer a cloud-architected version of their solution under the same (or similar) name, the functionality of these applications may not be equivalent to your on-prem version.
What can go wrong when migrating those legacy applications?
You’ve probably heard the old saying, “Hope for the best. Plan for the worst.” When it comes to legacy data center migrations, hope is not a strategy. Proper planning is the only way to prepare yourself for all the issues, large and small, that can crop up. Here are nine pitfalls you’ll want to avoid when migrating legacy applications out of your legacy data center.
1. Choosing the wrong cloud
This is especially an issue when you’re using the lift and shift approach. If you’ve got a legacy application that was built in the mainframe days, you probably aren’t going to want to move it to a hyperscale cloud like AWS or Azure. Your best bet is most likely a private, hosted cloud that most closely resembles the application’s native environment and allows you to maintain the greatest control over the environment and the application.
2. Migrating when you should refactor or replace
Lift and shift often looks like the path of least resistance compared to refactoring or replacing. Nevertheless, it still takes effort, and you may not be taking full advantage of the cloud. If your legacy applications are a legacy in every sense, it may be time to at least consider an upgrade or replacement. I hate to see organizations devote time and effort to migrating an application that’s an albatross around their neck simply because “it still works.”
3. Forgetting to review application licensing
While you’re focusing on the technical aspects of the migration, it’s easy to forget about your application licensing. You’ll want to review the fine print to ensure there are no issues with the way you’ll be accessing your application in the cloud. If you’re thinking of upgrading your legacy application to a cloud-native version, you’ll also want to review the way the application is priced. Newer, cloud-native applications often have a very different pricing structure than their on-prem predecessors.
4. Not sunsetting applications
Every organization has applications they’ve invested in that have become obsolete or are rarely used. Culling those applications can simplify your migration process. Start by doing an inventory of your applications and evaluating their value to the organization. Of course, you’ll want to get your business users involved in this process, so you don’t inadvertently sunset any applications, no matter how old, that are still vital to the business.
5. Forgetting interdependencies
Mission-critical business applications often allow for easy integration with point solutions and custom development, but that results in workload interdependencies spread across servers. If you migrate one without the other, you can break those connections and end up with extended downtime. We use a tool to evaluate interdependencies so we can group workloads and servers into migration phases.
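The phase-grouping idea above can be sketched in a few lines. This is a minimal illustration, not the tooling we actually use: the workload names and dependency map below are hypothetical, and the logic simply batches workloads so that nothing migrates before the things it depends on.

```python
def plan_migration_phases(dependencies):
    """Group workloads into migration phases so each workload moves
    no earlier than everything it depends on.

    dependencies: dict mapping workload -> set of workloads it depends on.
    Returns a list of phases, each a sorted list of workload names.
    """
    remaining = {w: set(deps) for w, deps in dependencies.items()}
    migrated = set()
    phases = []
    while remaining:
        # Workloads whose dependencies have all migrated already
        ready = {w for w, deps in remaining.items() if deps <= migrated}
        if not ready:
            # A dependency cycle means those workloads must move together
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        phases.append(sorted(ready))
        migrated |= ready
        for w in ready:
            del remaining[w]
    return phases

# Hypothetical inventory: the ERP app depends on its database and a
# licensing service; reporting depends on the ERP app.
deps = {
    "erp-db": set(),
    "license-svc": set(),
    "erp-app": {"erp-db", "license-svc"},
    "reporting": {"erp-app"},
}
phases = plan_migration_phases(deps)
# Database and licensing service move first, then the app, then reporting.
```

Real dependency-mapping tools discover these relationships automatically from network traffic and configuration; the point here is only that phase planning falls out naturally once the dependency graph is known.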
6. Migrating messy data
If you’ve ever moved from a home that you’ve lived in for a while, you know how quickly old documents can pile up. When a colleague helped her parents move recently, she ran across tax documents from the 1960s! Your historical business records can pile up just as quickly, and some of them are just as useless. As with sunsetting applications, you can streamline your migration by cleaning up your data and archiving whatever doesn’t need to be migrated.
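The cleanup step can be as simple as partitioning your records by a last-used cutoff: anything stale goes to cold archive rather than into the migration set. The record names and cutoff date below are invented for illustration.

```python
from datetime import date

def split_for_migration(records, cutoff):
    """Partition records into (migrate, archive) lists.

    records: iterable of (name, last_used_date) pairs.
    Anything last touched before the cutoff goes to cold archive
    instead of the live migration set.
    """
    migrate, archive = [], []
    for name, last_used in records:
        (migrate if last_used >= cutoff else archive).append(name)
    return migrate, archive

# Hypothetical inventory: (file name, date last accessed)
records = [
    ("orders-2023.db", date(2024, 1, 5)),
    ("tax-1968.pdf", date(1969, 4, 15)),
]
migrate, archive = split_for_migration(records, cutoff=date(2017, 1, 1))
# The 1968 tax document lands in the archive list, not the migration.
```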
7. Not factoring in the amount of data when choosing your migration method
Your connectivity and bandwidth can create problems when migrating a lot of data, leading to excessive downtime. I had one client with so much data it would have taken months for them to migrate with the bandwidth they had available.
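A back-of-the-envelope transfer estimate makes the problem concrete. This sketch assumes a sustained WAN link discounted for protocol overhead and shared daytime traffic; the 50 TB / 100 Mbps figures are hypothetical, not from the client example above.

```python
def transfer_days(data_tb, link_mbps, utilization=0.7):
    """Rough wall-clock estimate for moving data over a WAN link.

    data_tb:     data volume in terabytes
    link_mbps:   nominal link speed in megabits per second
    utilization: fraction of the link realistically available
                 (discounts protocol overhead and other traffic)
    """
    bits = data_tb * 8 * 1e12                     # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400                        # seconds -> days

# e.g. 50 TB over a 100 Mbps link at 70% effective throughput
days = transfer_days(50, 100)                     # roughly two months
```

Numbers like this are what push large migrations toward staged transfers or offline transfer appliances rather than a straight pull over the wire.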
8. Underestimating your human bandwidth
For most organizations, migrating legacy applications and data centers to the cloud is no small undertaking. It takes time and a whole lot of planning, usually by people who already have a full plate. Furthermore, the migration itself tends to be done at night and on weekends. Often, our clients engage us in their migration efforts, not because they don’t know how to do it, but simply because they don’t have the human bandwidth to get it done right.
9. Failing to plan for disaster recovery
This pitfall is twofold. First, you need a plan for what to do if the migration itself goes awry. An up-to-date disaster recovery (DR) plan (that’s been tested) is a must-have. Then, before you migrate a single workload, you’ll want to update your DR plan to cover your new environment as well. That plan should kick in as soon as the first phase goes live. Read our Strategic Guide to Disaster Recovery and DRaaS to learn more about DR planning.
Avoid the pitfalls with an experienced cloud provider
At TierPoint, we’ve helped clients migrate countless legacy workloads to a variety of different environments. This list of pitfalls just scratches the surface of what we’ve seen. If you’re unsure about migrating your legacy data center or want assistance with planning or execution, we’re here to help. You can reach out to one of our advisors or learn more about our migration services on our website.