An essential component of any Cloud migration is the code base itself: what actually needs to run, and how it operates within the wider ecosystem of your organisation. A good, detailed portfolio analysis (as covered in the first blog article) goes a long way towards teasing out the primary migration challenges, but during migration planning this needs to be refined with further detail.
A good example is latency, caused by the inconvenient fact that the speed of light (in a medium) is finite, as is the processing time needed to route packets from one place to another. Latency that barely existed in an on-premise data centre suddenly becomes a major issue between applications with even moderate latency requirements: database queries time out, reports don't run, user interfaces become unresponsive. Dealing with the complexity of application and data interactions across multiple public Clouds can rapidly become a serious operational headache.
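To make the effect concrete, here is a minimal back-of-the-envelope sketch in Python. All of the latencies and query counts are illustrative assumptions, not measurements from any particular estate: the point is that a chatty application multiplies the per-query round trip, so even a modest increase in network latency can push a previously comfortable workload past its timeout.

```python
# Illustrative figures only (assumptions, not benchmarks).
ON_PREM_RTT_MS = 0.5      # round trip within one data centre
CROSS_CLOUD_RTT_MS = 15.0  # round trip between sites or Cloud providers

def report_runtime_ms(sequential_queries: int, rtt_ms: float,
                      per_query_work_ms: float = 1.0) -> float:
    """Total time for a chatty report that issues its queries one at a time.

    Each query pays the network round trip plus some server-side work,
    so total time scales linearly with both query count and latency.
    """
    return sequential_queries * (rtt_ms + per_query_work_ms)

queries = 2_000  # e.g. an ORM-driven report with one query per row
print(report_runtime_ms(queries, ON_PREM_RTT_MS))      # 3000.0 ms - fine
print(report_runtime_ms(queries, CROSS_CLOUD_RTT_MS))  # 32000.0 ms - past a 30 s timeout
```

The same application code goes from three seconds to over thirty purely because each of its 2,000 round trips now costs 15 ms instead of 0.5 ms, which is exactly the class of problem that only surfaces once components are split across locations.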
Teasing these challenges out in detail requires a solid Cloud Migration Pipeline. This shines a light on the intricacies of an estate, clearly identifies specific challenges and helps in finding common solutions. The process you use should be iterative and able to evolve as new challenges and opportunities arise. A good example of a new opportunity is AWS Outposts: how could your organisation make best use of Outposts to solve the low-latency problem without burdening itself with additional costs? A holistic view allows you to rapidly understand how to take advantage of new technology.
Which brings us to tooling. Cloud migrations sometimes fail because the migration tools are hidden away from the people whose job it is to deliver code to the business, or because the tools don't meet their needs. One of the reasons the Hyperscale vendors have been so successful is that they provide easy-to-consume services – which enterprises then often lock away from their own developers!
Platform as a Service (PaaS) offerings such as OpenShift and Cloud Foundry, and containerisation in general, have had some success, but they have often only worked well for simpler Java-based applications and Grids, and have struggled with code that is older and not appropriately optimised for the Cloud.
A solution is to develop a common set of tooling that can be consumed both on-premise and on public Cloud, and to make it available to the end consumer – the developer. This tooling should cover the complete development lifecycle: build, test, deploy and operate. By enabling developers to deploy code gracefully at scale in a hybrid environment, the cost of that agility can be spread across the business. Once developers can easily deploy code, they will find it much easier to engage with the Cloud Migration Pipeline, and can help unblock issues by rapidly testing solutions to problems identified during a migration.
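As a sketch of what "one set of tooling, consumed everywhere" might look like at the developer-facing surface, here is a minimal Python illustration. The `Target` protocol, the target classes and `deploy_everywhere` are hypothetical names invented for this example, not a real product API: the design point is simply that the developer issues the same command regardless of whether the destination is on-premise or a public Cloud.

```python
# Hypothetical sketch of a unified deployment surface (illustrative names only).
from dataclasses import dataclass
from typing import Protocol


class Target(Protocol):
    """Anything that can receive an artifact behaves the same to the developer."""
    name: str

    def deploy(self, artifact: str) -> str: ...


@dataclass
class OnPremTarget:
    name: str = "on-prem"

    def deploy(self, artifact: str) -> str:
        # In reality this might drive an internal scheduler or VM estate.
        return f"deployed {artifact} to {self.name}"


@dataclass
class CloudTarget:
    name: str = "aws"

    def deploy(self, artifact: str) -> str:
        # In reality this might call a managed container or serverless service.
        return f"deployed {artifact} to {self.name}"


def deploy_everywhere(artifact: str, targets: list[Target]) -> list[str]:
    """One command, many environments: the developer workflow never changes."""
    return [t.deploy(artifact) for t in targets]
```

Keeping the interface identical across environments is what lets developers test a fix against an on-premise target and a Cloud target in the same iteration of the migration pipeline, rather than learning a separate toolchain for each.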
We have architected over £1bn worth of Cloud transformations by developing the right tooling and automation for software and Cloud infrastructure, and by building a Cloud Migration Pipeline that works with it holistically, ensuring that problems are solved rapidly and repeatably. We have our own reference platform for Cloud automation and have massively accelerated Cloud automation and migration delivery for clients.
Part 3 in this blog series will be online at the start of next week. We'll also be posting more of our thoughts from AWS re:Invent, following on from Marius's hot take last week.