Analysis

I have worked on SAP engagements for enterprise customers for almost 30 years. While SAP continued to develop application features and content during these years, the hosting of SAP generally fell into the same classic selection process as any application: what is the service level, what is the cost, who has the happiest customers?
However, with the advent of public cloud (IaaS), those tried and tested criteria no longer give customers an accurate evaluation of each option. As such, here are some of the key criteria SAP customers need to include in their evaluation of hosting options.
Of course, cost comes first in most scenarios. Nothing happens in an enterprise without a good business case. However, at first glance, negotiated costs can be deceiving. Enterprise agreements, short-term discounts, migration funding and more can all muddy the waters when it comes to getting a clear view of the pricing you are signing up for. To predict what future costs will look like, it is important to understand the hyperscaler’s attitude towards cost, and then extrapolate from its pricing history.
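One simple way to extrapolate a pricing history is to compute the average annual rate of change from past list prices. A minimal sketch of the idea – the price figures below are invented for illustration, not any provider's actual rates:

```python
def annual_price_trend(prices_by_year: dict[int, float]) -> float:
    """Return the average annual rate of change (CAGR) of a price series.

    A negative result means list prices have historically fallen, which is
    one reasonable basis for extrapolating future pricing.
    """
    years = sorted(prices_by_year)
    first, last = prices_by_year[years[0]], prices_by_year[years[-1]]
    span = years[-1] - years[0]
    return (last / first) ** (1 / span) - 1

# Hypothetical hourly list prices for one instance type over four years.
history = {2018: 0.20, 2019: 0.18, 2020: 0.17, 2021: 0.16}
trend = annual_price_trend(history)  # ≈ -0.072, i.e. ~7% cheaper per year
```

A provider whose history shows a steady downward trend gives you more confidence that today's negotiated rate is the ceiling, not the floor, of what you will pay.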
Additionally, with hyperscaler infrastructure comes the great benefit of metered charging, where you only pay for what you use, resulting in variable costs. While this is actually a very good thing in general, it can cause headaches for procurement and necessitate new processes for IT to properly manage these variable costs. When selecting a provider, you need to understand which hyperscaler/partner can best help you see and control ongoing metered costs.
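Controlling metered costs usually comes down to watching the run rate. As a minimal illustration of the process IT needs to put in place – this is a sketch of the idea, not any provider's billing API:

```python
def projected_monthly_cost(daily_costs: list[float], days_in_month: int = 30) -> float:
    """Extrapolate month-to-date metered spend to a full-month figure."""
    return sum(daily_costs) / len(daily_costs) * days_in_month


def over_budget(daily_costs: list[float], budget: float, days_in_month: int = 30) -> bool:
    """Flag when the current run rate implies the monthly budget will be exceeded."""
    return projected_monthly_cost(daily_costs, days_in_month) > budget


# Ten days of metered spend, trending upward.
spend = [100, 100, 110, 115, 120, 125, 130, 135, 140, 150]
projected = projected_monthly_cost(spend)  # 3675.0
alert = over_budget(spend, budget=3500)    # True
```

The point is that variable costs need continuous projection and alerting, not a once-a-year budget review – and a good hyperscaler/partner gives you the visibility to run exactly this kind of check.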
Nowadays, we expect public cloud to be more resilient than on-premises infrastructure. And, while this is generally true, not all clouds are equal – especially for applications such as SAP. You will need to evaluate the amount of downtime each hyperscaler has experienced over the last 12-18 months to get a sense of how they compare. SLAs are one thing – historic performance is a much better guide.
Publicly available statistics on hyperscaler downtime show that AWS fares far better than Azure and better than or similar to Google Cloud Platform. SAP, as we know, is very sensitive to downtime – especially unplanned downtime. Choosing the most stable platform is an important part of the selection criteria for all your systems, but particularly for SAP given its criticality to the business.
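When comparing SLA figures, it helps to translate availability percentages into the downtime they actually permit. The arithmetic is simple enough to sketch:

```python
def allowed_downtime_minutes(sla_percent: float, period_days: int = 30) -> float:
    """Minutes of downtime a given availability SLA permits over the period."""
    period_minutes = period_days * 24 * 60
    return period_minutes * (1 - sla_percent / 100)


# Over a 30-day month:
#   99.9%  allows ~43.2 minutes of downtime
#   99.95% allows ~21.6 minutes
#   99.99% allows ~4.3 minutes
monthly_budget = allowed_downtime_minutes(99.95)  # ≈ 21.6
```

A fraction of a percent in the SLA is the difference between minutes and the better part of an hour of outage each month – which is why measured, historic downtime is the figure worth comparing.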
Speed of innovation
The best innovation is happening in the cloud these days and, as everything is or will be in the cloud eventually, innovation and the speed of innovation need to be an integral part of your IT road map for the next 10 years at least. Right now, AWS is the leader in getting new innovations and new ideas to market quickly. Azure categorizes itself as a “fast follower,” which is an important but safer position in the market. Google, while very good at what it does around data and analytics, does not display the same customer obsession and innovation focus in its cloud capabilities as its competitors.
Why does this matter? When looking at innovation, particularly the speed of innovation, you also need to consider the technology adoption cycle. This is the timeframe from when a new technology is introduced to when it is ultimately retired. When the adoption cycles among hyperscalers reach a one- to two-year difference, this becomes a critical differentiator. Some would say that right now, AWS is already one to two years ahead of its competitors, meaning that technology it introduces will be released, run its cycle and be retired by the time it reaches other cloud providers. Selecting the most innovative platform is critical for any long-term strategic decision.
AWS has always led the way on the most performant technology, both on storage and compute. What AWS has done recently is launch all of its instances on its Nitro system, which offloads virtualization functions from the host onto dedicated hardware, so workloads get access to virtually all of the compute resources. In effect, Nitro removes the hypervisor overhead from every VM. This allows for unparalleled performance.
Additionally, AWS is investing in its own chips and chip design, and is releasing its Graviton-based instance families, which have already been shown to be not just cheaper but higher performing than instances built on other providers’ chips. This gives every indication that AWS will continue to lead the way on performance. When running SAP, one of the biggest complaints end users typically have is a lack of performance. Overall performance – and performance when you need it – is one of the biggest benefits IT departments can give to their customers, so choosing the most performant platform for your systems is table stakes.
Another benefit of public cloud is its open APIs. Every service is exposed through publicly documented application programming interfaces, so developers in offices (and garages and living rooms) all over the world can code against them. This is an example of hyperscalers and their partner ecosystems adding a significant amount of additional innovation that customers can access directly. As a result, we consistently see brand new use cases for BI, speech, chatbots and other great technologies that integrate very simply with public cloud.
This reinforces that public cloud is the platform best suited for future innovation. It also suggests that the amount of innovation is directly related to the number of partners in a hyperscaler’s ecosystem. AWS has, by far, the most, and that is very important if you want access to these third-party capabilities: you will probably find that partners enable them for AWS before any other hyperscaler. The more customers there are on a platform, the more partners will develop for it, which in turn makes the platform more attractive to new customers. This “network effect” is something AWS has built for 10 years, and it is very difficult for others to catch up on.
Ultimately, automation is the most important secret ingredient of them all. Automation not only allows you to do things remotely and quickly, but also with higher quality. Quality builds for installed software systems, like SAP, are essential.
With classic on-premises hosting, most teams spend their time trying to “keep the lights on” for SAP, maintaining and fixing things manually. The downside is that people make mistakes. Manual steps are inherently risky, and you could end up with situations where Dev has a different kernel patch version than COS, which has a different version than Production. Suddenly, you get unexpected defects when you run workloads in production. The way to avoid this is through automation. Automation removes manual errors and ensures a repeatable, reliable process for both the build and maintenance of the SAP landscape. This higher quality reduces the noise in the environment and cuts the work and cost needed to maintain the system.
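One of the simplest automations that pays for itself is a drift check across the landscape. As a sketch – the system names and version strings below are hypothetical, and a real check would pull versions from the systems themselves:

```python
def find_version_drift(landscape: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Report components whose versions differ across systems.

    `landscape` maps system name -> {component: version}. Any component with
    more than one distinct version across the landscape is returned.
    """
    versions: dict[str, set[str]] = {}
    for system, components in landscape.items():
        for component, version in components.items():
            versions.setdefault(component, set()).add(version)
    return {comp: vers for comp, vers in versions.items() if len(vers) > 1}


# Hypothetical kernel and patch levels for a three-tier SAP landscape.
landscape = {
    "DEV": {"kernel": "7.53 PL500", "spam_saint": "0075"},
    "QAS": {"kernel": "7.53 PL401", "spam_saint": "0075"},
    "PRD": {"kernel": "7.53 PL401", "spam_saint": "0075"},
}
drift = find_version_drift(landscape)  # {'kernel': {'7.53 PL500', '7.53 PL401'}}
```

Run on a schedule, a check like this catches the mismatched-kernel scenario before it surfaces as an unexplained defect in production.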
Another plus of automation is the agility it enables. Suddenly, you can do things faster. So, when users want a system refresh, a restore from a backup, or a system patched, these things can now be done much more quickly. And, with automation, you can surface it in a portal that allows end users or project team members to self-serve the maintenance of the landscape. This agility delivers satisfaction to the project team as they can try out new ideas quickly. This is, of course, the fundamental premise of innovation – the ability to try something quickly, fail at it fast or, if it does work, promote it quickly into production. If you want to innovate, you need to be agile, and if you want to be agile you must automate.