Moving any mission-critical workload to the cloud can be daunting, but deploying HANA in the cloud requires a special understanding of the options. Confusion and misconceptions persist around what it takes to implement and run HANA in a multi-tenant environment while remaining compliant with business continuity requirements. Businesses should understand the HANA deployment alternatives and architectural designs before selecting the best fit for their requirements.
One major barrier to entry for HANA has been the considerable hardware investment required to support the platform. Memory-heavy appliances can be quite expensive, and architects must design with this in mind. There is always a balance between cost and the resiliency of the high availability and disaster recovery design. This is not unique to HANA, but paying $100,000 to over $1 million per appliance means many alternatives should be considered before settling on a final design.
One HANA deployment alternative is virtualized HANA (vHANA), which can provide significant cost benefits. First, customers can eliminate the large upfront CAPEX investment in a HANA appliance. Second, it is possible to take advantage of the vSphere tools for high availability and disaster recovery, eliminating the need for two dedicated HANA systems: one for high availability and one for disaster recovery. Bear in mind that the cloud provider must have the correct architecture in place and enough spare vHANA capacity to accommodate any failover that may occur. Finally, depending on the cloud service provider, there may also be an option for consumption-based billing, where customers are billed for the resources they actually use rather than the resources allocated to them. Consumption-based billing can be very beneficial, considering that a HANA system often has large amounts of memory and CPU sitting idle.
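The savings from consumption-based billing can be sketched with a toy cost model. All rates and utilization figures below are hypothetical assumptions for illustration, not actual cloud-provider pricing:

```python
# Illustrative comparison of allocation-based vs consumption-based billing
# for a vHANA instance. Every number here is an assumed figure, not a quote.

ALLOCATED_RAM_GB = 1024      # memory reserved for the vHANA instance
RATE_PER_GB_HOUR = 0.01      # assumed rate, in $/GB-hour
HOURS_PER_MONTH = 730
AVG_UTILIZATION = 0.40       # fraction of allocated memory actually in use

def monthly_cost_allocated() -> float:
    """Billed on everything allocated, whether it is used or not."""
    return ALLOCATED_RAM_GB * RATE_PER_GB_HOUR * HOURS_PER_MONTH

def monthly_cost_consumed() -> float:
    """Billed only on the resources actually consumed."""
    return ALLOCATED_RAM_GB * AVG_UTILIZATION * RATE_PER_GB_HOUR * HOURS_PER_MONTH

print(f"allocation-based:  ${monthly_cost_allocated():,.2f}/month")
print(f"consumption-based: ${monthly_cost_consumed():,.2f}/month")
```

With these assumed figures, an instance that sits at 40% utilization costs 60% less under the consumption model, which is exactly why idle HANA memory and CPU make this billing option attractive.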
Even with expanding virtualization support, many HANA implementations will still require physical appliances. This is relevant for both scale-up and scale-out scenarios. For scale-out high availability, the option is to add one or more standby nodes that can take over for any active node that fails. For scale-up high availability there are two options. The first is similar to scale-out: using GPFS or a storage connector, the standby node takes over the persistence layer and loads the data into memory if the primary node fails. The second is HANA System Replication (HSR), which uses a combination of snapshots and logs to replicate data to the target system. The benefit of HSR is a shorter recovery time objective (RTO), because data can be pre-loaded into the standby's memory. HSR and storage-level replication are also relevant for disaster recovery, sending data to a target system in a secondary data center.
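The RTO difference between the two scale-up options comes down to whether the standby must reload the column store before serving traffic. A minimal sketch of that tradeoff, using assumed (not benchmarked) data sizes and load rates:

```python
# Toy model of recovery time objective (RTO) for a scale-up HANA failover.
# The data size, load rate, and takeover time are illustrative assumptions.

DATA_SIZE_GB = 2048        # column-store data that must be memory-resident
LOAD_RATE_GB_MIN = 20.0    # assumed rate for loading data from disk to memory
TAKEOVER_MIN = 5.0         # assumed time to detect failure and redirect clients

def rto_minutes(preloaded: bool) -> float:
    """Estimated RTO: takeover time, plus (for a cold standby) the time
    to reload the column store into memory from the persistence layer."""
    load_time = 0.0 if preloaded else DATA_SIZE_GB / LOAD_RATE_GB_MIN
    return TAKEOVER_MIN + load_time

# HSR with table preload: the standby already holds the data in memory.
print(f"HSR (preloaded):  ~{rto_minutes(preloaded=True):.0f} min")
# Storage takeover: the standby must first load the persistence layer.
print(f"storage takeover: ~{rto_minutes(preloaded=False):.0f} min")
```

Even in this simplified model, the reload step dominates the RTO for a large database, which is why HSR's memory preload is the deciding factor when recovery time requirements are strict.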
It is critical for HANA architects to understand the various options and the impact each has on cost and RTO/RPO. Working with a cloud service provider that has extensive experience architecting HANA in the cloud is crucial, both for a successful HANA deployment and for ensuring that business continuity and disaster recovery requirements are met.