The most popular reason for businesses to migrate data and infrastructure to cloud solutions is to reduce costs.
Traditional data centres are expensive to run. In addition to the capital costs of buying and replacing equipment, there are ongoing staffing costs to manage infrastructure, not to mention the communication and power costs to keep data moving and ensure that the lights stay on.
Using a public cloud service like Azure helps you reduce many of these costs. Removing the need to buy and maintain servers, together with the ability to reduce the amount of technical support required, means the impact on the bottom line can be significant – and that’s before you take into account power and networking costs, which become part of the subscription rather than a separate, expensive line item.
With public cloud services, there are also multiple opportunities to strip out costs associated with backups and disaster recovery. Azure is resilient, operating across multiple data centres, which means data can be replicated in near real time, significantly reducing the risk of outage or data loss.
In spite of these advantages, the fact is that many organisations don’t get the level of savings that they anticipated, and in some cases, without proper management, subscription costs for Azure can quickly get out of control.
Why do cloud costs increase?
There are multiple reasons why cloud subscriptions can increase. Some of these are simply due to the way that needs evolve over time. If the initial specification wasn’t right for the application’s needs, new or higher-specification Virtual Machines may be required. Similarly, the amount of data a business stores can grow exponentially, and storage costs are a fundamental part of any cloud bill.
That said, the foundation of rising costs can often be tied to the way a business first moved to the cloud.
One of the most common methods for specifying a cloud solution is simply to replicate what you already use in a physical data centre. Within Azure, it is straightforward to provision VMs based on what you already have – the same number of cores, the same processor speed and the same amounts of memory and storage.
This can be a big mistake – choice when it comes to physical servers is often constrained by what a manufacturer produces, whereas Azure offers much more nuanced options. Simply replicating existing hardware in a cloud environment may mean creating new infrastructure that is over- or under-specified for what is actually required. Costs can be higher, or performance lower, than needed.
The entry level Azure Virtual Machine has a simple, basic specification:
- Single Core
- 0.75GB RAM
- 20GB Storage
- £8 Per Month (based on 744 hours’ continuous use over 1 month)
This configuration (A0) is designed for testing simple applications. There is insufficient memory or processing to enable serious use, and a more realistic basic Virtual Machine in a production environment might be a D3 level machine which costs 34x as much and includes the following specifications:
- Quad Core
- 14GB RAM
- 200GB Storage
- £270 Per Month (based on 744 hours’ continuous use over 1 month)
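The relationship between the two figures above can be sanity-checked with some simple arithmetic. The sketch below derives hourly rates from the article’s monthly prices – the hourly figures are reverse-engineered assumptions, not quoted Azure rates, and real pricing varies by region and over time:

```python
# Back-of-envelope check on the A0 vs D3 figures quoted above.
# Hourly rates here are derived from the article's monthly prices;
# they are illustrative, not official Azure pricing.
HOURS_PER_MONTH = 744  # 31 days x 24 hours, as used in the article


def monthly_cost(hourly_rate_gbp: float) -> float:
    """Cost of running a VM continuously for one 744-hour month."""
    return hourly_rate_gbp * HOURS_PER_MONTH


a0_hourly = 8 / HOURS_PER_MONTH    # implied by £8/month
d3_hourly = 270 / HOURS_PER_MONTH  # implied by £270/month

ratio = monthly_cost(d3_hourly) / monthly_cost(a0_hourly)
print(f"D3 costs {ratio:.1f}x an A0")  # 33.8x, roughly the 34x quoted
```

The exact ratio is 33.75, which the article rounds to 34x.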
What we often see is that infrastructure was initially specified too low – sometimes to hit a price point – and that once real-world usage levels are reached, the basic specification proves too limiting. As a result, wholesale changes to the architecture are required, which immediately push up costs.
This issue arises when proper analysis of requirements isn’t carried out at the start of the project and budgets are not appropriate to needs.
Too Much Power
We also encounter occasions where the opposite is true – too much power is specified from the outset.
This usually happens when cloud infrastructure is planned in the same way as physical machines. With a physical data centre, you can’t scale the number of servers you have in real time, whereas with Azure and AWS you can. If the cloud is approached like a physical data centre, the specification is often based on peak demand in order to maintain application performance. In truth, it is possible to create a more variable level of resource that scales up and down depending on need, reducing the amount of redundant capacity.
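The gap between peak-based and demand-based provisioning can be illustrated with a back-of-envelope model. The hourly demand profile and per-instance price below are invented for the example – they are not real Azure figures:

```python
# Illustrative comparison of fixed peak provisioning vs scaling with
# demand over one day. Demand profile and price are hypothetical.
demand = [2, 2, 2, 2, 2, 2, 4, 6, 8, 8, 8, 8,   # instances needed, hours 0-11
          8, 8, 8, 8, 6, 4, 3, 2, 2, 2, 2, 2]   # instances needed, hours 12-23
price_per_instance_hour = 0.10  # hypothetical £/hour per instance

# Fixed provisioning: pay for peak capacity around the clock.
fixed_cost = max(demand) * len(demand) * price_per_instance_hour

# Variable provisioning: pay only for what each hour actually needs.
scaled_cost = sum(demand) * price_per_instance_hour

saving = 1 - scaled_cost / fixed_cost
print(f"fixed £{fixed_cost:.2f}/day vs scaled £{scaled_cost:.2f}/day")
print(f"saving from scaling with demand: {saving:.0%}")
```

With this (made-up) profile, scaling with demand cuts the daily bill by a little over 40% – the redundant capacity is exactly the area between the flat peak line and the demand curve.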
How ‘Cloud Control’ helps
When igroup began working with Microsoft Azure in 2010 as a hosting platform for SharePoint, we discovered that costs were often variable over time.
As a resource-intensive application, SharePoint requires multiple VMs to handle the different parts of the platform – a physical SharePoint farm will usually have separate servers to handle:
- Document Storage
- Application Hosting
- Database Hosting
- Search Indexing
- Active Directory
- User Management
Each of these machines had a different specification, and usage patterns varied over time, meaning that capacity wasn’t always fully utilised.
We developed Cloud Control to automate some aspects of resource management – adding machines or power when needed and scaling back when not. It gave our clients much more cost effective cloud solutions and meant that they could better control their consumption to reduce costs without hampering performance.
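We can’t reproduce Cloud Control itself here, but the core decision it automates – add capacity when utilisation is high, remove it when demand drops – can be sketched in a few lines. The thresholds, bounds and metric below are illustrative assumptions, not igroup’s actual rules:

```python
# Minimal sketch of threshold-based scaling logic, the kind of rule an
# automation layer applies on each monitoring cycle. All thresholds
# and limits here are invented for illustration.
def desired_instances(current: int, cpu_pct: float,
                      lo: float = 30.0, hi: float = 70.0,
                      min_n: int = 1, max_n: int = 8) -> int:
    """Return the instance count to run next, given average CPU %."""
    if cpu_pct > hi and current < max_n:
        return current + 1      # scale out under sustained load
    if cpu_pct < lo and current > min_n:
        return current - 1      # scale in when capacity sits idle
    return current              # within the target band: hold steady


print(desired_instances(2, 85.0))  # busy: grows to 3
print(desired_instances(3, 20.0))  # quiet: shrinks to 2
print(desired_instances(2, 50.0))  # in band: stays at 2
```

The same shape of rule can drive per-VM resizing rather than instance counts; the min/max bounds are what stop a misbehaving metric from scaling a subscription out of control.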
Over time, we added more functionality to Cloud Control to handle additional management tasks including backups, Active Directory and OS level patches, but the core functionality remained the same – helping our clients to reduce their costs by around 30% compared to buying direct from Microsoft.
Over the longer term, cloud costs still have a tendency to rise – the amount of data you need to store will grow, and the number of staff accessing cloud-hosted applications will increase. However, by running management software and correctly specifying your machines based on need, you will not be paying for capacity that you don’t use, or be saddled with infrastructure that doesn’t meet your needs.