Visiting a data center recently, and experiencing the deafening howl of air-conditioning fans and rattling servers, took me back in time to a generation I never thought I would be revisiting. I started my IT career in 1994, in a world of AS/400s, tape drives, and floor-to-ceiling hardware, and it all felt so very normal, with lots of beige boxes, each tapping out a disco of status lights. Data centers were “a thing”: every company had one of its own or shared one with other organizations, and both the buildings and the kit they contained were the crown jewels of a company’s IT landscape.

But these data centers were incredibly expensive to run. Most had dual power supplies direct to the national grid, connectivity to multiple internet providers (or, more likely, leased lines to offices), high-capacity air conditioning, security, staff, generators, and batteries – and of course servers, often hundreds of them. And while the buildings lasted for years, pretty much everything else had to be accounted for on a hardware refresh cycle.

This had a massive impact on a company’s bottom line, but it was “the norm” and it was accepted as the cost of doing business. And let’s not forget the risk and responsibility that comes with managing your own hardware estate.

And then… along came “the cloud”. Yes, there were early adopters, particularly of AWS, but many companies were either reluctant to embrace the cloud or had tied up so much capital in their data centers that moving made no financial sense until more CapEx investment was required anyway, such as at a hardware refresh or when a building lease came up for renewal.

But slowly but surely, companies made the leap. Over time, they realized it made complete sense to embrace the cloud, whether through AWS, Azure, or one of the smaller operators. And here’s why.

The cloud is no different from the data center a company could once gaze at periodically; it is simply a collection of servers, generators, batteries, and so on. The crucial difference is ownership: it all belongs to the cloud provider. What does this mean for you? It means the cost of refreshing all that hardware goes away overnight! I’ve heard IT people describe “the cloud” to non-IT people as “simply using someone else’s hardware”, and they are right.

Cloud providers build massive, geographically dispersed data centers, capable of being leveraged by hundreds or thousands of companies, each treating the facility as though it were its own. There are now hundreds of these around the world. This approach means the cost to each company is significantly lower than under the old data center model, yet each company can safely treat the data center as exclusively theirs. Yes, there’s a monthly fee for any services consumed, but companies have a huge à la carte menu of offerings to choose from, and they can pick exactly what they need, for as long as they need it; a company can choose a server spec, pay a monthly fee, and happily run it for many more years than they would traditionally trust a physical beige box.

It’s very easy to look at the cost of a typical server offering from Microsoft or AWS, perhaps a few hundred dollars per month, and think, “wow, that’s expensive.” But once you factor in your data center building, generators, staff, and every-five-year hardware refreshes, it’s actually a very compelling offer. And again, let’s not forget the risk and responsibility that comes with managing your own hardware estate.
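That comparison can be sketched as a quick back-of-the-envelope calculation. Every figure below – the server price, the overhead multiple, the cloud fee – is a hypothetical assumption chosen purely for illustration, not real pricing from any provider:

```python
# Illustrative on-premises vs. cloud cost comparison.
# All numbers are hypothetical assumptions, not real-world pricing.

def on_prem_monthly_cost(
    server_price=8_000,     # hypothetical purchase price per server (USD)
    refresh_years=5,        # the "every-five-year hardware refresh" cycle
    facility_overhead=2.0,  # hypothetical: building, power, cooling, staff,
                            # expressed as a multiple of the hardware cost
):
    """Amortized monthly cost of running one server yourself."""
    hardware_per_month = server_price / (refresh_years * 12)
    return hardware_per_month * (1 + facility_overhead)

def cloud_monthly_cost(instance_fee=300):  # hypothetical monthly fee
    """With the cloud, the provider's monthly fee is the whole bill."""
    return instance_fee

print(f"On-prem: ${on_prem_monthly_cost():.2f}/month per server")
print(f"Cloud:   ${cloud_monthly_cost():.2f}/month per server")
```

The point isn’t the specific numbers – yours will differ – but that the sticker price of a cloud instance is only comparable once the hidden facility and refresh costs are folded into the on-premises figure.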

Cost is just one aspect of moving to the cloud. A big reason why many businesses are migrating is the ability to bring all their data together and make faster, better decisions. If you haven’t already, please read my blog on Microsoft’s “secret sauce” that helps connect different systems in your business.