The Hosting Evolution that led to the Cloud

Service bureaus of the 1960s, 70s and 80s were the early hosters

Decades ago, service bureaus provided computing power by hosting their customers’ applications on mainframes or minicomputers supplied by IBM, Digital Equipment, Sperry, Univac, Burroughs and the other technology giants of the era. Access was through punch cards or terminals. In retrospect, these were perhaps the very early days of hosting.

Then came the personal computer revolution in the 1980s, led by IBM and Apple with their respective approaches. A computer on every person’s desk in Western Europe and North America soon became the norm.

The 1990s saw the rise of PCs being connected to each other

But data needed to be shared, and carrying floppy disks around was not practical if you wanted to foster collaboration. So various types of networks appeared: small serverless ones, such as peer-to-peer setups where users shared drives with each other, and server-based ones built around a larger computer with lots of hard drives that stored and shared data and handled printer queues.

Some of the more successful pioneers were Novell NetWare, Banyan VINES, IBM, Apple and a few more. Microsoft entered in 1995 with Windows NT 3.51, but it took a few years before they got it right with NT 4.0, which became a worthy competitor to the then market leader Novell. Physical networks were most often Ethernet, AppleTalk or Token Ring.

Communication with the outside world was expensive and slow, but the launch of ISDN was the breakthrough that people had been waiting for.

How cloud computing evolved from interconnected PCs to today

Lower cost for data communications in the new Millennium made remote computing possible

When data communications became affordable and could transfer larger volumes of data, it became possible to locate servers further away from the users.

Thanks to the collaboration between Citrix and Microsoft, where Citrix’s thin-client technology was integrated into Windows NT Server 4.0 so that Windows Terminal Server could be launched in 1998, corporations could efficiently give their users access to servers far away.

Corporations began to place their servers with various co-location providers, and web hotels emerged where customers rented website capabilities in various forms.

Virtualization of servers gave us lower cost and fewer hardware dependencies

Terminal, or thin-client, services laid the foundation for the early days of hosting Windows servers. The final component arrived when VMware created a platform for virtualization. Thanks to VMware it became possible to have multiple virtual servers sharing the same physical server, which significantly lowered the cost of hardware and encouraged manufacturers to build servers with better performance and reliability than ever before.

We still saw a mix of virtual and physical servers due to both performance and licensing challenges.

Phenomenal growth in hosting before the cloud became the norm

The first two decades of the Millennium were phenomenal for the growth of hosting services, and hosting providers flourished all over the globe.

The more successful ones embraced Microsoft Office 365 instead of seeing it as a threat and helped their customers with the transition from dedicated Exchange and SharePoint to the shared services offered by Microsoft.

Segmentation of the hosting market led to ‘mass-volume hosters’ and ‘high-end hosters’, with the latter having a deep understanding of how a corporation’s IT should work and taking an active part in the wellbeing of mission-critical applications.

It took time before cloud computing gained acceptance for production workloads

The launch of the public cloud was led by Amazon with AWS (Amazon Web Services), followed by Microsoft’s Azure and others. Initial workloads were often development environments and web servers, but over time we started to see companies successfully moving their more mission-critical workloads to the cloud.

And for companies born in the era of the cloud, investing in on-premises servers was never part of the picture, so for startups the cloud became the natural choice.

Traditional hosting was always far superior to customers running their own server rooms, which typically had poorer physical security, backup power and redundancy in data communication lines, but it had some weaknesses compared to what Microsoft, Amazon and Google could offer.

The main weaknesses were related to scale: even large hosting providers had problems with redundancy inside a datacenter, and at best they had only a few additional datacenters that could be used for fail-over when needed. Small and medium-sized hosters often faced problems with the resources available for incident management, and some had issues with the implementation of ITIL processes.

Another disadvantage for hosters was that they didn’t have datacenters spread across the globe, as their operations were often focused on a certain region. With the public cloud, a customer can pick the countries where their workloads should be deployed, which helps both from a compliance perspective and when it comes to better serving the people using the applications.
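As a small illustration, here is a minimal sketch, assuming Python with the azure-identity and azure-mgmt-resource SDK packages, of how the region a workload lands in is simply a parameter you choose when creating resources; the subscription ID and resource-group name below are illustrative placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Authenticate with whatever credentials the environment provides
# (CLI login, managed identity, environment variables, etc.).
credential = DefaultAzureCredential()

# "<subscription-id>" is a placeholder for your Azure subscription.
client = ResourceManagementClient(credential, "<subscription-id>")

# Pin the resource group, and the workloads deployed into it, to a specific
# region, e.g. West Europe for EU data residency or proximity to users.
client.resource_groups.create_or_update(
    "myapp-weu-rg",              # illustrative name
    {"location": "westeurope"},  # the region choice is just a parameter
)
```

The same location parameter appears in the Azure portal, the CLI and infrastructure-as-code tools, so placing workloads in a given jurisdiction, or close to the people using them, becomes a deliberate choice rather than a constraint set by where the hoster happens to have a datacenter.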

For the past two years, cloud computing on Azure has cost less than traditional hosting, and uptime is superior

A game changer came two years ago when the cost of running workloads in Azure came down to the point where it became less expensive than traditional hosting. This revelation really opened the eyes of many, and we had several conversations with Microsoft where we shared our findings. Microsoft were pleased, but at the same time a bit surprised, that this shift came so early.

The transition for hosters has only just started, as it takes time to decommission legacy datacenters in a financially responsible way, so for them the move to Azure is a multi-year project.

We realize that most customers and partners know that eventually everything will be in the cloud, but at the same time there is a need to entertain hybrid scenarios where you move gradually over time. The easiest approach is to make sure that you don’t invest in legacy – and only invest in workloads running in the cloud.

We realize that the issues are often related to applications that need to be hosted in the traditional way until they are decommissioned. Sometimes the cost of moving older and larger applications is simply too high for it to make sense, and that is one of the stronger reasons why we need to embrace hybrid scenarios for the foreseeable future.


Idenxt takes care of managing everything that lives in the Microsoft Cloud

Regardless of where your workloads are hosted, you need skilled people with great tools to take care of them. This will not change when you move from traditional hosting to hosting in the cloud, and that is why we started Idenxt: we saw a need and a growing market! Our notion was to manage workloads in a proactive way and avoid ‘break & fix’ as much as we possibly can.

We love hosting as that is where we come from, and we love having Azure powering our customers’ workloads because it makes sense!

By working through Microsoft’s partner ecosystem, we can together deliver a superior experience for our customers and that’s what we’re here for!

