by Hira Saeed
The concept of High-Performance Computing (HPC) in the cloud has taken a massive leap forward in recent years. While the idea of consuming services like storage, servers, databases, networking and software applications over the cloud isn't new, what is new is the speed and commitment with which companies have moved to it. However, as their businesses expand, some companies are looking at on-premise options. Why? Because the benefits of HPC in the cloud, while manifold, are countered by drawbacks that are leading organizations to consider alternatives. Let's look at both sides.
Supercomputing in the cloud
Cloud computing is explained by PCMag in the simplest way possible: storing, managing and accessing data and programs over the internet instead of on your own machine. Familiar examples include Google Drive, Google Docs, Microsoft OneDrive and Dropbox on the cloud storage side; a more sophisticated offering is high-performance cloud computing from providers such as Amazon Web Services, Microsoft Azure, Google and IBM.
Cloud computing can provide multiple features to an average user: instant availability of resources, large capacity for storage and processing, flexibility at the application level and a bare minimum level of performance guaranteed by the provider. The typical HPC user, however, deviates from all of these and brings a tailored requirement for a specific application. Hardware fine-tuned to the needs of that application is the dream of such users; they are not here to run generic applications.
They often try to shed the OS formalities and talk to the hardware directly. The cloud OS "nanny" might not let you get too direct with her baby, whereas HPC applications frequently need to bypass the OS kernel.
When is it ideal to get cloud-based HPC?
Most HPC applications are very sensitive to the network interconnect; data needs to flow at speeds that match the high-performance demands. A virtual cluster is limited to the rules defined by the kernel, and many high-performance network loads need to manage connections and data transfers 'on the wire', which is very hard to achieve in a virtualized scheme. Along with that comes the requirement for a purpose-designed storage system. A strong I/O subsystem is next on the requirement list; without it you are likely to face bottlenecks, backlogs and unnecessary queuing most of the time.
Cloud-based HPC is a very good bargain for rudimentary tasks, say for a startup, a small business venture or an on-demand test facility with a limited influx of jobs. For anything bigger than that, the decision between cloud-based and on-premises HPC deserves a deeper and more elaborate discussion.
When you think of supercomputers, the first image that comes to mind is one of massive rows of mainframe computers filling an entire room with lots of noise and huge cooling pipes circling around them.
That was true at least half a century ago. Today, supercomputers (referred to as High-Performance Computers) can perform all your high demand computing tasks running advanced applications and manage large data sets with advanced network management tools in much more compact servers or clusters. Clusters of HPCs share the workload by dividing the tasks into parts and feeding them to the parallel processing units of a supercomputer (as opposed to serial processing of a normal computer).
Advances in technology mean that today's supercomputers come in compact designs (sleek 1U and 2U casings) with lower maintenance and resource demands. For example, the PowerServe HPC by PSSC Labs packs 16 to 72 total cores of Intel® Xeon processors and as much as 1024 GB of high-performance memory into a 1U or 2U blade chassis, with versatile network connectivity options and support for all the latest operating systems. Even better, their unique design means a 90% energy efficient power supply. So having an on-premises supercomputer is not so 'super' difficult at all.
Why are companies choosing on-premise HPC over the cloud?
Amir Michael, a former hardware engineer at Google, former hardware and data center engineer at Facebook, and founder and CEO of Coolan, says: "Surprisingly, a lot of people are thinking about off-boarding from the cloud and trying to figure out when the right time might be to do that. Other customers are pretty big in co-location and they are wondering if they should build their own data centers."
This calls for a good night-long debate. Companies today see on-premise HPC as a liability because of the high purchase costs and associated maintenance costs. Maintenance can also mean hiring more personnel to look after the infrastructure. So naturally, outsourcing all of this to AWS, Google, Microsoft, etc. seems like the best way to go. However, that step can be shortsighted and even dangerous: putting your entire business (data, analytics and, most importantly, intellectual property) in the cloud means giving up control of the lifeblood of your business.
Reasons why on-premise HPC is taking the lead
Let us consider some of our own parameters and see how that puts our computing needs into perspective:
With your own HPC infrastructure (whether just a rack server or a cluster), you achieve much better performance per dollar per hour than with any generic server. Since your hardware is specified to meet your application's requirements, you sit at the optimum point on the cost-performance curve. With a dedicated on-premise option, you can design hardware that matches your exact needs.
The main reason cloud computing has soared in adoption is the belief that outsourcing your computing needs to the cloud is cheaper than doing it yourself. The answer isn't always so clear-cut. When scoping out Total Cost of Ownership (TCO), factors often arise that weren't in the original calculation, leading to a higher long-term TCO for doing HPC in the cloud. Many companies moving off AWS to go in-house are discovering exactly that.
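A back-of-the-envelope TCO sketch shows why the answer depends so heavily on the inputs. Every figure below is an assumed placeholder for illustration, not a quote from any provider or vendor:

```python
# Rough multi-year TCO comparison. All prices here are illustrative
# assumptions, not real rate cards.
def cloud_tco(hourly_rate, hours_per_month, months):
    # Pay-as-you-go compute, billed by the instance-hour.
    return hourly_rate * hours_per_month * months

def onprem_tco(hardware_cost, monthly_power_cooling, monthly_admin, months):
    # One-time purchase plus recurring facility and staffing costs.
    return hardware_cost + (monthly_power_cooling + monthly_admin) * months

months = 36  # a typical hardware depreciation window
cloud = cloud_tco(hourly_rate=3.00, hours_per_month=730, months=months)
onprem = onprem_tco(hardware_cost=40_000, monthly_power_cooling=300,
                    monthly_admin=700, months=months)
print(f"cloud: ${cloud:,.0f}  on-prem: ${onprem:,.0f}")
```

Under these assumed numbers an always-on node is cheaper on-premises over three years; with low utilization or different rates the conclusion flips, which is exactly why TCO needs to be scoped carefully before committing either way.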
Access to Data:
Continuing on the last note: every time you want to access your own data (which you may be keeping on an Azure server for as low as $0.02), you pay a price to retrieve it. On-premises HPC grants access to your data any time you need it. A popular solution for startups is to use NAS (Network Attached Storage) solutions from vendors such as Seagate. But more complex computing projects may need a scalable block and object storage platform like the Surestore by PSSC Labs.
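The retrieval-fee point is easy to quantify with a quick sketch. Both prices below are illustrative assumptions in the spirit of the "$0.02" storage figure above, not any provider's actual rate card:

```python
# Back-of-the-envelope cloud data cost: storing data is cheap, but
# reading it back out is billed separately. Illustrative prices only.
STORAGE_PER_GB_MONTH = 0.02  # assumed cold-storage price per GB-month
EGRESS_PER_GB = 0.09         # assumed per-GB data-transfer-out fee

def monthly_cost(stored_gb, retrieved_gb):
    return stored_gb * STORAGE_PER_GB_MONTH + retrieved_gb * EGRESS_PER_GB

# 10 TB stored; compare leaving it idle vs. reading it all back once.
quiet = monthly_cost(10_240, 0)
active = monthly_cost(10_240, 10_240)
print(f"storage only: ${quiet:,.2f}  storage + retrieval: ${active:,.2f}")
```

Under these assumptions, one full read of the data set costs several times the monthly storage bill, which is why data-hungry HPC workloads feel the retrieval fees most.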
AWS and others have made strides here, but it was not long ago that a simple keystroke error brought down nearly 30% of the websites on the US east coast. Companies need to weigh putting their livelihood in someone else's hands against the peace of mind of having critical HPC functions close at hand and under their own control.