Z by HP Boost lets workstations put idle GPUs to work on AI

Computer maker HP is making it easier for customers to train and develop language models and other data-intensive AI workloads by sharing graphics processors across different machines to take advantage of spare computing power.

The aim is to get around the significant cost of running AI workloads on cloud platforms, where Nvidia’s high-performance chips do the heavy lifting but the energy costs and service fees involved can lead to big bills.

HP, with a long history of producing high-performance workstations, is taking a different tack. With Z by HP Boost, teams can get instant and direct access to GPUs (graphics processing units) on workstations that may be on the other side of the room or in a different country. For instance, you could have a bog-standard laptop and send your data-intensive workloads to an HP laptop or desktop workstation overnight, when your colleagues aren’t using it.

The target market for Z by HP Boost is teams of data scientists who need access to masses of computing power for model training. The bigger the model, the more parameters it has and the more compute it takes to train.

“Users can be using a mobile workstation or just a laptop, and they run across a big job that they want to do in 20 seconds instead of 20 minutes. They look at the shared pool and say, boom, I'm going to use that [GPU],” Jim Nottingham, senior vice president and division president of advanced compute solutions at HP, told Tech Blog.

Data scientists working with AI workloads are the target market for Z by HP Boost

“We've never actually seen a case where they're, like, 100% utilised all the time. We give them control of defining available GPUs. Which ones do you want to share based on your utilisation?”

HP has a line of workstations optimised for AI, designed to provide instant access to GPU resources and improve enterprise efficiency.

“The majority of the use cases are going to be people working kind of behind the company firewall, and they can be in different locations, but they're in the company network,” says Nottingham. 

HP’s AI Studio platform, released earlier this year, lets spare GPU capacity be made available to workloads running on other computers. HP has been working with universities and companies using its workstations for AI projects.

“So this platform really helps,” says Nottingham. 

Mobile and desktop workstation GPUs can be easily shared across a network, with AI Studio managing the process.

“It gives people a common workspace working on any infrastructure. It enables better collaboration with all the Nvidia tools, fully integrated with the pre-trained models. It streamlines the workflows and makes it easier to manage the data.”

The intense demand for AI applications has seen hyperscale public cloud providers invest billions in high-capacity GPUs and scramble to source enough power to run their data centres. The costs are passed on to customers, who typically pay by the token for access to the processing power that sits behind everything from ChatGPT to the Udio music-generating app.

But bill shock is common as the tokens rack up. The shareable GPUs in AI workstations could be particularly well suited to our cash-strapped universities, where data scientists are working on experimental AI applications, and to AI startups looking to develop their offerings cost-effectively.

“The most precious resource is a scarce GPU,” says Nottingham. “And I'm like, no, the most precious resource is these freaking expensive data scientists and AI developers that are sitting around waiting for GPUs.”

HP is undertaking a staged rollout of Z by HP Boost and AI Studio, with availability beginning in the UK and the US. Nottingham said an Australasian rollout was also on the cards. No pricing has been revealed yet.

Nottingham adds that the GPU-sharing workstations are a key part of the future of hybrid computing and a way to manage the cost and intensive computing needs that go with AI development.

“They can't do everything with a workstation, but in the future the world is converging to a model that we call hybrid compute. From client and workstation to data centre, computer clusters to cloud, and even supercloud, all of them have advantages.”
