Let’s face it: everyone these days wants access to their applications and computing resources on the go. And I mean everyone, including users running graphics-heavy applications such as 3D rendering.
How do you enable power users to be mobile, and keep their data secure in your datacenter, when they typically have a workstation sitting under their desk?
It’s important to know that you can store your corporate data in your corporate datacenter and still give all types of users remote access with the performance they need. New approaches, backed by advancements in workstation and hypervisor technology, have made it possible to migrate even the most GPU-intensive applications into the datacenter. Of course, more advanced technology brings with it more ways to get the performance you need. Let’s explore your options.
1. Hosting Workstations with Dedicated Hardware
You may have an application that requires the beefiness of an entire workstation. If the application demands the power and performance of dedicated hardware, don’t fight it: build your environment using high-power workstations, such as those available from HP, Dell, or Amulet Hotkey.
Often, the data for complex applications is hosted on separate servers that are already located in the datacenter. By moving the workstation into the datacenter, you shorten the path between the application and its data.
Not surprisingly, workstations can be pricey. The goal when using dedicated hardware is to mitigate the cost and maximize utilization. You can achieve this by sharing the applications among users and by tracking usage to ensure you do not overbuild your datacenter.
2. Hosted Desktop Infrastructure with HPE Moonshot System
Designing a hosted desktop environment that shares compute, storage, and networking resources is a good fit for users who require less power but still want the performance and persistence of dedicated hardware.
The HPE Moonshot System infrastructure is designed to address speed and scale. It provides a variety of servers, which HPE designates as cartridges, each purpose-built for a different workload. For hosted desktop infrastructure workloads, the HPE Moonshot System uses the HP ProLiant m700 and m701 Server Cartridges.
The HP ProLiant m700 Server Cartridge features four AMD Opteron X2150 APUs for hosted desktop infrastructure workloads, and the m701 provides even more oomph. Because each user has an independent CPU, NIC, RAM, SSD storage, and GPU, the high-density HPE Moonshot System delivers a fully functional PC desktop experience to each user. Users receive consistent, reliable performance and high-quality service running varied individual workloads.
3. Pass-Through GPU
The latest in hardware and hypervisor technology enables another option for hosting graphics-rich applications: pass-through GPU. Known by different names across vendors, pass-through GPU simply means that each physical GPU in the workstation is passed through to its own virtual machine. The virtual machines are hosted on the hypervisor that is installed on the workstation. If your workstation has two GPUs, you can host two virtual machines; four GPUs means four virtual machines.
With pass-through GPU, the operating system on each virtual machine has full and direct access to a dedicated GPU and can use the native graphics driver loaded in the VM. Now, each physical workstation hosts multiple operating systems, improving the density in your datacenter without compromising performance.
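The one-to-one relationship between GPUs and virtual machines is the defining constraint of this model. A minimal Python sketch of that bookkeeping (the device addresses and VM names are hypothetical illustrations, not any hypervisor's actual API):

```python
# Sketch: pass-through GPU assigns each physical GPU to exactly one VM,
# so VM count equals GPU count. Device IDs and VM names are made up.

def plan_passthrough(gpus):
    """Return a one-to-one mapping of physical GPUs to virtual machines."""
    return {gpu: f"vm-{i}" for i, gpu in enumerate(gpus, start=1)}

# A workstation with two GPUs can host exactly two VMs:
mapping = plan_passthrough(["gpu-0000:03:00.0", "gpu-0000:04:00.0"])
print(mapping)  # each GPU gets its own dedicated VM
```

The point of the sketch is the density ceiling: with pass-through, you can never host more virtual machines than you have physical GPUs.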
4. Virtualized GPU
If you add advancements in GPU hardware to your datacenter, you have yet another path for hosting graphics-rich applications: virtualized GPU. In this architecture, each physical GPU is shared by multiple virtual machines. (Again, the virtual machines are hosted on the hypervisor that is installed on the workstation.) The hypervisor provides additional technology that gives the virtual machine operating system direct access to the GPU, delivering the performance of pass-through GPU while allowing greater density. Note that the virtual machines do share the GPU’s processing power.
And, I should add, if you need to host applications on a Linux operating system, this option is not yet for you. To date, only Windows operating systems support virtualized GPU.
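To see why virtualized GPU improves density over pass-through, consider a rough capacity sketch. It assumes a hypothetical profile scheme that carves each GPU’s framebuffer into equal slices, one per virtual machine; the sizes below are illustrative numbers, not any vendor’s specifications:

```python
# Sketch of vGPU capacity planning. All sizes are hypothetical examples,
# not vendor specifications.

def vms_per_gpu(gpu_framebuffer_gb, profile_framebuffer_gb):
    """Each VM gets a fixed framebuffer slice; a GPU hosts as many as fit."""
    return gpu_framebuffer_gb // profile_framebuffer_gb

def total_vms(num_gpus, gpu_framebuffer_gb, profile_framebuffer_gb):
    """Density scales with slices per GPU, not one VM per GPU."""
    return num_gpus * vms_per_gpu(gpu_framebuffer_gb, profile_framebuffer_gb)

# Two 8 GB GPUs carved into 2 GB profiles host 8 VMs,
# where pass-through on the same hardware would host only 2:
print(total_vms(num_gpus=2, gpu_framebuffer_gb=8, profile_framebuffer_gb=2))  # 8
```

The trade-off the sketch leaves out is the one noted above: those eight virtual machines share the processing power of two GPUs.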
Don’t Forget, You Need to Connect the Users
With any of these solutions, you have your data securely hosted in your datacenter, but that’s only useful if you can connect your users to it. A hosted solution for graphic-rich applications requires a high-performance display protocol, such as HP RGS or Teradici PCoIP, to ensure that users get at-their-desk performance for applications that run in the datacenter. In some cases, the performance may even be better.
Unless you want your end users to memorize IP addresses or hostnames, you need a connection broker to present resources and connect users to them. A connection broker manages user assignments and connections to resources in the datacenter and makes it easy for end users to connect to a desktop. Plus, it allows you to pool your resources and share applications between users to maximize resource utilization.
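At its core, a connection broker is a bookkeeper: it tracks which desktops are free, hands one to a user on request, and remembers the assignment so the user reconnects to the same desktop. A minimal sketch of that logic (the class, user names, and addresses are hypothetical, not a real broker product’s API):

```python
# Minimal sketch of connection-broker bookkeeping: pooled desktops,
# per-user assignments, and release back to the pool on disconnect.
# All names and addresses are hypothetical.

class ConnectionBroker:
    def __init__(self, desktops):
        self.free = list(desktops)   # pooled, unassigned desktops
        self.assignments = {}        # user -> desktop address

    def connect(self, user):
        """Return the user's desktop, assigning one from the pool if needed."""
        if user not in self.assignments:
            if not self.free:
                raise RuntimeError("no desktops available in the pool")
            self.assignments[user] = self.free.pop(0)
        return self.assignments[user]

    def disconnect(self, user):
        """Release the user's desktop back into the shared pool."""
        desktop = self.assignments.pop(user, None)
        if desktop is not None:
            self.free.append(desktop)

broker = ConnectionBroker(["10.0.0.11", "10.0.0.12"])
print(broker.connect("alice"))  # first free desktop
print(broker.connect("alice"))  # same desktop on reconnect
```

A real broker layers authentication, entitlement policies, and protocol hand-off on top, but the pooling and assignment logic is what lets you share a limited set of resources across a larger user population.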
No matter what solution you go with, the key is to give your end users the resources and power they need. That may mean physical hardware or perhaps a virtualized approach; it may even be a combination of both. Learn more about all of your options, and then build the datacenter that works best for you.