Arrow Electronics, Inc.
Article

Arrow Quick Hit: HPE GreenLake and NVIDIA AI Enterprise

September 30, 2022 | Russ Braden

HPE and NVIDIA offer an integrated as-a-service solution that can streamline both the development and deployment of AI workloads. It is now available directly through Arrow!

What is it?

HPE and NVIDIA have come together to help businesses unlock the power of AI by delivering an end-to-end enterprise platform optimized for AI workloads on a consumption-based model using HPE GreenLake and NVIDIA AI Enterprise. This solution is deployed on industry-leading NVIDIA-Certified HPE ProLiant DL380 and DL385 servers running VMware vSphere® with Tanzu™; it is designed to accelerate the speed at which developers can build AI and high-performance data analytics.

HPE GreenLake enables customers to acquire NVIDIA AI Enterprise on a pay-per-use basis with the flexibility to scale up or down and be tailored to their needs. The software is fully supported by NVIDIA, ensuring robust operations for enterprise AI deployments.


Why should you care?

According to IDC, by 2024, 60% of G2000 companies will use AI/ML across all business-critical functions. The opportunity couldn’t be greater right now!

The challenge*

  • AI infrastructure remains one of the most consequential yet least mature infrastructure decisions that organizations make as part of their future enterprise. High upfront costs remain the biggest barrier to investment, leading many to cut corners. People, processes and technology remain the three key areas where challenges lie and where organizations must focus their investments for greater opportunities.
  • Dealing with data is the biggest hurdle for organizations as they invest in AI infrastructure. Businesses lack the time to build, train and deploy AI models. They also lack the expertise or the ability to prepare data, leading to a new market for pre-trained AI models. And model sizes are also growing, making it challenging for them to run on general-purpose infrastructure.
  • AI infrastructure investments are following familiar patterns in terms of compute and storage technologies on-premises, in the public cloud, and at the edge. For many businesses, on-premises is and will remain the preferred location. On-prem GPU-accelerated compute and HPC-like scale-up systems are top requirements for on-premises/edge and cloud-based compute infrastructure for AI training and inferencing.

*Source: IDC’s InfrastructureView 2021 research

The solution

The HPE GreenLake and NVIDIA AI Enterprise solution provides an end-to-end AI stack—balancing power, security and performance requirements in an accelerated system with tested and proven configurations. It offers deep learning streamlined from conception to production at scale and features key capabilities, such as:

Data prep:

  • Reduces data science processes from hours to seconds
  • 70x faster performance than a similar CPU configuration
  • 20x more cost-effective than a similar CPU configuration

Train at scale:

  • Train, adapt, optimize models in hours vs. months
  • Open-source ML frameworks optimized for GPU
  • Integrated with NVIDIA RAPIDS to simplify development

Optimized for inference:

  • Maximizes throughput for latency-critical apps with compiler and runtime
  • Optimizes every network (CNNs, RNNs and transformers)
  • Optimizes use of GPU memory bandwidth

Deploy at scale:

  • Delivers fast, scalable AI to applications
  • Supports diverse query types: real-time, offline batch and ensembles
  • Up to 226x performance increase over CPU-only systems
  • Triton with FIL backend delivers best inference performance for tree-based models on GPUs


How does it work?

The software in the NVIDIA AI Enterprise suite includes infrastructure optimization software, cloud native deployment software, and AI and data science frameworks. The AI and data science frameworks are delivered as container images. Containerized software can be run directly with a tool such as Docker.
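As a concrete illustration of this container-based delivery, the commands below sketch pulling and running a GPU-enabled framework container with Docker. This is a minimal, hypothetical example: the NGC image tag shown is illustrative only, and it assumes Docker plus the NVIDIA Container Toolkit are already installed on an NVIDIA-Certified host with a supported GPU.

```shell
# Pull a GPU-enabled PyTorch container from NVIDIA's NGC registry
# (image tag is illustrative; check NGC for current releases)
docker pull nvcr.io/nvidia/pytorch:22.09-py3

# Run it with all host GPUs exposed to the container
# (requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:22.09-py3 \
    python -c "import torch; print(torch.cuda.is_available())"
```

The same pattern applies to the other framework containers in the suite, such as TensorFlow and Triton Inference Server.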


  • AI and data science tools and frameworks:
    • Data preparation: NVIDIA RAPIDS™
    • AI training at scale: PyTorch, TensorFlow, NVIDIA TAO Toolkit
    • Optimized for inference: NVIDIA TensorRT®
    • Deploy at scale: NVIDIA Triton™ Inference Server
  • Cloud-native deployment: NVIDIA GPU Operator, NVIDIA Network Operator
  • Infrastructure optimization: NVIDIA vGPU, NVIDIA Magnum IO™, NVIDIA CUDA-X AI™
  • NVIDIA enterprise support for the NVIDIA AI Enterprise software suite: provides access to NVIDIA AI experts, comprehensive software patches, updates, upgrades and technical support

Features and benefits

  • An integrated platform offers the best-in-class NVIDIA AI Enterprise suite, optimized and exclusively certified for VMware vSphere® with Tanzu™, the industry’s leading virtualization platform, on factory-integrated, NVIDIA-Certified HPE ProLiant DL380/DL385 systems.
  • A pay-per-use, consumption-based model frees up capital for financial flexibility with no upfront costs. HPE provides a reserve amount of capacity, measures how much customers use, and charges based on that usage.
  • Scale up or down easily and quickly with an installed buffer of capacity that is actively monitored and managed, as well as proactively deployed when needed. If customers need more, HPE will proactively provision more, and they only pay for what is used.
  • Accelerated performance: NVIDIA AI Enterprise includes software stacks built and optimized to run on NVIDIA-Certified systems, ensuring high performance, reduced development times and cost-effective computing.
  • Centralized control and insights let customers manage resources, costs, capacity, compliance and more across on-premises and cloud environments. Secure, self-service provisioning and management is included via a common control plane.
  • Rapid deployment of an as-a-service model offers the cloud experience, including self-service functionality to quickly deploy resources, such as virtual machines, containers and machine learning operations (MLOps) projects. HPE owns and manages the equipment—storage, servers, compute—for customers at their site. HPE delivers and installs the equipment, including a buffer of capacity, and can help with integrating and supporting cloud services.
  • Free up your customers’ IT resources with on-premises installation, configuration and validation with the option to have HPE’s IT Operations Centers monitor and manage on-premises infrastructure. They can act as an extension of your customer’s IT team; fill any gaps in areas such as security, migration, and performance; or even manage their entire hybrid environment for you.

Differentiation in the market

The edge is driving a great expansion in AI inference. According to IDC, 55 billion devices will be connected worldwide by 2025. And Gartner notes that 50% of data will be created and processed outside of the traditional data center or cloud.

HPE ProLiant DL380 and DL385 servers are optimized and certified with NVIDIA AI Enterprise software, VMware vSphere® with Tanzu™ and NVIDIA A100 and A30 Tensor Core GPUs to deliver performance that is on par with bare metal for AI training and inference workloads.


HPE can right-size the platform, no matter the workload in your customer’s environment. For local edge deployments, the HPE ProLiant DL360 Gen10 server, combined with the NVIDIA A2 GPU, drives inferencing with minimal power requirements.

Customers can select from predefined packages for training or inference workloads. Packages include NVIDIA AI Enterprise software, NVIDIA Ampere architecture GPUs, VMware vSphere with Tanzu, as well as all setup, installation and configuration services.


How do you position and sell?

To help you identify opportunities for this HPE GreenLake and NVIDIA AI Enterprise solution, here are some qualifying questions that you can use to identify the stages of your customer’s AI journey, their AI deployment methods, and the services/workloads supported by IT.

For the IT administrator/infrastructure expert:

  • Has your company deployed AI, or is it planning to deploy AI in the future?
  • Does it need to support multiple AI projects across different business teams and priorities?
  • How many departments are working on AI projects?
  • How many data scientists do you support?
  • Do your IT teams have SLAs with other departments for reliability/up-time?
  • How do you manage software lifecycles?
  • How does your organization provide infrastructure support for AI projects today?
  • Do your IT teams need long-term support (LTS) to stabilize development cycles?
  • Are you interested in a platform that can grow with you as your AI projects expand?

For the AI practitioner:

  • How many AI projects do you have?
  • Does your team need high-performance compute resources?
  • Do you use open-source AI tools? If so, which ones?
  • Is it important to you that you have access to the latest innovations in AI development?
  • What container orchestration tools do you prefer? Kubernetes, OpenShift, Tanzu?
  • Is a scalable platform with proven frameworks, tools and libraries of interest to you?

For the Line of Business (LOB) leaders:

  • Does your organization have an AI strategy?
  • Are multiple business functions/departments leveraging AI?
  • Would you like to accelerate your organization's adoption of AI?
  • Would you consider your AI deployments successful?
  • Are you experiencing any challenges with your existing AI environment?

For more information

Arrow's dedicated teams can support your every need. Get started today!