Channel: DBA Consulting Blog

Data Processing Unit - How it Helps in Exascale Computing


Data Processing Unit: What the Heck is it?

Are you an IT professional or technology enthusiast looking to learn more about data processing units (DPUs)? Then this blog is for you!

In it, we’ll explore what a DPU is, how it is used in data processing, and its importance in the tech world. Learn more to stay abreast of the latest advancements in data processing technology.

WHAT IS A DATA PROCESSING UNIT (DPU)?

A DPU, or data processing unit, is a new class of programmable processor that combines an industry-standard, high-performance, software-programmable multi-core CPU, a high-performance network interface, and flexible and programmable acceleration engines. It is designed to move data around the data center and do data processing. DPUs are a new pillar of computing that joins CPUs and GPUs as one of the three processing units used in data centers.

The Data Processing Unit

WHAT IS THE DIFFERENCE BETWEEN A CPU, GPU AND A DPU?

A CPU, GPU, and DPU are all processing units used in data centers, but they differ in their functions.

A CPU is the central processing unit that executes the instructions that make up a computer program. A GPU is built for accelerated computing and performs parallel operations rather than serial operations.

A DPU moves data around the data center and does data processing. A DPU is a new class of programmable processor that combines an industry-standard, high-performance, software-programmable multi-core CPU, a high-performance network interface, and flexible and programmable acceleration engines. Here are the differences between them:

CPU:

Executes the instructions that make up a computer program

Flexible and responsive

The most ubiquitous processor

GPU:

Performs parallel operations rather than serial operations

Intended to process computer graphics but can handle other types of complex workloads such as supercomputing, AI, and machine learning

Accelerates computing

DPU:

Moves data around the data center

Does data processing

Combines an industry-standard, high-performance, software-programmable multi-core CPU, a high-performance network interface, and flexible and programmable acceleration engines

Introduction to Developing Applications with NVIDIA DOCA on BlueField DPUs

WHAT ARE THE BENEFITS OF DPUs?

Data Processing Units (DPUs) offer numerous benefits that can elevate computing systems to perform at optimal levels and effectively manage diverse workloads. DPUs enhance system efficiency, scalability, security, and flexibility.

DPUs help drive a more efficient and accelerated data processing pace across the entire network.

They improve workload management by offloading CPU-intensive tasks to allow CPUs to handle other workloads.

DPUs enable faster data transfers as they alleviate network congestion while reducing latency.

DPUs enhance security by isolating essential services, improving threat detection, and containing the impact of a breach.

They offer greater flexibility in designing custom networks that cater to individual computing requirements due to their programmable nature.

DPUs accelerate machine learning operations and GPU-based analytics while reducing the overall cost of such operations through increased efficiency.

Moreover, DPUs eliminate network bottlenecks and improve data center performance reliability through distributed processing. By leveraging this advanced technology, corporations can realize digital transformation by executing critical workload management with remarkable innovation and effectiveness.

NOTE

A true fact — According to data released in July 2020 by Market Research Future (MRFR), the global DPU market is expected to grow at an outstanding rate during the forecast period of 2020–2027.

Why wait for the future when you can process data like a pro today, thanks to Data Processing Units (DPUs)!

WHAT ARE SOME POTENTIAL DRAWBACKS OR LIMITATIONS OF USING DPUs IN A DATA CENTER ENVIRONMENT?

DPUs have many benefits in modern data centers, but there are also some potential drawbacks and limitations to consider. Some of these include:

Cost: DPUs can be expensive, and organizations may need to invest in new hardware to support them. This can be a significant upfront cost that may not be feasible for all organizations.

Integration: DPUs need to be integrated into the existing data center infrastructure, which can be a complex process. This may require additional resources and expertise to ensure a smooth integration.

Compatibility: Not all applications and workloads are compatible with DPUs. Organizations need to carefully evaluate their use cases and determine whether DPUs are the right fit for their needs.

Security: While DPUs can improve security by offloading security tasks from the main CPU, they can also introduce new security risks if not properly configured and managed.

Scalability: DPUs can help improve scalability by offloading network and communication workloads from the CPU, but they may not be able to keep up with rapidly growing workloads in all cases.

Overall, while DPUs offer many benefits in terms of performance, security, and scalability, organizations need to carefully evaluate their use cases and consider the potential drawbacks before investing in this technology.

From Petascale to Exascale Computing


WHAT ARE SOME POTENTIAL SECURITY RISKS ASSOCIATED WITH USING DPUs?

DPUs can improve security by offloading security tasks from the main CPU, but they can also introduce new security risks if not properly configured and managed. Some potential security risks associated with using DPUs are:

Misconfiguration: DPUs need to be properly configured to ensure that they are secure. Misconfiguration can lead to vulnerabilities that can be exploited by attackers.

Lack of visibility: DPUs can make it more difficult to monitor and detect security threats because they operate independently of the main CPU. This can make it harder to identify and respond to security incidents.

Attack surface: DPUs can increase the attack surface of a system because they introduce new hardware and software components that need to be secured. This can make it more difficult to ensure that all components are secure.

Compatibility issues: Not all security tools and applications are compatible with DPUs. This can limit the ability of organizations to use their existing security tools and may require them to invest in new tools that are compatible with DPUs.

Complexity: DPUs can add complexity to a system, which can make it more difficult to manage and secure. This can increase the risk of human error and make it harder to ensure that all components are properly secured.

WHEN WILL DATA PROCESSING UNITS BE AVAILABLE?

Data processing units (DPUs) will be available in the market in the coming months. DPUs are programmable processors specifically designed to process large amounts of data efficiently. Due to their high processing power, they are increasingly being used in data centers, cloud computing, and AI applications to accelerate data transfer and improve overall system performance.

DPUs are becoming an essential component for companies managing large amounts of data due to their ability to handle a diverse range of workloads. As such, many technology firms have started developing their own DPUs.

However, industry experts predict that there will be a shortage of DPUs when they initially launch on the market. This shortage can be attributed to the ongoing semiconductor chip shortage that has impacted various industries worldwide. It is expected that by 2022, the availability of DPUs will increase as more semiconductor foundries produce these chips.

NOTE

According to a report by MarketsandMarkets™, the global DPU market size is projected to grow from USD 1.5 billion in 2020 to USD 4.9 billion by 2025 at a CAGR of 26.7%. This growth is fueled by increased demand for advanced computing capabilities driven by cloud computing adoption, shift towards AI workloads, and the rapid growth of Big Data analytics.

Ready to upgrade your data game? Here’s how to get your hands on the ultimate processing power tool.

DPUs, or Data Processing Units, are specialized processors designed to offload networking and security tasks from the main CPU of a computer or server. They are a new class of reprogrammable high-performance processors combined with high-performance network interfaces and flexible and programmable acceleration engines.

DPUs can improve performance, reduce latency in network applications, and support virtualization and containerization. They can also help organizations create architectures to support next-gen reliability, performance, and security requirements. DPUs are expected to play an increasingly important role in modern data centers and cloud computing environments as the demand for high-performance, secure, and scalable networks continues to grow.

While DPUs offer many benefits, they can also introduce new security risks if not properly configured and managed. Overall, DPUs are a promising technology that can help organizations optimize their network performance and security while reducing costs.

2021 ECP Annual Meeting: ADIOS Storage and in situ I/O


FREQUENTLY ASKED QUESTIONS

Q: What is a Data Processing Unit (DPU)?

A: A Data Processing Unit (DPU) is a specialized hardware component used for accelerating and optimizing data processing tasks in computer systems.

Q: How does a DPU work?

A: A DPU works by offloading compute-intensive and data-intensive tasks from the CPU to a specialized hardware accelerator, resulting in increased performance and efficiency.

Q: What are some common use cases of DPUs?

A: DPUs are commonly used in data center applications, including big data processing, machine learning, and artificial intelligence. They can also be found in edge computing devices, such as routers and gateways.

Q: What are the benefits of using a DPU?

A: The benefits of using a DPU include increased performance, reduced latency, improved energy efficiency, and enhanced security.

Q: Can a DPU be added to an existing computer system?

A: Yes, it is possible to add a DPU to an existing computer system, provided that the system is compatible with the DPU and has the necessary hardware expansion slots.

Q: What are some examples of DPUs on the market?

A: Some examples on the market include NVIDIA’s BlueField DPUs, Xilinx’s ALVEO-based SmartNICs, and Intel’s FPGA-based SmartNIC platforms.

What Is A DPU (Data Processing Unit)? 

Data processing units, commonly known as DPUs, are a new class of reprogrammable high-performance processors combined with high-performance network interfaces that are optimized to perform and accelerate the network and storage functions carried out by data center servers. DPUs plug into a server’s PCIe slot just as a GPU would, and they allow servers to offload network and storage functions from the CPU to the DPU, freeing the CPU to focus on running the operating system and system applications. DPUs often pair a reprogrammable FPGA with a network interface card to accelerate network traffic, much the way GPUs accelerate artificial intelligence (AI) applications by offloading mathematical operations from the CPU to the GPU. GPUs were originally used to deliver rich, real-time graphics, but because they can process large amounts of data in parallel, they are also ideal for accelerating AI workloads such as machine learning and deep learning.

DPU-accelerated servers will become extremely popular thanks to their ability to offload network functions from the CPU to the DPU, freeing up precious CPU processing power so the CPU can run more applications and run the operating system as efficiently as possible without being bogged down by network activities. In fact, some experts estimate that 30% of CPU processing power goes towards handling network and storage functions. Offloading storage and network functions to the DPU frees up that processing power for functions such as virtual or containerized workloads. Additionally, DPUs can handle functions that include network security, firewall tasks, encryption, and infrastructure management.
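To put that 30% figure in perspective, here is a quick back-of-the-envelope calculation; the 64-core server and the percentages are illustrative assumptions, not a benchmark:

```python
# Illustrative arithmetic only: CPU capacity a DPU could reclaim on a
# hypothetical 64-core server, assuming the cited 30% infrastructure share.
cores = 64
infra_share = 0.30                 # assumed share of cycles spent on network/storage work
freed = cores * infra_share        # core-equivalents handed back to applications
gain = freed / (cores - freed)     # relative growth in application capacity
print(f"{freed:.1f} cores freed, ~{gain:.0%} more application capacity")
```

Under those assumptions, offloading returns roughly 19 core-equivalents, about 43% more capacity for application workloads.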

DPUs will become the third component in data center servers, alongside CPUs (central processing units) and GPUs (graphics processing units), because of their ability to accelerate and perform network and storage functions. The CPU is used for general-purpose computing, the GPU to accelerate artificial intelligence applications, and the DPU in a DPU-equipped server to process data and move it around the data center.

Overall, DPUs have a bright future thanks to the ever-increasing amount of data stored in data centers, requiring a solution that can accelerate storage and networking functions performed by high-performance data center servers. DPUs can breathe new life into existing servers because they can reduce the CPU utilization of servers by offloading network and storage functions to the DPU. Estimates indicate that 30% of CPU utilization goes towards networking functions, so moving them to the DPU will provide you with extra CPU processing power. Thus, DPUs can extend the life of your servers for months or even years, depending on how much of your system’s resources are being used for network functions. 

WHAT IS A DPU?! | Server Factory Explains

What Are The Components Of A DPU? 

A DPU is a system on a chip that is made from three primary elements. First, data processing units typically have a multi-core CPU that is software programmable. The second element is a high-performance network interface that enables the DPU to parse, process, and efficiently move data through the network. The third element is a rich set of flexible, programmable acceleration engines that offload network and storage functions from the CPU to the DPU. DPUs are often integrated with smart NICs offering powerful network data processing. 

Nvidia is leading the way when it comes to DPUs, recently releasing the Nvidia BlueField-2 DPU, which it describes as the world’s first data-infrastructure-on-a-chip architecture, optimized for modern data centers. The BlueField-2 DPU allows data center servers to offload network and storage functions from the CPU to the DPU, letting the DPU handle mundane storage and network functions.

Nvidia DPUs are accessible through the DOCA SDK, enabling a programmable API for DPU hardware. DOCA enables organizations to program DPUs to accelerate data processing for moving data in and out of servers, virtual machines, and containers. DPUs accelerate network functions and handle east-west traffic associated with VMs and containers and north-south traffic flowing in and out of data centers. That said, where DPUs shine is in moving data within a data center because they are optimized for data movement.  
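The east-west/north-south distinction above can be made concrete with a tiny sketch; the `10.0.0.0/8` internal prefix and the `traffic_direction` helper are assumptions for illustration only, not part of the DOCA API:

```python
import ipaddress

# Assumed internal address space for this sketch.
DC_PREFIX = ipaddress.ip_network("10.0.0.0/8")

def traffic_direction(src: str, dst: str) -> str:
    """East-west: both endpoints inside the data center.
    North-south: one endpoint outside (traffic entering/leaving it)."""
    inside = lambda ip: ipaddress.ip_address(ip) in DC_PREFIX
    return "east-west" if inside(src) and inside(dst) else "north-south"

print(traffic_direction("10.1.2.3", "10.4.5.6"))      # east-west
print(traffic_direction("10.1.2.3", "203.0.113.7"))   # north-south
```

A DPU accelerates both patterns, but it is the server-to-server east-west traffic, which dominates inside modern data centers, where its data-movement engines matter most.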

Furthermore, Nvidia states that DPUs are capable of offloading and accelerating all data center security services, including next-generation firewalls, micro-segmentation, data encryption, and intrusion detection. In the past, security was handled by software running on x86 CPUs; now it can be offloaded to DPUs, freeing up CPU resources for other tasks.

2021 ECP Annual Meeting - ADIOS User's BOF

What Are The Most Common Features Of DPUs? 

DPUs have a ton of features, but here are the most common features that are found on DPUs: 

  • High-speed connectivity via one or multiple 100 Gigabit to 200 Gigabit interfaces 
  • High-speed packet processing 
  • Multi-core processing via Arm- or MIPS-based CPUs (e.g., 8x 64-bit Arm cores)
  • Memory controllers offering support for DDR4 and DDR5 RAM 
  • Accelerators 
  • PCI Express Gen 4 Support 
  • Security features 
  • Custom operating system separated from the host system’s OS 

What Are Some Of The Most Common DPU Solutions? 

Nvidia has released a DPU known as the Nvidia Mellanox BlueField-2 DPU and the BlueField-2X DPU. The BlueField-2X DPU has everything that the BlueField-2 DPU has, plus an additional Ampere GPU, enabling artificial intelligence functionality on the DPU. Nvidia included a GPU on its DPU to handle security, network, and storage management. For example, machine learning or deep learning can run on the data processing unit itself and be used to identify and stop an attempted network breach. Furthermore, Nvidia has stated that it intends to launch BlueField-3 in 2022 and BlueField-4 in 2023.

Companies such as Intel and Xilinx are also introducing DPUs into the space, though some of their offerings are known as SmartNICs. SmartNICs from Xilinx and Intel utilize FPGAs to accelerate network and storage functions. SmartNICs work the same way as data processing units in that they offload network functions from the CPU to the SmartNIC, freeing up processing power by delegating network and storage functions to the SmartNIC. FPGAs bring parallelism and customization to the data path because of their reprogrammable nature.

For example, Xilinx offers the ALVEO series of SmartNICs with various products, and Intel and its partners offer several FPGA-based SmartNIC solutions to accelerate data processing workloads in large data centers. Intel claims that its SmartNICs “boost data center performance levels by offloading switching, storage, and security functionality onto a single PCIe platform that combines both Intel FPGAs and Intel Xeon Processors.” Intel also offers a newer SmartNIC solution known as the Silicom FPGA SmartNIC N5010, which combines an Intel Stratix 10 FPGA with an Intel Ethernet 800 Series Adapter, providing organizations with 4x 100 Gigabit Ethernet ports and plenty of bandwidth for data centers.

The U.S. Exascale Computing Project

Why Are DPUs Increasing In Popularity? 

We live in a digital information age where tons of data is generated daily, especially as IoT devices, autonomous vehicles, connected homes, and connected workplaces come online, saturating data centers with data. So there is a need for solutions that let data centers cope with the ever-increasing amount of data moving in and out of, and through, the data center.

DPUs contain a data movement system that accelerates data movement and processing operations, offloading networking functions from a server’s processor to the DPU. DPUs are a great way to extract more processing power from a server, especially now that Moore’s Law has slowed, pushing organizations toward hardware accelerators. Because more performance can be extracted from existing hardware, a server can run more application workloads, reducing an organization’s total cost of ownership.

Data processing units and FPGA SmartNICs are gaining popularity, with Microsoft and Google exploring bringing them to their data centers to accelerate data processing and artificial intelligence workloads. Moreover, Nvidia has partnered with VMware to offload networking, security, and storage tasks to the DPU. 

What Are Some Other Performance Accelerators? 

We will now discuss some of the other performance accelerators that are often used in data centers. The performance accelerators that we will discuss include GPUs (graphics processing units), computational storage, and FPGA (field-programmable gate arrays). 

1. Graphics Processing Units (GPUs) 

Graphics processing units are often deployed in high-performance servers in data centers to accelerate workloads. A server will often offload complicated mathematical calculations to the GPU because the GPU can perform them faster. This is because GPUs employ a parallel architecture built from many more, smaller cores than a CPU has, enabling them to handle many tasks in parallel and allowing organizations to extract more performance from their servers.

Source Credit (Nvidia) 

For example, the average CPU has anywhere between four to ten cores, while GPUs have hundreds or thousands of smaller cores that operate together to tackle complex calculations in parallel. As such, GPUs are different from CPUs, which have fewer cores and are more suitable for sequential data processing. GPU accelerated servers are great for high-resolution video editing, medical imaging, artificial intelligence, machine learning training, and deep learning training. 

GPUs installed on data center servers are great for accelerating deep learning and machine learning training, which require computation power that CPUs simply do not offer. GPUs perform artificial intelligence tasks quicker than CPUs because they are equipped with HBM (high-bandwidth memory) and hundreds or thousands of cores that can perform floating-point arithmetic significantly faster than traditional CPUs.

For these reasons, organizations use GPUs to train deep learning and machine learning models. The larger the data set and the larger the neural network, the more likely an organization will need a GPU to accelerate the workload. Although CPUs can perform deep learning and machine learning training, they take a long time to complete complex computations: a training run that takes a few hours on a GPU may take days or even weeks using only a CPU.

Moreover, adding GPUs to data center servers provides significantly better data throughput and offers the ability to process and analyze data with as little latency as possible. Latency refers to the amount of time required to complete a given task, and data throughput refers to the number of tasks completed per unit of time. 
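Those two definitions can be made concrete with a small numeric sketch; the numbers are purely illustrative:

```python
# Illustrative numbers only: parallel hardware raises throughput while
# per-task latency stays the same.
per_task_latency_s = 0.05                       # each task takes 50 ms start to finish
serial_throughput = 1 / per_task_latency_s      # one task at a time: ~20 tasks/s
parallel_lanes = 100                            # e.g. many accelerator cores working at once
parallel_throughput = parallel_lanes / per_task_latency_s   # ~2000 tasks/s
print(serial_throughput, parallel_throughput)
```

Adding parallel lanes multiplies throughput; cutting latency, by contrast, requires making each individual task finish faster.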

2. Computational Storage Devices (CSD) 

Computational storage has made its way into data centers as a performance accelerator. Computational storage processes data at the storage device level, reducing the data that moves between the CPU and the storage device. It enables real-time data analysis and improves a system’s performance by reducing input/output bottlenecks. Computational storage devices look the same as regular storage devices, but they include a multi-core processor that performs functions such as indexing data as it enters the device and searching the device for specific entries.

Source Credit (AnandTech)

Computational storage devices are increasing in popularity due to the growing need to process and analyze data in real-time. Real-time data processing and analysis is possible because the data no longer has to move between the storage device and the CPU. Instead, the data is processed on the storage device itself. Bringing compute power to storage media at the exact location where the data is located enables real-time analysis and decision making.  
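A toy model of that idea; the `ComputationalDrive` class is hypothetical, purely to show the pattern of pushing the filter down to where the data lives:

```python
class ComputationalDrive:
    """Hypothetical storage device that can run a filter over its records locally."""

    def __init__(self, records):
        self._records = list(records)

    def read_all(self):
        # Conventional path: every record crosses the bus to the host.
        return list(self._records)

    def query(self, predicate):
        # Computational-storage path: filtering happens on the device,
        # so only matching records are transferred.
        return [r for r in self._records if predicate(r)]

drive = ComputationalDrive(range(1_000_000))
wanted = lambda r: r % 100_000 == 0

host_filtered = [r for r in drive.read_all() if wanted(r)]  # 1,000,000 records moved
on_device = drive.query(wanted)                             # only 10 records moved
print(len(on_device))                                       # 10
```

In the conventional path every record crosses the bus before filtering; in the computational path only the ten matches do, which is exactly the I/O-bottleneck reduction described above.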

What is CPU,GPU and TPU? Understanding these 3 processing units using Artificial Neural Networks.

3. FPGA (Field Programmable Gate Array) 

Source Credit (Xilinx)

An FPGA is an integrated circuit made from logic blocks, I/O cells, and other resources that allow users to reprogram and reconfigure the chip according to the specific requirements of the workload it will perform. FPGAs are gaining popularity for deep learning and machine learning inference processing. Additionally, FPGA-based SmartNICs are being used because of their ability to offload network and storage functions from the CPU to the SmartNIC. Network and storage functions can place a significant burden on a system’s CPU, so offloading them to a SmartNIC frees up precious CPU processing power to run the OS and other critical applications. FPGA-based SmartNICs let organizations optimize the SmartNIC for the specific workload that will be offloaded to it, providing customizability that’s difficult to find elsewhere.

Bottom Line

At this point, it should come as no surprise that DPUs (data processing units) are gaining popularity in high-performance data center servers due to their ability to take over storage and network functions, allowing the processor to focus on running the operating system and revenue-generating applications. Premio offers a number of DPU servers that make servers more powerful by offloading data processing, network functions, and storage functions from the CPU to the DPU. Nvidia claims that a single BlueField-2 Data Processing Unit can handle the same data center services that would require 125 CPU cores, allowing DPU servers to work smarter, not harder. So, if you’re interested in buying DPU servers, feel free to contact our DPU server professionals. They will be more than happy to assist you with choosing or customizing a solution that meets your specific requirements.

High Performance Storage at Exascale

What’s a DPU? 

Specialists in moving data in data centers, DPUs, or data processing units, are a new class of programmable processor and will join CPUs and GPUs as one of the three pillars of computing.

Of course, you’re probably already familiar with the central processing unit. Flexible and responsive, for many years CPUs were the sole programmable element in most computers.

More recently the GPU, or graphics processing unit, has taken a central role. Originally used to deliver rich, real-time graphics, their parallel processing capabilities make them ideal for accelerated computing tasks of all kinds. Thanks to these capabilities, GPUs are essential to artificial intelligence, deep learning and big data analytics applications.

Over the past decade, however, computing has broken out of the boxy confines of PCs and servers — with CPUs and GPUs powering sprawling new hyperscale data centers.

These data centers are knit together with a powerful new category of processors. The DPU has become the third member of the data-centric accelerated computing model.

“This is going to represent one of the three major pillars of computing going forward,” NVIDIA CEO Jensen Huang said during a GTC keynote.

“The CPU is for general-purpose computing, the GPU is for accelerated computing, and the DPU, which moves data around the data center, does data processing.”

What's a DPU?

A system on a chip that combines:

  • An industry-standard, high-performance, software-programmable multi-core CPU
  • A high-performance network interface
  • Flexible and programmable acceleration engines

CPU vs GPU vs DPU: What Makes a DPU Different?

A DPU is a new class of programmable processor that combines three key elements into a system on a chip, or SoC:

An industry-standard, high-performance, software-programmable, multi-core CPU, typically based on the widely used Arm architecture, tightly coupled to the other SoC components.

A high-performance network interface capable of parsing, processing and efficiently transferring data at line rate, or the speed of the rest of the network, to GPUs and CPUs.

A rich set of flexible and programmable acceleration engines that offload and improve application performance for AI and machine learning, zero-trust security, telecommunications, and storage, among others.

All these DPU capabilities are critical to enable an isolated, bare-metal, cloud-native computing platform that will define the next generation of cloud-scale computing.

DPU vs SmartNIC vs Exotic FPGAs A Guide to Differences and Current DPUs

DPUs Incorporated into SmartNICs

The DPU can be used as a stand-alone embedded processor. But it’s more often incorporated into a SmartNIC, a network interface controller used as a critical component in a next-generation server.

Other devices that claim to be DPUs miss significant elements of these three critical capabilities.

For example, some vendors use proprietary processors that don’t benefit from the broad Arm CPU ecosystem’s rich development and application infrastructure.

Others claim to have DPUs but make the mistake of focusing solely on the embedded CPU to perform data path processing.

A Focus on Data Processing

That approach isn’t competitive and doesn’t scale, because trying to beat the traditional x86 CPU with a brute force performance attack is a losing battle. If 100 Gigabit/sec packet processing brings an x86 to its knees, why would an embedded CPU perform better?

Instead, the network interface needs to be powerful and flexible enough to handle all network data path processing. The embedded CPU should be used for control path initialization and exception processing, nothing more.
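That division of labor can be sketched as a flow cache; the names and logic below are illustrative, not a real DPU API. The data path forwards packets whose flow it already knows, and only the first packet of a new flow is punted to the embedded CPU, which installs a fast-path entry:

```python
# Toy sketch of data-path vs control-path separation.
flow_table = {}       # flows the "hardware" data path already knows how to handle
punted_to_cpu = 0     # exceptions escalated to the embedded control CPU

def install_flow(flow, action):
    flow_table[flow] = action            # control path programs the data path once

def handle_packet(flow):
    global punted_to_cpu
    if flow in flow_table:               # fast path: pure table lookup, no CPU involved
        return flow_table[flow]
    punted_to_cpu += 1                   # slow path: first packet of a flow reaches the CPU,
    install_flow(flow, "forward")        # which decides and installs a fast-path entry
    return flow_table[flow]

for _ in range(1000):                    # 1000 packets of the same flow:
    handle_packet(("10.0.0.1", "10.0.0.2", 443))
print(punted_to_cpu)                     # only the first packet was punted
```

Everything after the first packet is handled entirely in the data path, which is why the embedded CPU only needs to cover initialization and exceptions.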

At a minimum, there are 10 capabilities the network data path acceleration engines need to be able to deliver:

  1. Data packet parsing, matching and manipulation to implement an open virtual switch (OVS)
  2. RDMA data transport acceleration for Zero Touch RoCE
  3. GPUDirect accelerators to bypass the CPU and feed networked data directly to GPUs (both from storage and from other GPUs)
  4. TCP acceleration including RSS, LRO, checksum, etc.
  5. Network virtualization for VXLAN and Geneve overlays and VTEP offload
  6. Traffic shaping “packet pacing” accelerator to enable multimedia streaming, content distribution networks and the new 4K/8K Video over IP (RiverMax for ST 2110)
  7. Precision timing accelerators for telco cloud RAN such as 5T for 5G capabilities
  8. Crypto acceleration for IPSEC and TLS performed inline, so all other accelerations are still operational
  9. Virtualization support for SR-IOV, VirtIO and para-virtualization
  10. Secure Isolation: root of trust, secure boot, secure firmware upgrades, and authenticated containers and application lifecycle management
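To give a flavor of what item 1’s “parsing and matching” involves, here is a minimal, illustrative parser for the 8-byte VXLAN header defined in RFC 7348 (the overlay format named in item 5); a DPU does this in hardware at line rate, and this sketch is not a DPU API:

```python
def parse_vxlan_vni(header: bytes) -> int:
    """Parse the 8-byte VXLAN header (RFC 7348) and return the 24-bit VNI.

    Layout: flags (1 byte; bit 0x08 = "VNI valid"), 3 reserved bytes,
    VNI (3 bytes), 1 reserved byte.
    """
    if len(header) < 8:
        raise ValueError("VXLAN header is 8 bytes")
    if not header[0] & 0x08:
        raise ValueError("I flag not set: no valid VNI")
    return int.from_bytes(header[4:7], "big")

# Build a header carrying VNI 5001 and parse it back.
hdr = bytes([0x08, 0, 0, 0]) + (5001).to_bytes(3, "big") + bytes([0])
print(parse_vxlan_vni(hdr))   # 5001
```

The VNI recovered here is what lets the data path map an encapsulated packet to the right virtual network, one small step in the parse-match-act pipeline a DPU runs per packet.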

These are just 10 of the acceleration and hardware capabilities that are critical to being able to answer yes to the question: “What is a DPU?”

What is a DPU processor (Data Processing Unit)? What Use Cases ?


Many so-called DPUs focus on delivering just one or two of these functions.

The worst try to offload the datapath in proprietary processors.

While good for prototyping, this is a fool’s errand because of the scale, scope and breadth of data centers.

Additional DPU-Related Resources

Defining the SmartNIC: What is a SmartNIC and How to Choose the Best One

Best Smart NICs for Building the Smart Cloud: PART I

Welcome to the DPU-Enabled Data Revolution Era

Accelerating Bare Metal Kubernetes Workloads, the Right Way

Mellanox Introduces Revolutionary SmartNICs for Making Secure Cloud Possible

Achieving a Cloud Scale Architecture with SmartNICs

Provision Bare-Metal Kubernetes Like a Cloud Giant!

Kernel of Truth Podcast: Demystifying the DPU and DOCA 

Learn more on the NVIDIA Technical Blog.

Securing Next Generation Apps over VMware Cloud Foundation with Bluefield-2 DPU

More Information:

https://www.hardwarezone.com.sg/tech-news-nvidias-new-bluefield-dpus-will-accelerate-data-center-infrastructure-operations

https://www.nvidia.com/en-us/networking/products/data-processing-unit/


