Zhongke Yushu: Redefining the DPU
At a recent event held by Zhongke Yushu, the unveiling of the company's third-generation DPU attracted significant attention. Professor Zheng Weimin, a faculty member at Tsinghua University and a member of the Chinese Academy of Engineering, opened the discussion by emphasizing the world's shift toward an information-led economy. This evolution is characterized by increasingly diverse and complex application scenarios, creating unprecedented demand for processing power, the foundation of the digital economy. At the center of it all is the chip, the heart of computational advancement.
Historically, both GPUs and CPUs have been critical components in data centers owing to their inherent advantages. However, as the internet has matured, an immense influx of data has necessitated accelerated data processing in these centers, giving rise to the Data Processing Unit (DPU) as a new solution for handling data ingress and egress efficiently.
A DPU, as a data processor, is tasked with functions including network transmission, storage, computation, and data security. It is often envisioned as the third major component in the data center, following the CPU and GPU.
Professor Zheng further articulated the significance of the DPU, asserting that it is not merely a new piece of technology but vital infrastructure that will play an integral role in the future development of data centers and intelligent computing hubs. These facilities require products with high bandwidth, low latency, and high throughput, which the DPU can provide. Its role is increasingly recognized as crucial to initiatives such as the East Data West Computing project and the establishment of powerful computing networks, fueling competition among domestic and international companies alike.
Mr. Cheng Ruiqi from Zhongke Yushu revealed that their research team had begun exploring the DPU concept quite some time ago, an area whose definition and functionality have evolved dramatically over the years.
The CEO, Yan Guihai, candidly acknowledged that the term DPU is not new; early on, a DPU could be likened to a network card, primarily transferring data from the network to the CPU. Yet, he asserted, the technology has transformed in recent years into one of the most important innovations in chip architecture.
Yan elaborated on the evolution of the DPU, stating that it now transcends its traditional function of simply connecting a host to the network. Rather, it rewires the high-bandwidth, low-latency networks that connect computation nodes. He likened the development of DPU technology to the transition from the green-train era to the high-speed-rail era: it fundamentally changes how connectivity is approached, while also addressing the logical distance between virtual machines and physical servers.
The company interprets the roles of the various components in the computing landscape by likening the CPU to the brain, the GPU to muscle, and the DPU to the nervous system. The CPU is responsible for the application ecosystem and determines how general-purpose a computing system can be. The GPU provides high-density processing capacity and power, whereas the DPU facilitates efficient data flow between CPUs and GPUs, ensuring that the system works in unison. Having matured over the years, the DPU is now poised to become an essential part of computing infrastructure, offering foundational support in networking, data, storage, and security.
In essence, the DPU compensates for CPU inefficiencies and relieves overloaded GPUs, propelling it to a core role in the system architectures of computing centers. Zhongke Yushu aims to deliver first-rate DPU products that meet demands for elastic resource expansion, efficient resource interconnection, and acceleration of various computational tasks, as well as unified operation and management.
To realize such ambitious objectives, three levels of focus were identified: first, redefining the DPU through cutting-edge chip architecture; second, solidifying software integration for maximum compatibility with client applications; and third, delivering a demonstration platform that incorporates the architecture and software, enabling clients to adopt the DPU at minimal cost for operational validation.
"Achieving these goals means that the DPU is more than just a chip; it is a cohesive system that integrates chips, software, and platforms," Yan asserted, laying the groundwork for Zhongke Yushou's DPU technological foundation.
To discuss the technical foundation, one must first acknowledge the choices made in selecting a technological direction for the DPU.
After weighing its accumulated expertise against the current landscape, Zhongke Yushu determined that evolving the DPU was not merely a matter of increasing core counts or advancing process technology; it required a shift in architectural framework.
Architectural innovation is not optional; it is essential. This clarion call was echoed by Lu Wenyuan, who underscored the company's focus on leveraging theoretical insights to navigate the complexities of chip design. Significant underlying innovations were introduced, starting with the self-developed KPU architecture aimed at specialized computing tasks. This architecture utilizes heterogeneous computing cores tailored to specific high-density application computations.
Such software-defined hardware technology allows upper-layer application computations to be mapped dynamically onto lower-layer processing cores. Through a data-driven mode of operation, different calculation functions can be activated as needed, yielding several advantages: customizability according to application characteristics, an exceptionally high degree of parallelism, and optimized performance in data-heavy environments.
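To make the idea of software-defined mapping more concrete, the following minimal C sketch models data-driven dispatch to heterogeneous cores: a descriptor travels with the data and selects which specialized core processes it, with no per-item host-side control path. All names here (work_descriptor, dispatch, the per-core handlers) are hypothetical illustrations, not part of any published KPU interface.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical sketch of data-driven dispatch to heterogeneous cores.
   All type and function names are illustrative, not a real KPU API. */

typedef enum { KERNEL_PACKET_PARSE, KERNEL_CRYPTO, KERNEL_STORAGE_CRC } kernel_kind;

/* The descriptor travels with the data; its kind field, not a host-side
   control path, selects which specialized core handles it. */
typedef struct {
    kernel_kind kind;
    const char *payload;
} work_descriptor;

static void run_on_parse_core(const work_descriptor *d)   { printf("parse core:   %s\n", d->payload); }
static void run_on_crypto_core(const work_descriptor *d)  { printf("crypto core:  %s\n", d->payload); }
static void run_on_storage_core(const work_descriptor *d) { printf("storage core: %s\n", d->payload); }

/* Data-driven activation: each arriving descriptor wakes the matching core. */
static void dispatch(const work_descriptor *d)
{
    switch (d->kind) {
    case KERNEL_PACKET_PARSE: run_on_parse_core(d);   break;
    case KERNEL_CRYPTO:       run_on_crypto_core(d);  break;
    case KERNEL_STORAGE_CRC:  run_on_storage_core(d); break;
    }
}

int main(void)
{
    work_descriptor stream[] = {
        { KERNEL_PACKET_PARSE, "ethernet frame" },
        { KERNEL_CRYPTO,       "tls record" },
        { KERNEL_STORAGE_CRC,  "nvme block" },
    };
    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++)
        dispatch(&stream[i]);
    return 0;
}

In such a model, independent descriptors could be handed to different cores concurrently, which is where the high degree of parallelism described above would presumably come from.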
The second innovative architecture is the KISA instruction set architecture, tailored to DPU workloads characterized by heavy I/O, data-intensive processing, and agile heterogeneous operation. This step turns software-defined technology into an executable path, simplifying programming and making domain-centric computation more efficient.
The first of its kind in the industry, KISA focuses on data rather than control, employing data streams as its fundamental operational unit instead of traditional byte-oriented processing. It also standardizes the management of heterogeneous processing cores through a unified instruction set.
Currently, the KISA instruction set comprises the foundational architecture as well as specialized extensions for various processing applications, covering more than 25 distinct scenarios with hundreds of validated use cases.
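As a rough illustration of the contrast between byte-oriented processing and the stream-level operations a data-centric instruction set describes, the C sketch below models both in software; the stream_op descriptor and submit_stream_op function are invented for illustration and do not reflect actual KISA encodings.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Byte-oriented path: the CPU touches every byte in a loop. */
static uint32_t checksum_bytewise(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

/* Stream-oriented path: one descriptor names the whole stream and the
   operation to apply, so a specialized core can consume it as a unit. */
typedef struct {
    const uint8_t *base;   /* start of the data stream                 */
    size_t         len;    /* length of the stream in bytes            */
    uint8_t        opcode; /* e.g. a CHECKSUM operation (illustrative) */
} stream_op;

static uint32_t submit_stream_op(const stream_op *op)
{
    /* Modeled in software here; on a DPU this would be a single
       stream-level instruction executed by a dedicated core. */
    return checksum_bytewise(op->base, op->len);
}

int main(void)
{
    uint8_t data[1024];
    for (size_t i = 0; i < sizeof data; i++)
        data[i] = (uint8_t)i;

    stream_op op = { data, sizeof data, 0 /* CHECKSUM */ };
    printf("bytewise: %u, stream op: %u\n",
           (unsigned)checksum_bytewise(data, sizeof data),
           (unsigned)submit_stream_op(&op));
    return 0;
}

The point of the contrast is that a stream-level operation describes work at the granularity of a whole data stream, so the instruction stream stays short even when the data volume is large.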
Zhongke Yushu's launch of the third-generation K2 Pro DPU exemplifies the fruits of these dual innovations.
When summarizing the defining features of the K2 Pro, Lu emphasized the importance of practicality and mass production from the outset of the project, leading to a holistic system engineering approach rather than an isolated function-centric view.
Accordingly, six key areas saw intensive optimization. First, the K2 Pro is designed as an advanced, easy-to-use network chip with network-offloading features, aiming to bring end-to-end latency below 1.2 microseconds and filling a domestic gap in low-latency network cards.
Second, the K2 Pro serves as a high-throughput data processing chip, specifically tuned to DPU workloads that must handle a wide variety of functions under tight latency requirements.
Third, the K2 Pro emphasizes offloading: control tasks are moved off the host, raising the share of offloadable work from below 20% to more than 90% and significantly streamlining service-governance scenarios (a simplified sketch of this offload pattern follows the sixth point below).
Fourth, the chip is built to be flexible and scalable, with programmability that supports extensive upgrades as protocols evolve and workloads grow more complex.
This is supplemented by a self-developed high-speed interconnect solution designed explicitly for chip-level integration, achieving communication latency under 20 nanoseconds with certified bandwidth expandability.
Furthermore, it boasts seamless integration capabilities for external processing units, creating greater flexibility in scenarios such as finance where comprehensive protocol and algorithm execution is required.
Fifth, the K2 Pro serves as a stable management chip, responsible for various virtual machine operations and resource management, armed with stringent monitoring and recovery mechanisms for efficient fault management.
Finally, improved energy efficiency remains a focus, achieved through meticulous operational optimizations ensuring better performance metrics while lowering total system energy consumption.
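The offload pattern from the third point can be illustrated with a small, hypothetical C sketch: the host installs a flow rule once on the control path, while per-packet forwarding runs entirely in the (here simulated) DPU data path, leaving only rule misses for the host. The rule table and functions are assumptions made for illustration, not a real driver interface.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t dst_ip; uint16_t dst_port; } flow_key;
typedef struct { flow_key key; uint8_t out_port; bool valid; } flow_rule;

#define RULE_TABLE_SIZE 16
static flow_rule rule_table[RULE_TABLE_SIZE]; /* would live on the DPU */

/* Control path (runs rarely, on the host): install a rule into the table. */
static void host_install_rule(flow_key key, uint8_t out_port)
{
    for (int i = 0; i < RULE_TABLE_SIZE; i++) {
        if (!rule_table[i].valid) {
            rule_table[i] = (flow_rule){ key, out_port, true };
            return;
        }
    }
}

/* Data path (runs per packet, on the DPU): match and forward without the host. */
static int dpu_forward(flow_key key)
{
    for (int i = 0; i < RULE_TABLE_SIZE; i++) {
        if (rule_table[i].valid &&
            rule_table[i].key.dst_ip == key.dst_ip &&
            rule_table[i].key.dst_port == key.dst_port)
            return rule_table[i].out_port;
    }
    return -1; /* miss: only this exception path would involve the host */
}

int main(void)
{
    flow_key k = { 0x0A000001u, 443 };                 /* 10.0.0.1:443 */
    host_install_rule(k, 2);                           /* host touches the flow once */
    printf("forwarded via port %d\n", dpu_forward(k)); /* per-packet work stays off the host */
    return 0;
}

The more of this per-packet work that can run on the DPU instead of the host, the higher the offloadable share becomes, which is the shift from below 20% to more than 90% described above.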
Ultimately, thanks to these multifaceted considerations, Zhongke Yushu successfully unveiled its K2 Pro DPU, elevating it into the ranks of the industry's leading products.
The K2 Pro is offered in three major product lines, with different models serving different business needs.