
China Semiconductors: Future of Tech: AI Datacenter Networking Primer

Information Technology | 2026-03-23 | Bernstein

Future of Tech: AI Datacenter Networking Primer

As AI model sizes and computational requirements scale exponentially, single chips are no longer sufficient. Modern AI workloads demand massive clusters of accelerators that must operate as a unified computing fabric, making AI datacenter (AIDC) networking a critical determinant of system-level efficiency. As a result, we see AIDC networking evolving into one of the fastest-growing domains, with a runway that could last for multiple years.

Qingyuan Lin, Ph.D. +852 2123 2654 qingyuan.lin@bernsteinsg.com
Stacy A. Rasgon, Ph.D. +1 213 559 5917 stacy.rasgon@bernsteinsg.com
Francis Ma +852 2123 2626 francis.ma@bernsteinsg.com
Zheng Cui +852 2123 2694 zheng.cui@bernsteinsg.com
Arpad von Nemes +1 917 344 8461 arpad.vonnemes@bernsteinsg.com

Demand for AIDC networking chips is exploding due to the compound bandwidth effect, with a TAM roughly estimated to reach ~USD 100Bn in 2030 at a ~30% CAGR. In the multi-tier networking structures required for large-scale clusters, adding a single accelerator increases not only point-to-point bandwidth but also multiplies traffic across the higher tiers of the cluster, requiring many more networking components. And once the number of chips exceeds certain thresholds, additional layers of connections must be added. This compounding behavior means that total network throughput rises exponentially relative to the number of chips deployed. As hyperscalers accelerate the buildout of 100k+ GPU clusters, networking components become the second-largest cost item, positioning the AIDC networking TAM to grow faster than that of xPUs.

AIDC networking can be categorized into three major connection types. DC-DC connections focus on wide-area bandwidth and reliability across multiple DCs; CPU-centric connections manage data flows between the CPU and accelerators/NICs/SSDs etc.; and xPU-to-xPU connections (GPU/TPU/NPU) deliver the high-bandwidth, low-latency pathways needed to form large AI compute clusters.
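The compound bandwidth effect can be made concrete with a rough back-of-envelope model. The sketch below is our own illustration, not a figure from this report: it uses textbook non-blocking fat-tree formulas and an assumed switch radix of 64 ports to show that once a cluster outgrows a 2-tier fabric, adding a third tier raises the number of switches and links needed per accelerator, so networking spend grows faster than accelerator count.

```python
# Illustrative model (our assumption, not from the report): switch and
# link counts for a non-blocking fat-tree built from k-port switches.
# In a 2-tier leaf-spine, half of each leaf's ports face accelerators
# and half face spines; the 3-tier case is the classic fat-tree with
# k pods and (k/2)^2 core switches.

def fattree(k: int, tiers: int):
    """Return (hosts, switches, links) for a non-blocking fat-tree."""
    if tiers == 2:                 # leaf-spine
        hosts = k * (k // 2)       # k leaves x k/2 downlinks each
        switches = k + k // 2      # k leaves + k/2 spines
        links = 2 * hosts          # host links + leaf-spine uplinks
    elif tiers == 3:               # classic 3-level fat-tree
        hosts = k ** 3 // 4        # k pods x (k/2 edge) x (k/2 hosts)
        switches = 5 * k ** 2 // 4 # k^2 in pods + k^2/4 core
        links = 3 * hosts          # host + edge-agg + agg-core layers
    else:
        raise ValueError("only 2- and 3-tier fabrics modeled here")
    return hosts, switches, links

k = 64  # assumed switch radix
for tiers in (2, 3):
    hosts, switches, links = fattree(k, tiers)
    print(f"{tiers}-tier: max hosts={hosts:>6}  "
          f"switches/host={switches / hosts:.3f}  "
          f"links/host={links / hosts:.1f}")
```

With 64-port switches, a 2-tier fabric tops out at 2,048 accelerators; scaling past that threshold forces a third tier, which lifts links per accelerator from 2 to 3 and raises switches per accelerator, i.e. the marginal accelerator carries more networking cost than the previous one.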
Within xPU-to-xPU, scale-up networks connect multiple chips and pool their computing power so they behave logically like one chip (or 'node'), while scale-out networks stitch thousands of 'nodes' together across a cluster. These needs have given rise to multiple protocols (NVLink, PCIe, Ethernet, UALink, and China's UB), each tailored to different trade-offs in bandwidth, latency, cost, etc.

Alrick Shaw +1 917 344 8454 alrick.shaw@bernsteinsg.com

Competition in the scale-up networking domain remains intense and far from settled. Nvidia's NVLink still sets the performance benchmark with tight hardware-software integration and proven performance, but because it is a closed system, industry players are pushing alternative approaches. UALink and Ethernet-based SUE architectures aim to challenge Nvidia by promoting open ecosystems, reduced vendor lock-in, and lower cost structures. PCIe continues to gain traction with cloud service providers seeking mature, inexpensive solutions for certain workloads. Meanwhile, China may pursue a distinct path: Huawei's proprietary Unified Bus (UB) is designed as a unified architecture across multiple network layers, reflecting a strategic emphasis on larger cluster scale while single-chip performance lags. The competitive landscape is fluid, and the eventual winners could differ meaningfully by region and workload type.

For networking vendors, the sector offers strong industry beta and structurally attractive margins. The technological and capital barriers in high-performance interconnects are immense, limiting new entrants and resulting in high margins. Nvidia and Huawei represent closed ecosystems that benefit from a fully integrated end-to-end fabric across computing and networking. Broadcom and Marvell stand to gain share from CSP adoption of open ecosystems (such as Ethernet). Montage and other vendors in China are well positioned as domestic ecosystems localize core networking hardware.

INVESTMENT IMPLICATIONS

We rate Hygon and Cambricon Outperform, with price targets of CNY 280 and CNY 2,000, respectively.
NVDA (Outperform, $300 PT): The datacenter opportunity is enormous, and still early, with material upside still possible.

AVGO (Outperform, $525 PT): A strong 2025 AI trajectory seems set to accelerate into 2026 and beyond, bolstered by software, cash deployment, and superb margins & FCF.

Table Of Contents
Networking becoming a key pillar in AI infrastructure
Sizing the TAM for AIDC networking chips
The compound bandwidth effect
Architecture of AIDC network