Marvell: A Betrayal of Broadcom?

Savings News June 20, 2025

In the bustling world of technology, September 2023 bore witness to a significant event that rattled the stock prices of three major Silicon Valley giants. This tumultuous destabilization stemmed from a simple announcement regarding Google's future in the realm of custom chips.

The essence of the announcement was straightforward: Google disclosed plans to gradually phase out its reliance on Broadcom's custom Tensor Processing Units (TPUs) in favor of a new collaboration with Marvell, aimed at transitioning to a new breed of AI chip by 2027. This news sent ripples through the industry, not merely affecting stock valuations but also hinting at a potential upheaval in the custom chip architecture landscape.

Broadcom and Marvell have long been rivals in the custom chip sector, and Google's decision to select one over the other holds considerable weight. Given that Google ranks among the largest consumers of custom chips, the domino effect of such a decision could lead to a seismic shift, possibly swapping the positions of the industry leader and the runner-up. Broadcom's established presence in the market was suddenly juxtaposed with Marvell's ambitious plans.

This move comes at a time when the custom chip market is rapidly expanding, spurred by other tech giants following Google's lead in developing proprietary chips. Notably, even NVIDIA has taken notice of this burgeoning sector. Research from the firm 650 Group indicates that the custom chip market for data centers is projected to skyrocket to a staggering $10 billion in 2023, and potentially double by 2025. According to Needham analyst Charles Shi, the broader custom chip market could be valued at around $30 billion by the end of 2023, constituting approximately 5% of global chip sales.

Broadcom, in the aftermath of the news, organized an Investor Day, elaborating on its semiconductor solutions—notably highlighting its custom XPU technology.

Charlie Kawwas, president of the semiconductor solutions group at Broadcom, emphasized that their custom XPU outperforms NVIDIA's B200 by a significant 50% margin in High Bandwidth Memory (HBM) capacity. Broadcom confidently asserts that it can deliver superior, faster, and more energy-efficient performance than all its rivals.

Shortly after, Marvell did not remain idle. The company orchestrated an event centered around "Accelerated Infrastructure for the AI Era," where it articulated insights regarding the future prospects and growth drivers of data centers, including a spotlight on custom chips.

The intriguing question is: what gives Marvell the confidence to pursue custom chips with such fervor?

Marvell's AI Strategy

Marvell's foray into the Application-Specific Integrated Circuit (ASIC) space can be traced back to its $740 million acquisition of Avera from GlobalFoundries in May 2019. Just a year post-acquisition, Marvell announced its provision of custom ASIC System on Chip (SoC) services, allowing clients to utilize their existing Intellectual Property (IP) alongside Marvell's extensive IP library, significantly shortening the time required for chip development. It presents a win-win scenario for clients, offering substantial benefits without substantial risks.

Marvell’s model allows clients to start with standard Marvell chips and subsequently integrate customized interfaces, accelerators, or algorithms tailored to specific workloads. Typically, the customization process is estimated to take between 12 and 18 months. Once the custom ASIC chip is designed, simulated, and validated, Marvell acts as the intermediary, ensuring timely delivery and managing licensing and royalty transactions, especially if clients opt for Arm-compatible CPU cores.

Interestingly, this innovative service failed to create immediate waves in the sector, even though Broadcom had for years been customizing successive generations of TPUs for Google at relatively limited revenue and profit margins.

However, Amazon, the world's largest cloud service provider, recognized the value of Marvell’s services.

In December 2020, Amazon introduced a new machine learning training chip named Trainium, which promised a throughput increase of 30% and a reduction in single-instance costs of 45% compared to standard AWS GPU instances. Marvell’s custom division played a pivotal role in the design and deployment of Trainium, leading to the subsequent release of its successor, Trainium 2, in November 2023.

Further reinforcing the partnership, a report in June 2023 revealed that Marvell had secured AI orders from Amazon. Marvell would collaborate with Amazon on the design of the second-generation AI chip (Trainium 2), with tape-out expected in the latter half of 2023 and volume production slated for 2024.

The Marvell website highlights its strategic position as a vital supplier for AWS. The company not only provides cloud-optimized chips that meet AWS customers' infrastructure demands but also delivers solutions spanning electro-optics, networking, security, storage, and custom design.

According to Marvell's fiscal report for the fourth quarter and full year ending February 3, 2024, quarterly revenue came in at $1.427 billion, exceeding initial estimates. CEO Matt Murphy emphasized the significance of AI-driven revenue growth, highlighting a sequential revenue rise of 38% in the data center end market and an astonishing year-over-year growth of 54%.

In recent times, Marvell has emerged as the third major chip winner after NVIDIA and Broadcom, riding the wave of AI advancements.

Current Status of Marvell's AI Business

During a recent AI event, Marvell was quick to announce significant developments regarding custom chips tailored for large-scale clients. According to Needham analyst Quinn Bolton, besides already delivering training accelerators for a client that is likely Amazon, Marvell also aims to roll out an AI inference accelerator by 2025 for another client, probably Google.

Marvell has disclosed some client details, revealing that the first client is utilizing Marvell chips for its AI clusters and systems, with both companies jointly developing new custom AI interface accelerators.

Furthermore, Marvell is also in the process of designing an Arm CPU for its second large-scale customer, which will be deployed within their cloud platform and AI infrastructure.

CEO Matt Murphy made headlines with an announcement pointing towards a third large AI customer—possibly Microsoft—where both entities collaborate to design an AI accelerator expected to enter production by 2026. The anticipated revenue from this particular customer might surpass the combined revenue from the first two clients, and Murphy highlighted that Marvell’s relationships with large-scale AI firms afford them a substantial edge over competitors, specifically Broadcom.

Interestingly, after introducing their custom chip capabilities, Marvell also shared insights on the market and opportunities surrounding data centers. They predict traditional general-purpose computing will grow at a modest compound annual growth rate of 3%, a negative growth rate when adjusted for inflation. In stark contrast, the market for accelerated computing is poised to surge at a 32% compound annual growth rate, while the custom silicon segment, buoyed by large-scale enterprises building their own chips, is expected to expand at a remarkable 45% compounded annually, seizing a larger share of the accelerated computing market by 2028.
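To put those growth rates in perspective, here is a small arithmetic sketch. The base-year value of 100 is purely illustrative (not a Marvell figure); only the 3%, 32%, and 45% CAGRs come from the projections above.

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value at an annual growth rate over n years."""
    return base * (1 + cagr) ** years

# Hypothetical base of 100 for each segment, compounded over the
# five years to 2028 at the rates Marvell cited.
general = project(100.0, 0.03, 5)  # traditional general-purpose compute
accel   = project(100.0, 0.32, 5)  # accelerated computing
custom  = project(100.0, 0.45, 5)  # custom silicon

print(round(general, 1), round(accel, 1), round(custom, 1))
# → 115.9 400.7 641.0
```

Even from an identical starting point, the custom segment ends the period more than five times the size of general-purpose compute, which is why it gains share within accelerated computing.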

Data disclosed by Marvell indicates that their total revenue associated with AI could potentially triple next year, soaring beyond $1.5 billion by fiscal year 2025, equating to 30% of total revenue and representing a threefold increase from AI's share in the previous year. Among the AI-related revenue, interconnect products account for about two-thirds, with custom computing comprising one-third.
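The figures above can be cross-checked with simple arithmetic; the derived totals below are implications of the quoted numbers, not values Marvell stated directly.

```python
# Quoted figures: AI revenue above $1.5B in FY2025 at ~30% of total,
# a threefold increase from the prior year.
ai_fy25 = 1.5            # AI revenue, billions of dollars
ai_share_fy25 = 0.30     # AI as a share of total revenue

total_fy25 = ai_fy25 / ai_share_fy25  # implied total revenue: ~$5B
ai_fy24 = ai_fy25 / 3                 # implied prior-year AI revenue: ~$0.5B

interconnect = ai_fy25 * 2 / 3        # ~two-thirds interconnect: ~$1.0B
custom       = ai_fy25 * 1 / 3        # ~one-third custom compute: ~$0.5B

print(total_fy25, ai_fy24, interconnect, custom)
```

The implied totals hang together: $1.5 billion at a 30% share points to roughly $5 billion in total revenue, with interconnect near $1 billion and custom compute near $0.5 billion.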

Bolton noted that, in terms of interconnect, Marvell supported new data center growth exceeding $4 billion. In the field of switching, Marvell's roadmap has proven highly competitive, with the company capturing a larger share relative to Broadcom's Tomahawk and Jericho roadmaps.

Specifically, Marvell claims leadership in DSPs, drivers, and TIAs for front-end and back-end networking within data centers, with breakthrough products for 8×200G 1.6T optical modules, while concurrently building a comprehensive ecosystem for AEC DSP products that opens a new potential market worth $1 billion.

Marvell's investments in switching have facilitated spectacular advancements in interconnect and switching technologies, with their data center switching R&D budget expanding 2.5 times.

Products derived from Innovium, such as the Teralynx 10 51.2T, have already commenced mass production.

In this competitive landscape, Marvell is gearing up for head-to-head contests against Broadcom, intensifying its investments in AI cloud switching and achieving large-scale production of 12.8T switch chip products. By 2024, they expect to start mass-producing 51.2T switch chips.

Additionally, Marvell elaborated on the necessity of interconnects within AI infrastructure, covering requirements for chassis interconnection and data centers' front-end and back-end networking. They expect demand for interconnect to accelerate as cluster sizes increase, necessitating additional tiers of switching. The pace of generational shifts in interconnect technology is also quickening, with 2023 heralding unprecedented speeds—the upgrade cycle halving from four years to two—and pushing beyond the former threshold of 400G toward speeds of 1.6T and 3.2T.

Moreover, Marvell highlighted that AI-scale clusters require increasingly dense optical interconnects. GPT-3 was trained on a cluster of 1,000 XPUs, requiring 2,000 optical interconnects, while GPT-4 utilized a 25,000-XPU cluster with a staggering requirement of 75,000 optical interconnects. As AI models become more intricate, XPU clusters may escalate to 100,000 or even million-unit scale, necessitating sophisticated switching technologies to enable long-distance interconnections spanning from under 2 kilometers to 10-20 kilometers.
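Those cluster figures imply that optical links grow faster than linearly with XPU count. A quick sketch of the per-XPU ratios, where the extrapolation to a 100,000-XPU cluster is a hypothetical illustration rather than a Marvell projection:

```python
# Optical interconnects per XPU, from the cluster figures quoted above.
clusters = {
    1_000: 2_000,    # GPT-3-era cluster: 2.0 links per XPU
    25_000: 75_000,  # GPT-4-era cluster: 3.0 links per XPU
}

for xpus, links in clusters.items():
    print(f"{xpus:>6} XPUs -> {links:>6} links ({links / xpus:.1f} per XPU)")

# Hypothetical extrapolation: if the ratio keeps climbing to 4 links
# per XPU, a 100,000-XPU cluster would need ~400,000 optical links.
assumed_ratio = 4
print(100_000 * assumed_ratio)
```

The superlinear growth comes from the extra switching tiers larger clusters need: each added tier multiplies the number of optical hops per XPU.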

Marvell asserts that AI training and inference present differentiated requirements, prompting new infrastructure developments. Training demands larger clusters but fewer instances, while inference requires smaller but more numerous clusters. Both dynamics will drive increased demand for optical interconnects. As AI evolves, there will also be a need for more power, better cooling solutions, and consequently more data centers, tailored to various locations and further interconnected.

Marvell has proposed that as these AI clusters develop, a transition to coherent technology for long-distance transmission, with an experience similar to PAM connections, becomes necessary.

With this backdrop, Marvell isn't just equipped with switches; they boast a substantial optical product portfolio and DSP offerings. As the age of AI dawns, Marvell stands to seize a larger market share through its interconnect products.

Furthermore, Marvell unveiled a 3D SiPho (silicon photonics) engine, a technology capable of placing 32 individual 200G links onto a single chip, paving the way for integration across applications. They discussed prospective uses for these engines, foreseeing deployments in next-generation modular solutions and optical microchips.

It's worth noting that despite Broadcom's earlier claim regarding a 51.2T co-packaged optics switch, Marvell indicated that its clients are not yet ready for this leap, even though Marvell already possesses much of the intellectual property required for its development. They emphasized that with escalating speeds and the use of lower-cost CW lasers, scaling SiPho innovatively becomes increasingly essential.

In statements to investors, Murphy acknowledged that profit margins for custom chips currently lag behind those of traditional "commercial" chips. He remarked, "We consistently say that the gross margins for the custom business will be lower than those of commercial products. However, over time, operating profit margins are expected to align."

Broadcom had previously mentioned in March that their custom chip profit margin would align with the company average, implying that Marvell's custom business still exhibits room for maturation, or perhaps they charge lower customization fees compared to Broadcom. With the entry of significant clients like Google and Microsoft, there's potential for an uptick in Marvell's custom profit margins.

Final Thoughts

When juxtaposed against Broadcom, Marvell appears to be more invested in the evolving role of its products within the AI domain.
