Meta just announced it’s pushing further into the AI chip race, coming right on the heels of Google’s own announcement of its Axion AI chip. Both companies are touting their new semiconductor models as key to the development of their AI platforms, and as alternatives to the Nvidia chips they—and the rest of the tech industry—have been relying on to power AI data centers.
Hardware is emerging as a key AI growth area. For Big Tech companies with the money and talent to do so, developing in-house chips helps reduce dependence on outside designers such as Nvidia and Intel while also allowing firms to tailor their hardware specifically to their own AI models, boosting performance and saving on energy costs.
These in-house AI chips that Google and Meta just announced pose one of the first real challenges to Nvidia’s dominant position in the AI hardware market. Nvidia controls more than 90% of the AI chips market, and demand for its industry-leading semiconductors is only increasing. But if Nvidia’s biggest customers start making their own chips instead, its soaring share price, up 87% since the start of the year, could suffer.
“From Meta’s point of view … it gives them a bargaining tool with Nvidia,” Edward Wilford, an analyst at tech consultancy Omdia, told Fortune. “It lets Nvidia know that they’re not exclusive, [and] that they have other options. It’s hardware optimized for the AI that they are developing.”
Why does AI need new chips?
AI models require massive amounts of computing power because of the huge amount of data required to train the large language models behind them. Conventional computer chips simply aren’t capable of processing the trillions of data points AI models are built upon, which has spawned a market for AI-specific computer chips, often called “cutting-edge” chips because they’re the most powerful devices on the market.
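To give a rough sense of the scale behind that claim, here is a minimal back-of-envelope sketch (not from the article) using the widely cited heuristic that training a transformer costs roughly 6 × parameters × tokens floating-point operations. The model size, token count, and per-chip throughput below are illustrative assumptions, not figures reported by Fortune or any of the companies mentioned.

```python
# Illustrative back-of-envelope estimate of large-model training compute.
# Heuristic: training FLOPs ≈ 6 * parameters * tokens (a common rule of thumb).
# All numbers are assumptions chosen for illustration only.

params = 70e9          # assumed model size: 70 billion parameters
tokens = 2e12          # assumed training data: 2 trillion tokens
train_flops = 6 * params * tokens               # ≈ 8.4e23 FLOPs in total

chip_flops_per_sec = 1e15                       # assumed ~1 petaFLOP/s sustained per AI accelerator
seconds_on_one_chip = train_flops / chip_flops_per_sec
chip_days = seconds_on_one_chip / 86400         # ≈ 9,700 chip-days (~27 chip-years)

print(f"Total training compute: {train_flops:.2e} FLOPs")
print(f"Single-accelerator time: {chip_days:,.0f} chip-days ({chip_days / 365:.0f} chip-years)")
```

Even under these rosy assumptions, a single training run would tie up hundreds of accelerators for weeks, which is why general-purpose CPUs are not a realistic option and why the supply of specialized chips matters so much.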
Semiconductor giant Nvidia has dominated this nascent market: The wait list for Nvidia’s $30,000 flagship AI chip is months long, and demand has pushed the firm’s share price up almost 90% in the past six months.
And rival chipmaker Intel is fighting to stay competitive. It just released its Gaudi 3 AI chip to compete directly with Nvidia. AI developers—from Google and Microsoft down to small startups—are all competing for scarce AI chips, limited by manufacturing capacity.
Why are tech companies starting to make their own chips?
Both Nvidia and Intel can produce only a limited number of chips because they and the rest of the industry rely on Taiwanese manufacturer TSMC to actually assemble their chip designs. With only one manufacturer solidly in the game, the manufacturing lead time for these cutting-edge chips is multiple months. That’s a key factor that led major players in the AI space, such as Google and Meta, to resort to designing their own chips.

Alvin Nguyen, a senior analyst at consulting firm Forrester, told Fortune that chips designed by the likes of Google, Meta, and Amazon won’t be as powerful as Nvidia’s top-of-the-line offerings—but that could benefit the companies in terms of speed. They’ll be able to produce them on less specialized assembly lines with shorter wait times, he said.
“If you have something that’s 10% less powerful but you can get it now, I’m buying that every day,” Nguyen said.
Even if the native AI chips Meta and Google are developing are less powerful than Nvidia’s cutting-edge AI chips, they could be better tailored to each company’s specific AI platforms. Nguyen said that in-house chips designed for a company’s own AI platform could be more efficient and save on costs by eliminating unnecessary functions.
“It’s like buying a car. Okay, you need an automatic transmission. But do you need the leather seats, or the heated massage seats?” Nguyen said.
“The benefit for us is that we can build a chip that can handle our specific workloads more efficiently,” Melanie Roe, a Meta spokesperson, wrote in an email to Fortune.
Nvidia’s top-of-the-line chips sell for about $25,000 apiece. They’re extremely powerful tools, designed to be good at a wide range of applications, from training AI chatbots to generating images to developing recommendation algorithms such as the ones on TikTok and Instagram. That means a slightly less powerful but more tailored chip could be a better fit for a company such as Meta, which has invested in AI primarily for its recommendation algorithms rather than consumer-facing chatbots.
“The Nvidia GPUs are excellent in AI data centers, but they are general purpose,” Brian Colello, equity research lead at Morningstar, told Fortune. “There are likely certain workloads and certain models where a custom chip might be even better.”
The trillion-dollar question
Nguyen said that more specialized in-house chips could have added benefits by virtue of their ability to integrate into existing data centers. Nvidia chips consume a lot of power, and they give off a lot of heat and noise—so much so that tech companies may be forced to redesign or move their data centers to integrate soundproofing and liquid cooling. Less powerful native chips, which consume less energy and release less heat, could solve that problem.
AI chips developed by Meta and Google are long-term bets. Nguyen estimated that these chips took roughly a year and a half to develop, and it’ll likely be months before they’re implemented at a large scale. For the foreseeable future, the entire AI world will continue to depend heavily on Nvidia (and, to a lesser extent, Intel) for its computing hardware needs. Indeed, Mark Zuckerberg recently announced that Meta was on track to own 350,000 Nvidia chips by the end of this year (the company’s set to spend around $18 billion on chips by then). But movement away from outsourcing computing power and toward native chip design could loosen Nvidia’s chokehold on the market.
“The trillion-dollar question for Nvidia’s valuation is the threat of these in-house chips,” Colello said. “If these in-house chips significantly reduce the reliance on Nvidia, there’s probably downside to Nvidia’s stock from here. This development is not surprising, but the execution of it over the next few years is the key valuation question in our mind.”