A series of high-profile departures at OpenAI has raised questions as to whether the team responsible for AI safety is gradually being hollowed out.
Immediately following chief scientist Ilya Sutskever’s announcement that he was leaving the company after almost a decade, his co-lead on the superalignment team, Jan Leike, one of Time’s 100 most influential people in AI, also announced he was quitting.
“I resigned,” Leike posted on May 14.
The duo follow Leopold Aschenbrenner, who was reportedly fired for leaking information, as well as Daniel Kokotajlo, who left in April, and William Saunders, who departed earlier this year.
Several staffers at OpenAI, which did not respond to a request by Fortune for comment, posted their disappointment upon hearing the news.
“It was an honor to work with Jan the past two and a half years at OpenAI. No one pushed harder than he did to make AGI safe and beneficial,” wrote OpenAI researcher Carroll Wainwright. “The company will be poorer without him.”
High-level envoys from China and the U.S. are meeting in Geneva this week to discuss what must be done as mankind nears the development of artificial general intelligence (AGI), the point at which AI can compete with humans in a wide variety of tasks.
Superintelligence alignment
But scientists have already turned their attention to the next stage of evolution: artificial superintelligence (ASI).
Sutskever and Leike jointly headed up a team created in July 2023 and tasked with solving the core technical challenges of ASI alignment, a euphemism for ensuring humans retain control over machines far more intelligent and capable than they are.
OpenAI pledged to commit 20% of its existing computing resources towards that goal with the aim of achieving superalignment in the next four years.
But the costs associated with developing cutting-edge AI are prohibitive.
Earlier this month, Altman said that while he’s prepared to burn billions every year in the pursuit of AGI, he still needs to ensure that OpenAI can continually secure enough funding to keep the lights on.
That money needs to come from deep-pocketed investors like Satya Nadella, CEO of Microsoft.
This means constantly delivering results ahead of rivals such as Google.
This includes OpenAI’s newest flagship product, GPT-4o, which the company claims can actually “reason”—a verb laden with controversy in GenAI circles—across text, audio and video in real time.
The female voice assistant it demoed this week is so lifelike that people are remarking it seems to have been lifted straight out of Spike Jonze’s AI science-fiction film “Her”.
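For a sense of what this looks like from a developer’s side, the sketch below shows a minimal text-plus-image request to GPT-4o through OpenAI’s Python SDK. The prompt and image URL are placeholders, and the real-time voice interaction shown in the demo ran inside OpenAI’s own app; this snippet covers only the simpler request-and-response path.

    # Illustrative sketch only: a basic text-plus-image request to GPT-4o
    # via OpenAI's Python SDK. Prompt and image URL are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe what is happening in this image."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )

    # The model's multimodal "reasoning" happens server-side; the client
    # simply receives the generated answer.
    print(response.choices[0].message.content)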
“What did Ilya see?”
A few months after the Superalignment team was formed, Sutskever, together with other non-executive directors on the board of the non-profit arm that controls the company, ousted Altman, claiming they no longer had faith in their CEO.
Nadella quickly negotiated Altman’s return amid fears the company could split, and days later a rueful Sutskever apologized for his role in the mutiny.
At the time, Reuters reported it may have been linked to a secret project with the goal of developing an AI capable of higher reasoning.
Since then, Sutskever has barely been visible. The spectacular nature of the coup, along with the manner in which it was subsequently swept under the carpet, prompted widespread speculation on social media.
“What did Ilya see?” became a common refrain within the broader AI community.
Kokotajlo furthered these concerns recently by remarking he had resigned in protest after losing confidence in the company.
In a statement on May 14, Sutskever seemed to suggest, however, that he was not leaving OpenAI over safety concerns but to pursue other personally meaningful interests that he would reveal at a later date.
“The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial,” he wrote, endorsing OpenAI’s trio of top leaders, Sam Altman, Greg Brockman and Mira Murati, as well as his successor, Jakub Pachocki.