Google’s new chatbot, Bard, is part of a revolutionary wave of artificial intelligence (A.I.) being developed that can rapidly generate anything from an essay on William Shakespeare to rap lyrics in the style of DMX. But Bard and all of its chatbot peers still have at least one serious problem—they sometimes make stuff up.
The latest evidence of this unwelcome tendency was on display during CBS’ 60 Minutes on Sunday. The Inflation Wars: A Modern History by Peter Temin “provides a history of inflation in the United States” and discusses the policies that have been used to control it, Bard confidently declared during the report. The problem is the book doesn’t exist.
It’s an interesting lie by Bard—because it could be true. Temin is an accomplished MIT economist who studies inflation and has written over a dozen books on economics; he just never wrote one called The Inflation Wars: A Modern History. Bard “hallucinated” that title, as well as names and summaries for a whole list of other economics books, in response to a question about inflation.
It’s not the first public error the chatbot has made. When Bard was released in March to counter OpenAI’s ChatGPT, it claimed in a public demonstration that the James Webb Space Telescope was the first to capture an image of an exoplanet, in 2005. In fact, the aptly named Very Large Telescope in Chile had accomplished the task a year earlier.
Chatbots like Bard and ChatGPT use large language models, or LLMs, that leverage billions of data points to predict the next word in a string of text. This method of so-called generative A.I. tends to produce hallucinations in which the models generate text that appears plausible, yet isn’t factual. But with all the work being done on LLMs, are these types of hallucinations still common?
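To make the underlying mechanism concrete, the sketch below is a deliberately tiny, purely illustrative Python toy: a bigram model that picks the most frequent next word from a handful of example sentences. It is an illustrative stand-in, not Bard’s or ChatGPT’s actual architecture (real LLMs are neural networks with billions of parameters trained on vast text corpora), but it shows the same predict-append-repeat loop, in which nothing ever checks whether the finished sentence is factually true.

```python
# Toy illustration of next-word prediction (NOT how Bard or ChatGPT is
# actually built): learn how often each word follows each other word,
# then generate text by repeatedly emitting the most likely next word.
# Note that no step verifies the output against reality.
from collections import Counter, defaultdict

corpus = (
    "the book provides a history of inflation . "
    "the book provides a survey of policy . "
    "the telescope captured an image of an exoplanet ."
).split()

# Count bigrams: for each word, how often each word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, max_words: int = 10) -> str:
    """Greedily append the most frequent continuation until '.' or the limit."""
    words = [start]
    while len(words) < max_words:
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        nxt = candidates.most_common(1)[0][0]
        words.append(nxt)
        if nxt == ".":
            break
    return " ".join(words)

print(generate("the"))  # e.g. "the book provides a history of inflation ."
```

Greedy decoding here always takes the single most likely continuation; production systems instead sample from a probability distribution, which makes fluent yet unsupported statements, like a plausible but nonexistent book title, even easier to produce.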
“Yes,” Google CEO Sundar Pichai admitted in his 60 Minutes interview Sunday, saying they’re “expected.” “No one in the field has yet solved the hallucination problems. All models do have this as an issue.”
When asked if the hallucination problem will be solved in the future, Pichai noted “it’s a matter of intense debate,” but said he thinks his team will eventually “make progress.”
That progress may be difficult to come by, as some A.I. experts have noted, due to the complex nature of A.I. systems. Pichai explained that there are still parts of A.I. technology that his engineers “don’t fully understand.”
“There is an aspect of this which we call—all of us in the field—call it a ‘black box,’” he said. “And you can’t quite tell why it said this, or why it got it wrong.”
Pichai said his engineers “have some ideas” about how their chatbot works, and their ability to understand the model is improving. “But that’s where the state of the art is,” he noted. However, that answer may not be good enough for some critics, who warn about the potential unintended consequences of complex A.I. systems.
Microsoft cofounder Bill Gates, for example, argued in March that the development of A.I. tech could exacerbate wealth inequality globally. “Market forces won’t naturally produce AI products and services that help the poorest,” the billionaire wrote in a blog post. “The opposite is more likely.”
And Elon Musk has been sounding the alarm about the dangers of A.I. for months now, arguing the technology will hit the economy “l(fā)ike an asteroid.” The Tesla and Twitter CEO was part of a group of more than 1,100 CEOs, technologists, and A.I. researchers who called for a six-month pause on developing A.I. tools last month—even though he was busy creating his own rival A.I. startup behind the scenes.
A.I. systems could also exacerbate the flood of misinformation through the creation of deep fakes—hoax images of events or people created by A.I.—and even harm the environment, according to researchers surveyed for an annual report on the technology by Stanford University’s Institute for Human-Centered A.I., who warned last week that the threat amounts to a potential “nuclear-level catastrophe.”
On Sunday, Google’s Pichai revealed he shares some of the researchers’ concerns, arguing A.I. “can be very harmful” if deployed improperly. “We don’t have all the answers there yet—and the technology is moving fast. So does that keep me up at night? Absolutely,” he said.
Pichai added that the development of A.I. systems should include “not just engineers, but social scientists, ethicists, philosophers, and so on” to ensure the outcome benefits everyone.
“I think these are all things society needs to figure out as we move along. It’s not for a company to decide,” he said.