
When it comes to artificial intelligence, companies need to think twice

Jonathan Vanian
2021-02-07

The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.

Alex Spinelli, chief technologist for business software maker LivePerson, says the recent U.S. Capitol riot shows the potential dangers of a technology not usually associated with pro-Trump mobs: artificial intelligence.

The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.

In 2016, for instance, people shared fake news articles on Facebook, whose A.I. systems then funneled them to users. More recently, Facebook's A.I. technology recommended that users join groups focused on the QAnon conspiracy, a topic that Facebook eventually banned.

“The world they live in day in and day out is filled with disinformation and lies,” says Spinelli about the pro-Trump rioters.

A.I.'s role in disinformation, and problems in other areas including privacy and facial recognition, are causing companies to think twice about using the technology. In some cases, businesses are so concerned about the ethics of A.I. that they are killing projects involving it, or never starting them in the first place.

Spinelli says that, because of concerns about A.I., he has canceled some A.I. projects at LivePerson and at previous employers, which he declined to name. He has previously worked at Amazon, advertising giant McCann Worldgroup, and Thomson Reuters.

The projects, Spinelli says, involved machine-learning systems that analyzed customer data in order to predict user behavior. Privacy advocates often raise concerns about such projects, which rely on huge amounts of personal information.

"Philosophically, I’m a big believer in the use of your data being approved by you,” Spinelli says.

Ethical problems in corporate A.I.

Over the past few years, artificial intelligence has been championed by companies for its ability to predict sales, interpret legal documents, and power more realistic customer chatbots. But it's also provided a steady drip of unflattering headlines.

Last year, IBM, Microsoft, and Amazon barred police use of their facial recognition software because it more frequently misidentifies women and people of color. Microsoft and Amazon both want to continue selling the software to police, but they called for federal rules about how law enforcement can use the technology.

IBM CEO Arvind Krishna went a step further by saying his company would permanently suspend its facial recognition software business, saying that the company opposes any technology used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."

In 2018, high-profile A.I. researchers Timnit Gebru and Joy Buolamwini published a research paper highlighting bias problems in facial recognition software. In reaction, some cosmetics companies paused A.I. projects that would determine how makeup products would look on certain people's skin, for fear the technology could discriminate against Black women, says Rumman Chowdhury, the former head of Accenture’s responsible A.I. team and now CEO of startup Parity AI.

“That was when a lot of companies cooled down with how much they wanted to use facial recognition,” Chowdhury says. “I had meetings with clients in makeup, and all of it stopped.”

Recent problems at Google have also caused companies to rethink A.I. Gebru, the A.I. researcher, left Google and then claimed that the company had censored some of her research. That research focused on bias problems with the company's A.I. software that understands human language, and on the fact that the software used huge amounts of electricity in its training, which could harm the environment.

This reflected poorly on Google: the search giant has experienced bias problems in the past, when its Google Photos product misidentified Black people as gorillas, and it champions itself as an environmental steward.

Shortly after Gebru's departure, Google suspended computer access to another of its A.I. ethics researchers who has been critical of the search giant. A Google spokesperson declined to comment about the researchers or the company's ethical blunders. Instead, he pointed to previous statements by Google CEO Sundar Pichai and Google executive Jeff Dean saying that the company is conducting a review of the circumstances of Gebru's departure and is committed to continuing its A.I. ethics research.

Miriam Vogel, a former Justice Department lawyer who now heads the EqualAI nonprofit, which helps companies address A.I. bias, says many companies and A.I. researchers are paying close attention to Google’s A.I. problems. Some fear that the problems may have a chilling impact on future research about topics that don't align with their employers' business interests.

“This issue has captured everyone’s attention,” Vogel says about Gebru leaving Google. “It took their breath away that someone who was so widely admired and respected as a leader in this field could have their job at risk.”

Although Google has positioned itself as a leader in A.I. ethics, the company's missteps contradict that high-profile reputation. Vogel hopes that companies don't overreact by firing or silencing their own employees who question the ethics of certain A.I. projects.

“I would hope companies do not take fear that by having an ethical arm of their organization that they would create tensions that would lead to an escalation at this level,” Vogel says.

A.I. ethics going forward

Still, the fact that companies are thinking about A.I. ethics is an improvement from a few years ago, when they gave the issue relatively little thought, says Abhishek Gupta, who focuses on machine learning at Microsoft and is founder and principal researcher of the Montreal AI Ethics Institute.

And no one thinks companies will completely stop using A.I. Brian Green, the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, near San Francisco, says it has become too important a tool to drop.

“The fear of going out of business trumps the fear of discrimination,” Green says.

And while LivePerson's Spinelli worries about some uses of A.I., his company is still investing heavily in A.I. subfields like natural language processing, in which computers learn to understand language. He hopes that the company's public stance on A.I. and ethics will persuade customers that LivePerson is trying to minimize any harms.

LivePerson, professional services giant Cognizant, and insurance firm Humana are all members of the EqualAI organization and have publicly pledged to test and monitor their A.I. systems for problems involving bias.

Says Spinelli, “Call us out if we fail.”

All content published by Fortune China is the exclusive intellectual property of Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, mirroring, or any other use without permission is prohibited.