In a short time, AI has gone from being a technology used by the tech elite to one that most people use—or at least encounter—daily. AI is being deployed in health apps, customer service interactions, on social media, and in marketing emails, to name just a few examples. While companies are building their own AI and figuring out how the technology fits into their businesses, they’re also facing the challenge of how to transparently convey the ways in which they’re using these advancements.
“Many are subjected to AI, often without an explicit decision to use these systems,” said Julia Stoyanovich, professor and director of the Center for Responsible AI at New York University. “We want to give the ability to decide, to help debug, to help understand the benefits, and look out for risks and harms, back to people.”
According to a KPMG survey released this year, 42% of people believe generative AI is already having a “significant impact” on their personal lives, while 60% expect this within the next two years. Despite AI’s outsize impact, only 10% of Americans report being “more excited than concerned” about AI, according to a study last year from the Pew Research Center. As policymakers around the world examine potential regulations for AI, some companies are proactively offering insight into steps they’re taking to innovate responsibly.
At Intuit, AI is integrated across the company’s line of products, including generative AI assistants in TurboTax, QuickBooks, Credit Karma, and a suite of tools on the company’s email marketing platform, Mailchimp. Millions of models are driving 65 billion machine learning predictions every day and conducting 810 million AI-powered interactions annually, according to the company.
“Five years ago we declared our strategy as a company was to build an AI-driven expert platform, which combines AI and expertise. We now have millions of live, AI-driven models in the offerings today as a result of that investment,” said Rania Succar, CEO of Intuit Mailchimp. “When generative AI came along, we were ready to go really big because of the investment we’ve made and because of the potential we saw for our end customers.”
With so many data points in small businesses demonstrating what works and what doesn’t, the company saw an opportunity to bring generative AI to the masses—not just the big players who can afford to build their own AI models. Intuit built its own generative AI operating system that keeps the data it trains on private, Succar said. Intuit Mailchimp customers are then able to use the AI to generate marketing emails and text in their brand’s voice, and set up automated emails to help welcome new customers or remind someone when they’ve left an item in their online cart.
In the past few months, Intuit Mailchimp has seen generative AI text generation adoption grow by more than 70%, Succar said. Despite the growth, the company is being careful about how the product is scaled.
An inherent problem with AI models is that they are never perfect. AI can hallucinate false information, generate offensive content, and exacerbate biases that might be present in the model’s training data. In an effort to keep this from happening, Succar said Intuit Mailchimp is being deliberate in selecting which industries have access to its generative AI tools. (She declined to say which industries Intuit Mailchimp currently does not support with generative AI.)
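Gating a feature by industry, as described above, amounts to an allowlist check before any generation request is served. The sketch below is a generic illustration with made-up industry names; it is not Intuit's actual policy or code.

```python
# Industries cleared to use the generative AI tools (illustrative values only).
SUPPORTED_INDUSTRIES = {"retail", "hospitality", "fitness"}


def can_use_generative_ai(industry: str) -> bool:
    """Return True only if the customer's industry is on the allowlist."""
    return industry.lower() in SUPPORTED_INDUSTRIES


print(can_use_generative_ai("Retail"))    # True: on the allowlist
print(can_use_generative_ai("gambling"))  # False: not cleared
```

The design choice here is deny-by-default: any industry not explicitly vetted is refused, which is the conservative posture the article attributes to the company.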
Perhaps the differentiator, though, is that Intuit still believes there’s a place for humans in a world where AI is rapidly becoming capable of taking over everything from the mundane to the creative. Every piece of generated content is reviewed by the user before it is sent out to clients. Escalations, such as poor or inaccurate answers, can be reported to human content reviewers. Just as people can connect with a human expert on TurboTax, Succar said, there’s a place for human experts in marketing.
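The review flow in the paragraph above — every AI draft passes a human gate before sending, and poor or inaccurate answers can be escalated to human content reviewers — can be sketched as a small decision function. The state names and queue are assumptions for illustration, not Intuit's implementation.

```python
# Drafts flagged as poor or inaccurate are routed here for human review.
ESCALATION_QUEUE: list[str] = []


def review_and_send(draft: str, approved_by_user: bool, flagged_poor: bool) -> str:
    """A draft is sent only after user approval; flagged drafts are escalated."""
    if flagged_poor:
        ESCALATION_QUEUE.append(draft)  # hand off to a human content reviewer
        return "escalated"
    if approved_by_user:
        return "sent"
    return "held"  # never auto-sent without review


status = review_and_send("Great deals this week!",
                         approved_by_user=True, flagged_poor=False)
print(status)  # sent
```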
“Human experts will always be able to add the next level of expertise that AI doesn’t and create confidence for the small business,” Succar noted.
Other technology companies are taking steps to help people understand how their AI works and discern between what’s real and what isn’t. TikTok rolled out a tool for creators to label their AI-generated content and said last year that it is also testing ways to do so automatically. Meta announced it will label AI-generated images on Facebook, Instagram, and Threads. Microsoft explained in a blog post the safeguards it’s put in place for its generative AI products Copilot and Microsoft Designer. And last year, Google revised its search algorithm to consider high-quality AI-generated content.
Understanding what’s real and what isn’t is only one part of the equation. The proliferation of deepfakes, most recently explicit images in the likeness of Taylor Swift, has highlighted a fundamental problem with AI. Dan Purcell, cofounder and CEO of Ceartas, a company that uses AI models to combat online piracy, said he’s been able to get an increasing number of AI-generated images removed for his clients, who range from celebrities and content creators to C-suite executives.
“The way our technology works is we build a model of an infringement. We don’t need access to the original content. We don’t need to fingerprint clips. We just need the name of the content, because that is how people find it online,” he said. “When we look at individual content creators and businesses, we slightly change ingredients to be more specific to that brand or individual, and then apply the learning to a broad spectrum.”
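Purcell's description — matching infringing copies by the content's name rather than by fingerprinting the media itself — can be illustrated with a normalized-title match over discovered listings. This is a toy sketch under that stated idea, not Ceartas's actual system.

```python
import re


def normalize(title: str) -> str:
    """Lowercase and replace punctuation runs with spaces so minor
    variations ("SUNSET-REEL" vs "Sunset Reel") still match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()


def find_infringements(content_name: str, listings: list[str]) -> list[str]:
    """Flag listings whose title contains the protected content's name."""
    needle = normalize(content_name)
    return [l for l in listings if needle in normalize(l)]


hits = find_infringements("Sunset Reel", [
    "SUNSET-REEL full video leaked",
    "unrelated clip",
])
print(hits)  # ['SUNSET-REEL full video leaked']
```

The "slightly change ingredients" step Purcell mentions would correspond to tuning the matching rules per brand or individual before applying them broadly.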
As the past two years have demonstrated, AI is only going to keep getting better. (Look no further than the reaction to Sora, OpenAI’s text-to-video platform.) While there may no longer be an option to avoid AI, Stoyanovich said there’s more work that will need to be done, bringing together industry players, academics, policymakers, and users to come to a consensus on an actionable AI governance framework. In the meantime, as people start to notice more examples of AI in their day-to-day, she offered this advice:
“What is important is to keep a healthy dose of skepticism about the capabilities of this and other kinds of technology,” she said. “If it sounds too good to be true and, at the same time, if we don’t know what data the model is based on and how it was validated, then it probably doesn’t work as advertised.”