From swiping through endless Netflix suggestions to the eerie accuracy of Spotify’s new A.I. DJ, recommendation algorithms have become the puppet masters of our digital lives. When Meta-owned Instagram introduced A.I.-driven recommendations to its own feed in July 2022, showing users posts from accounts they don’t follow, the move caused an uproar so large that even Kylie Jenner shared a petition to “Make Instagram Instagram again.”
Now, a year later, Meta is pulling back the curtain on the invisible calculations that caused so much turbulence. In a blog post from Meta’s president of global affairs, Nick Clegg, the company announced it’ll be publishing 22 “system cards,” which will provide insights into Facebook and Instagram’s A.I. ranking systems and predictions. Meta also plans to expand the “Why Am I Seeing This?” feature, which shows a user why they’ve been served recommended content, to its TikTok clone, “Reels.”
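The blog post describes these mechanisms only at a high level. For a rough intuition of what a ranking system and its “why” explanation involve, here is a deliberately simplified, hypothetical sketch; the signal names, weights, and thresholds below are invented for illustration and do not represent Meta’s actual models or APIs.

```python
# Hypothetical, deliberately simplified sketch of a feed-ranking
# "prediction plus explanation" loop. Signal names, weights, and
# thresholds are invented for illustration; this is not Meta's code.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_like: float        # model's estimate the viewer will like it (0 to 1)
    predicted_watch_time: float  # estimated seconds the viewer will watch
    from_followed_account: bool  # True if the viewer follows the author


# Invented weights standing in for a learned ranking model.
WEIGHTS = {
    "predicted_like": 2.0,
    "predicted_watch_time": 0.05,
    "from_followed_account": 1.0,
}


def score(post: Post) -> float:
    """Combine per-post predictions into a single ranking score."""
    return (
        WEIGHTS["predicted_like"] * post.predicted_like
        + WEIGHTS["predicted_watch_time"] * post.predicted_watch_time
        + WEIGHTS["from_followed_account"] * float(post.from_followed_account)
    )


def explain(post: Post) -> list[str]:
    """Turn the dominant signals into plain-language reasons,
    roughly the idea behind a 'Why Am I Seeing This?' panel."""
    reasons = []
    if post.predicted_like > 0.5:
        reasons.append("similar to posts you have interacted with")
    if post.predicted_watch_time > 10:
        reasons.append("you tend to watch content like this")
    if not post.from_followed_account:
        reasons.append("recommended from an account you don't follow")
    return reasons


if __name__ == "__main__":
    feed = [
        Post("reel_a", predicted_like=0.8, predicted_watch_time=24.0, from_followed_account=False),
        Post("post_b", predicted_like=0.3, predicted_watch_time=5.0, from_followed_account=True),
    ]
    # Rank the candidate posts, then show the score and the "why" for each.
    for post in sorted(feed, key=score, reverse=True):
        print(post.post_id, round(score(post), 2), explain(post))
```

Production systems rely on learned models over vast numbers of signals, but the basic shape is the same: score each candidate post, then surface the strongest contributing signals as the reasons a user sees it.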
“One of the biggest problems we have is because that interaction is invisible to the naked eye, it’s pretty difficult to explain to the layperson,” Clegg said in an interview with The Verge. “Of course, what fills that vacuum is the worst fears and the worst suspicions.”
The worst suspicions that Clegg speaks of have been well documented, with The New Yorker dubbing the feeling “algorithmic anxiety.” As our screens serve up content based on invisible calculations, an uneasiness has grown, leaving us to question whether we are scrolling through our own choices or whether we are mere pawns in the hands of mysterious code. Meta CEO Mark Zuckerberg said on the firm’s most recent earnings call that over 20% of content in Facebook and Instagram feeds is now A.I.-recommended.
Meta’s step to make its algorithm more transparent, something both TikTok and Twitter have attempted, will hopefully dispel some of the tinfoil-hat theories users might have about it. However, as The Verge points out, that transparency may not address the underlying issue: users may understand how the algorithm works yet remain uncomfortable with how these systems function. With this new feature, users have the power to see all the eerily accurate predictions employed to meticulously track their every move, and not everyone has the stomach for the sausage once they see how it’s made.
As the battle between transparency and unease rages on, one thing is clear: social media platforms are taking strides toward a more open future. What’s more, Meta is expanding its collaborations with academic researchers to better understand and improve its systems, something that should help the wider world navigate this landscape as well.
Clearly, Meta is determined to ease its users’ anxieties and give them a better understanding of its systems, potentially restoring its reputation in the process.