Google and Facebook's Biggest Problem Isn't Controlling Their Platforms. It's Managing Public Expectations.
Google CEO Sundar Pichai's testimony before the House Judiciary Committee on Dec. 11 is just the latest example of a tech company having to respond to accusations of bias. While Pichai spent much of his time defending Google against such allegations regarding search results on Google and YouTube, he isn't alone. Platforms like Facebook, for instance, have been blamed both for “catering to conservatives” and for acting as a network of “incubators for far-left liberal ideologies.”

While accusing these companies of bias is easy, it's also wrong. As Rep. Zoe Lofgren (D-CA) correctly pointed out during Pichai's testimony, “It's not some little man sitting behind the curtain figuring out what [companies] are going to show the users.” Instead, these companies—and the people who work there—have been tasked with moderating content created by billions of users across the globe, while also having to satisfy both the broader public and competing lawmakers who aren't afraid to throw their weight around. Moreover, these companies are taking on this nearly impossible moderation task while also trying to filter content in a consistent and ideologically neutral way. And, for the most part, they are doing an admirable job.

Given the complexity and scale of the task, we shouldn't be surprised that results vary. As Pichai noted, Google served over 3 trillion searches last year, and 15% of the searches Google sees each day have never been entered before on the platform. Do the math, and that means roughly 450 billion of the searches Google served last year were brand-new queries.

Inevitably, many people will be left unsatisfied with how their preferred commentators and ideological views are returned in those searches, or moderated on other platforms. Mistakes will occur, trade-offs will be made, and there will always be claims that content moderation is driven by bias and animus.

Tech companies are attempting to achieve many different—and sometimes conflicting—goals at once.
They are working to limit nudity and violence, control fake news, prevent hate speech, and keep the internet safe for all. Such a laundry list makes success hard to define—and even harder to achieve. This is especially the case when these goals are pitted against the sacrosanct American principle of free speech, and a desire (if not a business necessity) to respect differing viewpoints.

When these values come into conflict, who decides what to moderate, and what to allow?

As it has expanded and welcomed more than 2 billion users, Facebook has upped its content moderation game as well. The company now has a team of lawyers, policy professionals, and public relations experts in 11 offices across the globe tasked with crafting “community standards” that determine how to moderate content.

In recent months, Facebook has been more open about how these rules are developed and enforced. This spring, Monika Bickert, the platform's head of global policy management, wrote about Facebook's three principles of safety, voice, and equity, and the “aim to apply these standards consistently and fairly to all communities and cultures.”

Can any standard be consistently applied to billions of posts made every single day in more than 100 different languages? Artificial intelligence and machine learning are very good at filtering out nudity, spam, fake accounts, and graphic violence. But for content that is dependent on context—which has always been the thornier issue—platforms must rely on human moderators to sort through each and every post that might violate their rules.

Putting aside the fact that they have not been able to satisfy those operating on either side of the political spectrum, Facebook and other platforms have taken their obligation to protect users seriously. After all, each faces a strong financial incentive to keep its users happy, and to avoid the appearance of favoring one set of political beliefs over another.
Thus, creating neutral rules that can be consistently applied, regardless of political affiliation, is in a platform's self-interest.

But when you look at how content moderation actually gets done, it's clear that human discretion plays a very large role. Facebook's policies on what constitutes hate speech are written by human beings, and ultimately are enforced by human beings who—no matter how well-meaning they are—have different backgrounds, biases, and understandings of the subject matter. We shouldn't be surprised when the results are inconsistent, messy, and end up leaving both conservatives and liberals unhappy. This doesn't mean tech companies are politically biased—it means their job is incredibly difficult.

Christopher Koopman is the senior director of strategy and research, and Megan Hansen is the research director, for the Center for Growth and Opportunity at Utah State University.