Chinese translation by 楊二一 for Fortune China (財富中文網).
LinkedIn faced an unprecedented challenge this year as users increased the number of posts they made by 50%, representing a record rise in content on the service. But the influx also led to more problematic posts, prompting LinkedIn to tighten its rules and expand its content moderation team.
But as events like the coronavirus pandemic, Black Lives Matter protests, and the 2020 U.S. presidential election led to increasing tensions both online and off, more posts on LinkedIn strayed from professional conversations to conspiracy theories, misinformation, and hate speech.
“We really needed to standardize and make clear what it meant to be constructive and respectful on LinkedIn,” said Liz Li, LinkedIn’s director of product management.
This year, LinkedIn made a slew of policy changes, including prohibiting coronavirus-related misinformation in spring (the policy also extends to misleading information about the coronavirus vaccine). Following a rise in posts related to QAnon, a conspiracy theory tied to the far right, the service began cracking down in summer. It removed QAnon posts that contained misinformation and disabled popular hashtags related to it. Then, in fall, LinkedIn clarified a number of policies, adding verbiage like “unwanted advances” to its sexual harassment policy, forbidding the use of racial and religious slurs, and banning excessively gruesome or shocking content.
The actions taken by Microsoft-owned LinkedIn come as all social media companies grapple with a rise in divisive, hateful, and misleading posts. Twitter and Facebook also have been rapidly changing their policies, labeling or removing everything from white nationalism to Holocaust denial and false claims of victory during the U.S. presidential election.
Previously, LinkedIn had seemed largely untouched by the need for such strict content moderation, since members mostly used the service for professional networking and job hunting. But this year, that changed.
“We started to see the content and the conversations on LinkedIn really sort of transform,” Li said. “A ton of it is great—it’s professional, it’s respectful. But at the same time, we’ve also seen an increase in members reporting that there’s stuff that they either don’t want to see or even stuff that would violate our policies.”
LinkedIn said it made a “significant investment” to expand the number of content moderators it employs, though it wouldn’t specify how many of its 16,000 employees review posts for the service. For comparison, Facebook employs more than 15,000 content moderators around the world. LinkedIn also has been working to strengthen its technology to proactively detect and remove problematic content before anyone sees it. This year, LinkedIn also started asking members to specify what content they do and don’t want to see on their feeds and provide reasons why.
As a result, LinkedIn is removing more harmful content than it ever has before. For example, in the months from March to August, LinkedIn said it removed more than 20,000 pieces of content for being hateful, harassing, inflammatory, or extremely violent. For reference, the service removed about 38,000 posts for the same violations over the entirety of last year.
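As a back-of-the-envelope check, those figures imply a clear jump in the monthly removal rate. A minimal sketch of the comparison (assumption: the March-to-August window is treated as five months, per the Chinese edition of the article; the article says "more than" 20,000 and "about" 38,000, so these are lower-bound estimates):

```python
# Figures reported in the article for hateful, harassing, inflammatory,
# or extremely violent content removed by LinkedIn.
removed_mar_aug = 20_000      # March–August of this year (lower bound)
months_mar_aug = 5            # assumption: 5-month window, per the article
removed_last_year = 38_000    # all of last year, same violation categories
months_last_year = 12

rate_this_year = removed_mar_aug / months_mar_aug
rate_last_year = removed_last_year / months_last_year

print(f"This year (Mar–Aug): {rate_this_year:.0f} removals/month")
print(f"Last year (full):    {rate_last_year:.0f} removals/month")
```

Even on these conservative numbers, the monthly rate rose from roughly 3,200 to at least 4,000 removals, consistent with the article's claim that LinkedIn is removing more harmful content than ever before.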
Li said the LinkedIn employees working on the issue have come together and collaborated more than ever this year, even as they continue to work from home. She said that, coming out of this year, the teams are “stronger operationally” and that, overall, there is a bigger effort aimed at tackling content moderation issues.
“A lot more people’s attention and bandwidth is focused on this area,” Li said.