The Ultimate Tool: AI Helps YouTube Curb the Spread of Inappropriate Videos
(Originally published in Chinese by Fortune China. Translator: Pessy; Reviewer: Xia Lin.)
YouTube has for the first time published a report detailing how many videos it takes down for violating the platform's policies, and it is a strikingly big number: the Alphabet-owned site removed more than 8 million videos during the last quarter of 2017. How did it decide which to take down? Machine learning technology played a big role.

According to YouTube, machines rather than humans flagged more than 83% of the now-deleted videos for review, and more than three quarters of those videos were taken down before they got any views. The majority were spam or porn.

Machine learning, or AI as the tech industry often likes to call it, involves training algorithms on data so that they learn to spot patterns and take action by themselves, without human intervention. In this case, YouTube uses the technology to automatically spot objectionable content.

In a blog post, the YouTube team said the technique has had a big effect. Take videos containing "violent extremism," which is banned on the platform: in early 2017, only 8% of such videos were flagged and removed before they reached 10 views. After YouTube started using machine learning for flagging in the middle of the year, "more than half of the videos we remove for violent extremism have fewer than 10 views," the team said.

However, the use of machine learning does raise serious questions about content being taken down that should stay up: some depictions of violent extremism, for example, may be satire or straightforward reportage.

Several news organizations, including Middle East Eye and Bellingcat, found late last year that YouTube was taking down videos they had shared depicting war crimes in Syria. Bellingcat, which played a key citizen-journalist role in investigating the downing of Malaysia Airlines Flight 17 over Ukraine in 2014, found its entire channel suspended. "With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it," YouTube said at the time.

In its Monday blog post, YouTube said its machine learning systems still require humans to review potential content policy violations, and the volume of videos being flagged by the technology has actually increased staffing requirements. "Last year we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018," the team said. "At YouTube, we've staffed the majority of additional roles needed to reach our contribution to meeting that goal. We've also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we've expanded regional expert teams."
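The flag-then-review workflow the article describes, where a trained model scores content, clear-cut cases are handled automatically, and borderline ones are routed to human reviewers, can be sketched with a toy text classifier. Everything below (the training phrases, the `triage` function, the `margin` threshold) is a hypothetical illustration of the general technique, not YouTube's actual system, which is proprietary and operates at vastly larger scale.

```python
# Hypothetical sketch of a flag-then-review pipeline: a toy Naive Bayes
# classifier over video titles auto-flags confident violations and sends
# borderline cases to human reviewers. Illustration only, not YouTube's system.
import math
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs; returns per-label word counts."""
    counts = {"ok": Counter(), "violation": Counter()}
    totals = Counter()
    for text, label in samples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(model, text, label):
    """Log-likelihood of the text under one label, with add-one smoothing."""
    counts, totals = model
    vocab = set(counts["ok"]) | set(counts["violation"])
    logp = 0.0
    for word in text.lower().split():
        logp += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
    return logp

def triage(model, text, margin=1.0):
    """Auto-flag clear violations, keep clear non-violations,
    and route anything in between to a human reviewer."""
    diff = score(model, text, "violation") - score(model, text, "ok")
    if diff > margin:
        return "auto-flag"
    if diff < -margin:
        return "keep"
    return "human-review"

# Tiny invented training set; real systems use millions of labeled examples.
training = [
    ("free money click now win prize", "violation"),
    ("win free prize click here now", "violation"),
    ("cat plays piano cute video", "ok"),
    ("cooking tutorial pasta recipe", "ok"),
]
model = train(training)
print(triage(model, "click now to win free money"))  # auto-flag
print(triage(model, "cute cat video compilation"))   # keep
```

The `margin` parameter captures the trade-off the article hints at: widening it sends more uncertain cases to humans, which is why heavier use of machine flagging can increase, rather than reduce, the need for human reviewers.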