If companies are serious about their artificial intelligence software working well for everyone, they must ensure that the teams developing it as well as the datasets used to train the software are diverse.
That’s one takeaway from an online panel discussion about A.I. bias hosted by Fortune on Tuesday.
It can be challenging for companies to find datasets that are both fair and reflective of everyone in society. In fact, some datasets, like those from the criminal justice system, are notoriously plagued with inequality, explained Katherine Forrest, a former judge and partner at the law firm Cravath, Swaine & Moore.
Consider a dataset of arrests in a city in which local law enforcement has a history of over-policing Black neighborhoods. Because of the underlying data, an A.I. tool developed to predict who is likely to commit a crime may incorrectly deduce that Black people are far more likely to be offenders.
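Forrest's point can be made concrete with a toy simulation. The sketch below is purely illustrative and is not any tool discussed on the panel: both neighborhoods offend at the same underlying rate, but arrests are recorded far more often in the over-policed one, so a model trained on arrest records scores that group as riskier.

# Hypothetical sketch of how a classifier trained on arrest records
# inherits enforcement bias rather than measuring actual offending.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# group == 1 marks residents of the over-policed neighborhood (illustrative only)
group = rng.integers(0, 2, size=n)

# True offending is identical across groups: 5% for everyone.
offended = rng.random(n) < 0.05

# Arrests depend on policing intensity: offenses in the over-policed
# neighborhood are recorded four times as often as elsewhere.
arrest_prob = np.where(group == 1, 0.8, 0.2)
arrested = offended & (rng.random(n) < arrest_prob)

# A naive "risk" model trained on arrests (the only label available)
# learns the policing pattern, not the offending pattern.
X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)

pred = model.predict_proba(np.array([[0.0], [1.0]]))[:, 1]
print(f"Predicted 'risk' outside the over-policed area: {pred[0]:.3f}")
print(f"Predicted 'risk' inside the over-policed area:  {pred[1]:.3f}")
# The model scores the over-policed group as several times riskier even
# though true offending rates are identical -- the structural inequality
# Forrest describes as "built into" the data.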
“So the data assets used for all of these tools is only as good as our history,” Forrest said. “We have structural inequalities that are built into that data that are frankly difficult to get away from.”
Forrest said she has been trying to educate judges about bias problems affecting certain A.I. tools used in the legal system. But it’s challenging because there are many different software products and there is no standard for comparing them to each other.
She said that people should know that today’s A.I. “has some real limitations, so use it with caution.”
Danny Guillory, the head of diversity, equity, and inclusion for Dropbox, said one way his software company has been trying to mitigate A.I. bias is through a product diversity council. Council members analyze the company’s products to learn if they inadvertently discriminate against certain groups of people. Similar to how Dropbox workers submit products under development for privacy reviews prior to their release, employees submit products for diversity reviews.
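The panel did not detail what a diversity review actually inspects, but one plausible automated piece of such a review, sketched below purely as an assumption rather than as Dropbox's process, would compare a feature's outcome rates across user groups and flag large gaps before release.

# Hypothetical pre-release check: flag a feature when any user group's
# positive-outcome rate falls well below the best-served group's rate.
from collections import defaultdict

def outcome_rates(records):
    """records: iterable of (group, outcome) pairs, outcome is True/False."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += bool(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def diversity_review(records, min_ratio=0.8):
    """Fail the review if any group's rate is below min_ratio times the
    highest group's rate (an 80%-style rule of thumb, chosen here only
    for illustration)."""
    rates = outcome_rates(records)
    best = max(rates.values())
    passed = all(rate >= min_ratio * best for rate in rates.values())
    return passed, rates

# Illustrative data: a feature that works noticeably worse for group "B".
sample = [("A", True)] * 90 + [("A", False)] * 10 + \
         [("B", True)] * 60 + [("B", False)] * 40
ok, rates = diversity_review(sample)
print(rates)                                   # {'A': 0.9, 'B': 0.6}
print("release" if ok else "flag for review")  # flag for review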
Guillory said the company's diversity council has already discovered some bias problems in an unspecified product that had to do with “personal identifying information,” and workers were able to fix the issues.
The point is to spot bias problems early, instead of having to “retroactively fix things,” Guillory said.