With Anti-Intellectualism on the Rise, the United States Could Lose the Global AI Race
The country that wins the global race for dominance in artificial intelligence stands to capture enormous economic benefits, including potentially doubling its economic growth rate by 2035. Unfortunately, the United States is getting bad advice about how to compete.

Over the past year, Canada, China, France, India, Japan, and the United Kingdom have all launched major government-backed initiatives to compete in AI. While the Trump administration has begun to focus on how to advance the technology, it has not developed a cohesive national strategy to match those of other countries. This has allowed the conversation about how U.S. policymakers should support AI to be dominated by proposals from advocates more concerned with staving off the technology's potential harms through restrictive regulation than with supporting its growth.

AI does pose unique challenges, from potentially exacerbating racial bias in the criminal justice system to raising ethical concerns with self-driving cars. The leading ideas for addressing these challenges are to mandate the principle of algorithmic transparency or algorithmic explainability, or to form an overarching AI regulator. However, not only would these measures likely be ineffective at addressing the challenges, they would also significantly slow the development and adoption of AI in the United States.

Proponents of algorithmic transparency contend that requiring companies to disclose the source code of their algorithms would allow regulators, journalists, and concerned citizens to scrutinize the code and identify any signs of wrongdoing. While the complexity of AI systems leaves little reason to believe that this would actually be effective, the requirement would make it significantly easier for bad actors in countries that routinely flout intellectual property protections to steal U.S. source code. This would simultaneously give a leg up to the United States’ main competition in the global AI race and reduce incentives for U.S. firms to invest in developing AI.

Others have proposed algorithmic explainability, under which the government would require companies to make their algorithms interpretable to end users, such as by describing how their algorithms work or by only using algorithms that can articulate rationales for their decisions. For example, the European Union has made explainability a primary check on the potential dangers of AI, guaranteeing in its General Data Protection Regulation (GDPR) a person's right to obtain “meaningful information” about certain decisions made by an algorithm.

Requiring explainability can be appropriate, and it is already the standard in many domains, such as criminal justice or consumer finance. But extending this requirement to AI decision-making in circumstances where the same standard does not apply to human decisions would be a mistake. It would incentivize businesses to rely on humans to make decisions so they can avoid this regulatory burden, which would come at the expense of productivity and innovation.

Additionally, there can be inescapable trade-offs between explainability and accuracy. An algorithm's accuracy typically increases with its complexity, but the more complex an algorithm is, the more difficult it is to explain. This trade-off has always existed (a simple linear regression with two variables is easier to explain than one with 200 variables), but it becomes more acute when using more advanced data science methods.
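To make that trade-off concrete, here is a minimal illustrative sketch using scikit-learn on synthetic data (the dataset, feature counts, and model choices are assumptions made purely for illustration, not anything prescribed in the article): a two-variable linear regression can be summarized by its two coefficients, while a 200-variable model fit to the same task is more accurate but far harder to narrate.

```python
# Illustrative sketch of the explainability/accuracy trade-off on synthetic data:
# a 2-feature linear model can be read off from its coefficients, while a
# 200-feature model captures far more of the signal but is harder to explain.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic task whose outcome genuinely depends on 200 features.
X, y = make_regression(n_samples=2000, n_features=200, n_informative=200,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Explainable" model: only two features, so only two coefficients to point to.
simple = LinearRegression().fit(X_train[:, :2], y_train)
print("2-feature model   R^2:", round(simple.score(X_test[:, :2], y_test), 3))
print("  its whole story:", [round(c, 2) for c in simple.coef_])

# More accurate model: all 200 features, but 200 coefficients to narrate.
full = LinearRegression().fit(X_train, y_train)
print("200-feature model R^2:", round(full.score(X_test, y_test), 3))
print("  coefficients to explain:", full.coef_.shape[0])
```

On this synthetic task the two-coefficient model is trivial to explain but captures little of the signal, while the 200-feature model scores far higher yet offers no equally short account of its predictions, mirroring the article's point that the tension sharpens as models grow more complex.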
Thus, explainability requirements would only make sense in situations where it is appropriate to sacrifice accuracy, and those cases are rare. For example, it would be a terrible idea to prioritize explainability over accuracy in autonomous vehicles: even slight reductions in navigational accuracy, or in a vehicle's ability to differentiate between a pedestrian on the road and a picture of a person on a billboard, could be enormously dangerous.

A third popular but bad idea, championed most notably by Elon Musk, is to create an equivalent of the Food and Drug Administration or the National Transportation Safety Board to serve as an overarching AI regulatory body. The problem is that establishing an AI regulator falsely implies that all algorithms pose the same level of risk and the same need for regulatory oversight. However, an AI system's decisions, like a human's decisions, are still subject to a wide variety of industry-specific laws and regulations, and they pose widely varying levels of risk depending on their application. Subjecting low-risk decisions to regulatory oversight simply because they use an algorithm would be a considerable barrier to deploying AI, limiting the ability of U.S. firms to adopt the technology.

Fortunately, there is a viable way for policymakers to address the potential risks of AI without sabotaging it: adopt the principle of algorithmic accountability, a light-touch regulatory approach that incentivizes businesses deploying algorithms to use a variety of controls to verify that their AI systems act as intended, and to identify and rectify harmful outcomes. Unlike algorithmic transparency, it would not threaten intellectual property. Unlike algorithmic explainability, it would allow companies to deploy advanced, innovative AI systems, while still requiring that they be able to explain certain decisions when context demands it, regardless of whether AI was used in those decisions. And unlike a master AI regulator, algorithmic accountability would ensure that regulators could understand AI within their sector-specific domains while limiting the barriers to AI deployment.

If the United States is to be a serious contender in the global AI race, the last thing policymakers should do is shackle AI with ineffective, economically damaging regulation. Policymakers who want to focus now on unfair or unsafe AI should instead pursue the principle of algorithmic accountability as a means of addressing their concerns without kneecapping the United States as it enters the global AI race.

Joshua New is a senior policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy.
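For readers who want a concrete picture of the kind of lightweight control the algorithmic-accountability approach above refers to, here is a hypothetical sketch (the function name, the four-fifths threshold, and the review workflow are illustrative assumptions, not a standard prescribed by the article or any regulator): a routine post-deployment check that compares a model's approval rates across groups and flags large gaps for human review and correction.

```python
# Hypothetical post-deployment check: compare approval rates across groups and
# flag large gaps for human review. Threshold and names are illustrative only.
from collections import defaultdict

def approval_rate_gap(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs from a decision log."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    worst, best = min(rates.values()), max(rates.values())
    return rates, (worst / best if best else 1.0)

if __name__ == "__main__":
    # Toy decision log: group A approved 80% of the time, group B 55%.
    log = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 55 + [("B", False)] * 45
    rates, ratio = approval_rate_gap(log)
    print(rates)
    if ratio < 0.8:  # common "four-fifths" heuristic, used here purely as an example
        print(f"Flag for review: approval-rate ratio {ratio:.2f} falls below 0.8")
```

A check like this does not require disclosing source code or explaining every individual prediction; it simply verifies that the system is behaving as intended and surfaces potentially harmful outcomes for correction, which is the accountability principle's core demand.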