The biggest ethical dilemma for driverless cars: who gets saved in a crash?
We make decisions every day based on risk – perhaps running across a road to catch a bus if the road is quiet, but not if it's busy. Sometimes these decisions must be made in an instant, in the face of dire circumstances: a child runs out in front of your car, but there are other dangers to either side, say a cat and a cliff. How do you decide? Do you risk your own safety to protect that of others?

Now that self-driving cars are here, with no quick or sure way for a human to override the controls – or perhaps no way at all – car manufacturers are faced with an algorithmic ethical dilemma. On-board computers in cars already park for us and drive on cruise control, and they could take control in safety-critical situations. But that means they will be faced with the difficult choices that sometimes confront human drivers.

How should a computer's ethical calculus be programmed? Some options:

• Calculate the number of injuries for each possible outcome, and take the route with the fewest. Every life would be weighted equally.

• Calculate the number of injuries to children for each possible outcome, and take the route with the fewest.

• Allocate a value of 20 to each human, four to a cat, two to a dog, and one to a horse. Then calculate the total score for each possible impact, and take the route with the lowest score. A big enough group of dogs would then outscore two cats, and the car would react to save the dogs (see the sketch below).

What if the car also included its driver and passengers in this assessment, with the implication that those outside the car would sometimes score more highly than those within it? Who would willingly climb aboard a car programmed to sacrifice them if need be?

A recent study by Jean-Francois Bonnefon of the Toulouse School of Economics in France suggests that there is no right or wrong answer to these questions. The research used several hundred workers recruited through Amazon's Mechanical Turk to gauge opinions on scenarios in which one or more pedestrians could be saved if a car swerved into a barrier, killing the driver; the researchers then varied the number of pedestrians who could be saved. Bonnefon found that most people agreed with the principle of programming cars to minimize the death toll, but were far less certain when it came to the exact details of the scenarios. They were keen for others to use self-driving cars, but less keen themselves. People often feel a utilitarian instinct to save the lives of others and sacrifice the car's occupant – except when that occupant is them.

Intelligent machines

Science fiction writers have had plenty of leash to write about robots taking over the world (Terminator and many others), or worlds where everything that is said is recorded and analyzed (as in Orwell's 1984). It has taken a while to reach this point, but many staples of science fiction are in the process of becoming mainstream science and technology. The internet and cloud computing have provided the platform on which quantum leaps of progress are made, pitting artificial intelligence against the human.

In Stanley Kubrick's seminal film 2001: A Space Odyssey, we see hints of such a future, where computers make decisions based on the priorities of their mission – the ship's computer HAL says: "This mission is too important for me to allow you to jeopardize it." Machine intelligence is already appearing in our devices, from phones to cars.
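To make the scoring idea above concrete, here is a minimal sketch of such an "ethical calculus" in Python. The point values are the ones quoted earlier (20 per human, four per cat, two per dog, one per horse); the route data and function names are hypothetical, invented purely for illustration:

```python
# Harm points per life, using the illustrative values quoted above --
# the article's numbers, not any real manufacturer's.
HARM_POINTS = {"human": 20, "cat": 4, "dog": 2, "horse": 1}

def route_score(casualties):
    """Total harm score for one candidate route, given what it would hit."""
    return sum(HARM_POINTS[kind] * count for kind, count in casualties.items())

def choose_route(routes):
    """Pick the candidate route with the lowest total harm score."""
    return min(routes, key=lambda name: route_score(routes[name]))

# The example from the text: five dogs (score 10) outscore two cats
# (score 8), so the car swerves into the cats and spares the dogs.
routes = {
    "swerve_left": {"cat": 2},
    "straight_on": {"dog": 5},
}
print(choose_route(routes))  # -> swerve_left
```

Even this toy version shows where the discomfort lies: the "right" answer simply falls out of whichever numbers someone chose to put in the table.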
Intel predicts that there will be 152 million connected cars on the road by 2020, generating over 11 petabytes of data every year – enough to fill more than 40,000 250 GB hard disks. How intelligent will they be? As Intel puts it, (almost) as smart as you. Cars will share and analyze a range of data in order to make decisions on the move. It is true that in most cases driverless cars are likely to be safer than human drivers, but it is the outliers that we are concerned with.

The author Isaac Asimov's famous three laws of robotics proposed how future devices might cope with the need to make decisions in dangerous circumstances:

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.

• A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He even added a more fundamental "zeroth law" preceding the others:

• A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Asimov did not tackle our ethical dilemma of the car crash, but with better sensors to gather data, more sources of data to draw from, and greater processing power, the decision to act is reduced to a cold act of data analysis.

Of course, software is notoriously buggy. What havoc could malicious actors wreak by compromising these systems? And what happens at the point where machine intelligence takes control from the human – will it be right to do so? Could a future buyer purchase programmable ethical options with which to customize their car, the artificial intelligence equivalent of a bumper sticker that says "I brake for nobody"? In which case, how would you know how a given car was likely to act – and would you climb aboard if you did?

Then there are the legal issues. What if a car could have intervened to save lives but didn't? Or if it ran people down deliberately based on its ethical calculus? This is the responsibility we bear as humans when we drive a car, but machines follow orders, so who (or what) carries the responsibility for a decision? As we see with improving face recognition in smartphones, airport monitors and even on Facebook, it is not too difficult for a computer to identify objects, quickly calculate a set of possible outcomes based on car speed and road conditions, pick one, and act. And when it does, it's unlikely you'll have any choice in the matter.

Bill Buchanan is head of the Centre for Distributed Computing, Networks and Security at Edinburgh Napier University. This article originally appeared on The Conversation.