No Matter How Heated the Feud Between Fortune and Musk, Self-Driving Cars Should Keep Moving Forward
News emerged this week that U.S. regulators were investigating the death of a driver using the Autopilot feature of a Tesla Model S. This was the first death of its kind, and while it’s first and foremost a tragic loss of life, it also points to an array of challenges, ethical conundrums, and unanswered questions about the quest for self-driving cars. What had been theoretical debates are suddenly starkly real.

By and large, there seems little expectation that the event in and of itself will slow progress towards vehicle automation. Tesla’s own stock suffered modest losses on the news, and analysts described the event as a mere “headline risk.”
That’s in part because, while an investigation is still underway, it so far does not seem that the Tesla Autopilot feature was the root cause of the accident. There is speculation that the Tesla driver may have been distracted, and perhaps speeding. More important still, most accounts of the incident have the semi-truck’s driver making a very dangerous turn across oncoming traffic. Autopilot clearly isn’t perfect, but the emerging picture is one in which two human drivers created a situation that an automated system failed to save them from, rather than one in which an automated system made a fatal mistake on its own.

More broadly, this incident has an air of inevitability: No one claims that automated systems will prevent all crashes, and as the company with the most advanced commercially available automation tech, Tesla more or less knowingly shouldered the risk of being in the spotlight when a crash like this occurred. Tesla has responded to the event in part by pointing out that this is the first crash after 130 million miles of Autopilot use, while U.S. drivers overall average about one death per 100 million vehicle miles traveled. Though Tesla’s sample size is not big enough to make the case on that comparison alone, it’s at least an early indicator that Autopilot does make the cars safer.

Nonetheless, the incident generates some risks. For one, it could lead to political pressure to tighten regulation of automation features, which is currently relatively limited. Tighter regulation could slow development of the technology. The legal fallout from the incident is also still uncertain—if someone can convince a judge or jury that Tesla is liable for the crash, the picture for Autopilot and automation could shift rapidly.

Questions of both regulation and liability could hinge on Tesla’s repeated insistence that Autopilot is a ‘beta’ product, and its many built-in warnings that drivers should keep their hands on the wheel even when it is active. While releasing a product that’s less than perfect is common practice in the tech world, where Elon Musk’s roots lie, this crash reminds us that things are different when it comes to cars. The ‘beta’ program has been crucial to helping Tesla improve Autopilot, but judges and lawmakers may ultimately have to decide whether that’s worth the tradeoff of risking driver lives on a lightly regulated and explicitly imperfect product.
Related to this is the question of whether ‘partial automation’ creates a unique sort of risk. As a Kelley Blue Book analyst put it to the Detroit News, “documented abuses of driver-assist technology” have been plastered all over sites like YouTube—videos of drivers operating their Tesla with no hands, or even while reading the newspaper. It’s fair to ask whether Tesla should have been more aggressive about policing these misuses of the system, perhaps by marketing or characterizing the technology itself more conservatively. Those changes could be coming soon.

At the most extreme end of that debate, Gizmodo’s Alissa Walker argues that the crash proves that “fully autonomous vehicles are the only types of self-driving cars that make sense in our streets.” That’s a problematic argument, because various kinds of partial automation, such as automatic braking, are already on the road and saving lives. Despite some bold public statements, there’s also little certainty that full vehicle automation is coming anytime soon, and keeping lane detection and other safety features out of cars until it’s here could hinder the development of the myriad features necessary to add up to a fully autonomous car.

At least in the near term, what it all boils down to is this: Automobiles are powerful, dangerous machines. Maybe full automation will someday make them truly safe, preventing most, or even all, of the million-plus traffic deaths that occur worldwide each year.

But we’re not there yet.