
Musk Is Wrong Again: Artificial Intelligence Is Not More Dangerous Than North Korea's Nukes


Michael L. Littman | August 20, 2017
Someday the world around us may be filled with intelligent machines. But we will evolve together with them.

Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.

If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk's mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he’s wrong, and you shouldn’t believe his apocalyptic warnings.

Here's the story Musk wants you to know but hasn't been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go. If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won't be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.

The resulting “intelligence explosion” would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That's why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn't be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain—that would be forever.

Musk’s comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is in extrapolating from recent successes of machine learning the eventual development of general intelligence. But machine learning is not as dangerous as it might look on the surface.

For example, you may see a machine perform a task that appears to be superhuman and immediately be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. Thus when you see something that can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that's not how these systems work.

In a nutshell, here's the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and they express it in the form of a piece of code called an objective function—a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.
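To make that recipe concrete, here is a minimal sketch in Python. None of it comes from the article: the predict and objective helpers, the toy labeling rule (y = 1 when x > 0), and the hand-rolled gradient ascent are all illustrative assumptions. But it walks through the same three steps: encode the task as an objective function, assemble examples of the desired behavior, then tune a chosen structure to maximize that objective.

```python
import math
import random

def predict(w, b, x):
    """The model: a one-feature logistic classifier, P(y = 1 | x)."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def objective(w, b, examples):
    """Step 1: the objective function -- average log-likelihood,
    a score the system can compute on itself."""
    total = 0.0
    for x, y in examples:
        p = min(max(predict(w, b, x), 1e-9), 1.0 - 1e-9)  # clamp for log()
        total += math.log(p) if y == 1 else math.log(1.0 - p)
    return total / len(examples)

# Step 2: assemble examples of exactly the behavior we want the system
# to exhibit (real systems use millions; here, a toy rule: y = 1 iff x > 0).
random.seed(0)
examples = [(x, 1 if x > 0 else 0)
            for x in (random.uniform(-5.0, 5.0) for _ in range(500))]

# Step 3: choose a structure (one weight, one bias) and tune it to
# maximize the objective -- plain gradient ascent with the standard
# logistic-regression gradient.
w, b, lr = 0.0, 0.0, 1.0
for _ in range(300):
    grad_w = sum((y - predict(w, b, x)) * x for x, y in examples) / len(examples)
    grad_b = sum(y - predict(w, b, x) for x, y in examples) / len(examples)
    w += lr * grad_w
    b += lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}, objective = {objective(w, b, examples):.4f}")
```

The final score is essentially perfect, and every part of that competence is bound to the one objective chosen in step 1.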

At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you probably will need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function—current methodologies are not suited to creating a broadly intelligent machine.
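Continuing the hypothetical sketch above, re-scoring the learned parameters against a different objective (say, the rule y = 1 when |x| < 2) shows how completely the performance is tied to the original task:

```python
# Re-score the learned (w, b) on a *different* task: y = 1 iff |x| < 2.
# The parameters that looked impressive a moment ago are useless here.
other_task = [(x, 1 if abs(x) < 2 else 0) for x, _ in examples]
print(f"objective on the new task: {objective(w, b, other_task):.4f}")
```

The score collapses, because nothing general about "tasks" was ever learned — only a setting of w and b that happens to maximize one objective.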

Someday we may inhabit a world with intelligent machines. But we will develop together and will have a billion decisions to make that shape how that world develops. We shouldn't let our fears prevent us from moving forward technologically.

Michael L. Littman is a professor of computer science at Brown University and co-director of Brown's Humanity Centered Robotics Initiative.
