
Instead of worrying about AI wiping out humanity, worry about these four things

Jonathan Vanian, March 6, 2018
Researchers are not worried that advances in artificial intelligence will bring about the end of humanity; they are worried about far more practical problems.

Advances in artificial intelligence have the potential to greatly accelerate medical research and improve disease detection, but they could also amplify the harm done by all kinds of bad actors.

That is the conclusion of a report released recently by Oxford University, Cambridge University, Stanford University, the Electronic Frontier Foundation, the artificial intelligence research group OpenAI, and other institutions.

The researchers are not worried that AI will bring about a science-fiction doomsday in which robots rule the world, as in the Terminator films; their concerns are more practical. Criminals, for example, could use machine learning to further automate hacking attempts, putting even more pressure on already overstretched corporate security staff to keep company computer systems safe.

The report's goal is not to persuade companies, academia, or the public to halt AI research, but to highlight realistic risks so that people can better respond to, and perhaps prevent, more sophisticated future hacking attacks and other AI-related problems. The authors recommend that policymakers work with researchers to address potential AI risks and establish ethical standards for the field, among other precautions.

Here are some of the report's more interesting takeaways:

1. Phishing scams could get even worse

As artificial intelligence advances, phishing, in which criminals hide malicious links in seemingly legitimate emails, could become even more widespread and effective. Drawing on people's online information and behavior patterns harvested from Twitter and Facebook, criminals may be able to automatically generate customized scam emails that entice users to click. These malicious emails, websites, and links could be sent from fake accounts that mimic the writing style of a victim's friends and family, making the phishing attempts all the more convincing.

2. Hackers will start using AI the way financial firms do

If banks and credit card companies can adopt machine learning to improve their services, so can hackers. Criminals could, for example, use AI to automate tasks such as payment processing, helping them collect ransom payments more quickly.

Criminals could also build automated chatbots to communicate with the victims of ransomware attacks, in which the attackers hold a victim's computer system hostage until a ransom is paid. With chatbots handling the extortion, attackers could spend their time launching attacks against more potential victims.

3. Fake news and propaganda will also get worse

If you think the flood of fake news on Facebook is bad enough already, the problem could get far worse. Thanks to advances in artificial intelligence, researchers can already produce audio and video in which fabricated political figures are indistinguishable from the real thing. AI researchers at the University of Washington, for example, recently produced a video of former President Barack Obama giving a speech that looks remarkably realistic yet is entirely fabricated.

Sounds chilling? The report's authors point out that it will only get easier to manufacture "fake news" from forged audio and video. One day people may even see videos of "state leaders seeming to make inflammatory comments they never actually made."

The authors also note that bad actors could use AI to run "automated, hyper-personalized disinformation campaigns," in which people in different regions receive customized messages meant to influence how they vote.

4. AI will make weapons more destructive

As AI technology advances, even an ordinary person could gain the capacity to inflict widespread violence. With the spread of open-source technologies such as facial recognition and drone navigation, it becomes possible for criminals to use these tools to commit crimes. Imagine what could happen if a self-flying drone were equipped with facial recognition and used it to carry out a precision attack.

Another worrying trend is that countries currently do little to regulate robots that could be weaponized, and research into countermeasures lags far behind, which makes the "global proliferation of weaponizable robots" a real risk. (Fortune China)

In the report's own words:

"While defenses against attacks via robots (especially aerial drones) are being developed, there are few obstacles at present to a moderately talented attacker taking advantage of the rapid proliferation of hardware, software, and skills to cause large amounts of physical harm through the direct use of AI or the subversion of AI-enabled systems."

Translator: 樸成奎


Advances in artificial intelligence have the potential to supercharge medical research and better detect diseases, but it could also amplify the actions of bad actors.

That’s according to a report released this week by a team of academics and researchers from Oxford University, Cambridge University, Stanford University, the Electronic Frontier Foundation, artificial intelligence research group OpenAI, and other institutes.

The report’s authors aren’t concerned with sci-fi doomsday scenarios like robots taking over the world, such as in Terminator, but more practical concerns. Criminals, for instance, could use machine learning technologies to further automate hacking attempts, putting more pressure on already beleaguered corporate security officers to ensure their computer systems are safe.

The goal of the report is not to dissuade companies, researchers, or the public from AI, but to highlight the most realistic concerns so people can better prepare and possibly prevent future cyber attacks or other problems related to AI. The authors urge policymakers to work with researchers on addressing possible AI issues, and for technologists involved in AI to consider a code of ethics, among other recommendations.

Here are some interesting takeaways:

1. Phishing scams could get even worse

Phishing scams, in which criminals send seemingly legitimate emails bundled with malicious links, could become even more prevalent and effective thanks to AI. The report outlines a scenario in which people’s online information and behaviors, presumably scraped from social networks like Twitter and Facebook, could be used to automatically create custom emails that entice them to click. These emails, bad websites, or links could be sent from fake accounts that are able to mimic the writing style of people’s friends so they look real.

2. Hackers start using AI like financial firms

If banks and credit card firms adopt machine learning to improve their services, so too will hackers. For instance, the report said that criminals could use AI techniques to automate tasks like payment processing, presumably helping them collect ransoms more quickly.

Criminals could also create chatbots that would communicate with the victims of ransomware attacks, in which criminals hold people’s computers hostage until they receive payment. By using software that can talk or chat with people, hackers could conceivably target more people at once without having to actually personally communicate with them and demand payments.

3. Fake news and propaganda are only going to get worse

If you thought the spread of misleading news on social networks like Facebook was bad now, get ready for the future. Advances in AI have led to researchers creating realistic audio and videos of political figures that are designed to look and talk like their real-life counterparts. For instance, AI researchers at the University of Washington recently created a video of former President Barack Obama giving a speech that looks incredibly realistic, but was actually fake.

You can see where this is going. The report’s authors suggest that people could create “fake news reports” with fabricated video and audio. These fake news reports could show “state leaders seeming to make inflammatory comments they never actually made.”

The authors also suggest that bad actors could use AI to create “automated, hyper-personalized disinformation campaigns,” in which “Individuals are targeted in swing districts with personalized messages in order to affect their voting behavior.”

4. AI could make weapons more destructive

Advances in AI could enable people, even a “single person,” to cause widespread violence, the report said. With the widespread availability of open-source technologies like algorithms that can detect faces or help drones navigate, the authors are concerned that criminals could use them for nefarious purposes. Think self-flying drones with the ability to detect a person’s face below them, and then carry out an attack.

What’s also concerning is that there’s been little regulation or technical research about defense techniques to combat the “global proliferation of weaponizable robots.”

From the report:

While defenses against attacks via robots (especially aerial drones) are being developed, there are few obstacles at present to a moderately talented attacker taking advantage of the rapid proliferation of hardware, software, and skills to cause large amounts of physical harm through the direct use of AI or the subversion of AI-enabled systems.
