The Jeopardy! question for “eccentric genius plutocrat best known for his concerns about artificial intelligence destroying the human race” is, without question, “Who is Tesla CEO Elon Musk?” But the technology investor who has arguably done the most to address this potential—if theoretical—threat is much more obscure: His name is Jaan Tallinn.
Jaan who? Like Musk, Tallinn is an engineer of a certain age—Musk is 49, Tallinn is 48—who made his money on one of the early 2000s’ greatest dotcom success stories. For Musk, it was PayPal. For Tallinn, it was Skype.
An Estonian computer programmer who was one of the pioneers of peer-to-peer file-sharing technology, Tallinn cofounded Kazaa and later used similar technology to help build Skype, where he was a cofounder and one of the first engineering hires. He then took the money he made from Skype and became a prominent investor in other European tech startups. Although he’s not been as financially successful as Musk, Tallinn’s not done badly. (A rival business publication estimated his net worth at $900 million in 2019.)
Like Musk, Tallinn was an early investor in London A.I. company DeepMind, now part of Google parent Alphabet, and it was that experience that first ignited his concerns about the potential for superhuman artificial intelligence to destroy the human race.
Tallinn cofounded the Centre for the Study of Existential Risk at Cambridge University as well as the Future of Life Institute in the other Cambridge—Massachusetts, that is. He is also a prominent donor to the Future of Humanity Institute, the University of Oxford think tank devoted to existential risk founded by philosopher Nick Bostrom, whose views on the potential dangers of superintelligent machines also influenced Musk, another of the institute’s funders. Tallinn has also given money to the Machine Intelligence Research Institute, a Berkeley organization dedicated to ensuring “smarter-than-human artificial intelligence has a positive impact.” And, again like Musk, he was an early backer of OpenAI, the San Francisco A.I. research company initially established as a kind of counterweight to Google and DeepMind.
Now Tallinn has made an unusual donation to one of the technology companies he has previously backed. It’s unusual for three reasons: First, the money is a gift, not an investment. Secondly, the money will fund a program that is not focused on the existential threat of superhuman intelligence, but on the more mundane risks of today’s A.I., such as algorithmic bias, lack of transparency, and concerns about data privacy. Finally, the donation was made entirely in cryptocurrency.
The Estonian investor gave Faculty AI, a fast-growing London-based company that helps create machine-learning systems for companies and governments, 350 units of Ether, the coin associated with the Ethereum blockchain, in January 2018, worth about $434,000 at the time, and 50 Bitcoins in March 2020, worth about $316,000, according to Faculty’s financial filings at the U.K. business registry Companies House, which are being made public this month. Tallinn had previously been a seed investor—using more conventional fiat currency—in Faculty. His investment company Metaplanet Holdings holds just under 9% of the company's total shares, according to Companies House filings.
In recent months, Faculty, which has often drawn comparisons in the press to U.S. data analysis company Palantir, has been in the news for its work helping the U.K. government forecast the availability of ventilators and other medical equipment needed to address the COVID-19 pandemic. The contract, one of seven government contracts the company has recently received, was controversial because Faculty was awarded it outside the normal bidding process, and the government has refused to reveal the exact terms. Under a previous name, ASI Data Science, the company had helped the “Vote Leave” campaign in its successful push for Britain to leave the European Union and worked with Vote Leave’s director Dominic Cummings, who had been serving as a close aide to U.K. Prime Minister Boris Johnson. (Johnson abruptly dismissed Cummings on November 13, according to U.K. news accounts.) Faculty also won an $800,000 contract from the U.K. Home Office to build an A.I. system that could detect terrorist propaganda on social media platforms.
The reason for donating the money to Faculty in cryptocurrency, Tallinn says, is that he keeps most of his personal wealth in that form, and converting it into cash would have resulted in an unnecessary capital gains tax bill, reducing the amount he could give.
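The arithmetic is simple. The sketch below uses a hypothetical cost basis and tax rate, since the article discloses neither, purely to illustrate why handing over the coins directly preserves more of the gift than selling them for cash first.

```python
# Illustrative only: hypothetical basis and tax rate, not Tallinn's actual figures.
donation_value = 434_000   # approximate USD value of the 350 ETH gift at the time
cost_basis = 20_000        # hypothetical acquisition cost (not disclosed)
capital_gains_rate = 0.20  # hypothetical rate; varies by jurisdiction

# Selling first: tax is owed on the gain before the cash can be donated.
tax_if_sold = capital_gains_rate * (donation_value - cost_basis)
net_cash_gift = donation_value - tax_if_sold

# Donating the coins directly: no sale, so no capital gains event for the donor.
net_crypto_gift = donation_value

print(f"Gift after selling for cash: ${net_cash_gift:,.0f}")    # $351,200
print(f"Gift made directly in crypto: ${net_crypto_gift:,.0f}")  # $434,000
```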
The rationale for backing Faculty’s efforts to address the risk of today’s A.I. systems—instead of the existential risks from some future superintelligence—is that many of the considerations for making today’s A.I. less prone to bias and easier to understand could also reduce the risk of someone creating an A.I. that one day destroys the human race, Tallinn says. “Transparency and explainability are useful in current commercial settings, like, medical settings,” he tells Fortune. “However, it also might make it much safer to deploy something that is smarter than us.” For instance, such techniques might make it much easier to understand the intentions behind the actions of a smarter-than-human system.
Tallinn’s donation has helped Faculty to hire several experts in A.I. safety, company founder and chief executive Marc Warner says. The company’s safety research is organized around four pillars. “We believe that A.I. has to be fair, private, robust, and explainable,” he says.
Warner says that the public has been presented with a false choice between the safety and performance of A.I. systems, with some researchers and companies selling A.I. systems claiming that more transparent A.I. methods don’t work as well as more opaque techniques. “It’s just not true,” Warner says. He notes that cars have become both safer—with innovations such as headlights, windscreen wipers, seat belts, and airbags—as well as offering superior performance, and that the same thing can happen with A.I.
The company was commissioned by the U.K.’s Center for Data Ethics and Innovation to assess the latest approaches to A.I. fairness. It has built tools, which have been presented at prestigious A.I. conferences, that help users understand how complex A.I. systems arrive at decisions. It has also researched ways to create machine-learning systems that are better at figuring out causal relationships in data, not just correlations. That’s an important safety consideration, especially when using A.I. in medical and financial settings, Warner says. And the company has researched mathematical techniques to reveal and guard against bias in A.I. algorithms.
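Faculty has not published the specifics of those techniques, but a minimal sketch of one standard bias measurement, the demographic parity gap, gives a flavor of what such checks look like. Everything below, from the loan-approval scenario to the group labels, is hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction rates
    across groups: zero means every group is approved at the same rate,
    while larger gaps flag potential bias worth investigating."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                     # {'A': 0.6, 'B': 0.4}
print(f"Parity gap: {gap:.2f}")  # Parity gap: 0.20
```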
As for the safety of holding cryptocurrency on its books, that’s another matter. Warner admits that accepting Tallinn’s donation created a bookkeeping headache for Faculty. “Our accountants had to go and find someone else who was working on crypto and how to do accounting for crypto,” he tells Fortune.
Ultimately, the company listed the cryptocurrency on its books as an intangible asset. That means its value will be amortized over time, and any significant fall in the value of Ether or Bitcoin will result in Faculty taking an impairment charge. But if either cryptocurrency soars in value, that won’t be reflected on Faculty’s books. While this is the current consensus among accountants about how to handle cryptocurrency assets, it remains controversial, with some experts arguing cryptocurrency should be treated like any other financial instrument.
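Setting the amortization schedule aside, the asymmetry of that treatment can be shown in a few lines (the price path here is invented, not Faculty’s actual figures): the carrying value ratchets down on impairments but never ratchets back up.

```python
def carrying_value(cost, market_prices):
    """Carrying value of a crypto holding booked as an intangible asset:
    impairments are recognized when the market price falls below the current
    carrying value, but later recoveries are not written back up."""
    value = cost
    for price in market_prices:
        if price < value:
            value = price  # impairment charge hits the income statement
        # a price above the carrying value is simply not recorded
    return value

# A coin booked at $316,000 that dips, then rallies well past its cost.
print(carrying_value(316_000, [280_000, 350_000, 500_000]))  # 280000
```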
Faculty did sell about $144,000 worth of Ether between March 2019 and March 2020, according to its annual accounts. But given how cryptocurrency has been appreciating—the value of Bitcoin is up 300% so far this year, and Ether is up 144% year to date—the company seems happy to let most of Tallinn's cryptocurrency donation remain on its books for a rainy day. After all, it might come in handy if A.I. safety efforts fail and our robot overlords arrive unexpectedly.