Google's Secret Cloud Plan
While Amazon, Microsoft, and IBM spend heavily on cloud computing data centers around the world, Google, another major player in the field, has made few visible moves. It turns out the search giant is quietly working on a plan to catch up.
Over the past few months, while Amazon, Microsoft, and IBM took turns unveiling new cloud computing data centers in China, India, Germany, the U.K., South Korea, and elsewhere, one public cloud provider remained eerily silent: Google. This is odd because, when it comes to delivering cloud services to customers, especially business customers, location matters. The farther away the servers and storage are, the longer the lag time, or latency, in operations. And slow is definitely no good in this context. But two sources close to Google said the company is considering an interesting plan.

Google's cloud services operate out of just four data center regions worldwide, compared with 20 for Microsoft Azure and 11 for Amazon Web Services (AWS), with five more due in the next year. But Google also has 70 data caching stations around the world that store copies (or caches) of video, audio, and popular web pages close to their likely audiences to speed their delivery. Those endpoints are key pieces of what Google calls its "peering and content delivery network."

The idea, which both sources said is under discussion, calls for these Google outposts to be outfitted with additional computing capacity so they would become a sort of mini data center. Both sources requested anonymity because their companies work with Google. A Google spokeswoman would not comment on what she called rumor and speculation.

One source noted that the Google Compute Engine (GCE) team, basically Google's cloud group, is working with the company's broader internal infrastructure groups to see if it can put small pods or clusters of computing power into these regional endpoints.

There could be wrinkles. For one thing, this could end up being a tiered system, with big jobs still running in Google's own massive data centers, which run hundreds of thousands (perhaps millions) of servers and far outstrip the capabilities of the CDN's smaller endpoint nodes. Workloads placed on those endpoints would face tighter capacity limits than those running in GCE's own data centers, the source said. "Users might be given a cap and it would be more expensive to run" in these smaller clusters, the source said. (A rough sketch further below illustrates that placement tradeoff.)

That scenario flies in the face of what public cloud users expect. Customers who rent computing, storage, and bandwidth from Google, Amazon, Microsoft, and others generally assume they can add nearly unlimited computing and storage capacity as needed. So this plan could set up two sets of infrastructure with two pricing models.

But it is clear Google needs to do something about global coverage soon. Amazon is the leader in public cloud infrastructure, running what Gartner last year estimated to be 10 times more computing capacity than the next 14 cloud competitors combined. Microsoft, meanwhile, is making a play especially for business customers.

To help speed traffic flowing into and out of its cloud, Google recently announced a partnership with Akamai, the CDN market leader, to interconnect Google's own CDN directly with Akamai's. In September it announced similar deals with four other CDN providers: Cloudflare, Fastly, Level 3 Communications, and Highwinds. But those deals are just a piece of the overall puzzle.

Google has great technical smarts, and it entered the fray with a bang last year with a series of price cuts on key storage and computing services, a move that seemed to flummox Amazon, which is not used to other companies driving the price agenda.
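To make the distance and capacity tradeoff described above more concrete, here is a minimal, purely illustrative sketch in Python. The site names, coordinates, core counts, and the placement rule are all hypothetical and do not reflect Google's actual CDN or scheduling systems; the sketch simply shows why a nearby but capacity-capped edge pod helps small jobs, why big jobs still fall back to a distant regional data center, and how distance alone puts a floor under round-trip latency.

```python
# Toy model of a two-tier placement decision: a small, capacity-capped CDN edge
# pod near the user vs. a large but distant regional data center.
# Everything here (sites, numbers, policy) is hypothetical and for illustration.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

FIBER_KM_PER_MS = 200.0  # light in fiber travels roughly 200 km per millisecond

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def round_trip_ms(distance_km):
    """Lower bound on network round-trip time: there and back over fiber."""
    return 2 * distance_km / FIBER_KM_PER_MS

@dataclass
class Site:
    name: str
    lat: float
    lon: float
    free_cores: int   # spare compute available for cloud workloads
    is_edge: bool     # True for a small CDN edge pod, False for a full region

def place_workload(user_lat, user_lon, cores_needed, sites):
    """Pick the closest site that can fit the job; edge pods may be capped out."""
    candidates = [s for s in sites if s.free_cores >= cores_needed]
    if not candidates:
        raise RuntimeError("no site has enough spare capacity")
    best = min(candidates,
               key=lambda s: great_circle_km(user_lat, user_lon, s.lat, s.lon))
    rtt = round_trip_ms(great_circle_km(user_lat, user_lon, best.lat, best.lon))
    return best, rtt

if __name__ == "__main__":
    # Hypothetical sites: a small edge pod in Frankfurt, a big region in Iowa.
    sites = [
        Site("edge-frankfurt", 50.1, 8.7, free_cores=64, is_edge=True),
        Site("region-iowa", 41.2, -95.9, free_cores=100_000, is_edge=False),
    ]
    # A user in Berlin requests first a small job, then one too big for the pod.
    for cores in (16, 500):
        site, rtt = place_workload(52.5, 13.4, cores, sites)
        kind = "edge pod" if site.is_edge else "regional data center"
        print(f"{cores} cores -> {site.name} ({kind}), ~{rtt:.0f} ms round trip")
```

In this toy model, the 16-core job lands on the nearby Frankfurt pod with a round trip of a few milliseconds, while the 500-core job exceeds the pod's cap and runs in the distant Iowa region at tens of milliseconds. That is the kind of two-tier behavior, and potentially two-tier pricing, the sources describe.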
But Google has struggled to sell its cloud to business users, many of whom are not totally sure that Google, the online search and ad giant, really cares about cloud services. (For the record, it has always said it does.) To counter that perception, Google recently put former VMware CEO Diane Greene in charge of its cloud unit. She brings added credibility to the push for business customers. (Of course, this week's snafu with Google App Engine, another of Google's cloud offerings, probably won't help its case with prospective business users, but then again AWS and Azure have had hiccups of their own.)

Adrian Cockcroft, technology fellow at Battery Ventures and former cloud guru at Netflix, said he has not heard about this Google plan, but that it would make sense given businesses' need for broad geographic coverage and fast performance. If the Google cloud team has figured out how to convert those CDN nodes into far-flung mini data center pods, it means they have listened to what enterprise customers have requested, Cockcroft told Fortune via email.

"From a technology point of view, it also means they have figured out how to scale down and package their cloud for small regional deployments," Cockcroft said. That is something Microsoft and Digital Ocean, another public cloud company, have already done, and something he thinks AWS will get better at over time.