GPT-4 debuts as Google beats Microsoft out of the gate in the race to bring A.I. to consumer office tools

JEREMY KAHN
2023-03-16

Google is eager to prove that it's not about to be sidelined in the A.I. race.

Google Cloud CEO Thomas Kurian announced a raft of new generative A.I. features for Google Workspace and Google Cloud customers. But Google was so eager to get ahead of a competing announcement from Microsoft that it announced access to its A.I. models before pricing had even been settled. Image credit: MICHAEL SHORT—BLOOMBERG VIA GETTY IMAGES

Greetings. It promises to be (another) massive week in A.I. news. And that’s leaving aside the lingering effects that the collapse of Silicon Valley Bank may have on some A.I. startups and the venture funds backing them.

Right as this newsletter was going to press, OpenAI released its long-anticipated GPT-4 model. The new model is multimodal, accepting both images and text as inputs, although it only generates text as output. According to data released by OpenAI, GPT-4 performs much better than GPT-3.5, the company's previous model and the one that powers ChatGPT, on a whole range of benchmark tests, including a battery of tests designed for humans. For instance, GPT-4 scores well enough to be within the top 10% of test takers on a simulated bar exam. OpenAI also says that GPT-4 is safer than GPT-3.5: it returns more factual answers, and it is much harder to get GPT-4 to jump its guardrails than it was with GPT-3.5.
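For readers who want to try the new model themselves, here is a minimal sketch of prompting GPT-4 through OpenAI's chat completions API. It assumes the openai Python package (the pre-1.0 interface current at the time) and an API key in the OPENAI_API_KEY environment variable; the system prompt, temperature, and token limit are illustrative choices, and image input was not generally available through the API at launch.

```python
# Minimal sketch: ask GPT-4 a question via OpenAI's chat completions API.
# Assumes the pre-1.0 "openai" package and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # text-only here; image input was gated at launch
    messages=[
        {"role": "system", "content": "You are a careful, factual assistant."},
        {"role": "user", "content": "Summarize how GPT-4 differs from GPT-3.5."},
    ],
    temperature=0.2,   # lower temperature favors more conservative answers
    max_tokens=300,
)

print(response["choices"][0]["message"]["content"])
```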

But the company also says the model is still flawed. It will still hallucinate—making up information. And OpenAI notes that in some ways hallucination might be a bigger issue because GPT-4 does it less often, so people may grow complacent about the answers it produces. It is also still possible to get the model to churn out biased and toxic language. OpenAI is saying very little about how big a model GPT-4 actually is, how many specialized graphics processing units it took to train, or exactly what data it was trained on. It says it wants to keep these details secret for both competitive and safety reasons. I'll no doubt be writing much more about GPT-4 in next week's newsletter. But my initial take is that GPT-4 looks like a big step forward, though not a revolutionary advance over what OpenAI and others have been racing to put into production over the past two months. And it will only heighten the debate about whether tech companies, including OpenAI, are being irresponsible by putting this powerful technology in the hands of consumers and customers despite its persistent flaws and drawbacks.

Meanwhile, Microsoft is expected to unveil a range of A.I.-powered enhancements to its Office software suite on Thursday. And Baidu, the Chinese search giant, has a big announcement scheduled for later this week. Google, which was caught flat-footed by the viral popularity of ChatGPT and OpenAI’s alliance with Microsoft, is eager to prove that it’s not about to be sidelined in the A.I. race. And the big news today before OpenAI’s GPT-4 announcement was that Google had beaten Microsoft out of the gate with a bunch of big A.I. announcements of its own.

For most people, the main news is that the search giant said it is adding generative-A.I. features to its popular Workspace productivity tools, such as Google Docs, Sheets, and Slides. Among the things people will now be able to do is use a text box to prompt Google’s A.I. to automatically draft almost any kind of document, or to create different kinds of charts for Sheets data. Users can highlight text and ask Google’s A.I. to edit it for them or rewrite it in a different tone and style. You will also be able to automatically draft emails or summarize entire email threads in Gmail. In Google Meet you will be able to generate new virtual backgrounds and automatically create notes of conversations, complete with summaries.

But equally important was the other news Google announced: The company is allowing enterprise customers to tap its most advanced family of large language models—called PaLM—through an application programming interface on Google Cloud.
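As a rough illustration of what "PaLM through an API on Google Cloud" could look like in code, here is a hypothetical sketch using the Vertex AI Python SDK. Google had not published the exact SDK surface when it made the announcement, so the package path, model name ("text-bison@001"), project ID, and parameters below are assumptions rather than confirmed details.

```python
# Hypothetical sketch: generate text with a PaLM model via the Vertex AI SDK.
# Package path, model name, and project ID are assumptions, not confirmed details.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Draft a short status update summarizing this week's A.I. announcements.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```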

Beyond PaLM, it has also launched an updated version of its Vertex AI platform for A.I. developers and data scientists. The platform allows them access to large foundation models, not just from Google, but from its growing ecosystem of allied A.I. labs, such as Anthropic and Cohere, as well as AI21 Labs and Midjourney. And it has launched a set of software, called Generative AI App Builder, that will allow slightly less technical teams to quickly build and roll out custom applications using generative A.I. models.

For both Vertex AI and the Generative AI App Builder, Google says users will have access to two new related capabilities: The first is an enterprise search tool that will allow them to perform Google searches across their own data—including data generated by CRM or ERP software, as well as internal websites and other documents—and return results only from that knowledge base. These results can then be used for natural language tasks, such as summarization, sentiment analysis, or question-answering, with less risk that the language model will simply invent information or draw information from its pretraining data rather than the customer’s own data. The other new capability is a chatbot-like “conversational A.I.” function that customers can deploy to act as the user interface for these search, natural language processing, and generative A.I. capabilities.
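The enterprise search capability amounts to a retrieve-then-generate pattern: search only the customer's own documents, then hand the retrieved passages to a language model. The self-contained sketch below illustrates that flow with a toy in-memory corpus and simple keyword scoring; the corpus, function names, and prompt format are illustrative assumptions, not Google's actual API, and in practice the final prompt would be sent to a hosted model.

```python
# Schematic sketch of "search your own data, then generate" grounding.
# The corpus, scoring, and prompt format are illustrative, not Google's API.
from typing import List

CORPUS = [
    "Q4 CRM export: renewal rate rose to 92% after the support-tier change.",
    "Intranet page: the ERP migration to the new vendor completes in June.",
    "Policy doc: customer data may not leave the EU region without approval.",
]

def search_knowledge_base(query: str, top_k: int = 2) -> List[str]:
    """Toy retrieval: rank documents by how many query words they share."""
    words = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(question: str, passages: List[str]) -> str:
    """Restrict the model to retrieved passages, reducing the risk that it
    invents facts or leans on its pretraining data instead of customer data."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these passages:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    question = "When does the ERP migration finish?"
    print(build_grounded_prompt(question, search_knowledge_base(question)))
```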

Google announced a group of initial “trusted testers” who will have immediate access to these new A.I. services, including Toyota, Deutsche Bank, HCA Healthcare, Equifax, the television network Starz, and the Mayo Clinic, among others. The new products and features will be rolled out more broadly in the coming weeks, the company said. But it was a sign of just how intense this A.I. technology race has become that Thomas Kurian, the CEO of Google’s Cloud business, was forced to acknowledge during the press briefing that Google was releasing these new products without having yet worked out exactly how to price them. In the past, Kurian said, Google had always made its A.I. advances available as free, open-source releases or the technology was simply “embedded in our products.” “This is the first time we are taking our new, general A.I. models and making them accessible to the developer community with an API,” he said.

Google’s press release on its new products touted the company’s commitment to “Responsible AI” and it tried to position its release under this rubric, noting that Vertex AI and Generative AI App Builder include tools to “inspect, understand, and modify model behavior” and that the information retrieval aspects of the new systems used traditional search algorithms, lessening the risk of inaccurate answers. But Kurian did not say exactly what sort of guarantees Google could offer customers that its large language models could not be prompted in ways that would elicit inaccurate responses—or worse, might morph their chatbot from a friendly assistant into a petulant, abusive, and threatening “devil-on-your-shoulder,” as testers discovered with Microsoft’s Bing. It also did not address whether Google was planning to take any steps to prevent users of its very popular Workspace tools from using the new generative A.I. features to deliberately churn out misinformation or to cheat on school essays.

Concern about this is growing. One reason may be that most A.I. ethics researchers are now embedded inside big tech companies, and if they step out of line, they get fired. Tech news site The Verge and Casey Newton’s The Platformer just revealed that Microsoft recently disbanded its A.I. ethics and society team—a central group that had been trying to raise concerns about many of the advanced A.I. systems Microsoft was building and had been urging the company to slow down the speed of its generative A.I. rollout. Some of the ethics experts were assigned to other teams. Some were fired. An audio recording of a Microsoft manager addressing the team about its restructuring that leaked to Newton made it clear that there was pressure from CEO Satya Nadella and CTO Kevin Scott to roll out OpenAI’s advanced A.I. technology throughout the company as quickly as possible, and that questioning that decision or its pace was not appreciated.

Now Microsoft still has another corporate Office of Responsible AI, but its role is more to set high-level principles, frameworks, and processes—not to conduct the actual safety and ethical checks. The disbanding of the A.I. ethics group is further evidence of why the tech industry should not be trusted to self-regulate when it comes to A.I. ethics or safety, and why government regulation is urgently needed.
