Google’s chief decision scientist is out.
Cassie Kozyrkov, who has served as the internet company’s chief decision scientist and helped pioneer the field of decision intelligence, is going solo and working on projects to help business leaders navigate the tricky waters of artificial intelligence.
As AI becomes more powerful and more prevalent across industries, Kozyrkov will launch her first LinkedIn course, publish a book, and give keynote speeches about how to make informed decisions. Her goal is to give leaders the tools to think about how they deploy AI, and to help the public hold AI decision-makers accountable for the choices that impact millions of people, she told Fortune.
She spent 10 years at Google, five of them as chief decision scientist. Among her responsibilities, she guided company leaders to make informed and responsible decisions regarding AI.
“I’ve always believed Google’s heart is in the right place,” Kozyrkov said. But it is a large company, and outsiders sometimes equated her personal opinions with Google’s stance on a topic. In her new role, she won’t have to worry about how her advocacy impacts a company she represents, she told Fortune.
AI is undergoing a massive period of growth, which has caused anxieties about the future for some. Top minds in the AI space recently warned it could end humanity as we know it. This moment feels like an inflection point in the world of tech. It is essential to have leaders in place who are educated in decision-making, and consumers who can hold them accountable, according to Kozyrkov.
Kozyrkov, who grew up in South Africa, received a bachelor’s degree in economics from the University of Chicago. She also has a master’s degree in mathematical statistics from North Carolina State University and a partially completed PhD in psychology and neuroscience from Duke University. Prior to working at Google, she spent 10 years as an independent data science consultant.
During Kozyrkov’s time as chief decision scientist, which began in 2018, Google’s AI division grew substantially. CEO Sundar Pichai unveiled Duplex, an add-on to Google Assistant that can make phone calls on behalf of a user, intended to help schedule appointments, restaurant reservations, and other engagements. Google has made leaps in generating text, images, and videos from prompts, and it is developing robots that can write their own code. It also released Bard, its large language model rivaling ChatGPT. Many of Google’s developments have raised ethical questions from employees and academics, which isn’t unlike what’s happening at other AI companies. Google didn’t respond to requests for comment.
Kozyrkov would not comment on decisions she helped make at Google because of her nondisclosure agreement, but it’s not difficult to think of areas where the company has faced difficult choices when it comes to AI. In building Bard, Google had to decide whether to scrape copyrighted information to train the AI model. A lawsuit filed against Google in July accuses the company of doing so. Google also had to decide at what point to release the technology to remain competitive with ChatGPT but not damage its reputation. It came under fire right after it published the Bard demo video in which the chatbot gave an incorrect answer.
Kozyrkov’s work revolves around the idea that individuals can make choices that affect a lot of people, and those at the top aren’t necessarily educated in the practice of decision-making. “It is easy to think of technology as autonomous,” she said. “But there are people behind that technology making very subjective decisions, with or without skill, to affect millions of lives.”
Humans have long grappled with how best to make decisions, and the methods continue to evolve. There’s Benjamin Franklin’s three-century-old pro/con model, but there are also more advanced ways to answer important questions, Kozyrkov said. While she is targeting business leaders, her methods can also be used to make other important life decisions, like where to go to college or whether to start a family.
Decision-makers should ask themselves: What would it take to change my mind? They should also use data, but prior to seeing it, set criteria for what they will do based on what the data says. This helps decision-makers avoid confirmation bias, or using data to confirm an opinion they already have. It is also helpful to document the process of coming to an important decision—including the information available at the time—to evaluate the quality of a choice after it is made, according to Kozyrkov.
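The pre-commitment idea above can be sketched as code. Everything in this example is an invented illustration (the metric, the threshold, and the decision itself are assumptions, not Kozyrkov’s actual framework): the point is simply that the rule is fixed before the data is seen, so the observation can only select a branch, not reshape the criteria.

```python
# Minimal sketch of pre-registering a decision rule before seeing data.
# All names and thresholds here are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class PreregisteredDecision:
    """A decision rule committed to before any data is observed."""
    question: str
    metric: str
    threshold: float
    if_at_or_above: str  # action when metric >= threshold
    if_below: str        # action when metric < threshold

    def decide(self, observed_value: float) -> str:
        # The rule is frozen; the data only selects a branch,
        # which guards against bending it to confirm a prior opinion.
        if observed_value >= self.threshold:
            return self.if_at_or_above
        return self.if_below


# Commit to the rule first...
rule = PreregisteredDecision(
    question="Should we ship the new model?",
    metric="error rate on the holdout set",
    threshold=0.05,
    if_at_or_above="delay launch and retrain",
    if_below="ship",
)

# ...then look at the data and apply the rule without renegotiation.
print(rule.decide(0.03))  # -> ship
print(rule.decide(0.12))  # -> delay launch and retrain
```

Documenting the rule object itself (question, metric, threshold) also gives the paper trail Kozyrkov recommends: a record of what was decided, on what criteria, with what information available at the time.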