Who is Sam Altman, the man who co-founded OpenAI “for the benefit of humanity”?

2023-12-06 11:52:12

Sam Altman, 38, is by most accounts the leading figure in the world of AI. Like many tech leaders before him whose legacies are well established – Steve Jobs, Jack Dorsey, Mark Zuckerberg, Michael Dell, Bill Gates, to name a few – Altman dropped out after just one year of studying computer science at Stanford University. The same year, in 2005, he co-founded the location-based social networking app Loopt, which was sold to banking company Green Dot in 2012 for $43.4m.

Sam Altman co-founded OpenAI “for the benefit of humanity”. (Photo by Dia TV/Shutterstock)

Altman then went from partner at the start-up accelerator Y Combinator to president of Y Combinator and YC Group. He left the accelerator in 2019 to become CEO of OpenAI, which he had co-founded in 2015 with the company’s current president, Greg Brockman. Other founding members included the likes of Elon Musk, Trevor Blackwell, John Schulman and Vicki Cheung.

Altman co-founded OpenAI with a seemingly honourable mission: to ensure that “artificial general intelligence benefits all of humanity”. In other words, it aimed – in theory – to do good with artificial general intelligence (AGI) before any competitor builds machines smarter than humans that could come to harm us.

Sam Altman and ‘effective altruism’

Unlike most of his AI peers, Sam Altman is – or perhaps was – a relatively trusted public face. Not only does he regularly speak to the press and public, but he also openly expresses concerns about AI ethics and the important balance between innovation and regulation. His AI regulation world tour in the summer of 2023 confirmed his dedication to at least present as a conscientious tech leader.

Altman has, in the past, been accused by some of being a closet “accelerationist” intent on pushing AI research and development as fast as possible with as few restrictions as possible. The OpenAI chief executive denied this recently on the New York Times Hard Fork podcast. “I think what differentiates me [from] most of the AI companies is [that] I think AI is good,” said Altman. “I am a believer that this is a tremendously beneficial technology and that we have got to find a way, safely and responsibly, to get it into the hands of the people [and] to confront the risks, so that we get to enjoy the huge rewards.”

Altman endorses effective altruism (EA), a philosophy based on the idea of “doing good better”, not least by promoting a balance between the innovation and regulation of AI. EA’s core value is to maximise the impact of problem-solving, or, in other words, to adopt a capitalist approach to charity and support. Its effectiveness as a philosophy is open to debate: one of its loudest advocates, crypto entrepreneur Sam Bankman-Fried, was recently convicted of running an $8bn financial fraud scheme.

In his private life at least, Altman is an enthusiastic practitioner of EA’s “quick-and-safe” logic: he is a vegetarian and a prepper. In 2016, he told the New Yorker: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

What happened between Sam Altman and OpenAI?

On paper, it almost sounds like Altman’s view of AI could be the solution to all our problems and suffering. His recent dispute with OpenAI, however, demonstrated the practical limitations of this argument.

On 17 November, an official statement released by OpenAI announced the ousting of Altman and Brockman, stating that the former “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”.

After five days in exile, the pair returned as OpenAI’s leaders under a new board of directors, after 95% of OpenAI employees threatened to quit unless both were reinstated. Their open letter also called for the resignation of all the board members then in place, as the signatories felt they were “unable to work for or with people that lack competence, judgement and care”.

While the details of the disagreements remain unknown, there are rumours that they could be related to Altman exploring how to set up a chip venture to rival Nvidia.

Another factor that may have carried weight in Altman’s ousting was the safety concerns raised within the company only days before the board’s decision. Some OpenAI researchers reportedly wrote to the board alarmed at the rumoured capabilities of their newly developed AI model Q* – pronounced ‘Q star’ – in solving complicated maths problems. While OpenAI’s interim CEO Emmett Shear – who lasted just four days in the role before Altman returned – denied that Altman’s sacking had anything to do with Q* safety concerns, OpenAI could, again, be put to the test of AI alignment and Altman’s principles. OpenAI has yet to publicly release any details of Q*, or even confirm its existence.

After his abrupt ouster, Altman was initially replaced with CTO Mira Murati, but two days later, the board appointed former Twitch boss Shear as their new CEO. Shear then rapidly became, as his Twitter bio says, “interim ex-CEO of OpenAI”.

What does the OpenAI saga mean for the future of AI?

Earlier this year, at the Wisdom 2.0 conference in San Francisco – an annual gathering dedicated to addressing human well-being in relation to technological growth – Altman said humans should work together to define the limits of AI. But it remains unclear where he would draw that limit.

Although Altman has been characterised as an accelerationist, he says he recognises the importance of aligning AI, that is, balancing the growth of AGI with humanity’s best interests, usually by slowing down (or at least not rushing) AI development. However, while he agreed that more safety measures should be in place, he nonetheless refused to sign an open letter promoted by Elon Musk calling for a pause on AI development.

To judge by the number of CEOs who have led the world’s most famous AI company over just five days (Altman, Murati, Shear, Altman again), it seems slowing down to avoid chaos is not OpenAI’s priority either.

Tech journalist Kara Swisher consequently advocates for elected officials, rather than a small group of powerful people “who have their own self-interest at heart”, to regulate the tech industry. For her, “it’s always the people that are the problem, not the machines.”

Beatriz Valle, senior technology analyst at GlobalData, told Tech Monitor that the problems at OpenAI highlight the need for effective international regulation, to support technological development “without stifling innovation”.

Read more: UK and allies launch cybersecurity guidelines for AI developers
