
Will OpenAI really build its own chips?

2023-10-11 16:44:37

OpenAI is reportedly looking to develop its own semiconductors to power its large language models and AI tools. Such a move would reduce the artificial intelligence lab’s reliance on costly GPUs from Nvidia, which are currently in high demand around the world as tech vendors and businesses rush to implement AI systems.

OpenAI is reportedly looking to develop its own chips. (Photo by Rafapress/Shutterstock)

The company is currently exploring the possibility of developing chips in-house, according to a report from Reuters which cites sources familiar with the plans. Though no final decision has been made, it has apparently been evaluating a potential acquisition target to accelerate this process.

Since the launch of ChatGPT last year, OpenAI has become one of the most talked-about tech companies in the world. The success of the chatbot kicked off an AI revolution spanning every industry, and helped OpenAI secure billions of dollars of investment from Microsoft. News that it is considering a move into the chip market comes a week after it was reported to be working with former Apple design chief Jony Ive to develop a device that could become the “iPhone of AI”, backed by a $1bn investment from SoftBank owner Masayoshi Son.

The Nvidia conundrum – why OpenAI needs its own chips

Like most AI companies, OpenAI uses hardware produced by Nvidia for training and maintaining its models. Nvidia’s flagship AI GPUs, the A100 and its successor, the recently released H100, account for 80% of the AI chip market, and this market share could get even higher according to some analysts.

AI companies need thousands of these chips to run their systems, with the AI supercomputer built by Microsoft for OpenAI powered by 10,000 A100s, which come at a cost of $10,000 a time. The H100 is even more expensive, at some $30,000 per chip, and with Nvidia only expecting to ship 550,000 this year, supply problems are likely to abound. Indeed, OpenAI CEO Sam Altman has openly complained that a lack of GPU availability is hindering his company’s progress.
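
To put those figures in context, here is a rough back-of-the-envelope sketch using only the unit counts and list prices quoted above; real-world cluster costs would also include networking, power and hosting, which are not covered here.

```python
# Back-of-the-envelope arithmetic based only on the figures reported above
# (list prices, not negotiated rates; networking, power and hosting excluded).

a100_count = 10_000       # A100 GPUs in the Microsoft-built supercomputer for OpenAI
a100_price = 10_000       # reported cost per A100, in US dollars
h100_price = 30_000       # reported cost per H100, in US dollars
h100_shipments = 550_000  # H100 units Nvidia expects to ship this year

a100_cluster_cost = a100_count * a100_price     # GPU spend for the existing cluster
h100_equivalent_cost = a100_count * h100_price  # cost of the same count of H100s

print(f"A100 cluster GPU spend: ${a100_cluster_cost / 1e6:.0f}m")     # $100m
print(f"Same-size H100 cluster: ${h100_equivalent_cost / 1e6:.0f}m")  # $300m
print(f"Implied H100 sales this year: ${h100_shipments * h100_price / 1e9:.1f}bn")  # $16.5bn
```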

With this in mind it is no surprise that OpenAI is looking to take production in-house to “escape its dependence on an Nvidia struggling to meet demand”, says Mike Orme, who covers the semiconductor market for GlobalData. But, he says, the supply problem does not stem from Nvidia itself, but has occurred because “TSMC, which makes Nvidia’s chips, has run into a severe back-end chip packaging capacity problem, which is likely to take up to 18 months to solve.”

TSMC, the Taiwanese company that dominates chip manufacturing, is having issues with its chip-on-wafer-on-substrate packaging process (packaging, in its simplest terms, is the way components are arranged and connected in a chip), which is constraining supply. Speaking to Nikkei Asia last month, TSMC chairman Mark Liu said the company could only meet 80% of customer demand as a result of the issue, which is expected to persist into 2025.

Can OpenAI take its software success into hardware?

Designing a chip in-house could be a tall order for OpenAI, which has so far focused solely on software rather than hardware.

One company that has been on this journey is SambaNova, which offers a full-stack AI solution and last month launched its new SN40L AI semiconductor. Alex White, general manager for the EMEA region at SambaNova, says the shortage of GPUs is “undoubtedly hurting OpenAI’s scaling ambitions”, so it would be no surprise to see the company turn to its own semiconductors.

But White says that Altman and his team – as well as OpenAI’s clients – must be prepared for the long haul to bring such a plan to fruition. “There’s a clear advantage to owning the whole stack from hardware to software – including the models that run on top,” he says. “But designing and manufacturing chips doesn’t happen overnight, it requires huge levels of expertise, and resources that are in increasingly short supply.”

White adds: “It took OpenAI over five years to develop GPT-4, which may be too long to wait for customers. I wouldn’t be surprised if hardware took a similar amount of time.”

GlobalData’s Orme says OpenAI could build out its hardware team by poaching talent from companies like SambaNova – as well as engineers graduating from leading universities – and “in short, doing in specialty chips what it did in the AI software arena to create OpenAI in the first place.”

He believes that making an acquisition is unlikely to come cheap. “To acquire, directly, one of the leading start-ups as a short-cut would cost a pretty penny, as [AI chipmaker] Tenstorrent is valued at $4bn and Cerebras at $1bn,” Orme says.

Will OpenAI join the Big Tech custom silicon trend?

If OpenAI does make its own chips, it will become the latest major tech company to take processor production in-house. Apple has been using its own chips, the A-series and M-series lines, for recent iterations of the iPhone and its MacBook laptops, while Amazon has developed the Graviton range, which it deploys in the data centres of its cloud platform, AWS.

In the AI space, Google has managed to avoid being dependent on Nvidia with a dedicated range of chips, which it calls tensor processing units. It says it uses TPUs for 90% of its AI workloads, and earlier this year it claimed that an AI supercomputer running on a string of TPUs was able to outperform a similar machine powered by Nvidia A100s, though it is likely the H100’s performance would trump such a set-up.

Microsoft is said to be working on its own range of AI processors, codenamed Athena. The Information reported last week that the first chip in the range could debut at Microsoft’s Ignite 2023 developer conference next month.

Read more: Have we reached peak generative AI?
