Microsoft reveals custom chips to power AI workloads on Azure

2023-11-20 10:16:29
Microsoft has unveiled its first custom silicon, launching two chips, the Maia 100 (M100) AI accelerator and the Cobalt 100 CPU, designed to handle artificial intelligence and general-purpose workloads on its Azure cloud platform.

Microsoft has unveiled two custom chips for its Azure Cloud platform. (Photo courtesy of Microsoft)

The two chips represent Microsoft's first foray into semiconductors and see the company follow in the footsteps of its public cloud rivals, Amazon's AWS and Google Cloud, which run their own chips in their data centres alongside those provided by vendors such as Nvidia.

Maia 100 AI accelerator and Cobalt 100 CPU unveiled

Both of the new chips will be available early next year. The Cobalt 100 is designed on Arm architecture, which is increasingly being deployed in cloud datacentres as an alternative to semiconductors built on Intel's x86 architecture, the long-time market leader. Microsoft already offers some Arm-based CPUs on Azure, having struck a partnership with Ampere Computing last year, but claims Cobalt will deliver a 40% performance increase compared to Ampere's chips.

The Maia 100 will apparently “power some of the largest internal AI workloads running on Microsoft Azure”, such as the Microsoft Copilot AI assistant and the Azure OpenAI Service, which allows Microsoft’s cloud customers to access services from AI lab OpenAI, the creator of ChatGPT.

Microsoft is “building the infrastructure to support AI innovation, and we are reimagining every aspect of our data centres to meet the needs of our customers”, said Scott Guthrie, executive vice-president of the company’s cloud and AI group. “At the scale we operate, it’s important for us to optimise and integrate every layer of the infrastructure stack to maximize performance, diversify our supply chain and give customers infrastructure choice.”

Customers will be able to choose from a wider range of chips from other vendors, too, with Microsoft introducing virtual machines featuring Nvidia’s H100 Tensor Core GPUs, the most powerful AI chip currently on the market. It also plans to add the vendor’s H200 Tensor Core GPU, launched this week, to its fleet next year to “support larger model inferencing with no reduction in latency”.

It is also adding accelerated virtual machines using AMD’s top-of-the-range MI300X design to Azure.

Microsoft was an early adopter of AI tools through its partnership with OpenAI, in which it invested billions of dollars earlier this year. OpenAI CEO Sam Altman is enthusiastic about the M100’s potential, and said: “We were excited when Microsoft first shared their designs for the Maia chip, and we’ve worked together to refine and test it with our models.”


Altman added that he believes Azure’s AI architecture “paves the way for training more capable models and making those models cheaper for our customers.”


Will Microsoft’s new chips give it an AI edge?

Microsoft is the last of the public cloud market’s big three to launch its own processors. Amazon offers its own range of Arm-based Graviton processors as an option to AWS customers, while Google uses in-house tensor processing units, or TPUs, for its AI systems.

James Sanders, principal analyst for cloud and infrastructure at CCS Insight, said: “Microsoft notes that Cobalt delivers up to 40% performance improvement over current generations of Azure Arm chips. Rather than depend on external vendors to deliver the part Microsoft needs, building this in-house and manufacturing it at a partner fab provides Microsoft greater flexibility to gain the compute power it needs.”

Sanders argues that the benefits of developing the Maia 100 are clear. He said: “With Microsoft’s investment in OpenAI, and the burgeoning popularity of OpenAI products such as ChatGPT as well as Microsoft’s Copilot functionality in Office, GitHub, Bing, and others, the creation of a custom accelerator was inevitable. At the scale which Microsoft is operating, bringing this computing capacity online while delivering the unit economics to make this direction viably profitable, requires a custom accelerator.”

Read more: Google Cloud launches Vertex AI data residency regions

