Amazon Rolls Out First 3nm AI Chip Trainium3, Challenging Nvidia and Google

TMTPOST -- Amazon.com Inc.'s cloud unit Amazon Web Services (AWS) unveiled Trainium3, its first 3-nanometer artificial intelligence (AI) chip, at its annual re:Invent conference Tuesday, intensifying competition with Nvidia Corp. and Google in the lucrative market for AI computing hardware. The cloud computing giant also previewed Trainium4, currently under development, which will support Nvidia's NVLink Fusion interconnect technology for enhanced interoperability.

AI Generated Image

The accelerator was recently installed in a few data centers and became available to customers Tuesday, marking a rapid one-year turnaround from its predecessor. "As we get into early next year we'll start to scale out very, very quickly," Dave Brown, an AWS vice president, said in an interview.

Amazon shares rose as much as 2.2% in morning New York trading. Nvidia shares pared gains, while rival Advanced Micro Devices dropped to a session low.

The chip push represents a critical element of Amazon's strategy to stand out in AI. While AWS dominates in rented computing power and data storage, it has struggled to replicate that leadership among AI tool developers, as companies increasingly opt for Microsoft, which maintains close ties to OpenAI, or Google.

Performance Gains Target Cost-Conscious Customers

Trainium3 delivers substantial improvements over its predecessor. Each chip provides 2.52 petaflops of FP8 compute, with memory capacity increased 1.5 times to 144 GB of HBM3e and memory bandwidth boosted 1.7 times. Trn3 UltraServers deliver up to 4.4 times higher performance, 3.9 times greater memory bandwidth and four times better performance per watt compared to Trn2 systems.
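As a rough check on those figures, the stated multipliers imply the predecessor's per-chip memory capacity. The short sketch below is back-of-envelope arithmetic on the numbers quoted above only, not AWS-published Trainium2 specifications.

```python
# Back-of-envelope arithmetic using only the figures quoted above.
# The derived predecessor value is implied by the stated 1.5x multiplier,
# not taken from an official AWS specification.

trn3_hbm_gb = 144      # Trainium3 per-chip HBM3e capacity, GB
capacity_gain = 1.5    # stated capacity increase over the prior chip

implied_prior_hbm_gb = trn3_hbm_gb / capacity_gain
print(f"Implied predecessor HBM capacity: {implied_prior_hbm_gb:.0f} GB per chip")  # ~96 GB
```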

The systems scale to 144 Trainium3 chips per server, and thousands of UltraServers can be linked to give customers access to up to 1 million chips, 10 times the previous generation. AWS emphasized the chips are 40% more energy efficient than prior versions, promising lower costs for cloud customers.
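To put the server and cluster figures in perspective, here is a similar back-of-envelope sketch combining the per-chip numbers above with the per-server chip count; the aggregates are illustrative arithmetic on the quoted values, not AWS-published system specifications.

```python
# Illustrative arithmetic combining the quoted per-chip and per-server figures.
# All inputs come from the article; the aggregates are derived, not official.

chips_per_ultraserver = 144        # Trainium3 chips per UltraServer
fp8_pflops_per_chip = 2.52         # per-chip FP8 compute, petaflops
hbm_gb_per_chip = 144              # per-chip HBM3e capacity, GB
target_cluster_chips = 1_000_000   # the scale AWS says customers can access

fp8_per_server = chips_per_ultraserver * fp8_pflops_per_chip          # ~363 PFLOPS
hbm_tb_per_server = chips_per_ultraserver * hbm_gb_per_chip / 1024    # ~20.3 TB
ultraservers_needed = target_cluster_chips / chips_per_ultraserver    # ~6,944

print(f"Aggregate FP8 compute per UltraServer: {fp8_per_server:.0f} PFLOPS")
print(f"Aggregate HBM capacity per UltraServer: {hbm_tb_per_server:.1f} TB")
print(f"UltraServers needed for 1M chips: {ultraservers_needed:,.0f}")
```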

The upcoming Trainium4 will deliver another significant performance increase and, crucially, will support Nvidia's NVLink Fusion high-speed interconnect technology, allowing the systems to interoperate with Nvidia graphics processing units (GPUs) while maintaining Amazon's lower-cost server architecture. Amazon provided no timeline for Trainium4's release, though previous patterns suggest details may emerge at next year's conference.

Software Gap Limits Adoption Despite Price Advantage

Amazon's accelerators face a significant hurdle: they lack the deep software libraries that make Nvidia's GPUs quick to deploy. Bedrock Robotics, which uses AWS servers for infrastructure, relies on Nvidia chips to build the AI models that guide its autonomous construction equipment. "We need it to be performant and easy to use. That's Nvidia," said Chief Technology Officer Kevin Peterson.

Many of the Trainium chips deployed today serve Anthropic in data centers across Indiana, Mississippi and Pennsylvania. AWS said it had connected more than 500,000 chips for the AI startup and aims to dedicate 1 million chips by year-end. However, Amazon has announced few other major customers, leaving analysts struggling to assess Trainium's market effectiveness. Anthropic also uses Google's Tensor Processing Units and secured a deal earlier this year for tens of billions of dollars' worth of Google computing power.

"We've been very pleased with our ability to get the right price performance with Trainium," Brown said, positioning the chips as capable of powering intensive AI calculations more cheaply and efficiently than Nvidia's market-leading processors.
