Source of this article and featured image is TechCrunch. Description and key fact are generated by Codevision AI system.
Amazon Web Services unveiled its latest AI training chip, Trainium3, at re:Invent 2025, touting significant performance gains over previous generations. AWS also previewed its chip roadmap, which includes a Trainium4 prototype designed to work with Nvidia's technology, expanding interoperability options. Trainium3 offers 4x faster processing, 4x more memory, and 40% better energy efficiency than its predecessor, addressing growing data center demands. AWS says these advances reduce costs for clients such as Anthropic and SplashMusic, demonstrating practical benefits for AI applications. Julie Bort, the article's author, offers insight into AWS's strategic moves in the competitive AI chip market.
Key facts
- Trainium3 delivers 4x faster performance and 4x more memory than its predecessor.
- The chip’s energy efficiency improves by 40%, aligning with AWS’s sustainability goals.
- Trainium4, in development, will support Nvidia’s NVLink Fusion technology for interoperability.
- UltraServers can scale to 1 million Trainium3 chips, 10x the previous generation’s capacity.
- AWS claims cost savings for clients using the new hardware for AI training and inference.
TAGS:
#AI Chips #AWS #Cloud Computing #Cloud Infrastructure #Data Centers #Energy Efficiency #Machine Learning #Nvidia #Server Technology
