AI: Pushing Infra boundaries - Memory is a key factor
In recent years, hyperscale data centers have been optimized for scale-out stateless applications and zettabyte-scale storage, with a focus on CPU-centric platforms. As infrastructure shifts toward next-generation AI applications, however, the center of gravity is moving to GPUs and accelerators. This transition from "millions of small stateless applications" to "large AI applications running across clusters of GPUs" is pushing the limits of accelerators, networks, memory, topologies, rack power, and other components. Keeping up with this dramatic change requires innovation to ensure that hyperscale data centers can continue to support the growing demands of AI applications. This keynote explores the impact of this evolution on memory use cases and highlights the key areas where innovation is needed to enable the future of hyperscale data centers.
Presented by Manoj Wadekar, Hardware Systems Technologist, Meta
at the SNIA Compute, Memory, and Storage Summit
Learn More:
SNIA Compute, Memory, and Storage Summit: https://www.snia.org/cms-summit
SNIA Website: https://snia.org/
SNIA Educational Library: https://snia.org/library
X: https://twitter.com/SNIA
LinkedIn: https://www.linkedin.com/company/snia