TensorWave, a cloud platform for AI workloads powered by AMD's Instinct MI accelerators, kicks off its Beyond CUDA Summit today. The event focuses on the concept of the 'CUDA moat' and how developers can optimize their AI-centric workloads using alternatives. Attendees can expect demonstrations, hot takes, panels, and expert opinions from influential leaders in the AI field, including iconic computer architects such as Jim Keller and Raja Koduri.
It's no secret that Nvidia-built GPUs constitute the majority of hardware in the AI space. Nvidia realized the potential of parallel computing on its GPUs early on and developed a proprietary platform dubbed CUDA, which is now the de facto standard for GPU-accelerated computing. Although AMD's Instinct accelerators offer performance comparable to Nvidia hardware, the already-established and mature CUDA ecosystem remains indispensable to many users and organizations.
Through continuous optimization efforts, and thanks to the sudden rise of AI, which happens to be powered by GPUs, Nvidia has positioned itself as the leading solution provider. In fact, roughly 90% of Nvidia's revenue is now driven by its data-center offerings, with CUDA as a central selling point. This creates vendor lock-in: CUDA, as software, effectively confines the industry to Nvidia's hardware, limiting innovation and competition.
The industry is shifting gears toward a more open-source and hardware-agnostic future, but that's easier said than done. Alternatives such as OpenCL, ROCm, oneAPI, and Vulkan exist, but each trails Nvidia in one or more respects. Enter Beyond CUDA, where key figures in the AI field have rallied to chart a more diverse and heterogeneous future. Hosted by TensorWave, the Beyond CUDA Summit will address the many challenges the AI computing industry faces, such as hardware flexibility, cost efficiency, and the viability of the available alternatives to CUDA.
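One reason ROCm is the most direct of these alternatives is that its HIP layer deliberately mirrors CUDA's runtime API, so much CUDA code can be ported by mechanical renaming. The sketch below illustrates the idea in miniature; it is not AMD's actual tooling (the real hipify-perl and hipify-clang tools perform this translation at far greater fidelity), and the mapping table covers only a few representative calls.

```python
import re

# A few representative CUDA -> HIP renames; AMD's real hipify tools
# cover the full runtime, driver, and library APIs.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
}

def hipify(source: str) -> str:
    """Naively translate CUDA runtime calls to their HIP equivalents."""
    # Match longer names first so cudaMemcpyHostToDevice isn't
    # partially rewritten by the shorter cudaMemcpy rule.
    pattern = re.compile("|".join(sorted(CUDA_TO_HIP, key=len, reverse=True)))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

cuda_snippet = "cudaMalloc(&d_buf, n); cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);"
print(hipify(cuda_snippet))
# hipMalloc(&d_buf, n); hipMemcpy(d_buf, h_buf, n, hipMemcpyHostToDevice);
```

The near one-to-one API correspondence is what makes ROCm the most plausible off-ramp from CUDA, even if ecosystem maturity still lags.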
Platforms like ROCm require significant development to achieve parity with CUDA. Even now, ROCm officially supports only a small selection of modern GPUs, while CUDA maintains compatibility with hardware dating back to 2006. AMD's latest RDNA 4 GPUs are still not officially supported by ROCm, and developers have long bemoaned AMD's slow rollout of new features and support for new hardware. On the positive side, Strix Halo is now ROCm-compatible, though only on Windows.
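In practice, frameworks paper over some of this: PyTorch's ROCm builds reuse the familiar `torch.cuda` API, so existing scripts often run unchanged on supported AMD GPUs. A minimal sketch of telling the backends apart, assuming a machine that may or may not have PyTorch installed (the `ImportError` branch is there so the snippet runs anywhere):

```python
# Sketch: identify which GPU backend a PyTorch build targets.
# On ROCm builds, torch.version.hip is set and torch.cuda.* calls
# are routed to AMD hardware; on CUDA builds, torch.version.cuda is set.
try:
    import torch

    if torch.version.hip:
        backend = "ROCm"
    elif torch.version.cuda:
        backend = "CUDA"
    else:
        backend = "CPU-only"
    print(f"PyTorch backend: {backend}, GPU available: {torch.cuda.is_available()}")
except ImportError:
    print("PyTorch not installed")
```

Whether `torch.cuda.is_available()` actually returns True on an AMD card still depends on that GPU appearing on ROCm's official support list, which is exactly the gap the summit's speakers are likely to dwell on.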
If you live in San Jose, buckle up: the summit takes place at The Guildhouse, which, with notable irony, sits just three blocks from the McEnery Convention Center, the site of Nvidia's GTC, which also commences today. Participants have the opportunity to win an AMD Instinct MI210 GPU with 64GB of HBM2e memory. The event runs from 12 PM to 10 PM PDT, with four time slots for various sessions. You can learn more details about the summit here.