
Content provided by Dan Turchin. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dan Turchin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

364: Inside the AI Infrastructure Race: TensorWave CEO Darrick Horton on Power, GPUs and AMD vs NVIDIA

36:14
 
Manage episode 522142985 series 2986762

Darrick Horton is the CEO and co-founder of TensorWave, the company making waves in AI infrastructure by building high-performance compute on AMD chips. In 2023, he and his team took the unconventional path of bypassing Nvidia, a bold bet that has since paid off with nearly $150 million raised from Magnetar, AMD Ventures, Prosperity7, and others. TensorWave is now operating a dedicated training cluster of around 8,000 AMD Instinct MI325X GPUs and has already hit a $100 million revenue run rate.
Darrick is a serial entrepreneur with a track record of building infrastructure companies. Before TensorWave, he co-founded VMAccel, sold Lets Rolo to LifeKey, and co-founded the crypto mining company VaultMiner.
He began his career as a mechanical engineer and plasma physicist at Lockheed Martin’s Skunk Works, where he worked on nuclear fusion energy. He studied physics and mechanical engineering at Andrews University but left early to pursue entrepreneurship and hasn’t looked back since.
In this conversation we discussed:

  • Why Darrick chose AMD over Nvidia to build TensorWave’s AI infrastructure, and how that decision created a competitive advantage in a GPU-constrained market
  • What makes training clusters more versatile than inference clusters, and why TensorWave focused on the former to meet broader customer needs
  • How Neocloud providers like TensorWave can move faster and innovate more effectively than legacy hyperscalers in deploying next-generation AI infrastructure
  • Why power, not GPUs, is becoming the biggest constraint in scaling AI workloads, and how data center architecture must evolve to address it
  • Why Darrick predicts AI architectures will continue to evolve beyond transformers, creating constant shifts in compute demand
  • How massive increases in model complexity are accelerating the need for green energy, tighter feedback loops, and seamless integration of compute into AI workflows

Resources:


316 episodes
