The Road to Intel Panther Lake: Client AI

Panther Lake represents the next chapter for Intel AI PCs. In this episode of Talking Tech, we look back at the development of the hardware and software that enable AI workloads to run locally on client PCs. Learn how our XPU strategy combines the CPU with NPU and GPU accelerators to provide a diverse array of engines tuned for different types of AI workloads, how models derived from the data center are optimized for local inference, how we work closely with OS and software providers to enable compelling user experiences, and what OpenVINO brings to the table alongside Windows ML.