Intel Lunar Lake Running Local Small Language Model with RAG
Talking Tech is making the rounds at the Intel Tech Tour in Taiwan, and today we're showcasing Microsoft's Phi-3 small language model (SLM) running locally on a Lunar Lake-powered system, with no internet connection or cloud access required. In addition to showcasing Phi-3's efficiency and speed on Lunar Lake, the demo shows how retrieval-augmented generation (RAG) lets users supplement the model's knowledge with their own data, enabling hyper-specific answers drawn from trusted sources.
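The RAG flow described above can be sketched in a few lines: retrieve the passages most relevant to a question from a local document store, then prepend them to the prompt so the model answers from the user's own data. This is a minimal illustration, not the demo's actual pipeline; the keyword-overlap retriever and sample documents below are assumptions, and a real system (such as the Phi-3 demo) would use vector embeddings and pass the prompt to a locally running model.

```python
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped, for crude overlap scoring."""
    return set(re.findall(r"\w+", text.lower()))


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context first, question last."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )


# Illustrative local "knowledge base" the user supplies.
docs = [
    "Lunar Lake integrates a neural processing unit for on-device AI.",
    "Phi-3 is a small language model from Microsoft.",
    "RAG supplements a model's knowledge with retrieved documents.",
]

question = "What does RAG do for a model?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)  # this prompt would then be fed to the local SLM
```

Because the retrieved context travels inside the prompt, the model itself never needs retraining or a network connection: swapping in different local documents is enough to change what it can answer about.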