Conversely, Intel’s goal is to enable “AI everywhere” with a major focus on edge and device, especially PC AI (Intel Core Ultra). NVIDIA has been less focused on that area because data center and cloud GPUs remain a fast-expanding market, Hayden noted.
Smaller LLMs — an opening for Intel?
The LLMs behind genAI tools can consume vast amounts of compute and be costly to run. Smaller, industry- or task-specific models can often deliver results better tailored to business needs at lower cost, and many end-user organizations and vendors have signaled this is their future direction.
“Intel [is] very bullish around the opportunity of smaller LLMs and [is] looking to embed this within their ‘AI everywhere’ strategy. ABI Research agrees that enabling AI everywhere requires lower power consumption, less cost-intensive genAI models, coupled with power efficient, low TCO hardware,” Hayden said.