Meta and Qualcomm Collaborate to Implement High-Performance AI Models on Mobile Devices

Cristiano Amon, president and CEO of Qualcomm, spoke during the Milken Institute Global Conference on May 2, 2022, in Beverly Hills, California.

Patrick T. Fallon | AFP | Getty Images

Qualcomm and Meta announced Tuesday that they are collaborating to enable the social networking company's new large language model, Llama 2, to run on Qualcomm chips in phones and PCs starting in 2024.

Until now, large language models (LLMs) have run mainly in large server farms powered by Nvidia graphics processors, owing to their heavy compute and data requirements. That demand has lifted Nvidia's stock more than 220% this year. Meanwhile, the AI boom has largely bypassed companies like Qualcomm, which make cutting-edge processors for smartphones and PCs: Qualcomm's stock has risen about 10% in 2023, trailing the NASDAQ's 36% gain.

With the announcement made on Tuesday, Qualcomm aims to position its processors as highly suitable for AI applications “on the edge,” meaning on the device itself rather than in the cloud. If large language models can be run on phones rather than in expensive data centers, it could significantly reduce the cost of AI model computation and potentially lead to the development of more efficient and faster voice assistants and other applications.

Qualcomm plans to make Meta’s open-source Llama 2 models available on its devices, believing that this compatibility will enable the creation of intelligent virtual assistants and other innovative applications. While Meta’s Llama 2 can perform many of the same functions as ChatGPT, it can be compressed into a smaller program, allowing it to run on a phone.
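The "compression" mentioned above typically relies on techniques such as weight quantization, where a model's floating-point parameters are stored at lower precision to shrink memory and compute requirements. The sketch below illustrates the general idea with symmetric 8-bit quantization; it is a simplified illustration, not Meta's or Qualcomm's actual toolchain, and the weight values are made up for the example.

```python
# Minimal sketch of symmetric int8 weight quantization, the kind of
# size reduction that helps a large model fit on a phone.
# Illustrative only -- not Meta's or Qualcomm's actual pipeline.
import struct

def quantize_int8(weights):
    """Map float weights to int8 values plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # each value fits in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.87, 0.45, 1.02, -0.33]  # hypothetical weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# A float32 weight takes 4 bytes; an int8 weight takes 1 byte,
# so storage shrinks roughly 4x (plus one scale per tensor).
fp32_bytes = len(weights) * struct.calcsize("f")
int8_bytes = len(weights)
print(fp32_bytes, int8_bytes)  # 20 5
```

Real deployments combine quantization with other tricks (pruning, smaller model variants such as Llama 2's 7B size), but the principle is the same: trade a small amount of accuracy for a model that fits in a phone's memory.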

Qualcomm’s chips include a specialized “tensor processor unit” (TPU) well suited to the kinds of calculations AI models require. Still, the processing power available on a mobile device is far less than that of a data center equipped with state-of-the-art GPUs.

Meta’s Llama stands out because the company has made its “weights,” a set of numbers crucial for the functioning of a specific AI model, publicly available. This accessibility allows researchers and eventually commercial enterprises to use these AI models on their own computers without seeking permission or paying for it. In contrast, other notable LLMs such as OpenAI’s GPT-4 or Google’s Bard are closed-source, with their weights kept as closely guarded secrets.

Qualcomm has previously collaborated closely with Meta, particularly on chip development for Meta’s Quest virtual reality devices. Qualcomm has also showcased the performance of some AI models running on its chips, such as the open-source image generator Stable Diffusion.
