July 19, 2024

Euro Global Post- Latest News and Analysis | UK News | Business News


Microsoft, Qualcomm, and MediaTek Lead Push Towards AI-Specific Chips

Microsoft’s recent Ignite developers’ conference showcased dedicated AI chips, a trend echoed by Qualcomm and MediaTek in their upcoming smartphone chipsets. Departing from conventional CPUs, these AI-specific chips are meticulously crafted for on-device AI tasks, particularly executing large language models (LLMs). This shift signifies a broader industry evolution towards specialised processing units tailored for AI model execution.

Understanding AI Chips and Their Distinction from CPUs

AI chips form a distinct category of semiconductors designed to run complex AI models directly on a device. Adopting a ‘system-on-chip’ (SoC) configuration, they bundle several specialised functional blocks alongside the central processing unit (CPU), which handles general-purpose work.

How On-Device AI Works in Practice:

On-device AI relies on specialised processing units because conventional CPUs excel at serial processing, executing instructions largely one after another. A practical AI task, such as a smartphone camera recognising a dog, requires parallel computing: many small computations carried out simultaneously. GPUs can handle such workloads but were not designed specifically for AI, which is why dedicated AI chips are needed.
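The serial-versus-parallel distinction can be sketched in a few lines of Python. The dog-detector below is purely illustrative — a toy brightness check standing in for a real neural network — and the thread pool merely mimics parallel dispatch; the point is that the parallel version hands out every image tile at once rather than one at a time:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-tile check (a stand-in for a neural-network layer).
def looks_like_dog(tile):
    # Toy heuristic: a tile "matches" if its average brightness exceeds 0.5.
    return sum(tile) / len(tile) > 0.5

tiles = [[0.2, 0.4], [0.7, 0.9], [0.6, 0.6], [0.1, 0.3]]

# Serial processing: one tile at a time, as a single CPU core would.
serial = [looks_like_dog(t) for t in tiles]

# Parallel processing: all tiles dispatched at once, as parallel hardware allows.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(looks_like_dog, tiles))

print(serial == parallel)  # prints True: same answers, different scheduling
```

The results are identical either way; what a parallel design changes is how many of those per-tile computations can be in flight at the same moment.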

How AI chips differ from CPUs:

AI chips deliver more computations per unit of energy by packing in large numbers of smaller transistors, which switch faster and draw less power. Unlike general-purpose CPUs, they are optimised for parallel computing, executing many operations simultaneously. That parallelism makes them faster and more efficient than CPUs on AI workloads.

Types of AI Chips:

Different AI chips serve different purposes. GPUs are the workhorses of AI algorithm development and training, while field-programmable gate arrays (FPGAs), which can be reconfigured after manufacture, are typically used to apply trained algorithms to real-world data (inference). Application-specific integrated circuits (ASICs) are custom-built for a single workload, which may be either training or inference, trading flexibility for maximum efficiency.
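The training/inference split that separates these chip types can be illustrated with a deliberately tiny model. Everything here is hypothetical — a one-weight linear model fitted by gradient descent — with `train` standing in for the GPU-heavy development phase and `infer` for the deployment phase an FPGA or ASIC might accelerate:

```python
# Toy illustration of the two workloads AI chips divide between them.

def train(samples, lr=0.1, steps=200):
    """Fit weight w so that prediction w*x tracks y (the training workload)."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            w -= lr * (w * x - y) * x  # gradient step for squared error
    return w

def infer(w, x):
    """Apply the trained weight to new data (the inference workload)."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(data)          # expensive, iterative: done once, up front
print(round(infer(w, 5.0)))  # cheap, repeated: prints 10
```

Training is iterative and compute-hungry, which suits throughput-oriented hardware; inference is a single cheap pass repeated millions of times, which rewards chips tuned for low latency and low power.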

In summary, the push towards AI-specific chips by Microsoft, Qualcomm, and MediaTek marks a transformative shift in semiconductor design. Tailored for on-device AI, these chips deliver faster, more efficient processing for both training and inference.