At its third annual Intel Innovation event, the company presented a series of technologies intended to bring artificial intelligence (AI) everywhere and make it more accessible across all workloads, from the client and the edge to the network and the cloud.
“AI represents a generational shift, ushering in a new era of global expansion in which computing is even more critical to achieving a better future for everyone,” said Intel CEO Pat Gelsinger. “For developers, it offers enormous social and business opportunities to push the boundaries of what is possible, create solutions to the world’s biggest challenges and improve the lives of everyone on the planet.”
In the event’s opening keynote, aimed at developers, Gelsinger showed how Intel is incorporating AI capabilities into its hardware products and making them accessible through open, multi-architecture software solutions. He also highlighted how AI is helping to drive the “Siliconomy,” a “growing economy enabled by the magic of silicon and software.” Today, silicon fuels a $574 billion industry that, in turn, drives a global technology economy valued at almost $8 trillion.
New Advances in Silicon, Packaging and Multi-Chiplet Solutions
The path begins with silicon innovation. According to Gelsinger, Intel’s four-year, five-node process development program is progressing apace: Intel 7 is already in mass production, Intel 4 is ready for manufacturing, and Intel 3 is planned for later this year.
Gelsinger also showed off an Intel 20A wafer with the first test chips for Intel’s Arrow Lake processor, aimed at the client computing market in 2024. Intel 20A will be the first process node to include PowerVia, Intel’s backside power delivery technology, and the new gate-all-around transistor design called RibbonFET. Intel 18A, which also takes advantage of PowerVia and RibbonFET, is on track to be ready for manufacturing in the second half of 2024.
Another way Intel is pushing Moore’s Law is with new materials and new packaging technologies, such as glass substrates, a development Intel announced this week. When introduced later this decade, glass substrates will enable continued scaling of transistors in a package, helping to meet the needs of high-performance, data-intensive workloads such as AI, and will keep Moore’s Law on track well beyond 2030.
Intel also showed off a test chip package built with Universal Chiplet Interconnect Express (UCIe). The next wave of Moore’s Law will come with multi-chiplet packages, Gelsinger said, and it will arrive even sooner if open standards can reduce the friction of integrating IP. The UCIe standard, established last year, allows chiplets from different vendors to work together, enabling new designs for scaling diverse AI workloads. The open specification is supported by more than 120 companies.
The test chip combined an Intel UCIe IP chiplet built on Intel 3 and a Synopsys UCIe IP chiplet built on TSMC’s N3E process node. The chiplets are connected using advanced EMIB (embedded multi-die interconnect bridge) packaging technology. The demonstration highlights the commitment of TSMC, Synopsys and Intel Foundry Services to supporting an open, standards-based chiplet ecosystem with UCIe.
Increasing performance and extending AI everywhere
Additionally, Gelsinger highlighted the range of AI technology currently available to developers on Intel platforms and how it will increase dramatically over the next year.
The recent MLPerf AI inference performance results further reinforce Intel’s commitment to addressing every phase of the AI continuum, including the largest and most challenging generative AI and large language models. The results also highlight the Intel Gaudi2 accelerator as the only viable alternative on the market for AI computing needs. Gelsinger announced that a large AI supercomputer will be built entirely with Intel Xeon processors and 4,000 Intel Gaudi2 AI accelerators, with Stability AI as the primary customer.
Zhou Jingren, chief technology officer of Alibaba Cloud, explained how Alibaba applies 4th Gen Intel Xeon processors; Intel’s technology, he noted, delivers “notable improvements in response times, with an average speedup of 3x.”1
Intel also previewed the next generation of Intel Xeon processors, revealing that 5th Gen Intel® Xeon® processors will launch on December 14. Sierra Forest, built on efficient E-cores and arriving in the first half of 2024, will offer 2.5 times better rack density and 2.4 times higher performance per watt than 4th Gen Xeon, and will include a version with 288 cores2. Granite Rapids, built on performance P-cores, will closely follow the launch of Sierra Forest, offering 2 to 3 times better AI performance compared to 4th Gen Xeon2.
Looking ahead to 2025, the next-generation E-core Xeon, codenamed Clearwater Forest, will arrive on the Intel 18A process node.
Launching the AI PC with Intel Core Ultra processors
AI is also about to get more personal. “AI will fundamentally transform, reshape and restructure the PC experience, unleashing more personal productivity and creativity thanks to the power of the cloud and the PC working together,” said Gelsinger. “We are ushering in a new era of the PC with AI.”
This new PC experience arrives with the upcoming Intel Core Ultra processors, codenamed Meteor Lake, featuring Intel’s first integrated neural processing unit (NPU) for power-efficient AI acceleration and local inference on the PC. Gelsinger confirmed that Core Ultra processors will also launch on December 14.
Core Ultra represents a turning point in Intel’s client processor roadmap as it is the first client chip design enabled by Foveros packaging technology. In addition to the NPU and significant advances in power efficiency thanks to Intel 4 process technology, the new processor offers discrete-level graphics performance with integrated Intel Arc graphics.
At the presentation, Gelsinger showed off a number of new AI-enabled PC use cases, and Jerry Kao, Acer’s chief operating officer, offered a preview of an upcoming Core Ultra-equipped Acer laptop. “We have been co-developing a set of Acer AI applications with Intel’s teams to take advantage of the Intel Core Ultra platform,” Kao explained, “developing with the OpenVINO toolkit and co-developed AI libraries to bring the hardware to life.”
The Siliconomy for developers
According to Gelsinger, “in the future, AI will need to offer more access, scalability, visibility, transparency and trust to the entire ecosystem.” To help developers unlock this future, Intel has announced:
Intel Developer Cloud general availability: Intel Developer Cloud helps developers accelerate AI using the latest Intel hardware and software innovations – including Intel Gaudi2 processors for deep learning – and provides access to the latest Intel hardware platforms, such as 5th Gen Intel® Xeon® Scalable processors and Intel® Data Center GPU Max Series 1100 and 1550.
Using Intel Developer Cloud, developers can build, test, and optimize AI and HPC applications. They can also run small- to large-scale AI training, model optimization and inference workloads that deploy with performance and efficiency. Intel Developer Cloud is built on an open software foundation with oneAPI – an open, multi-architecture, multi-vendor programming model – to deliver hardware choice and freedom from proprietary programming models, supporting accelerated computing and code reuse and portability.
Intel Distribution of OpenVINO toolkit, version 2023.1: OpenVINO is Intel’s AI deployment and inference runtime of choice for developers on client and edge platforms. The release includes pre-trained models optimized for integration across operating systems and cloud solutions, including many generative AI models, such as Meta’s Llama 2 model.
At the event, companies like ai.io and Fit:match demonstrated how they use OpenVINO to accelerate their applications: ai.io to evaluate the performance of any potential athlete; Fit:match to revolutionize the retail and wellness industries to help consumers find the best-fitting garments.
Project Strata and the development of an edge-native software platform: The platform will launch in 2024 with modular building blocks, premium services and support offerings. It is a horizontal approach to scaling the infrastructure needed for the intelligent edge and hybrid AI, and will bring together an ecosystem of vertical applications from Intel and third parties. The solution will enable developers to build, deploy, run, manage, connect and secure distributed edge infrastructure and applications.