NVIDIA founder and CEO Jensen Huang said Tuesday that chip manufacturing is the “perfect application” for NVIDIA’s accelerated computing and artificial intelligence.
Detailing how recent advances in computing are accelerating “the world’s most important industry,” Huang spoke at the ITF World 2023 Semiconductor Conference in Antwerp, Belgium.
Huang delivered his remarks via video to a gathering of leaders from across the semiconductor, technology and communications industries.
“I am very excited to see how NVIDIA’s accelerated computing and artificial intelligence are used in the global chip manufacturing industry,” said Huang, detailing how advances in accelerated computing, artificial intelligence and semiconductor manufacturing intersect.
AI, Accelerated Computing Step Up
Exponential increases in CPU performance have been the driving force behind the technology industry for nearly four decades, Huang said.
But CPU design has matured in recent years, he said. The rate at which semiconductors become more powerful and efficient is slowing, even as demand for computing power soars.
“As a result, global demand for cloud computing is driving data center energy consumption to skyrocket,” Huang said.
The pursuit of net zero while maintaining the “invaluable benefits” of more computing power requires a new approach, Huang said.
This challenge is natural for NVIDIA, which pioneered accelerated computing by combining the parallel processing capabilities of GPUs with CPUs.
This acceleration, in turn, sparked the AI revolution. A decade ago, deep learning researchers such as Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton discovered that GPUs could serve as cost-effective supercomputers.
Since then, NVIDIA has reimagined its computing stack for deep learning, opening up “multi-trillion opportunities in robotics, autonomous vehicles and manufacturing,” Huang said.
By offloading and accelerating computationally intensive algorithms, NVIDIA routinely speeds up applications by a factor of 10-100 while reducing power and cost by orders of magnitude, Huang explained.
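One way to see how offloading a workload's compute-intensive portion produces such gains is Amdahl's law. The sketch below is illustrative only: the offloadable fraction and the per-kernel speedup are assumed values, not NVIDIA figures.

```python
# Illustrative sketch of Amdahl's law: the overall speedup when a
# fraction p of an application's runtime is accelerated by a factor s.
# The numbers below are hypothetical, not NVIDIA benchmarks.

def amdahl_speedup(p, s):
    """Overall speedup when fraction p of runtime is accelerated s times."""
    return 1.0 / ((1.0 - p) + p / s)

# If 99% of the runtime can be offloaded and that portion runs 100x
# faster on a GPU, the application as a whole speeds up roughly 50x.
print(round(amdahl_speedup(0.99, 100), 1))  # → 50.3
```

The remaining serial 1% caps the overall gain, which is why accelerated computing reworks the full stack rather than a single kernel.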
Together, artificial intelligence and accelerated computing are transforming the technology industry. “We are experiencing two simultaneous transitions between platforms — accelerated computing and generative artificial intelligence,” Huang said.
AI, Accelerated Computing Come to Chip Manufacturing
Huang explained that advanced chip manufacturing requires more than 1,000 steps to create features the size of a biomolecule. Every step must be nearly perfect to yield a functional result.
“Sophisticated computational science is performed at every step to compute the features to be patterned and to detect defects for in-line process control,” Huang said. “Chip manufacturing is an ideal application for NVIDIA’s accelerated computing and artificial intelligence.”
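The “near perfect” requirement follows from simple arithmetic: per-step yields compound multiplicatively across the ~1,000 steps. The per-step yield figures below are hypothetical, chosen only to show the effect.

```python
# Back-of-envelope illustration of why each of the ~1,000 manufacturing
# steps must be near perfect: per-step yields compound multiplicatively.
# The per-step yield values here are hypothetical.

def cumulative_yield(per_step_yield, steps=1000):
    """Fraction of dies still functional after all steps."""
    return per_step_yield ** steps

# A 99.9% per-step yield leaves only ~37% of dies functional after
# 1,000 steps; 99.99% per step recovers ~90%.
print(round(cumulative_yield(0.999), 3))   # → 0.368
print(round(cumulative_yield(0.9999), 3))  # → 0.905
```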
Huang gave several examples of how NVIDIA GPUs are becoming an increasingly integral part of chip manufacturing.
Companies such as D2S, IMS Nanofabrication and NuFlare build mask writers, machines that create photomasks (the stencils that transfer patterns onto wafers) using electron beams. NVIDIA GPUs accelerate the computationally demanding tasks of pattern rendering and mask process correction for these mask writers.
Semiconductor manufacturer TSMC and equipment suppliers KLA and Lasertec use extreme ultraviolet light, known as EUV, and deep ultraviolet light, or DUV, to inspect masks. Here, too, NVIDIA GPUs play a crucial role, processing classical physics modeling and deep learning to generate synthetic reference images and detect defects.
KLA, Applied Materials and Hitachi High-Tech use NVIDIA GPUs in their e-beam and optical wafer inspection and review systems.
And in March, NVIDIA announced that it is working with TSMC, ASML and Synopsys to accelerate computational lithography.
Computational lithography models Maxwell’s equations to simulate the behavior of light as it passes through optics and interacts with photoresists, Huang explained.
Computational lithography is the largest computational burden in chip design and manufacturing, consuming tens of billions of processor hours per year. Large data centers work 24/7 creating reticles for new chips.
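Production tools solve Maxwell’s equations rigorously, but the core idea can be conveyed with a heavily simplified scalar Fourier-optics sketch: the projection optics act as a low-pass filter on the mask pattern, blurring fine features on the wafer, which is why patterns must be pre-corrected. All dimensions, feature positions and the cutoff frequency below are made up for illustration.

```python
import numpy as np

# Heavily simplified 1D scalar-optics sketch of what computational
# lithography simulates: the projection optics low-pass-filter the mask
# pattern, so fine features blur on the wafer and must be pre-corrected.
# Real tools solve Maxwell's equations rigorously; this is illustrative.

n = 256
mask = np.zeros(n)
mask[100:110] = 1.0   # one line feature on the photomask
mask[130:140] = 1.0   # a neighboring line

# Model the optics as an ideal low-pass filter: spatial frequencies
# beyond a cutoff (set in reality by wavelength and numerical aperture)
# are lost. The cutoff value here is arbitrary.
spectrum = np.fft.fft(mask)
freqs = np.fft.fftfreq(n)
cutoff = 0.05
spectrum[np.abs(freqs) > cutoff] = 0.0

# Aerial image intensity on the wafer: magnitude squared of the field.
aerial = np.abs(np.fft.ifft(spectrum)) ** 2

print("peak aerial intensity:", round(float(aerial.max()), 3))
```

The blurred aerial image is what optical proximity correction and inverse lithography work backward from; doing that at full-chip scale is what consumes the processor hours Huang describes.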
NVIDIA cuLitho, announced in March, is a software library with optimized tools and algorithms for GPU-accelerated computational lithography.
“We’ve already sped up processing by a factor of 50,” Huang said. “Tens of thousands of CPU servers can be replaced by a few hundred NVIDIA DGX systems, reducing power and cost by orders of magnitude.”
The savings would reduce carbon emissions or allow new algorithms to go beyond 2 nanometers, Huang said.
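The consolidation implied by those figures can be checked with simple arithmetic. The concrete fleet sizes below are hypothetical stand-ins for “tens of thousands” of CPU servers and “a few hundred” DGX systems; only those rough magnitudes come from the talk.

```python
# Sanity-checking the consolidation claim with simple arithmetic.
# The fleet sizes are hypothetical placeholders for "tens of thousands"
# of CPU servers and "a few hundred" DGX systems.

cpu_servers = 40_000   # hypothetical "tens of thousands"
dgx_systems = 400      # hypothetical "a few hundred"

# Each DGX system would then be doing the work of ~100 CPU servers.
replacement_ratio = cpu_servers / dgx_systems
print(replacement_ratio)  # → 100.0
```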
The Next Wave of AI
Huang described a new kind of AI, “embodied AI”: intelligent systems that can understand, reason about and interact with the physical world.
Examples include robotics, autonomous vehicles and even chatbots that are smarter because they understand the physical world, he said.
Huang gave his audience a look at NVIDIA VIMA, a multimodal embodied AI. According to Huang, VIMA can perform tasks from visual text prompts, such as “rearranging objects to match this scene.”
It can learn concepts and act accordingly, like “This is a widget,” “This is a thing,” and then “Place this widget in that thing.” It can also learn from demonstrations and stay within defined boundaries, Huang said.
VIMA is powered by NVIDIA AI, and its digital twin runs in NVIDIA Omniverse, a 3D design and simulation platform. Huang said that physics-based AI can learn to emulate physics and make predictions that obey physical laws.
Researchers are creating systems that combine information from the real and virtual worlds on a massive scale.
NVIDIA is building a digital twin of our planet, called Earth-2, which will first predict the weather, then long-range weather and eventually climate. The NVIDIA Earth-2 team created FourCastNet, a physics-AI model that emulates global weather patterns 50,000-100,000x faster.
FourCastNet is powered by NVIDIA AI, and the Earth-2 digital twin is built in NVIDIA Omniverse.
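Models like FourCastNet are built from Fourier-operator-style layers that mix information globally in frequency space rather than through local convolutions. The sketch below is a toy, hypothetical version of one such spectral mixing step, not FourCastNet’s actual architecture; all shapes and the random weights are made up.

```python
import numpy as np

# Toy sketch of a Fourier-operator-style spectral layer, the kind of
# building block behind physics-AI weather models like FourCastNet.
# Illustrative only: shapes and weights are made up.

rng = np.random.default_rng(0)

def spectral_layer(field, weights, modes=8):
    """Mix a 1D field globally by scaling its lowest Fourier modes.

    `weights` must hold one complex weight per retained mode.
    """
    spectrum = np.fft.rfft(field)
    out = np.zeros_like(spectrum)
    out[:modes] = spectrum[:modes] * weights  # learned per-mode weights
    return np.fft.irfft(out, n=field.size)

field = rng.standard_normal(64)  # e.g. one ring of a gridded weather field
weights = rng.standard_normal(8) + 1j * rng.standard_normal(8)
updated = spectral_layer(field, weights)
print(updated.shape)  # → (64,)
```

Because one such layer couples every grid point to every other in a single step, inference is dramatically cheaper than stepping a numerical weather model forward, which is the source of the speedups Huang cites.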
Such systems promise to help tackle the greatest challenges of our time, such as the need for cheap, clean energy.
For example, researchers at the UK Atomic Energy Authority and the University of Manchester are creating a digital twin of their fusion reactor, using physics-AI to simulate plasma physics and robotics to control the reactions and sustain the burning plasma.
Huang said scientists can explore hypotheses by testing them in the digital twin before activating the physical reactor, optimizing for energy output, predictive maintenance and reduced downtime. “The reactor plasma physics AI is powered by NVIDIA AI, and its digital twin runs in NVIDIA Omniverse,” Huang said.
Such systems also promise further advances for the semiconductor industry. “I look forward to seeing physics-AI, robotics and Omniverse digital twins help drive the future of chip manufacturing,” Huang said.