The marriage of AI and 3D graphics is pushing the boundaries of visual computing, and engineering could change forever.
If you haven’t been closely following the work of chipmaker NVIDIA, you could be forgiven for labeling it a computer graphics company. After all, NVIDIA got its start designing graphics processing units (GPUs), and to this day it’s still a prominent provider of GPUs for both gamers and visualization professionals.
But in the last few years, NVIDIA has evolved beyond its graphics roots. The company’s website describes it as an “accelerated computing company,” but there’s a more direct way to pin down its new focus.
“NVIDIA is now first and foremost an AI company,” Carl Flygare, NVIDIA professional visualization marketing manager for PNY Technologies, told engineering.com.
That change in focus has not dampened NVIDIA’s enthusiasm for graphics; rather, it’s helping graphics evolve. NVIDIA and other hardware and software developers are eagerly exploring opportunities to improve computer graphics with artificial intelligence (AI). Welcome to the world of neural graphics.
“Neural graphics is going to fundamentally raise the bar across all industries,” Flygare says. “Engineers need to be aware of it.”
Engineering.com spoke with Flygare to learn why neural graphics is transformative and what it can do for engineers and others.
What is neural graphics?
There isn’t one concrete definition of neural graphics, nor does it have a distinct boundary from either AI or computer graphics. The term broadly refers to the fusion of the two, and signals the direction of both graphics hardware and the algorithms that run on it.
“Neural graphics is at the very edge of what’s possible to do with both AI—aka deep learning—and graphics,” Flygare says. “The goal is to enhance and streamline various aspects of the computer graphics process—or entire pipeline—and push the boundaries of what is possible with visual computing.”
Those boundaries, it seems, can be pushed quite far. For example, NVIDIA researchers have used AI to learn how light reflects off certain materials in order to deliver more realistic 3D visuals. They’ve also developed a neural texture compression technique that provides significantly more texture detail without requiring any extra graphics memory. Yet another AI-based innovation cuts the memory needed for volumetric data by a factor of 100. Those are just three of the roughly 20 research papers NVIDIA presented this summer at the SIGGRAPH 2023 computer graphics conference.
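To make the texture-compression idea concrete, here is a minimal sketch (in PyTorch) of the general principle behind coordinate-network compression: a small neural network is overfit to a single texture, so the network’s weights, rather than the raw pixels, become the stored asset. This illustrates the underlying idea only; NVIDIA’s actual neural texture compression method and its hardware decoding path are more sophisticated.

```python
# Conceptual sketch: a tiny coordinate MLP that "memorizes" one texture.
# The trained weights (~5K parameters here) stand in for the raw pixel
# grid (~786K values for a 512x512 RGB texture).
import torch
import torch.nn as nn

class TextureMLP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, uv):  # uv: (N, 2) texture coordinates in [0, 1]
        return self.net(uv)

def fit_texture(texture: torch.Tensor, steps: int = 2000) -> TextureMLP:
    """Overfit the MLP to one (H, W, 3) texture; the weights become the 'file'."""
    h, w, _ = texture.shape
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, h), torch.linspace(0, 1, w), indexing="ij"
    )
    uv = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    rgb = texture.reshape(-1, 3)
    model = TextureMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(uv), rgb)
        loss.backward()
        opt.step()
    return model  # at render time, sample any (u, v) batch: model(uv_batch)
```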
NVIDIA isn’t the only company looking to push the boundaries of visual computing. Intel, a seasoned chipmaker but a relative newcomer to the discrete graphics market, also recently presented research on how AI can improve visual quality through better real-time path tracing.
More realistic rendering is one of the most promising benefits of neural graphics, Flygare says, as better visuals help cross the “uncanny valley,” in which something on screen looks almost real but subtly, unnervingly off. For many use cases, such as architectural visualization or certain simulations, it’s important to have graphics that are convincingly photoreal.
“Neural graphics contributes significantly to that goal by fully simulating natural phenomena, particularly the physical behavior of light,” Flygare says.
How neural graphics can help engineers
Neural graphics doesn’t just contribute to better, more realistic graphics—it can also make graphics more efficient, which translates to faster workflows for anyone who relies heavily on visualization.
“The efficiency gains offered by neural graphics have a lot of far-reaching implications,” Flygare says, such as “accelerating the design process for architects and engineers.” He adds that neural graphics technology can help industries across the board save time and resources while doing better, more creative design work.
To illustrate the potential, Flygare gives the example of computational fluid dynamics: “You may be doing extremely sophisticated simulations and a very minor change in, say, the cross section of an airfoil can make a huge difference. Or a design change to a flap system for that wing could make a very significant difference in drag or lift characteristics.” The ability to run those simulations more efficiently, and with greater visual fidelity, would allow an engineer to test more designs and have more confidence in the solution.
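One common way neural networks deliver that kind of speedup is as surrogate models: a network trained on a modest set of full CFD runs learns to approximate the solver’s outputs, after which evaluating thousands of candidate designs is a single forward pass. The sketch below is illustrative only; the design parameters (camber, thickness, angle of attack) and the precomputed table of solver results are assumptions, not any particular product or workflow.

```python
# Hedged sketch of a neural surrogate for CFD. Assumes you already have
# solver results mapping design parameters -> (lift, drag) coefficients.
import torch
import torch.nn as nn

surrogate = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),  # predicts [C_lift, C_drag]
)

def train_surrogate(designs: torch.Tensor, coeffs: torch.Tensor, epochs=500):
    """designs: (N, 3) [camber, thickness, alpha]; coeffs: (N, 2) from CFD runs."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(designs), coeffs)
        loss.backward()
        opt.step()

# Once trained, sweeping thousands of candidate airfoils is one forward
# pass instead of thousands of solver runs:
candidates = torch.rand(10_000, 3)           # hypothetical design samples
with torch.no_grad():
    predictions = surrogate(candidates)      # (10000, 2) lift/drag estimates
best = candidates[torch.argmin(predictions[:, 1])]  # lowest predicted drag
                                                    # (ignoring lift, for brevity)
```

The trade-off is up-front cost: the surrogate is only as trustworthy as the solver runs it was trained on, so promising candidates still get verified with the full simulation.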
Better and more efficient graphics unlock other opportunities, as well. Augmented and virtual reality (AR and VR) experiences rely on having good and responsive visuals, and neural graphics can help drive these technologies—which are already proving their value to engineers—even further. Neural graphics can also advance how engineers visualize data, in AR/VR or otherwise.
“Neural graphics really has the potential to transform how we visualize and then interact with complex data,” Flygare says. “A neural graphics assist can let data scientists or people doing data analytics view the data in new ways that give them insights.”
Taking advantage of neural graphics
Neural graphics is one of many examples of artificial intelligence impacting the engineering profession. In all cases, we’re still in the early days of the technology. It’s hard to predict exactly how AI will evolve—and how engineering will evolve with it.
But for engineers eager to find out, the best move is to ensure you’re prepared for the changes to come. In the case of neural graphics, that means ensuring you have access to capable graphics hardware.
“Engineers should be investing in graphics cards that have a powerful AI component,” Flygare says. “If they work for an organization that wants to use virtual GPUs, they should make sure that they have access to NVIDIA vWS [virtual workstation], a virtual GPU client. Those will be increasingly used by the graphics programs engineers take for granted in their toolkit, from CAD to BIM.”
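For engineers taking stock of their current hardware, a quick capability check is straightforward. The snippet below, assuming a PyTorch installation with CUDA support, reports the local GPU’s compute capability; Tensor Cores, the hardware behind most of NVIDIA’s AI acceleration, first appeared at compute capability 7.0 (Volta).

```python
# Check (via PyTorch with CUDA) whether the local GPU exposes the AI
# features that neural graphics workloads lean on.
import torch

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    has_tensor_cores = major >= 7  # Tensor Cores arrived with Volta (7.0)
    print(f"{name}: compute capability {major}.{minor}, "
          f"Tensor Cores: {'yes' if has_tensor_cores else 'no'}")
else:
    print("No CUDA-capable GPU detected.")
```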
Someday soon, engineers may be taking neural graphics for granted as well—and efficient, interactive and photorealistic visuals will be a common sight.