April 6, 2023

Shuang Zhao Receives NSF CAREER Award

Shuang Zhao received a Faculty Early Career Development (CAREER) award, a prestigious National Science Foundation (NSF) award for early-career faculty serving as role models in research and education and leading advances in their field.

As an assistant professor of computer science in the Donald Bren School of Information and Computer Sciences (ICS), Zhao is developing numerical algorithms with applications in many areas, including computer vision, computational imaging, robotics, and virtual/augmented reality. In particular, this five-year, $600,000 NSF award supports his Physics-Based Differentiable and Inverse Rendering project, which involves developing new computational tools to infer physical parameters from images.

“One traditional direction in computer graphics and computational photonics is to simulate light. Given an environment, the simulation solves for the distribution of light — how it would get reflected, refracted and scattered,” explains Zhao. “There’s an inverse version of this problem, which we argue is more important: We are given the distribution of light, which can be measured using one or multiple photos or X-ray/infrared images, and what we want to know is the environment. Can we reconstruct the environment that resulted in the measured images?”
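The forward/inverse distinction Zhao describes can be sketched in miniature. The toy below (a hypothetical illustration, not Zhao's actual algorithms) uses a one-parameter forward model — a surface albedo scaled by a known light intensity — and recovers the albedo from a "measured" pixel by gradient descent on the pixel-wise error, which is the basic shape of an inverse rendering problem:

```python
# Toy forward vs. inverse rendering (hypothetical example).
# Forward: scene parameter (albedo) -> measured pixel intensity.
# Inverse: recover the albedo from the measurement by gradient descent.

LIGHT = 2.0  # assumed known illumination strength

def render(albedo):
    """Forward model: Lambertian-style shading under fixed lighting."""
    return albedo * LIGHT

def recover_albedo(measured, steps=200, lr=0.05):
    """Inverse rendering: fit albedo so render(albedo) matches the image."""
    albedo = 0.5  # initial guess
    for _ in range(steps):
        residual = render(albedo) - measured  # prediction error
        grad = 2.0 * residual * LIGHT         # d/d(albedo) of residual**2
        albedo -= lr * grad                   # gradient step
    return albedo

true_albedo = 0.8
measured = render(true_albedo)       # "take a photo" of the scene
estimate = recover_albedo(measured)  # reconstruct the environment
```

Real scenes have millions of coupled parameters and a far more complex light-transport operator, but the loop — render, compare to the measurement, follow the gradient — is the same.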

Application Areas
One application area of this work is content creation for 3D environments. “Yes, you can hire an artist, working for days and weeks to create a couple of models, but that’s not very scalable,” he says. “To create a rich environment, you need hundreds, if not thousands, of different objects.”

He hopes to automate such work with software that can take a photo (or individual frames of a short video clip) as input to create digital assets. “Instead of manually doing 3D modeling, you could just use a cell phone and go around an object and then get a digital twin of it.”

An example of automated creation of digital twins (from left to right): capturing the object (1) with a group of photos (2) allows for the reconstruction of a digital twin (3), which can then be re-rendered with different lighting (4).

Some of his students exemplified this potential in a demo they created during their internship at Meta. The demo appears in the Meta Connect Keynote 2022, with Mark Zuckerberg talking about using inverse rendering to create virtual objects for the metaverse.

Inverse rendering was used to scan these objects and create digital twins for the Metaverse. (See the demo as presented during the Meta Connect Keynote 2022.)

This also has potential applications for medical imaging. “Broadly speaking, if you take a CT scan, what’s going on is basically inverse light transport. So we are trying to reconstruct what’s happening in that environment. Is there a tumor?” While Zhao notes that he’s not diagnosing cancer with this work, he stresses that this exemplifies the wide variety of potential applications. “That’s the scope of the work, which we believe is going to be very useful, because taking images and trying to reason about the environment is a very fundamental task.”

Another example he provides is computer vision, whether that’s in robotics or a self-driving car. “If a robot is looking around, what it gets are images,” he says, “so the task boils down to gaining an understanding of the environment. Is it safe to go that direction? These are all related to taking an image as input and then trying to reason about it and ultimately make decisions about where to go.”

Going a step further, he says another interesting direction is non-line-of-sight imaging. “The high-level explanation is to look around the corner,” he says. “Can we reason about something that’s not directly visible based on the limited information that we have? That’s also an inverse transport problem.”

Building the Foundation
The project involves three main components. The first step is to develop the fundamental math and algorithms for analyzing and generalizing how “infinitesimal changes in a virtual scene affect the distribution of light,” supporting new technology.
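This first component is, in essence, differentiable rendering: computing how a small perturbation of a scene parameter changes the rendered image. As a minimal sketch (a hypothetical toy, not the project's formulation), the snippet below differentiates a Phong-style specular term with respect to its shininess exponent and checks the analytic derivative against a finite-difference estimate:

```python
# Differentiable rendering in miniature (hypothetical toy example):
# the derivative of a rendered pixel with respect to a scene parameter.
# Physics-based differentiable rendering computes such derivatives
# analytically; finite differences serve here as a numerical check.

import math

def shade(exponent, cos_angle=0.9):
    """Toy specular term: cos(angle) raised to a shininess exponent."""
    return cos_angle ** exponent

def analytic_derivative(exponent, cos_angle=0.9):
    """d/d(exponent) of cos_angle**exponent = cos_angle**exponent * ln(cos_angle)."""
    return (cos_angle ** exponent) * math.log(cos_angle)

def finite_difference(exponent, eps=1e-6, cos_angle=0.9):
    """Central-difference estimate of the same derivative."""
    return (shade(exponent + eps, cos_angle)
            - shade(exponent - eps, cos_angle)) / (2 * eps)

diff = abs(analytic_derivative(32.0) - finite_difference(32.0))
```

For a full scene, these per-parameter derivatives must account for discontinuities from occlusion and be propagated through the entire light-transport simulation, which is where the project's new math and algorithms come in.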

The second step is to then develop software systems that can efficiently leverage that technology. “We need computer software that is fast, because one obstacle we are still facing is performance,” says Zhao. “For interactive applications, performance is even more crucial. We don’t want a robot to stand there, thinking for an hour before it makes its next move.”

The final step is to test some early applications — those that are more forgiving of error, such as digitizing objects. “If you make a small error, it’s okay. The digital twin will just look slightly different from the real object,” says Zhao. “With medical applications, accuracy is critical.” Exploring more scientific and medical applications, while beyond the scope of this specific project, is the longer-term goal.

Shani Murray