As a car travels along a narrow city street, reflections off the glossy paint or side mirrors of parked vehicles can help the driver glimpse things that would otherwise be hidden from view, like a child playing on the sidewalk behind the parked cars.

Drawing on this idea, researchers from MIT and Rice University have created a computer vision technique that leverages reflections to image the world. Their method uses reflections to turn glossy objects into “cameras,” enabling a user to see the world as if they were looking through the “lenses” of everyday objects like a ceramic coffee mug or a metallic paper weight.

Using images of an object taken from different angles, the technique converts the surface of that object into a virtual sensor which captures reflections. The AI system maps these reflections in a way that enables it to estimate depth in the scene and capture novel views that would only be visible from the object’s perspective. One could use this technique to see around corners or beyond objects that block the observer’s view.

This method could be especially useful in autonomous vehicles. For instance, it could enable a self-driving car to use reflections from objects it passes, like lamp posts or buildings, to see around a parked truck.

“We have shown that any surface can be converted into a sensor with this formulation that converts objects into virtual pixels and virtual sensors. This can be applied in many different areas,” says Kushagra Tiwary, a graduate student in the Camera Culture Group at the Media Lab and co-lead author of a paper on this research.

Tiwary is joined on the paper by co-lead author Akshat Dave, a graduate student at Rice University; Nikhil Behari, an MIT research support associate; Tzofi Klinghoffer, an MIT graduate student; Ashok Veeraraghavan, professor of electrical and computer engineering at Rice University; and senior author Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

The heroes in crime television shows often “zoom and enhance” surveillance footage to capture reflections - perhaps those caught in a suspect’s sunglasses - that help them solve a crime.

“In real life, exploiting these reflections is not as easy as just pushing an enhance button. Getting useful information out of these reflections is pretty hard because reflections give us a distorted view of the world,” says Dave.

This distortion depends on the shape of the object and the world that object is reflecting, both of which researchers may have incomplete information about. In addition, the glossy object may have its own color and texture that mixes with reflections. Plus, reflections are two-dimensional projections of a three-dimensional world, which makes it hard to judge depth in reflected scenes.

The researchers found a way to overcome these challenges. Their technique, known as ORCa (which stands for Objects as Radiance-Field Cameras), works in three steps. First, they take pictures of an object from many vantage points, capturing multiple reflections on the glossy object. Then, for each image from the real camera, ORCa uses machine learning to convert the surface of the object into a virtual sensor that captures light and reflections that strike each virtual pixel on the object’s surface. Finally, the system uses virtual pixels on the object’s surface to model the 3D environment from the point of view of the object.
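The full ORCa system models curved glossy surfaces as radiance fields, but the geometric intuition behind a “virtual pixel” can be sketched with basic vector math: the direction a surface point “sees” is the camera ray reflected about the surface normal at that point. The sketch below is illustrative only; the function name and the example vectors are assumptions for this example, not drawn from the ORCa codebase.

```python
import numpy as np

def reflect(d, n):
    """Reflect an incoming ray direction d about a surface normal n.

    Both inputs are normalized first; the standard mirror-reflection
    formula r = d - 2(d.n)n then gives the outgoing direction.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A camera ray striking a glossy surface point turns that point into a
# "virtual pixel": the reflected direction tells us which part of the
# surrounding scene that pixel observes.
view_dir = np.array([0.0, 0.0, -1.0])  # ray from camera toward the surface
normal = np.array([0.0, 1.0, 1.0])     # surface normal at the hit point
print(reflect(view_dir, normal))       # direction the virtual pixel looks
```

Repeating this for every surface point seen in every input photo is what lets the method treat the whole object surface as one large, oddly shaped sensor.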