Revolutionizing Light Control: Caltech’s Mind-Bending 3D-Printed Optical Devices

3D Printing New Nanoscale Optical Devices: Researchers at Caltech are evolving optical devices through algorithms, creating three-dimensional nanostructures with advanced light-manipulation capabilities. Credit: Caltech

Caltech’s new optical devices, evolved by algorithms and crafted via precise 3D printing, offer advanced light manipulation for applications like augmented reality and cameras.

Researchers at Caltech have developed a groundbreaking technology that “evolves” optical devices and fabricates them using a specialized 3D printer. These devices, composed of optical metamaterials, gain their unique properties from nanometer-scale structures. This innovation could enable cameras and sensors to detect and manipulate light in ways previously impossible at such small scales.

The research was conducted in the lab of Andrei Faraon, the William L. Valentine Professor of Applied Physics and Electrical Engineering, and was published in the journal Nature Communications.

Breaking Into Three Dimensions

While Faraon has worked with optical metamaterials before, this marks the first time the materials have been adapted into fully three-dimensional structures.

“Generally, most of these things are done in a thin layer of material. You take a very thin piece of silicon or some other material and you process that to get your device,” Faraon explains. “However, [the field of] optics lives in a three-dimensional space. What we are trying to investigate here is what is possible if we make three-dimensional structures smaller than the wavelength of light that we are trying to control.”

Sorting Light by Wavelength and Polarization

As a demonstration of the new design technique, Faraon’s lab has created tiny devices that can sort incoming light, in this case infrared, by both wavelength and polarization, a property that describes the direction in which the light waves vibrate.

Though devices that can separate light in this way already exist, the devices made in Faraon’s lab could be adapted to work with visible light and made small enough to be placed directly over the sensor of a camera, directing red light to one pixel, green light to another, and blue light to a third. The same could be done for polarized light, creating a camera that can detect the orientation of surfaces, a useful ability for the creation of augmented and virtual reality spaces.

Unexpected Organic Designs

A glance at these devices reveals something rather unexpected. Whereas most optical devices are smooth and highly polished like a lens or prism, the devices developed by Faraon’s lab look organic and chaotic, more like the inside of a termite mound than something you would see in an optics lab. This is because the devices are evolved by an algorithm that continually tweaks their design until they perform in the desired way, similar to how breeding might create a dog that is good at herding sheep, says Gregory Roberts, graduate student in applied physics and lead author of the paper.

Optimization Algorithms and “Evolved” Designs

“The design software at its core is an iterative process,” Roberts says. “It has a choice at every step in the optimization for how to modify the device. After it makes one small change, it figures out how to make another small change, and, by the end, we end up with this funky-looking structure that has a high performance in the target function that we set out in the beginning.”

Faraon adds: “We actually do not have a rational understanding of these designs, in the sense that these are designs that are produced via an optimization algorithm. So, you get these shapes that perform a certain function. For example, if you want to focus light to a point—so basically what a lens does—and you run our simulation for that function, you most likely will get something that looks very similar to a lens. However, the functions that we are targeting—splitting wavelengths in a certain pattern—are quite complicated. That’s why the shapes that come out are not quite intuitive.”
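The iterative design loop Roberts describes can be illustrated with a toy sketch. This is not the team's actual electromagnetic solver; it is a hypothetical stand-in in which a "device" is just a list of material densities, the figure of merit is an arbitrary made-up target pattern, and the optimizer greedily keeps any small change that improves performance, just as the real algorithm keeps tweaks that improve the optical response.

```python
# Toy illustration of iterative inverse design (not the authors' method):
# a "device" is a list of material densities between 0 (empty) and 1 (resin).
# The figure of merit is a hypothetical stand-in for a real optics simulation.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary desired pattern

def figure_of_merit(device):
    """Higher is better: count positions where the rounded density matches."""
    return sum(1 for d, t in zip(device, TARGET) if round(d) == t)

# Start from a featureless structure and refine it one small change at a time.
device = [0.5] * len(TARGET)

for sweep in range(5):
    for i in range(len(device)):
        for delta in (+0.2, -0.2):
            trial = device[:]
            trial[i] = min(1.0, max(0.0, trial[i] + delta))
            # Keep the tweak only if it strictly improves the figure of merit.
            if figure_of_merit(trial) > figure_of_merit(device):
                device = trial

print(figure_of_merit(device) == len(TARGET))  # True: the design converged
```

The resulting density pattern, like the printed devices, is not something a designer would have drawn by hand; it simply emerges from many small accepted changes. The real algorithm works the same way in spirit, but each candidate change is scored by simulating how the full 3D structure scatters light.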

From Model to Physical Device With TPP Lithography

To turn these designs from a model on a computer into physical devices, the researchers made use of a type of 3D printing known as two-photon polymerization (TPP) lithography, which selectively hardens a liquid resin with a laser. It’s not unlike some of the 3D printers used by hobbyists, except it hardens resin with greater precision, allowing structures with features smaller than a micron to be built.

Faraon says that the work is a proof of concept but that, with a bit more research, the devices could be produced using a practical manufacturing technique.

Reference: “3D-patterned inverse-designed mid-infrared metaoptics” by Gregory Roberts, Conner Ballew, Tianzhe Zheng, Juan C. Garcia, Sarah Camayd-Muñoz, Philip W. C. Hon and Andrei Faraon, 13 May 2023, Nature Communications.
DOI: 10.1038/s41467-023-38258-2

Additional co-authors are Conner Ballew, formerly of Caltech and now with JPL, which Caltech manages for NASA; Tianzhe Zheng, graduate student in applied physics; Sarah Camayd-Muñoz, formerly of Caltech and now with Johns Hopkins University; and Juan C. Garcia and Philip W. C. Hon of Northrop Grumman.

Funding for the research was provided by the Defense Advanced Research Projects Agency (DARPA), the Rothenberg Innovation Initiative, the Clinard Innovation Fund at Caltech, and the Army Research Office.
