
Recently, researchers at Princeton proposed “Neural Nano-Optics”, the world’s first high-quality micron-scale optical imaging device, and published it in Nature Communications. In many scenes, its imaging quality is comparable to that of a camera lens 500,000 times its size!

Look: a blue flower slowly blooming, its layered petals rolling like ocean waves.

Another purple flower is just as beautiful at the moment it opens.

But the really striking thing about these images is not the flowers, but the camera lens that photographed them.

Held in the hand, that lens looks like this.

A tiny square chip whose side is only about as wide as a few fingerprints!

And the comparison gets even more dramatic.

Consider a traditional optical lens, such as the rear camera of Apple’s iPhone X, which looks like this.

Looking closer, a single rear camera actually contains several stacked lens elements.

Optical assemblies of this size are inevitably heavy and take up a great deal of space.

So how does this small square chip compare with a conventional camera equipped with a bulky compound refractive lens?

The result is amazing!

In close-up shots, the small chip reproduces objects no worse than a whole stack of lenses does, and its images are even brighter.

In wide-angle shots, the thin chip ultimately fell short of the large traditional camera, but it still recovered the outlines of the buildings.

The small chip is called “Neural Nano-Optics”; its technical name is metasurface optics.

See the white circle in the middle? Yes, that is the imaging device.

It’s only the size of a grain of coarse salt!

In terms of imaging results, the researchers say that “Neural Nano-Optics” is comparable in many scenarios to the Edmund Optics 50 mm F2.0 lens that is 500,000 times its size (their words, not mine: “on par with”).

Just how capable is this tiny camera?

In recent decades, the miniaturization of photosensitive elements has enabled cameras to be used in a wide range of applications, including medical imaging, smartphones, robotics, and autonomous driving.

However, if optical imagers can be an order of magnitude smaller than they are today, they could open up many new applications in nanorobotics, in vivo imaging, augmented reality/virtual reality, and health monitoring.

Recently, researchers at Princeton proposed “Neural Nano-Optics”, the world’s first high-quality ultra-compact optical imaging device, and published it in Nature Communications.

The camera offers full-color coverage (400 to 700 nm), a 40-degree field of view, and an F2.0 aperture.

Paper address: https://light.cs.princeton.edu/wp-content/uploads/2021/11/NeuralNanoOptics.pdf

Project address: https://github.com/princeton-computational-imaging/Neural_Nano-Optics

“Neural Nano-Optics” outperforms all existing state-of-the-art metasurface lens designs and is the world’s first metasurface optical imager to achieve high-quality, wide-field color imaging.

A metasurface is an ultra-thin artificial structure, thinner than the wavelength of light, that can flexibly and effectively control the polarization, amplitude, phase, and propagation mode of electromagnetic waves.

Typically, conventional optics are physically large.

This is because conventional lenses work by bending light waves: as a light wave passes through the lens, it is refracted at different angles by different parts of the lens.

Typically, engineers stack multiple individual lenses on top of one another (a compound lens) to direct and control light in a specific way.

A typical convex (converging) lens bends light waves to converge at the focal point (Source: Mini Physics)
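As a rough, self-contained illustration of the refraction these lenses rely on (a minimal sketch of textbook optics, not anything from the paper), here are Snell’s law and the thin-lens lensmaker’s equation in Python:

```python
import math

def snell_refraction_angle(theta_incident_deg, n1=1.0, n2=1.5):
    """Refraction angle (degrees) at an n1 -> n2 interface, from Snell's law:
    n1 * sin(theta_1) = n2 * sin(theta_2)."""
    theta_i = math.radians(theta_incident_deg)
    return math.degrees(math.asin(n1 * math.sin(theta_i) / n2))

def thin_lens_focal_length_mm(n=1.5, r1_mm=50.0, r2_mm=-50.0):
    """Focal length from the lensmaker's equation for a thin lens in air:
    1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

print(snell_refraction_angle(30.0))   # ~19.47 deg: light bends toward the normal in glass
print(thin_lens_focal_length_mm())    # 50.0 mm for a symmetric biconvex lens
```

A compound lens chains many such refractions, each element correcting the aberrations of the previous one, which is exactly where the bulk comes from.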

Since a traditional imaging system must consist of a series of refractive elements that correct one another’s aberrations, these bulky lenses inevitably constrain the camera.

Another big obstacle for traditional imaging is that the focal length is hard to reduce: shrinking it leads to greater chromatic aberration.

Metasurface optics instead interact with light through their subwavelength nanostructures, achieving the same effect with a single thin, flat layer.

The problem of shrinking the lens is solved, but what about the sensor?

In fact, submicron-pixel optical sensors have long existed, but their imaging performance is limited by classical optics.

Therefore, simply making the sensor smaller does not solve the problem: limited by their tiny apertures, existing submicron-pixel sensors achieve image quality far inferior to that of large optical cameras.
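A back-of-the-envelope calculation shows the limit (this is standard diffraction theory, not a figure from the paper): at a given f-number, the diffraction-limited spot is far larger than a submicron pixel, so extra pixels add no detail.

```python
def airy_spot_diameter_um(wavelength_nm, f_number):
    """Diameter of the Airy disk out to its first zero: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# At F2.0 and 550 nm (green light) the diffraction-limited spot is ~2.7 um
# across -- several pixels wide on a submicron-pixel sensor, so shrinking
# the pixels alone cannot recover more detail.
print(airy_spot_diameter_um(550, 2.0))  # ~2.68
```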

So how did the Princeton researchers solve this difficulty?

The answer has to be AI!

In this work, the authors’ Neural Nano-Optics looks on the surface like just a tiny imaging device, but behind it sits a fully differentiable deep learning framework: an image reconstruction algorithm based on neural features is combined with learning of the metasurface’s physical structure, achieving a reconstruction error an order of magnitude lower than the state of the art.

Metasurface surrogate model

The authors learn by using an efficient differentiable proxy function that maps metasurface phase values to spatially varying PSFs (PSF is short for point spread function).

The proposed differentiable metasurface image-formation model (Figure e of the paper) consists of three sequential stages of differentiable tensor operations: metasurface phase determination, PSF simulation and convolution, and sensor noise.
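Here is a minimal NumPy sketch of the last two stages, assuming standard Fraunhofer (Fourier-optics) propagation; the function names and the Gaussian noise model are my own simplifications of the paper’s differentiable tensor pipeline:

```python
import numpy as np

def simulate_psf(phase, aperture):
    """PSF simulation stage: under Fraunhofer propagation, the PSF is the
    squared magnitude of the Fourier transform of the complex aperture
    field A * exp(i * phi)."""
    field = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()  # normalize to unit energy

def sensor_measurement(scene, psf, noise_sigma=0.01, seed=0):
    """Convolution + sensor-noise stage: blur the scene with the PSF via FFT
    (circular convolution) and add Gaussian read noise."""
    rng = np.random.default_rng(seed)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    return blurred + rng.normal(0.0, noise_sigma, scene.shape)
```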

In their model, the polynomial coefficients that determine the metasurface phase are the variables that can be optimized.

The optimizable metasurface phase function φ, with the distance r from the optical axis as its independent variable, is an even polynomial in the normalized radius:

φ(r) = a₀ + a₁(r/R)² + a₂(r/R)⁴ + … + aₙ(r/R)²ⁿ

where {a₀, …, aₙ} are the coefficients to be optimized, R is the radius of the phase mask, and n is the number of polynomial terms.

Optimizing the metasurface through this phase function, rather than pixel by pixel, is intended to avoid local minima.
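A small sketch of that phase polynomial in Python (the coefficient values below are made up purely for illustration; only the functional form follows the description above):

```python
import numpy as np

def metasurface_phase(r, coeffs, R):
    """Phase profile phi(r) as an even polynomial in the normalized radius:
    phi(r) = sum_i a_i * (r / R)**(2 * i), with optimizable coefficients a_i."""
    rho = r / R
    return sum(a * rho ** (2 * i) for i, a in enumerate(coeffs))

# Example with made-up coefficients; the paper's aperture is 500 um across,
# i.e. a phase-mask radius R of 250 um.
R = 250e-6
coeffs = [0.0, -50.0, 3.0, -0.2]  # {a_0, ..., a_n} with n = 3
r = np.linspace(0.0, R, 5)        # radial sample points
print(metasurface_phase(r, coeffs, R))
```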

Compared with alternative forward simulation methods such as finite-difference time-domain (FDTD) simulation, this differentiable method achieves comparable accuracy while being about 3,000 times faster, and it is far more memory-efficient than such full-wave simulations.

Beyond this, the differentiable Neural Nano-Optics framework has several further technical highlights.

Feature-based deconvolution

To recover images from the measured data, the authors propose a feature-based neural deconvolution method that incorporates learned priors yet generalizes to unseen test data.

Specifically, the method combines a differentiable inverse filter with neural networks for feature extraction and refinement. It can learn effective features and exploit knowledge of the power spectrum to strengthen physics-based deconvolution, thereby improving generalization.

Formally, the feature-propagation deconvolution network extracts feature maps from the sensor measurement, deconvolves each of them against the simulated PSF with a differentiable inverse filter, and refines the result into the final image.
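A minimal sketch of that inverse-filter step, assuming a standard Wiener filter as the differentiable deconvolution (the learned encoder and refinement decoder of the actual method are only hinted at in the closing comment):

```python
import numpy as np

def wiener_deconvolve(channel, psf, snr=100.0):
    """Differentiable inverse filtering of one feature channel in the Fourier
    domain: X_hat = conj(H) / (|H|^2 + 1/SNR) * Y."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=channel.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * G))

# In the full method, a learned encoder first maps the sensor image to
# feature maps, each channel is deconvolved as above with the simulated
# PSF, and a decoder network refines the result into the final image.
```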

Judging from the results, the reconstruction ability of the feature-propagation deconvolution network is a qualitative leap over previous methods.

End-to-end learning

With the metasurface surrogate and the neural deconvolution model forming a fully differentiable imaging pipeline, the nano-camera can be designed end to end.

The learning method of the differentiable Neural Nano-Optics framework and the corresponding optimization process

Neural Nano-Optics’ end-to-end training and optimization process looks like this.

Training optimization diagram
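A toy end-to-end loop in PyTorch (my own sketch, not the authors’ code, which lives in the repository linked above) showing the key idea: the reconstruction loss backpropagates through the differentiable imaging model into both the network weights and the optics parameters.

```python
import torch
import torch.nn.functional as F

N = 32                                             # toy resolution
phase_coeffs = torch.zeros(4, requires_grad=True)  # learnable optics parameters
recon_net = torch.nn.Conv2d(1, 1, 3, padding=1)    # stand-in for the deconvolution network
opt = torch.optim.Adam([phase_coeffs, *recon_net.parameters()], lr=1e-2)

yy, xx = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")
r = torch.sqrt(xx**2 + yy**2)
aperture = (r <= 1.0).float()

for step in range(100):
    scene = torch.rand(1, 1, N, N)                 # stand-in training image
    # differentiable imaging model: phase polynomial -> PSF -> blur + noise
    phase = sum(a * r ** (2 * i) for i, a in enumerate(phase_coeffs))
    field = aperture * torch.exp(1j * phase)
    psf = torch.fft.fft2(field).abs() ** 2
    psf = (psf / psf.sum()).reshape(1, 1, N, N)
    blurred = torch.fft.ifft2(torch.fft.fft2(scene) * torch.fft.fft2(psf)).real
    measurement = blurred + 0.01 * torch.randn_like(blurred)
    # reconstruction and joint update of optics + network
    loss = F.mse_loss(recon_net(measurement), scene)
    opt.zero_grad()
    loss.backward()
    opt.step()
```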

After training, the Neural Nano-Optics system produces high-quality full-color images.

Compared to existing state-of-the-art designs, Neural Nano-Optics can generate high-quality wide-FOV reconstruction images

Chromatic aberration cancellation (DLAC)

No previous meta-optical lens could simultaneously combine a large aperture, a wide field of view, a small f-number, and a large bandwidth.

The optimized meta-optical design takes this ultra-compact camera to an unprecedented level: it captures full-color images over a wide field of view, and its aperture of up to 500 microns, the largest of any meta-optical lens to date, increases the amount of light collected.

To quantify the design specifications formally, the researchers proposed a new metric called diffractive lens chromatic aberration cancellation (DLAC).

Neural Nano-Optics once again achieved an excellent result, ranking first with a DLAC of 250.

Neural Nano-Optics applications

The advent of Neural Nano-Optics has the potential to revolutionize cameras, displays, and other optical devices.

Some exciting potential applications include:

AR/VR/MR—XR system developers are still grappling with the challenge of integrating large hardware systems into headsets. Neural Nano-Optics offers hope of integrating tiny optics into small, high-performance, lightweight headsets and smart glasses.

Medical—Neural Nano-Optics’ enhanced optics enable more accurate diagnostic imaging than ever before, and can be built into higher-resolution imaging tools such as endoscopes and new microscopes, enabling radiologists, physicians, and lab technicians to see details that were previously invisible.

Resources:

https://light.princeton.edu/publication/neural-nano-optics/

http://www.sim.cas.cn/xwzx2016/kjqy2016/202109/t20210901_6179216.html

https://www.nature.com/articles/s41467-021-26443-0

https://www.radiantvisionsystems.com/zh-hans/blog/going-meta-how-metalenses-are-reshaping-future-optics