Bulletin of the American Physical Society
APS March Meeting 2021
Volume 66, Number 1
Monday–Friday, March 15–19, 2021; Virtual; Time Zone: Central Daylight Time, USA
Session S61: Deep Learning and Computer Vision (Focus Session; Live)
Sponsoring Unit: GDS
Chairs: Mohammad Soltanieh-Ha, Boston University; Jie Ren, Merck & Co.
Thursday, March 18, 2021 11:30AM - 12:06PM Live
S61.00001: Can (Almost) Unsupervised Artificial Intelligence Learn Chemistry and Physics from Microscopic Observations? Invited Speaker: Sergei Kalinin Machine learning has emerged as a powerful tool for the analysis of mesoscopic and atomically resolved […]
Thursday, March 18, 2021 12:06PM - 12:18PM Live
S61.00002: RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior Hong-Ye Hu, Dian Wu, Yizhuang You, Bruno Olshausen, Yubei Chen Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key ideas of the renormalization group (RG) and a sparse prior distribution to design a hierarchical flow-based generative model, called RG-Flow, which separates information at different scales of an image, with disentangled representations at each scale. We demonstrate our method mainly on the CelebA dataset and show that the disentangled representations at different scales enable semantic manipulation and style mixing of the images. To visualize the latent representation, we introduce receptive fields for flow-based models and find that the receptive fields learned by RG-Flow are similar to those of convolutional neural networks. In addition, we replace the widely adopted Gaussian prior distribution with sparse prior distributions to further enhance the disentanglement of the representations. From a theoretical perspective, the proposed method has O(log L) complexity for image inpainting, compared to previous flow-based models with O(L^2) complexity.
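The O(log L) inpainting claim follows from the hierarchical latent structure: an L × L image decomposes into roughly log2(L) scales, so editing a local region only touches the latents along one branch of the hierarchy. A toy RG-style coarse-graining (illustrative only, not the authors' code; `rg_decompose` is an invented name) shows the level count:

```python
import numpy as np

def rg_decompose(img):
    """Toy RG-style hierarchical decomposition: at each scale, keep the
    2x2 block means as the coarse image and store the residuals as
    fine-scale 'latent' detail.  An L x L image yields log2(L) levels."""
    levels = []
    while img.shape[0] > 1:
        h, w = img.shape
        blocks = img.reshape(h // 2, 2, w // 2, 2)
        coarse = blocks.mean(axis=(1, 3))                # coarse-grained image
        detail = img - np.kron(coarse, np.ones((2, 2)))  # fine-scale residual
        levels.append(detail)
        img = coarse
    levels.append(img)  # final 1x1 "global" latent
    return levels

x = np.random.rand(16, 16)
print(len(rg_decompose(x)))  # log2(16) + 1 = 5 hierarchical levels
```

Inpainting a small patch then requires updating only the detail latents whose receptive fields overlap the patch, one set per level, rather than the full O(L^2) latent volume.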
Thursday, March 18, 2021 12:18PM - 12:30PM Live
S61.00003: High throughput detection and quantification of Giardia lamblia cysts using holographic imaging flow-cytometry and deep learning Zoltan Gorocs, David Baum, Fang Song, Kevin de Haan, Hatice Ceylan Koydemir, Yunzhe Qiu, Zilin Cai, Thamira Skandakumar, Spencer Peterman, Miu Tamamitsu, Aydogan Ozcan Annually >200 million people contract giardiasis, a diarrheal illness caused by Giardia lamblia, a microscopic waterborne parasite. To provide a cost-effective water screening tool, we created a field-portable holographic imaging flow-cytometer that can acquire in-focus phase and amplitude images of microscopic objects in water samples with a half-pitch resolution of <2 µm and a liquid throughput of 100 mL/h. This computational imaging cytometer is controlled by a laptop, which is used to segment and reconstruct all the microscopic objects within the flow and can in real time identify and count Giardia lamblia cysts using a trained convolutional neural network, achieving a detection limit of <10 cysts per 50 mL. This unique device is cost-effective, compact (19 × 19 × 16 cm), lightweight (1.6 kg), and entirely label-free, making it highly suitable for testing of drinking water supplies or for monitoring the integrity of filters in water treatment systems.
Thursday, March 18, 2021 12:30PM - 12:42PM Live
S61.00004: Machine learning the Biot-Savart law from quantum sensor data Mark Ku, Matthew J Turner, Danyal Bhutto, Bo Zhu, Matthew Rosen, Ronald L Walsworth We use a supervised neural network to reconstruct current distributions from magnetic field maps provided by a quantum diamond microscope (QDM). The neural network employs a U-Net architecture. We train the network with more than 10^4 simulated and real training data sets consisting of QDM magnetic images of 2D patterns of current-carrying wires. We find that the trained network can reproduce with high fidelity a heretofore unseen current distribution from the associated QDM magnetic image, thereby learning the Biot-Savart law. We anticipate that this Q4ML technology (quantum data for machine learning) will have wide-ranging applications, including the study of hydrodynamic electron flow in graphene, activity within integrated circuits, and electrical activity in biological systems.
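Simulated training data of the kind described (magnetic images computed from 2D wire patterns) can be generated with a discretized Biot-Savart sum. The sketch below is a minimal forward model under assumed conventions (SI units, wires in the z = 0 plane); `bz_map` and its parameters are illustrative, not taken from the authors:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def bz_map(p0, p1, current, obs_xy, height, n=2000):
    """Out-of-plane field Bz at `height` above one straight wire segment
    lying in the z=0 plane, via a discretized Biot-Savart sum."""
    ts = np.linspace(0.0, 1.0, n)
    pts = p0 + np.outer(ts, p1 - p0)            # points along the wire
    dl = np.diff(pts, axis=0)                   # current elements (dx, dy)
    mid = 0.5 * (pts[:-1] + pts[1:])            # element midpoints
    rx = obs_xy[:, None, 0] - mid[None, :, 0]   # observation minus source
    ry = obs_xy[:, None, 1] - mid[None, :, 1]
    r3 = (rx**2 + ry**2 + height**2) ** 1.5
    cross_z = dl[None, :, 0] * ry - dl[None, :, 1] * rx  # (dl x r)_z
    return MU0 * current / (4 * np.pi) * (cross_z / r3).sum(axis=1)

# 1 A along x; 0.1 m from a long wire the field should approach
# mu0*I/(2*pi*d) ~ 2e-6 T, a useful sanity check for the forward model
obs = np.array([[0.0, 0.1]])
print(bz_map(np.array([-5.0, 0.0]), np.array([5.0, 0.0]), 1.0, obs, 1e-6))
```

Field maps generated this way from random wire patterns, paired with the patterns themselves, form (input, target) pairs for supervised training of the inversion network.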
Thursday, March 18, 2021 12:42PM - 12:54PM Live
S61.00005: Trainable Diffractive Surfaces for Spectral Encoding of Spatial Information Jingxi Li, Deniz Mengu, Nezih Tolga Yardimci, Yi Luo, Xurong Li, Muhammed Veli, Yair Rivenson, Mona Jarrahi, Aydogan Ozcan We demonstrate a deep-learning based single-pixel optical machine vision framework, where multiple diffractive surfaces are used to transform and encode the spatial information of objects into the power spectrum of the diffracted light. Specifically, by predetermining a set of wavelengths, each representing a data class, we trained diffractive surfaces to maximize the power of the diffracted wavelength corresponding to the correct data class, performing all-optical object classification through a single-pixel detector. Using a plasmonic nanoantenna-based spectroscopic detector and 3D-printed diffractive layers, we experimentally validated this design by successfully classifying handwritten digits using snapshot broadband illumination. Further, we combined this all-optical spectral encoding scheme with a separately trained shallow artificial neural network to improve the inference accuracy through a feedback between the optical and electronic networks. The same electronic network was also used to reconstruct the images of the input objects solely based on the power of target spectral components, demonstrating the success of our framework as a resource-efficient, data-specific machine vision platform.
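At inference time, the readout rule the abstract describes reduces to an argmax over the power measured at the predetermined class wavelengths. A minimal sketch (the function name and bin assignments are illustrative assumptions, not the authors' code):

```python
import numpy as np

def classify_from_spectrum(power_spectrum, class_bins):
    """Single-pixel spectral readout sketch: each class is assigned one
    wavelength bin in advance; the predicted class is the one whose bin
    carries the most diffracted power."""
    return int(np.argmax([power_spectrum[b] for b in class_bins]))

# toy spectrum with most power in the bin assigned to class 2
spectrum = np.zeros(32)
spectrum[11] = 5.0
print(classify_from_spectrum(spectrum, class_bins=[3, 7, 11]))  # 2
```

All of the class discrimination happens optically in the diffractive layers; the electronic side needs only this trivial peak-picking (or, as in the abstract, a shallow network refining it).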
Thursday, March 18, 2021 12:54PM - 1:06PM Live
S61.00006: Ensemble learning enhances the inference accuracy of diffractive deep neural networks Md Sadman Sakib Rahman, Jingxi Li, Deniz Mengu, Yair Rivenson, Aydogan Ozcan Diffractive deep neural networks (D2NNs) form an optical computing framework that utilizes deep learning-based optimization to design diffractive surfaces that collectively execute a desired optical mapping or statistical inference between an input and output plane. Here, we demonstrate the use of ensemble learning and feature engineering to significantly improve the inference performance of diffractive optical systems for object recognition. We trained an initial collection of 1252 unique D2NNs, where miscellaneous object plane and Fourier plane filters were utilized to engineer and diversify the spatial and spectral features of the input object wavefront. Then, to reduce the size and complexity of the final D2NN ensemble, we designed a pruning algorithm based on iterative elimination of D2NNs according to optimized weights assigned to them. This algorithm resulted in diffractive ensembles of 14 and 30 D2NNs, achieving blind testing accuracies of >61% and >62%, respectively, on the CIFAR-10 dataset, which constitute the highest inference accuracies achieved to date by any diffractive optical system on the same dataset.
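The pruning step can be illustrated with a generic backward-elimination loop: fit linear weights to the members' validation predictions, drop the member with the smallest absolute weight, and repeat until the target ensemble size is reached. This is a stand-in sketch, not the authors' algorithm; `prune_ensemble` and its least-squares weighting are assumptions:

```python
import numpy as np

def prune_ensemble(preds, labels, target_size):
    """Iteratively shrink an ensemble: fit least-squares weights for the
    members' validation predictions (rows of `preds`), drop the member
    with the smallest absolute weight, and repeat."""
    members = list(range(preds.shape[0]))
    while len(members) > target_size:
        P = preds[members]                                  # (m, n_samples)
        w, *_ = np.linalg.lstsq(P.T, labels, rcond=None)
        members.pop(int(np.argmin(np.abs(w))))              # weakest member out
    # refit final weights for the surviving members
    w, *_ = np.linalg.lstsq(preds[members].T, labels, rcond=None)
    return members, w
```

With class scores in place of the scalar `labels`, the same loop applies per class; the point is that the weight fit and the elimination alternate, so each round re-evaluates the remaining members jointly.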
Thursday, March 18, 2021 1:06PM - 1:18PM Live
S61.00007: Increased Computation Speed of Neural Network-Aided Computer Vision Via Coded Diffraction of Off-Axis Optical Vortices Altai Perry, Baurzhan Muminov, Luat T. Vuong
Thursday, March 18, 2021 1:18PM - 1:30PM Live
S61.00008: Misalignment Insensitive Diffractive Optical Networks Deniz Mengu, Yifan Zhao, Nezih Tolga Yardimci, Yair Rivenson, Mona Jarrahi, Aydogan Ozcan Diffractive Deep Neural Networks (D2NNs) utilize deep learning-designed diffractive surfaces to compute a desired statistical inference task through diffraction of light between an input and output field-of-view. The multi-layer architecture of diffractive networks has been shown to improve the optical signal contrast and the capacity of generalization to unseen data, achieving e.g., >98% blind inference accuracy for hand-written digit classification. On the other hand, the use of multiple diffractive surfaces poses fabrication and alignment challenges for the physical implementation of these optical machine learning platforms. Here, we demonstrate a new training scheme that formulates the layer-to-layer misalignments and fabrication artefacts as continuous random variables embedded into the forward training model, enabling accurate optical inference over a large range of physical misalignments. Extending this training strategy to differential diffractive networks and hybrid (optical-electronic) networks further enhances the resilience of these diffractive systems against misalignments and fabrication tolerances.
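The training scheme can be caricatured as randomizing the layer positions inside the forward model, so the learned phase masks must tolerate the whole misalignment range. A toy phase-only layer with a random lateral shift (the function name and the np.roll shift model are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def misaligned_forward(field, phase_mask, max_shift=2):
    """Toy forward pass with a randomized misalignment: at every training
    step the diffractive layer is displaced by a random lateral shift, so
    the learned mask must work over the whole misalignment range."""
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(phase_mask, (dy, dx), axis=(0, 1))   # lateral misalignment
    return field * np.exp(1j * shifted)                    # apply phase-only layer
```

Because the shift is resampled each step, gradients average over the misalignment distribution, which is what makes the converged design insensitive to a fixed physical offset at deployment.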
Thursday, March 18, 2021 1:30PM - 1:42PM Live
S61.00009: Terahertz Pulse Engineering Using Diffractive Optical Networks Muhammed Veli, Deniz Mengu, Nezih Tolga Yardimci, Yi Luo, Jingxi Li, Yair Rivenson, Mona Jarrahi, Aydogan Ozcan Deep learning is driving a new transformation in optics by providing non-intuitive solutions to a diverse set of problems. As a newly established deep-learning based physical design strategy, diffractive optical neural networks bridge deep learning and wave optics to all-optically implement different tasks, including e.g., image classification. Here, we demonstrate a diffractive optical network with a small footprint that is trained to shape input pulses into various desired optical waveforms. The synthesis of different output pulses of interest was demonstrated in the THz part of the electromagnetic spectrum using deep learning-designed passive diffractive layers that are engineered to precisely control the amplitude and phase of each spectral component across a broad range of frequencies. These results constitute the first direct pulse shaping demonstration in the terahertz spectrum without using any optical pump or optical-to-terahertz converters. Moreover, a Lego-like physical transfer learning technique was utilized to demonstrate the modularity of this framework, achieving pulse width tunability. A wide range of applications in e.g., telecommunications, spectroscopy, and ultra-fast imaging can benefit from this learning-based diffractive pulse engineering framework.
Thursday, March 18, 2021 1:42PM - 1:54PM On Demand
S61.00010: Early detection and classification of live bacteria using holography and deep learning Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu, Bijie Bai, Yibo Zhang, Yiyin Jin, Sabiha Tok, Enis Cagatay Yilmaz, Esin Gumustekin, Yair Rivenson, Aydogan Ozcan Early identification of pathogenic bacteria in large-volume and complex samples such as drinking water and bodily fluids is a major challenge. Traditional methods used to detect the viability of bacteria are based on plate counting or molecular analysis, and suffer from disadvantages in terms of detection time, cost, and limited portability for use in field settings. Here we present a live bacteria detection system that captures time-lapse holographic images of a 60 mm-diameter agar plate, followed by differential image analysis and deep neural network-based processing for specific and sensitive detection of bacterial growth and classification of the growing species. We demonstrated the performance of our computational imaging system using water samples spiked with Escherichia coli and total coliform bacteria, and achieved >12 h time savings compared to the EPA-approved methods. Our system is label-free and is able to automatically detect ~1 colony-forming unit (CFU)/L in less than 9 h of total test time, including sample preparation, pre-incubation of the samples, and automated image processing and colony counting. This label-free and high-throughput platform is cost-effective and field-portable, making it especially suitable for use in resource-limited settings.
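The differential-imaging front end the abstract describes can be sketched as frame subtraction, thresholding, and blob counting; the flood-fill counter below is a simple stand-in for the paper's neural-network detection and classification stage (all names and thresholds are illustrative):

```python
import numpy as np

def count_growing_colonies(frame_t0, frame_t1, threshold=0.2):
    """Differential detection sketch: subtract consecutive time-lapse
    frames, threshold the change map, and count 4-connected blobs."""
    diff = np.abs(frame_t1.astype(float) - frame_t0.astype(float))
    mask = diff > threshold
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]            # flood-fill one blob
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < h and 0 <= b < w and mask[a, b] and not seen[a, b]:
                        seen[a, b] = True
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return count
```

The key idea is that a static agar plate cancels out in the frame difference, so only growing colonies survive the threshold; the deep network in the paper then classifies each surviving blob by species.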