Deep learning and self-design for dynamic 3D imaging
Technion researchers have developed an innovative microscopy method based on deep learning and self-design of the optical system, enabling the study of dynamic 3D processes at super-resolution. The ability to study such dynamics in whole cells over these time scales was rarely possible until now.
Researchers at Technion – Israel Institute of Technology present a breakthrough in 3D super-resolution microscopy of cells in the journal Nature Methods. The innovative system significantly shortens 3D image acquisition time by using a neural network and deep learning. The researchers experimentally demonstrated the system's efficiency in 3D mapping of mitochondria (the cell's energy producers) and in volumetric imaging of fluorescently labeled telomeres (regions at the ends of chromosomes, which are responsible, among other things, for cell division in the body) in live cells.
The research was carried out by Asst. Prof. Yoav Shechtman and Ph.D. student Elias Nehme, of the Faculty of Biomedical Engineering together with Asst. Prof. Tomer Michaeli of the Viterbi Faculty of Electrical Engineering.
A major challenge in biology today is the super-resolution mapping of dynamic biological processes in living cells, that is, mapping at a resolution roughly ten times finer than that of a standard optical microscope.
Microscopes, as a rule, produce two-dimensional images, and because the world is three-dimensional, information is inherently missing from them. Currently, 3D images are usually obtained through layer scanning: imaging different layers of the sample and computationally combining them into a 3D image. This process is problematic because it requires a long scanning time, during which the object being examined must remain static. In addition, in classical optical microscopy, the level of resolution (the ability to distinguish nearby features) is limited by the “diffraction limit” formulated by German physicist Ernst Karl Abbe in 1873.
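Abbe's limit can be written as d = λ / (2·NA), where λ is the wavelength of the light and NA is the numerical aperture of the objective. A quick illustrative calculation (the wavelength and NA below are typical values for fluorescence microscopy, not figures from this study):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable lateral distance,
    d = wavelength / (2 * NA), returned in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Typical example: green fluorescence (520 nm), high-NA oil objective (NA = 1.4)
d = abbe_limit_nm(520.0, 1.4)
print(f"Diffraction limit: {d:.0f} nm")  # ~186 nm
```

Super-resolution methods such as the one described here circumvent this limit, reaching roughly an order of magnitude finer detail.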
Enter DeepSTORM3D – a super-resolution 3D mapping system developed by the researchers. According to Asst. Prof. Yoav Shechtman, who led the development of DeepSTORM3D, “To get depth information from a 2D image we use wavefront shaping – an optical method that encodes the depth of each molecule in the image obtained on the camera. The problem with this method is that if several molecules are close by, their images overlap on the camera, and this drastically impairs spatial and temporal resolution, to the point that some samples cannot produce useful images at all.”
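DeepSTORM3D shapes the wavefront with a phase mask so that each molecule's image on the camera (its point spread function, or PSF) changes shape with depth. As a simplified stand-in for that idea (not the mask actually used in the paper), the toy model below uses astigmatism, a classic 3D localization trick in which a spot's ellipticity encodes its depth; all parameters here are purely illustrative:

```python
import numpy as np

def astigmatic_psf(z_nm, grid=21, pixel_nm=100.0):
    """Toy astigmatic PSF: a Gaussian spot whose x and y widths change
    in opposite directions with emitter depth z, so a single 2D image
    encodes the molecule's depth in the shape of its spot."""
    # Illustrative width model (not the calibration of any real system)
    sigma_x = 150.0 + 0.3 * z_nm   # nm; widens above focus
    sigma_y = 150.0 - 0.3 * z_nm   # nm; narrows above focus
    ax = (np.arange(grid) - grid // 2) * pixel_nm
    X, Y = np.meshgrid(ax, ax)
    psf = np.exp(-(X**2 / (2 * sigma_x**2) + Y**2 / (2 * sigma_y**2)))
    return psf / psf.sum()

# A molecule above the focal plane produces a spot elongated along x,
# one below it a spot elongated along y -- depth is read from the shape.
above = astigmatic_psf(+300.0)
below = astigmatic_psf(-300.0)
```

The overlap problem Shechtman describes follows directly from this picture: when two nearby molecules' depth-encoded spots land on the same camera pixels, their shapes mix and simple fitting breaks down.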
To address this challenge, the researchers turned to deep learning. Together with Asst. Prof. Tomer Michaeli, an expert in the field, they developed an artificial neural network that trains itself on a large number of simulated (virtual) samples and then produces super-resolution 3D images from microscopy data of real samples.
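The train-on-simulations-then-apply-to-measurements workflow can be sketched in miniature. In this drastically simplified stand-in (not the authors' network), a linear least-squares regressor plays the role of the neural network, and each "sample" is a single simulated emitter whose spot width depends on its depth; every model and parameter below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def render(z):
    """Simulated camera image of one emitter whose spot width
    depends on its depth z (a stand-in for the engineered PSF)."""
    sigma = 1.0 + 0.5 * z                # spot widens away from focus
    ax = np.arange(15) - 7
    X, Y = np.meshgrid(ax, ax)
    img = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return img + rng.normal(0, 0.01, img.shape)  # camera noise

# 1) Generate a large set of simulated (image, depth) training pairs.
z_train = rng.uniform(0, 2, 2000)
A = np.array([render(z).ravel() for z in z_train])

# 2) "Train": a linear least-squares fit stands in for network training.
w, *_ = np.linalg.lstsq(np.c_[A, np.ones(len(A))], z_train, rcond=None)

# 3) Apply to a new simulated "measurement" with unknown depth.
z_true = 1.3
img = render(z_true)
z_hat = np.r_[img.ravel(), 1.0] @ w
print(f"true z = {z_true}, estimated z = {z_hat:.2f}")
```

The key point the sketch preserves is that no real labeled data is needed for training: because the image-formation physics can be simulated, the estimator learns entirely from virtual samples and is then applied to real measurements.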
According to Shechtman, “The new technology has advanced us towards realizing one of the holy grails of biological research – mapping biological processes in living cells in super-resolution. It is important that the life sciences benefit from our instrumentation, and we maintain close relationships with biologists who explain their needs to us.”
Shechtman used the neural networks not only to analyze the images but also to improve the instrumentation. “This is perhaps the most exciting direction to emerge from the current development – the neural network has provided us with the optimal physical design of the optical system. In other words – the computer not only analyzes the data but has shown us how to build the microscope. This concept can also be applied in non-microscopy-related fields, and we are working on it.”
Research participants include Dr. Daniel Freedman from Google Research, and researchers and students from the Faculty of Biomedical Engineering, the Lorry I. Lokey Interdisciplinary Center for Life Sciences and Engineering, and the Russell Berrie Nanotechnology Institute: Racheli Gordon, Boris Ferdman, Dr. Lucien E. Weiss, Dr. Onit Alalouf, Tal Naor, and Reut Orange. The research was conducted with the support of the European Research Council Horizon 2020 Program, Google, the Israel Science Foundation and the Zuckerman Foundation. Asst. Prof. Yoav Shechtman is a Zuckerman Faculty Scholar and Dr. Lucien E. Weiss is a Zuckerman Postdoctoral Fellow.