{"id":85019,"date":"2022-12-21T13:28:03","date_gmt":"2022-12-21T11:28:03","guid":{"rendered":"https:\/\/www.technion.ac.il\/blog\/computers-reimagined\/"},"modified":"2022-12-21T13:28:03","modified_gmt":"2022-12-21T11:28:03","slug":"computers-reimagined","status":"publish","type":"post","link":"https:\/\/www.technion.ac.il\/en\/blog\/computers-reimagined\/","title":{"rendered":"Computers Reimagined"},"content":{"rendered":"
Since they became part of our lives some 80 years ago, computers have become faster and smaller, but their basic architecture hasn't changed. There is still one part that stores information – the memory (e.g., RAM, hard drive) – and another part that processes information – the CPU, or processor. Now Associate Professor Shahar Kvatinsky presents an architectural alternative. Bringing the "thinking" and the "remembering" functionalities together into one unit, he has built a neural network right into the hardware of a chip and, as a proof of concept, taught it to recognize handwritten letters. The results of his study were recently published in Nature Electronics.

"We like to describe a computer as a 'brain', but entirely separate hardware for storing information and for using it is not how an organic brain works," explains Prof. Kvatinsky, who is a member of the Andrew and Erna Viterbi Faculty of Electrical and Computer Engineering at the Technion – Israel Institute of Technology. Prof. Kvatinsky develops neuromorphic hardware – electronic circuits inspired by the neuro-biological architectures of the nervous system. The idea of such computers was first developed in the 1980s at the California Institute of Technology, but it is modern technological developments that have enabled considerable advances in the field.

One might think modern computers already surpass the human brain – hasn't a computer already defeated the best human chess and Go players? Although the answer is "yes," AlphaGo, the program that defeated multiple Go masters, relied on 1,500 processors and accrued a $3,000 electricity bill per game. The human player's energy consumption for the same game amounted to a sandwich, more or less, and that same player is also capable of talking, driving, and performing countless other tasks. Computers still have a long way to go.

In collaboration with Tower Semiconductor, Prof. Kvatinsky and his team designed and built a computer chip that, like an organic brain, does everything: it stores the information and processes it. The chip is also hardware-only, meaning its programming is not separate but integrated into the chip itself. What the chip does is learn – specifically, it learns handwriting recognition, a feat achieved through deep-belief algorithms. Unlike most of the neuromorphic chips investigated these days, which use emerging, unconventional technologies, this chip is based on commercial technology available in Tower Semiconductor foundries. Presented with multiple handwritten examples of each letter, the chip learned which one was which and achieved 97% recognition accuracy with extremely low energy consumption.

Artificial neural networks learn in a way similar to living brains: they are presented with examples (handwritten letters, in this particular study) and "figure out" on their own the elements that make one letter different from the others, yet similar to the same letter in different handwriting. When the neural network is implemented in hardware, the learning process strengthens the conductivity of some nodes. This is very similar to the way the connections between neurons in our brains are strengthened when we learn.
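Deep-belief networks of the kind mentioned above are typically built from stacked restricted Boltzmann machines, trained with a local rule that strengthens connections which fire together on real data. The sketch below is only a software illustration of that rule, not the authors' chip design: the layer sizes, learning rate, and placeholder data are assumptions, and in the hardware described in the article the counterpart of each weight update would be a change in the conductance of a physical circuit element.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Stochastic binary units: each fires with probability p.
    return (rng.random(p.shape) < p).astype(float)

# Illustrative sizes: 28x28-pixel letter images feeding 128 hidden units.
n_visible, n_hidden = 784, 128
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible-unit biases
b_h = np.zeros(n_hidden)    # hidden-unit biases
lr = 0.05                   # learning rate (assumed, not from the paper)

def cd1_step(v0):
    """One contrastive-divergence (CD-1) update for a batch of inputs v0."""
    global W, b_v, b_h
    # Upward pass: how strongly each hidden unit responds to the data.
    h0_p = sigmoid(v0 @ W + b_h)
    h0 = sample(h0_p)
    # Downward then upward pass: the network's own reconstruction of the data.
    v1_p = sigmoid(h0 @ W.T + b_v)
    h1_p = sigmoid(v1_p @ W + b_h)
    # Strengthen connections that co-activate on real data and weaken those
    # that co-activate only on the reconstruction.
    n = v0.shape[0]
    W += lr * (v0.T @ h0_p - v1_p.T @ h1_p) / n
    b_v += lr * (v0 - v1_p).mean(axis=0)
    b_h += lr * (h0_p - h1_p).mean(axis=0)

# Toy usage: random binary "images" stand in for handwritten letters.
batch = sample(np.full((32, n_visible), 0.1))
for _ in range(10):
    cd1_step(batch)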
There are countless potential uses for chips that do everything. For example, Prof. Kvatinsky says, such a chip could be incorporated into the camera sensor of smartphones and similar devices, eliminating the conversion of analogue data into digital – a step all such devices currently perform before any enhancement is applied to the image. Instead, all processing could be performed directly on the raw image, before it is stored in a compressed digital form.

"Commercial companies are in a constant race to improve their products," Prof. Kvatinsky explains. "They cannot afford to go back to the drawing board and reimagine the product from scratch. That's an advantage academia has – we can develop a new concept we believe could be better, and release it when it can compete with what's already on the market."

The study was led by two researchers in Prof. Kvatinsky's lab. Postdoctoral fellow Dr. Wei Wang, who now heads his own research group in Shenzhen, China, conceived the theoretical concepts of the hardware-based deep-belief network and performed the experimental measurements, while PhD student Loai Danial, who has since completed his doctoral studies and is now working at Mobileye, designed the physical chip and led the steps involved in its fabrication. The work was supported by the European Research Council (ERC) and the EU Horizon 2020 Future and Emerging Technologies (FET)-OPEN program.