Harvard researchers develop a silicon image sensor that computes

2022-09-03 20:39:19 | By Ms. Angela Zhang



Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed the first in-sensor processor that can be integrated into commercial silicon image sensor chips, known as complementary metal-oxide-semiconductor (CMOS) image sensors. These sensors are used in nearly all commercial devices that capture visual information.

In-sensor image processing, in which essential features are extracted from raw data by the image sensor itself instead of the separate microprocessor, can speed up the usual processing. However, to date, demonstrations of in-sensor processing have been limited to emerging research materials which are, at least for now, difficult to incorporate into commercial systems.  

According to Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the paper, their work can harness the mainstream semiconductor electronics industry to rapidly bring in-sensor computing to a wide variety of real-world applications.

Ham and his team developed a silicon photodiode array. Unlike commercially available image sensing chips, the team’s photodiodes are electrostatically doped. As a result, the sensitivity of individual photodiodes or pixels to incoming light can be tuned by voltages. An array that connects multiple voltage-tunable photodiodes can perform an analogue version of multiplication and addition operations central to many image processing pipelines, extracting the relevant visual information as soon as the image is captured.  
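As a rough illustration of this idea (not the authors' implementation), the weighted-sum behaviour of a voltage-tunable photodiode array can be sketched in software: each pixel's photocurrent is scaled by a programmable sensitivity that stands in for its bias voltage, and the scaled currents are summed into a single filtered output. The names and values below are hypothetical.

```python
import numpy as np

def in_sensor_weighted_sum(light_patch, sensitivities):
    """Analogue multiply-and-add, emulated digitally: each photodiode's
    response to incident light is scaled by its voltage-set sensitivity
    (acting as a filter weight), and the resulting photocurrents are
    summed into one output value."""
    return np.sum(light_patch * sensitivities)

# A 3x3 patch of incident light intensities (arbitrary units, illustrative).
patch = np.array([[0.2, 0.8, 0.3],
                  [0.1, 0.9, 0.4],
                  [0.2, 0.7, 0.3]])

# Sensitivities programmed to act as a simple horizontal-edge kernel.
weights = np.array([[-1, 0, 1],
                    [-1, 0, 1],
                    [-1, 0, 1]])

print(in_sensor_weighted_sum(patch, weights))  # one filtered value, computed "at capture"
```

In the physical array, the same weighted sum would emerge from the summed photocurrents themselves, so the relevant visual feature is extracted before any data reaches a separate microprocessor.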

A postdoctoral fellow at SEAS noted that these dynamic photodiodes can filter images as they are captured, allowing the first stage of vision processing to move from the microprocessor to the sensor itself.

The array can be programmed with different image filters to remove unnecessary detail or noise, depending on the application. An imaging system in an autonomous vehicle, for example, may call for a high-pass filter to track lane markings, while other applications may call for a blurring filter for noise reduction.
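To make those two filter examples concrete, here is a hedged sketch using illustrative kernels (not necessarily the ones used in the paper): reprogramming the pixel sensitivities switches the same array from an edge-enhancing high-pass filter to an averaging low-pass (blur) filter.

```python
import numpy as np
from scipy.signal import convolve2d  # used only to emulate scanning the kernel across a frame

# Illustrative kernels; the actual filters demonstrated in the paper may differ.
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])        # emphasises edges such as lane markings

blur = np.full((3, 3), 1.0 / 9.0)           # averaging kernel for noise reduction

image = np.random.rand(64, 64)              # stand-in for a captured frame

edges    = convolve2d(image, high_pass, mode="same", boundary="symm")
smoothed = convolve2d(image, blur,      mode="same", boundary="symm")
```

The point of the comparison is that only the programmed weights change between the two cases; the underlying multiply-and-add operation performed by the sensor stays the same.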

Going forward, the team aims to increase the density of the photodiodes and integrate them with silicon integrated circuits.
