
Project Title
Intelligent Vision Sensors for Self-Driving Automobiles
Tentative titles of Sub-projects
1. Design of a 3-D image sensor
2. Design of a convolutional neural network (CNN) for image/video processing
3. High-throughput data transmission over a chiplet interconnect between the sensor and the CNN
4. On-chip built-in self-test (BIST) for the sensor IC
5. System development using the sensor IC and CNN ICs
Desired Technical Skills
The desired skills depend on the specific sub-project a student is interested in. In general, students need the solid circuit background covered in 2507.
Project Description
Self-driving capability is a common trend in today’s automotive industry, where electronic hardware plays a major role. CMOS-based integrated circuits provide functionality and performance with excellent energy efficiency and low cost. This is already evident from the rapid improvement in picture quality on cell phones. However, self-driving automobiles require much higher frame rates.
An example case is shown in Fig. 1, where AI-based image recognition is used to recognize traffic (e.g., nearby bikes) in real time. The three main components are a SPAD array, a time-to-digital converter (TDC), and a CNN-based AI engine. This is a multi-year project that aims to build such systems with optimized custom IC design, testing, and verification. In addition to building and demonstrating such systems, the project aims to train students in the design, test, and system-development aspects of such advanced automotive systems. Therefore, this effort will be divided into four to five sub-projects, listed at the end of this document.
In a conventional camera, light intensity is the main image-capture mechanism: a scene is projected onto the image plane, where a multipixel sensor captures its intensity. In a time-correlated camera, in addition to light intensity, each pixel evaluates the phase or time of arrival (TOA) of impinging photons. A SPAD, or single-photon avalanche diode, is a pixel structure designed to amplify the electrons generated by a single incident photon, a process known as avalanche amplification. It is characterized by nanosecond-order detection accuracy even under faint light. Applying it to LiDAR as the photodiode of a dToF sensor enables long-distance ranging with high accuracy. The dToF sensor measures distance based on the time elapsed as the light emitted from the light source is reflected off the object and detected by the sensor, hence “time of flight” (ToF), as shown in Fig. 2.
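To make the ToF relation concrete: the measured interval covers the round trip to the target and back, so the range is d = c*t/2. Below is a minimal Python sketch of this conversion; the 6.67 ns example value is illustrative only.

C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(delta_t_s: float) -> float:
    # Convert a measured round-trip time of flight (s) into range (m):
    # the photon travels to the target and back, hence the factor of 2.
    return C * delta_t_s / 2.0

# Example: a photon returning after ~6.67 ns corresponds to ~1 m of range.
print(f"{tof_to_distance(6.67e-9):.3f} m")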
Even though avalanche photodiodes (APDs) have been known for several decades, they have been proposed for time-resolved sensing only relatively recently. Besides being solid-state devices, APDs can be advantageously operated in Geiger mode when biased above breakdown. In this mode of operation they are known as single-photon avalanche diodes (SPADs). SPADs can reliably detect single photons and measure their TOA precisely. The timing properties of SPADs have been studied extensively. However, only recently have researchers shown that SPADs can be built in CMOS technologies.
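To illustrate the Geiger-mode behavior, the Python sketch below models a SPAD as a detector that produces one digital “click” per detected photon and is then blind during a quench/recharge dead time. The photon detection efficiency (PDE) and dead-time values are assumptions chosen for illustration, not measured device parameters.

import random

def spad_clicks(photon_toas_ns, pde=0.05, dead_time_ns=50.0):
    # Return the photon times of arrival (ns) that trigger an avalanche.
    clicks, last = [], float("-inf")
    for t in sorted(photon_toas_ns):
        # A photon fires the SPAD only with probability PDE, and only if
        # the diode has recovered from the previous avalanche.
        if t - last >= dead_time_ns and random.random() < pde:
            clicks.append(t)
            last = t
    return clicks

arrivals = [random.uniform(0, 1000) for _ in range(2000)]
print(len(spad_clicks(arrivals)), "detected events")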
A bank of 32 TDCs was integrated on the same integrated circuit (Fig. 3). The TDCs compute time-interval measurements between a global start signal and the photon arrival times in individual pixels. A row-selection decoder activates one row of 128 pixels, which then has access to the bank of TDCs; the 32 TDCs are thus shared among the 128 pixels of the active row. The sharing scheme is based on a 4:1 event-driven readout that allows the 128 pixels in a row to operate simultaneously. Since every TDC is shared among four pixels, the time-to-digital conversion time was carefully optimized at the design phase to improve throughput. Calibration of the TDCs is implemented on-chip with a master delay-locked loop (DLL) that locks to an external reference frequency generated by a crystal oscillator. As a result, TDC resolution and linearity are maintained over process, voltage, and temperature (PVT) variations. At the bottom of the TDC array, a high-speed digital readout circuit handles the data generated by all the TDCs. It consists of a pipelined 4:1 time-multiplexer that operates at a frequency at least four times faster than the data rate of each TDC. A readout controller generates all the internal signals and implements a readout protocol interface. Most of the digital building blocks in the TDC and readout circuitry provide duplicated (shadow) registers that can be read and written via an on-chip JTAG controller, which offers convenient testing and characterization capabilities.
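The Python sketch below illustrates the 4:1 sharing idea in this readout: the 128 pixels of the active row map onto 32 TDCs, four pixels per TDC. The static group-of-four mapping and the earliest-arrival arbitration used here are simplifying assumptions; the actual chip uses an event-driven scheme whose timing details are beyond this sketch.

N_PIXELS, N_TDCS = 128, 32

def convert_row(events):
    # events: dict {pixel_index: photon TOA in ns} for the active row.
    # Returns one (pixel, timestamp) per TDC, or None if no pixel fired.
    results = [None] * N_TDCS
    for pixel, toa in events.items():
        tdc = pixel // 4  # assumed static mapping: four adjacent pixels per TDC
        if results[tdc] is None or toa < results[tdc][1]:
            results[tdc] = (pixel, toa)  # keep the earliest arrival per TDC
    return results

row_events = {5: 12.3, 6: 9.8, 64: 40.1}  # pixels 5 and 6 share TDC 1
print(convert_row(row_events)[1])          # -> (6, 9.8), the earlier event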
Finally, the fully integrated rangefinder sensor can be evaluated by taking a depth snapshot of a human-size mannequin face. The model was placed 1 m from the sensor. So that the light source illuminates the model completely, a diffuser producing a field of view of 30° was installed. Fig. 4 shows the depth map of the model alongside a picture of the model captured with a standard digital camera. The next step of the project is to integrate a convolutional neural network (CNN) with the image sensor to process the video feed in real time and detect objects to initiate driving assistance. An example case is shown in Fig. 5, which explains conceptually how the images will be classified.
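As a rough idea of the CNN side of the system, the PyTorch sketch below classifies a single-channel depth map into object classes, in the spirit of Fig. 5. The 64x64 input size, the layer widths, and the 10 classes are illustrative assumptions; designing the actual network is one of the sub-projects.

import torch
import torch.nn as nn

class DepthMapCNN(nn.Module):
    def __init__(self, num_classes: int = 10):  # 10 classes assumed for illustration
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One 64x64 depth map in, per-class scores out.
scores = DepthMapCNN()(torch.randn(1, 1, 64, 64))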
Here is the list of sub-projects:
- Design of a 3-D image sensor
- Design of a convolutional neural network (CNN) for image/video processing
- High-throughput data transmission over a chiplet interconnect between the sensor and the CNN
- On-chip built-in self-test (BIST) for the sensor IC
- System development using the sensor IC and CNN ICs
Note that this is not an exhaustive list; based on discussions with the student groups, the sub-projects can be modified to fit the skill sets of the groups.