Sensor
- The first part of ISETCam uses computer graphics to model scene radiance.
- The second part uses optics to model the transformation of scene radiance to sensor irradiance.
- The third part uses device physics to model how irradiance creates the sensor response.
With the development of CMOS image sensors in the 1990s, sensor structures and electronics have become increasingly sophisticated. Modern sensors comprise multiple components with a wide range of geometric and electronic properties. Simulation requires accounting for these properties.
An image sensor is an array of pixels. Each pixel contains one or more photodetectors that convert photons into electrons. The pixels, which spatially sample the irradiance, are almost always behind a microlens and a color filter. The photodetector signals are processed both on the sensor and on attached computers to manage focus, exposure duration, high dynamic range imaging, and color processing.
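As a language-independent sketch (not ISETCam code), the spatial sampling through a color filter array can be illustrated as follows. The function name and the RGGB layout here are illustrative assumptions; ISETCam supports many color filter patterns.

```python
import numpy as np

# Hypothetical illustration (not ISETCam code): a pixel array spatially
# samples the irradiance through a color filter array.  Here we apply an
# RGGB Bayer mosaic, so each pixel records only one color channel.
def bayer_mosaic(irradiance_rgb):
    """irradiance_rgb: (rows, cols, 3) array -> (rows, cols) mosaicked array."""
    rows, cols, _ = irradiance_rgb.shape
    mosaic = np.zeros((rows, cols))
    mosaic[0::2, 0::2] = irradiance_rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = irradiance_rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = irradiance_rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = irradiance_rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

img = np.random.rand(4, 4, 3)   # toy RGB irradiance image
m = bayer_mosaic(img)
print(m.shape)  # (4, 4)
```

Demosaicking, which estimates the missing color values at each pixel, is part of the image processing stage rather than the sensor model.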
ISETCam groups the IR filter, color filters, and photodetector parameters as part of the sensor model. These parameters account for both geometric and electrical properties of the sensor. Sophisticated microlens properties, however, are part of the optics calculations in ISET3d. See the section on Lightfields.
Like other fundamental ISETCam structures, the sensor is managed using several functions (sensorCreate, sensorCompute, sensorSet/Get, sensorWindow, sensorPlot). Many additional functions for analyzing sensor properties can be listed by typing sensor<TAB> at the MATLAB prompt. There are multiple sensor tutorials and scripts that illustrate various simulations.
This code creates a simple scene, renders it through wavefront optics, and then captures it using a Sony IMX363 sensor model.

```matlab
scene  = sceneCreate;                      % default scene
scene  = sceneSet(scene,'fov',10);         % 10-deg field of view
oi     = oiCreate('wvf');                  % wavefront optics model
oi     = oiCompute(oi,scene,'crop',true);  % optical image (sensor irradiance)
sensor = sensorCreate('imx363');           % Sony IMX363 sensor model
sensor = sensorSet(sensor,'fov',10,oi);    % match the sensor field of view
sensor = sensorCompute(sensor,oi);         % compute the pixel voltages
```
We can visualize the sensor data this way:

```matlab
sensorWindow(sensor);
```
The left side of the window shows the properties of the pixel, including its size, fill factor, electrical noise, and so forth. The right side summarizes the properties of the sensor as a whole, including the number of pixels, color filter pattern, dark signal nonuniformity (DSNU), photoresponse nonuniformity (PRNU), and so forth.
There are many different sensor parameters stored in the sensor structure. For example, the pixel characteristics are stored within a slot, sensor.pixel. The sensorCompute() command calculates the voltage at each pixel, which is stored in the sensor.data.volts slot.
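As a rough, language-independent sketch (not the actual sensorCompute implementation), the voltage computation can be summarized as: photons are converted to electrons via the quantum efficiency, and electrons to volts via the conversion gain, with PRNU perturbing the per-pixel gain and DSNU adding a per-pixel offset. All numerical values below are illustrative assumptions.

```python
import numpy as np

# Simplified pixel-response sketch (not the actual sensorCompute code).
rng = np.random.default_rng(0)
photons = rng.poisson(lam=1000, size=(4, 4)).astype(float)  # photon counts (shot noise)
qe = 0.6                     # quantum efficiency (assumed)
gain = 1e-4                  # conversion gain, volts per electron (assumed)
prnu = 1 + 0.01 * rng.standard_normal(photons.shape)  # photoresponse nonuniformity
dsnu = 1e-3 * rng.standard_normal(photons.shape)      # dark signal nonuniformity

electrons = qe * photons                  # photons -> electrons
volts = gain * prnu * electrons + dsnu    # electrons -> volts, with fixed-pattern noise
volts = np.clip(volts, 0, None)           # voltages cannot be negative
print(volts.shape)
```

The real computation also accounts for the spectral quantum efficiency, fill factor, dark current, read noise, well capacity, and quantization, all of which are parameters of the ISETCam sensor structure.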
We interact programmatically with the sensor structure using the sensorSet and sensorGet commands. For example, to get the pixel size in microns we can use

```matlab
>> sensorGet(sensor,'pixel size','um')

ans =

    1.4000    1.4000
```
Or, to ask how many electrons are encoded at each pixel in the sensor array, we can use

```matlab
>> e = sensorGet(sensor,'electrons');
>> size(e)

ans =

   362   484
```
We can control the sensor properties using sensorSet. For example, to set the exposure time in seconds, we can type

```matlab
>> sensor = sensorSet(sensor,'exp time',0.050);
```
Notice that when we call sensorSet, we must assign the output back to the sensor variable; sensorSet returns a modified copy of the structure. We can confirm the new exposure time with sensorGet (this time specifying the unit, milliseconds):

```matlab
>> sensorGet(sensor,'exp time','ms')

ans =

    50
```
For all of the ISETCam structures, there are many parameters that we can get, and a smaller number that we can set. The settable parameters are a minimal set chosen so that they do not conflict with one another; the larger set of gettable parameters includes values derived from the settable ones.
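This set/get design can be sketched in a language-independent way: a minimal set of stored parameters is settable, while derived parameters are computed on demand in the getter. The class, parameter names, and values below are illustrative assumptions, not the ISETCam implementation.

```python
# Hypothetical sketch of the set/get pattern: only a minimal, non-conflicting
# set of parameters is stored and settable; derived parameters are computed
# from the stored ones when requested.
class Sensor:
    def __init__(self):
        self._params = {'pixel size': 1.4e-6, 'rows': 362, 'cols': 484}

    def set(self, name, val):
        if name not in self._params:
            raise KeyError(f'{name} is not a settable parameter')
        self._params[name] = val

    def get(self, name):
        if name in self._params:
            return self._params[name]
        if name == 'sensor width':   # derived: cannot be set directly,
            # so it can never disagree with the pixel size and column count
            return self._params['cols'] * self._params['pixel size']
        raise KeyError(name)

s = Sensor()
print(s.get('sensor width'))  # cols * pixel size
```

Because the sensor width is always derived, changing the pixel size automatically changes the width; there is no second stored copy to fall out of sync.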
A deeper dive into the sensor structure and sensor computations is on the Sensor model page. There are many example scripts and tutorials on the Sensor Scripts and Tutorials page. The full range of sensor scripts and tutorials can be listed from the MATLAB prompt by typing either t_sensor<TAB> or s_sensor<TAB>.
For many years, we also modeled human visual encoding using portions of ISETCam together with human-specific specializations. Around 2015, Dave Brainard, Joyce Farrell, and Brian Wandell decided that the many specializations of the human encoding needed their own implementation. Until about 2023, we maintained parallel repositories for ISETCam and ISETBio.
In 2023-2024 we refactored the code in the two repositories, making ISETCam the base and ISETBio a specialization built on ISETCam. Today, the human-vision calculations are performed with ISETBio, with both ISETCam and ISETBio on the MATLAB path.
ISETCam development is led by Brian Wandell's Vistalab group at Stanford University and is supported by contributors from other research institutions and industry.