Motion Cam is a camera application for Android that replaces the entire camera pipeline. It consumes RAW images and uses computational photography to combine multiple images to reduce noise. Additionally, it uses a single underexposed image to recover highlights and increase dynamic range.
You can install the latest version from GitHub or a slightly out-of-date version from the Play Store.
Capture RAW video at up to 30 FPS and convert it to a sequence of DNGs.
Dual exposure is similar to the feature found in the Google Camera. The two sliders control the exposure compensation and tonemapping.
Photo mode captures RAW images in the background. When the shutter button is pressed, it captures a single underexposed image, which is used to recover highlights and increase dynamic range.
Night mode lengthens the exposure time of the camera up to 1/3 of a second and captures more RAW images to further reduce noise.
The denoising algorithm uses Bayer RAW images as input. Motion Cam treats the RAW data as four colour channels (red, blue and two green channels). It starts by creating an optical flow map between a set of images and the reference image, utilising Fast Optical Flow using Dense Inverse Search. Then, each colour channel is fused using a simplified Gaussian pyramid.
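The core of this kind of fusion is robustness weighting: pixels from the aligned frame that agree with the reference are averaged in, while pixels that differ strongly (likely motion or misalignment) are rejected. The sketch below shows this per pixel on a single channel, omitting the pyramid for brevity; the function name and the `sigma` noise-scale parameter are illustrative, not Motion Cam's actual API.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch: fuse one aligned frame into a reference channel.
// Similar pixels are averaged (reducing noise); strongly differing pixels
// keep the reference value (rejecting motion/misalignment artifacts).
std::vector<float> fuseChannel(const std::vector<float>& ref,
                               const std::vector<float>& aligned,
                               float sigma /* assumed noise scale */) {
    std::vector<float> out(ref.size());
    for (size_t i = 0; i < ref.size(); ++i) {
        float d = aligned[i] - ref[i];
        // Weight falls off with the squared difference (Gaussian-like).
        float w = std::exp(-(d * d) / (2.0f * sigma * sigma));
        out[i] = ref[i] + 0.5f * w * d;  // blend toward the aligned frame
    }
    return out;
}
```

In the real pipeline this weighting is applied per level of the Gaussian pyramid, so coarse levels can be blended more aggressively than fine, noise-sensitive ones.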
Motion Cam uses the GPU to generate a real-time preview of the camera from its RAW data. It uses a simplified pipeline to produce an accurate representation of what the final image will look like. This means it is possible to adjust the tonemapping, contrast and colour settings in real time.
Most modern cameras use a Bayer filter. This means the RAW image is subsampled and consists of 25% red, 25% blue and 50% green pixels. There are more green pixels because human vision is most sensitive to green light. The output from the denoising algorithm is demosaiced and colour corrected into an sRGB image. Motion Cam uses the algorithm from Color filter array demosaicking: New method and performance measures by Lu and Tan.
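The four-channel view of the mosaic mentioned above can be illustrated by splitting a Bayer image into its colour planes. The sketch assumes an RGGB layout; real sensors report their actual CFA pattern, and this is not Motion Cam's internal code.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: split an RGGB Bayer mosaic into four half-resolution
// colour planes (red, two greens, blue). Each 2x2 cell contributes one
// sample to each plane.
struct BayerPlanes {
    std::vector<float> r, g0, g1, b;
};

BayerPlanes splitBayer(const std::vector<float>& raw,
                       size_t width, size_t height) {
    BayerPlanes p;
    for (size_t y = 0; y < height; y += 2) {
        for (size_t x = 0; x < width; x += 2) {
            p.r.push_back(raw[y * width + x]);             // top-left: red
            p.g0.push_back(raw[y * width + x + 1]);        // top-right: green
            p.g1.push_back(raw[(y + 1) * width + x]);      // bottom-left: green
            p.b.push_back(raw[(y + 1) * width + x + 1]);   // bottom-right: blue
        }
    }
    return p;
}
```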
Motion Cam uses the exposure fusion algorithm for tonemapping. The algorithm blends multiple exposures to produce an HDR image. Instead of capturing multiple exposures, Motion Cam artificially generates the overexposed image and uses it, together with the original exposure, as input to the algorithm. The shadows slider in the app controls the overexposed image.
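The idea can be sketched per pixel: brighten a copy of the image, then blend the two exposures weighted by how close each pixel is to mid-grey ("well-exposedness"). The gain and sigma values here are illustrative, and full exposure fusion performs this blend over a Laplacian pyramid rather than per pixel.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch of exposure fusion with a synthetic bright exposure.
// Each pixel is blended between the original and a brightened copy,
// weighted by proximity to mid-grey (values are assumed to be in [0, 1]).
std::vector<float> fuseExposures(const std::vector<float>& original,
                                 float shadowsGain /* e.g. 4.0 */) {
    auto wellExposed = [](float v) {
        float d = v - 0.5f;
        return std::exp(-(d * d) / (2.0f * 0.2f * 0.2f));
    };
    std::vector<float> out(original.size());
    for (size_t i = 0; i < original.size(); ++i) {
        float bright = std::min(1.0f, original[i] * shadowsGain);
        float w0 = wellExposed(original[i]);
        float w1 = wellExposed(bright);
        out[i] = (w0 * original[i] + w1 * bright) / (w0 + w1 + 1e-6f);
    }
    return out;
}
```

Raising `shadowsGain` (the role of the shadows slider in this sketch) lifts dark regions toward mid-grey while well-exposed pixels stay close to their original values.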
The details of the image are enhanced and sharpened using an unsharp mask with a threshold to avoid increasing the noise.
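A thresholded unsharp mask can be sketched in one dimension: add back the difference between the image and a blurred copy, but only where that difference exceeds a threshold, so low-amplitude noise is left untouched. The 3-tap box blur, `amount`, and `threshold` here are illustrative stand-ins, not Motion Cam's actual parameters.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch: unsharp mask with a threshold on a 1-D signal.
// Detail below the threshold (likely noise) is not amplified.
std::vector<float> unsharpMask(const std::vector<float>& img,
                               float amount, float threshold) {
    std::vector<float> out(img.size());
    for (size_t i = 0; i < img.size(); ++i) {
        // 3-tap box blur as a stand-in for a Gaussian blur.
        float left  = img[i > 0 ? i - 1 : i];
        float right = img[i + 1 < img.size() ? i + 1 : i];
        float blurred = (left + img[i] + right) / 3.0f;
        float detail = img[i] - blurred;
        out[i] = std::fabs(detail) > threshold ? img[i] + amount * detail
                                               : img[i];
    }
    return out;
}
```

On a hard edge the detail term is large, so the edge is boosted; on a nearly flat region it stays below the threshold and the pixel passes through unchanged.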
Install the following dependencies:
brew install cmake llvm python
Set the environment variables:
export ANDROID_NDK=[Path to Android NDK]
export LLVM_DIR=/usr/local/Cellar/llvm/[Installed LLVM version]
Run the ./setupenv script to compile the dependencies needed by the project.
Install the following dependencies:
apt install git build-essential llvm-dev cmake clang libclang-dev
Set the environment variables:
export ANDROID_NDK=[Path to Android NDK]
export LLVM_DIR=/usr
Run the ./setupenv script to compile the dependencies needed by the project.
After setting up the environment, open the project MotionCam-Android with Android Studio. It should compile and run.
MotionCam uses Halide to generate the code for most of its algorithms. The generators can be found in libMotionCam/generators. If you make any changes to the generator sources, use the generate.sh script to regenerate them.