Image stitching for arbitrary camera motion

Stitched image obtained from images 1–7

This article covers an algorithm for stitching multiple images captured under arbitrary camera motion. Most of us are familiar with panoramas: a panorama is obtained by stitching images captured as the camera moves horizontally. The algorithm presented here generalizes this to arbitrary camera motion.

Feature points (keypoints) of an image

Feature points (also called interest points or keypoints) are distinctive locations in an image. They typically correspond to areas of high contrast or structural uniqueness, such as corners, blobs, or junctions. Good keypoints are invariant to changes in scale, rotation, and (to some extent) illumination. Feature points serve as reference points for matching adjacent images. We use the SURF (Speeded-Up Robust Features) algorithm to detect them.
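To make the idea concrete, the toy sketch below scores pixels with a Moravec-style corner measure: the minimum sum of squared differences between a window and its shifted copies. This is an illustration only, not SURF; the synthetic image and all names are made up. Flat regions and straight edges score zero (some shift leaves the window unchanged), while corners score high in every shift direction — which is exactly what makes them reliable reference points.

```python
# Toy Moravec-style corner score: NOT SURF, just an illustration of why
# corners make good keypoints while flat regions and edges do not.

def moravec_score(img, r, c, win=1):
    """Minimum SSD between the window at (r, c) and its shifted copies."""
    shifts = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
              if (dr, dc) != (0, 0)]
    best = None
    for dr, dc in shifts:
        ssd = 0.0
        for i in range(r - win, r + win + 1):
            for j in range(c - win, c + win + 1):
                ssd += (img[i + dr][j + dc] - img[i][j]) ** 2
        best = ssd if best is None else min(best, ssd)
    return best

# Synthetic 14x14 image: a bright 6x6 square on a dark background.
img = [[1.0 if 3 <= r <= 8 and 3 <= c <= 8 else 0.0 for c in range(14)]
       for r in range(14)]

corner = moravec_score(img, 3, 3)    # square corner: variation in every direction
edge   = moravec_score(img, 3, 6)    # top edge: sliding along it changes nothing
flat   = moravec_score(img, 11, 11)  # background: no variation at all
```

Only the corner receives a positive score; the edge and flat pixels both score zero, which is why corner-like keypoints localize well in both directions.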

Feature descriptors

A descriptor is a numerical vector that describes the appearance (local texture) around a keypoint. It encodes the local image structure in a way that allows comparison across different images. Descriptors are designed to be robust to scale, rotation, noise, and illumination changes. We use the SURF algorithm to generate both keypoints and descriptors. When two keypoints from different images have similar descriptors, they are considered a match.
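As a minimal sketch of how such matching works (with hypothetical 2-D descriptors; real SURF descriptors are 64-dimensional), each descriptor in one image is paired with its nearest neighbour in the other, and Lowe's ratio test discards ambiguous matches:

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    Returns (i, j) pairs where descriptor i of image A matches
    descriptor j of image B unambiguously.
    """
    matches = []
    for i, da in enumerate(desc_a):
        # Sort candidates in image B by Euclidean distance to da.
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        # Accept only if the best match is clearly better than the runner-up.
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# Hypothetical tiny descriptors for two images.
desc_a = [[1.0, 0.0], [0.0, 1.0]]
desc_b = [[0.95, 0.05], [0.10, 0.90], [0.50, 0.50]]

matches = match_descriptors(desc_a, desc_b)  # [(0, 0), (1, 1)]
```

The ratio test is what rejects repetitive texture: if two candidates in image B are nearly equidistant from a descriptor in image A, neither is trusted.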

Algorithm: Image Stitching for arbitrary camera motion

Input: a sequence of \( N \) overlapping images \( I_1, \ldots, I_N \) and a resize factor \( s \).

Output: a single stitched image covering the combined field of view of the sequence.

Steps:

  1. Preprocessing
    Load the sequence of images and resize each image using the factor \( s \) to reduce computation. Also, convert images to grayscale, if required, for feature extraction.
  2. Feature detection and extraction
    • Detect keypoints in the first image \( I_1 \) using a robust detector (e.g., SURF).
    • Extract descriptors from the detected keypoints.
    • Initialize transformation list with identity: \(T_1 = I\)
  3. Image registration
    • For each image \( I_i \) where \( i = 2 \) to \( N \):
      • Detect keypoints and extract descriptors.
      • Match descriptors with those from \( I_{i-1} \).
      • Estimate projective transformation \( H_i \) using matched keypoints.
      • Update transformation for \(I_i\). \[T_i = T_{i-1} \cdot H_i\]
  4. Transformation normalization
    • Select reference image index \( m = \lceil N/2 \rceil \).
• Compute the inverse transformation \( T_m^{-1} \).
    • Normalize all transformations:
      \[T_i \leftarrow T_m^{-1} \cdot T_i \quad \text{for all } i\]
  5. Output canvas estimation
    Compute output limits of each image under transformation \( T_i \). Determine global minimum and maximum in \( x \) and \( y \) directions to estimate canvas size.
  6. Stitched image construction
    • Initialize a blank canvas of computed size.
    • For each image \( I_i \):
      • Warp the image using \( T_i \).
      • Overlay it onto the canvas using alpha blending.
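Steps 3–5 above can be sketched in a few lines. The example below is a plain-Python illustration (the article's implementation is in MATLAB) using translation-only homographies and made-up image sizes for readability: it accumulates \( T_i = T_{i-1} \cdot H_i \), re-anchors everything to the middle image via \( T_m^{-1} \), and derives the canvas bounds from the transformed image corners.

```python
import math

def matmul3(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(M):
    """3x3 inverse via the adjugate (assumes det != 0)."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def apply_h(T, x, y):
    """Apply a homography to a point, with perspective division."""
    u = T[0][0] * x + T[0][1] * y + T[0][2]
    v = T[1][0] * x + T[1][1] * y + T[1][2]
    w = T[2][0] * x + T[2][1] * y + T[2][2]
    return u / w, v / w

I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
# Hypothetical pairwise homographies H_i mapping I_i into I_{i-1}'s frame
# (pure 80-pixel translations so the numbers are easy to follow).
H = {2: [[1.0, 0, 80], [0, 1.0, 0], [0, 0, 1.0]],
     3: [[1.0, 0, 80], [0, 1.0, 0], [0, 0, 1.0]]}

N = 3
T = {1: I3}
for i in range(2, N + 1):            # step 3: T_i = T_{i-1} * H_i
    T[i] = matmul3(T[i - 1], H[i])

m = math.ceil(N / 2)                 # step 4: re-anchor to the middle image
Tm_inv = inv3(T[m])
T = {i: matmul3(Tm_inv, Ti) for i, Ti in T.items()}

w, h = 100, 60                       # step 5: canvas from transformed corners
corners = [(0, 0), (w, 0), (0, h), (w, h)]
xs = [apply_h(T[i], x, y)[0] for i in T for x, y in corners]
ys = [apply_h(T[i], x, y)[1] for i in T for x, y in corners]
canvas_w = max(xs) - min(xs)         # 260 for this example
canvas_h = max(ys) - min(ys)         # 60
```

After normalization the middle image's transformation is the identity, so the stitched result is rendered in its frame, which keeps perspective distortion roughly symmetric across the sequence.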

Remarks:

Use advanced blending techniques (e.g., gain compensation or multi-band blending) to even out differences in lighting and exposure between images; simple alpha blending can leave visible seams when exposures differ.
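As a sketch of the simple alpha blending used in step 6 (a uniform 50/50 blend; the more advanced techniques mentioned above are beyond this example), each warped image is laid onto the canvas, averaging with whatever is already there in the overlap. Pixels are plain scalars here for simplicity, and `None` marks empty locations:

```python
def overlay(canvas, warped, alpha=0.5):
    """Overlay a warped image onto the canvas in place.

    Empty canvas pixels (None) take the new value; in overlap regions
    the new and old values are alpha-blended.
    """
    for r, row in enumerate(warped):
        for c, v in enumerate(row):
            if v is None:                       # pixel outside warped image
                continue
            old = canvas[r][c]
            canvas[r][c] = v if old is None else alpha * v + (1 - alpha) * old
    return canvas

# Hypothetical 1x6 canvas; image A covers columns 0-3, image B columns 2-5.
canvas = [[None] * 6]
overlay(canvas, [[100, 100, 100, 100, None, None]])
overlay(canvas, [[None, None, 200, 200, 200, 200]])
# Columns 2-3 overlap, so they blend to 150; the rest keep their source value.
```

With differently exposed inputs, this is exactly where a constant-alpha blend produces a visible seam, motivating the gain compensation mentioned in the remark.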

Demonstration

For demonstration, a set of 7 images of a messy hostel room was captured with a mobile camera. The images were scaled down to reduce processing time.

Input images

Image 1
Image 2
Image 3
Image 4
Image 5
Image 6
Image 7

Stitched image

Stitched image obtained from images 1–7

Source code (MATLAB)

MATLAB code for stitching images with arbitrary camera motion

Author

Anurag Gupta is an M.S. graduate in Electrical and Computer Engineering from Cornell University. He also holds an M.Tech degree in Systems and Control Engineering and a B.Tech degree in Electrical Engineering from the Indian Institute of Technology, Bombay.

