Mez Gebre

Computer scientist, solution oriented, super persistent and always curious.

HomographyNet: Deep Image Homography Estimation

Introduction

Today we are going to talk about a paper I read a month ago titled Deep Image Homography Estimation. It presents a deep convolutional neural network for estimating the relative homography between a pair of images.

What is a Homography?

In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. - Wikipedia

If you understood that, go ahead and skip directly to the deep learning model discussion. For the rest of us, let’s go ahead and learn some terminology. If you can forget about that definition from Wikipedia for a second, let’s use some analogies to at least get a high-level idea.

Isomorphism?

If you decide to bake a cake, the process of taking the ingredients and following the recipe to create the cake could be seen as morphing. The recipe allowed you to morph/transition from ingredients to a cake. If you’re thinking wtf, just stick with me a bit longer. Now, imagine the same recipe could allow you to take a cake and morph/transition back to the ingredients if you follow it in reverse! In a nutshell, that is the definition of an isomorphism: a recipe that allows you to transition between the two, loosely speaking.

An isomorphism in mathematics is a morphism or mapping (recipe) that also gives you the inverse, again loosely speaking.

Projective geometry?

I don’t have a clever analogy here, but I’ll give you a simple example to tie it all together. Imagine you’re driving and your dash cam snapped a picture of the road in front of you (let’s call this pic A); at the same time, imagine there was a drone right above you and it also took a picture of the road in front of you (let’s call this pic B). You can see that pic A and pic B are related, but how? They’re both pictures of the road in front of you; the only difference is the perspective! The big question….

Is there a recipe/isomorphism that can take you from A to B and vice versa?

There you go, the question you just asked is what a Homography tries to answer. Homography is an isomorphism of perspectives. A 2D homography between A and B would give you the projective transformation between the two images! It is a 3x3 matrix that describes that transformation. Entire books are written on these concepts, but hopefully we now have the general idea to continue.
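To make the “recipe” idea concrete, here is a minimal OpenCV sketch. The image path and matrix values are made up for illustration; the point is just that a single 3x3 matrix H warps pic A into pic B’s perspective, and its inverse takes you back.

```python
# A minimal sketch: applying a known 3x3 homography with OpenCV.
# The image path and matrix values are hypothetical, for illustration only.
import cv2
import numpy as np

A = cv2.imread("road_dashcam.jpg")  # pic A (the dash cam view)

# A made-up homography relating A's perspective to B's.
H = np.array([[1.2, 0.1, -30.0],
              [0.0, 1.4,  10.0],
              [0.0, 1e-4,  1.0]])

# Warp A into B's perspective; the inverse matrix takes you back.
B_like = cv2.warpPerspective(A, H, (A.shape[1], A.shape[0]))
A_back = cv2.warpPerspective(B_like, np.linalg.inv(H), (A.shape[1], A.shape[0]))
```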

NOTE

There are some constraints about a homography I have not mentioned such as:

  1. Usually both images are taken from the same camera.
  2. Both images should be viewing the same plane.

Motivation

Where is homography used?

To name a few, the homography is an essential part of the following:

  • Creating panoramas
  • Monocular SLAM
  • 3D image reconstruction
  • Camera calibration
  • Augmented Reality
  • Autonomous Cars

How do you get the homography?

In traditional computer vision, the homography estimation process is done in two stages.

  1. Corner estimation
  2. Homography estimation

Due to the nature of this problem, these pipelines only produce estimates. To make the estimates more robust, practitioners go as far as manually engineering corner-ish features, line-ish features, etc., as mentioned in the paper.

Robustness is introduced into the corner detection stage by returning a large and over-complete set of points, while robustness into the homography estimation step shows up as heavy use of RANSAC or robustification of the squared loss function. - Deep Image Homography Estimation
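For reference, here is roughly what that traditional two-stage pipeline looks like with OpenCV. This is just a sketch: the image paths are placeholders and ORB is only one of many possible feature choices.

```python
# A rough sketch of the traditional two-stage pipeline (placeholders throughout).
import cv2
import numpy as np

img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: corner/feature estimation -- a large, over-complete set of points.
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors between the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

# Stage 2: homography estimation, robustified with RANSAC.
H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
```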

It is a very hard problem that is error-prone and requires heavy compute to get any sort of robustness. Here we are, finally ready to talk about the question this paper wants to answer.

Is there a single robust algorithm that, given a pair of images, simply returns the homography relating the pair?

HomographyNet: The Model

HomographyNet is a VGG-style CNN that produces the homography relating two images. The model doesn’t require a two-stage process and all the parameters are trained in an end-to-end fashion!
alt HomographyNet Model

HomographyNet, as described in the paper, comes in two flavors: classification and regression. Each flavor has its own pros and cons, including a different loss function.

alt Classification HomographyNet vs Regression HomographyNet.

The regression network produces eight real-valued numbers and uses the Euclidean (L2) loss as the final layer. The classification network uses a quantization scheme and a softmax as the final layer. Since they stick a continuous range into finite bins, they end up with quantization error; hence, the classification network also produces a confidence for each of the corners it predicts. The paper uses 21 quantization bins for each of the eight output dimensions, which results in a final layer of 168 output neurons. Each corner’s 2D grid of scores is interpreted as a distribution.

alt Corner Confidences Measure
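To make the regression flavor concrete, here is a minimal Keras sketch. The layer sizes follow the VGG-style description in the paper, but details like the optimizer and exact pooling placement are my own reading of it, not the authors’ code; treat it as a starting point.

```python
# A minimal sketch of the regression flavor of HomographyNet in Keras.
# Layer sizes follow the paper's VGG-style description; other details
# (optimizer, pooling placement) are assumptions for this sketch.
from tensorflow.keras import layers, models

def homography_net(input_shape=(128, 128, 2)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Four blocks of two 3x3 conv layers (64, 64, 128, 128 filters per block),
    # with max pooling between blocks.
    for i, filters in enumerate([64, 64, 128, 128]):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        if i < 3:
            model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1024, activation="relu"))
    model.add(layers.Dropout(0.5))
    # Regression head: 8 real values (the 4-point parameterization).
    # The classification flavor would instead end in Dense(168) + softmax.
    model.add(layers.Dense(8))
    model.compile(optimizer="adam", loss="mse")  # Euclidean (L2) loss
    return model
```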

The architecture seems simple enough, but how is the model trained? Where is the labeled dataset coming from? Glad you asked!

As the paper states, and I agree, the simplest way to parameterize a homography is with a 3x3 matrix and a fixed scale. In the 3x3 homography matrix, the entries [H11, H12; H21, H22] are responsible for the rotation and [H13, H23] handle the translational offset. This way you can map each pixel at position [u,v,1] in the image through the homography, like the figure below, to get the new projected position [u',v',1].
alt HomographyNet 3x3 matrix
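Here is what that mapping looks like in NumPy, with made-up matrix values. Dividing by the third coordinate is where the “fixed scale” comes in: it normalizes the result back to the form [u',v',1].

```python
# Mapping a pixel [u, v, 1] through a 3x3 homography H (values are made up).
import numpy as np

H = np.array([[1.2, 0.1, -30.0],
              [0.0, 1.4,  10.0],
              [0.0, 1e-4,  1.0]])

u, v = 40.0, 55.0
p = np.array([u, v, 1.0])

x, y, w = H @ p
u_prime, v_prime = x / w, y / w  # normalize so the point is [u', v', 1]
print(u_prime, v_prime)
```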

However! If you try to unroll the 3x3 matrix and use it as the label (ground truth), you’d end up mixing the rotational and translational components. Creating a loss function that balances the mixed components would have been an unnecessary hurdle. Hence, it was better to use well-known loss functions (L2 and cross-entropy) and instead figure out a way to re-parameterize the homography matrix!

The 4-Point Homography Parameterization

The 4-Point parameterization is based on corner locations, removing the need to store rotational and translation terms in the label! This is not a new method by any means, but the use of it as a label was clever! Here is how it works.

alt 4-Point Homography Parameterization

Earlier we saw how each pixel [u,v,1] was transformed by the 3x3 matrix to produce [u',v',1]. Well, if you have at least four pixels where you calculate delta_u = u' - u and delta_v = v' - v, then it is possible to reconstruct the 3x3 homography matrix that was used! You could, for example, use the getPerspectiveTransform() method in OpenCV. From here on out, we will refer to these four pixels (points) as corners.
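Here is a small sketch of that idea. The corner coordinates and offsets are arbitrary illustration values; the flattened deltas are what HomographyNet actually predicts, and OpenCV recovers the full 3x3 matrix from the four correspondences.

```python
# The 4-point parameterization: four corners plus their (delta_u, delta_v)
# offsets are enough to recover the 3x3 homography (values are illustrative).
import cv2
import numpy as np

corners = np.float32([[0, 0], [127, 0], [127, 127], [0, 127]])  # [u, v]
deltas  = np.float32([[5, -3], [-8, 2], [4, 6], [-2, -7]])      # [du, dv]
perturbed = corners + deltas                                    # [u', v']

label = deltas.flatten()                             # the 8-number label
H = cv2.getPerspectiveTransform(corners, perturbed)  # back to the 3x3 matrix
```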

Training Data Generation

Now that we have a way to represent the homography such that we can use well known loss functions, we can start talking about how the training data is generated.

alt Training Data Generation

As a sidenote, HAB denotes the homography matrix between A and B.

Creating the dataset was pretty straightforward, so I’ll only highlight some of the things that might get tricky. The steps, as described in the figure above, are:

  1. Randomly crop a 128x128 patch at position p from the grayscale image I and call it patch A. Staying away from the edges!
  2. Randomly perturb the four corners of patch A within the range [-rho, rho] and call this new position p'. The paper used rho=32.
  3. Compute the homography that maps position p to p' and call it HAB.
  4. Take the inverse of the homography, (HAB)^-1, which equals HBA, and apply it to image I, calling this new image I'. Crop a 128x128 patch at position p from image I' and call it patch B.
  5. Finally, take patches A and B and stack them channel-wise. This gives you a 128x128x2 image that you’ll use as input to the model! The label is the 4-point parameterization of HAB.

They cleverly used this five-step process on random images from the MS-COCO dataset to create 500,000 training examples. Pretty damn cool (excuse my English).
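If you want to see how the five steps fit together, here is a rough sketch of the recipe with OpenCV. The patch size and rho come from the description above; everything else (variable names, random sampling details) is just one way to write it, and the real thing lives in my repo below.

```python
# A rough sketch of the five-step training data generation recipe.
import cv2
import numpy as np

def make_example(image_gray, patch_size=128, rho=32, rng=np.random):
    h, w = image_gray.shape
    # 1. Random 128x128 crop at position p, staying rho away from the edges.
    x = rng.randint(rho, w - patch_size - rho)
    y = rng.randint(rho, h - patch_size - rho)
    corners = np.float32([[x, y], [x + patch_size, y],
                          [x + patch_size, y + patch_size], [x, y + patch_size]])
    patch_a = image_gray[y:y + patch_size, x:x + patch_size]

    # 2. Perturb each corner by a random offset in [-rho, rho].
    deltas = rng.randint(-rho, rho + 1, size=(4, 2)).astype(np.float32)
    perturbed = corners + deltas

    # 3. HAB maps the original corners to the perturbed ones.
    H_AB = cv2.getPerspectiveTransform(corners, perturbed)

    # 4. Warp the whole image with the inverse (HBA) and crop at the same spot.
    H_BA = np.linalg.inv(H_AB)
    warped = cv2.warpPerspective(image_gray, H_BA, (w, h))
    patch_b = warped[y:y + patch_size, x:x + patch_size]

    # 5. Stack channel-wise -> 128x128x2 input; the 8 deltas are the label.
    return np.dstack([patch_a, patch_b]), deltas.flatten()
```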

The Code

If you’d like to see the regression network coded up in Keras and the data generation process visualized, you’re in luck! Follow the links below to my GitHub repo.

Final thoughts

That pretty much highlights the major parts of the paper. I am hoping you now have an idea of what the paper was about and learned something new! Go read the paper because I didn’t talk about the results etc.

I’d like to think an easy future improvement could be to swap out the heavy VGG-style network for SqueezeNet, giving you the benefit of a much smaller network. I’ll maybe experiment with this idea and see if I can match or improve on their results. As for possible uses today, I could see this network being used as a cascade classifier: the traditional methods are very compute-heavy, so if we put this network in front to filter out the easy wins, we could cut down on compute cost.

Stay tuned for the next paper and please comment with any corrections or thoughts. That is all folks!

Read More

SqueezeDet: Deep Learning for Object Detection

Why bother writing this post?

Often, the examples you see around computer vision and deep learning are about classification. That class of problems asks: what do you see in the image? Object detection is another class of problems that asks: where in the image do you see it?

Classification answers what and Object Detection answers where.

Object detection has been making great advances in recent years. The hello world of object detection would be using HOG features combined with a classifier like an SVM, and a sliding window to make predictions at different patches of the image. This complex pipeline has some major drawbacks!

Cons:

  1. Computationally expensive.
  2. Multiple step pipeline.
  3. Requires feature engineering.
  4. Each step in the pipeline has parameters that need to be tuned individually, but can only be tested together, resulting in a complex trial-and-error process that is not unified.
  5. Not real-time.

Pros:

  1. Easy to implement, relatively speaking…
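To make the pipeline above concrete, here is a bare-bones sketch using scikit-image’s HOG and a linear SVM. The window size, step, and training data are placeholders, and a real detector would also need image pyramids and non-maximum suppression, which is exactly where the cost piles up.

```python
# A bare-bones HOG + linear SVM + sliding window sketch (placeholder values).
from skimage.feature import hog
from sklearn.svm import LinearSVC

def train_detector(pos_patches, neg_patches):
    # Hand-engineered features (HOG) feeding a classic classifier (SVM).
    X = [hog(p) for p in pos_patches + neg_patches]
    y = [1] * len(pos_patches) + [0] * len(neg_patches)
    return LinearSVC().fit(X, y)

def sliding_window_detect(image, clf, win=64, step=16):
    # Slide a fixed-size window over the image and classify every patch --
    # this nested loop is a big part of why the approach is not real-time.
    detections = []
    for top in range(0, image.shape[0] - win, step):
        for left in range(0, image.shape[1] - win, step):
            patch = image[top:top + win, left:left + win]
            if clf.predict([hog(patch)])[0] == 1:
                detections.append((left, top, win, win))
    return detections
```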

Speed becomes a major concern when we are thinking of running these models on the edge (IoT, mobile, cars). For example, a car needs to detect where other cars, people, and bikes are, to name a few; I could go on… puppies, kittens… you get the idea. The major motivation for me is the need for speed given the constraints of edge devices; we need compact models that can make quick predictions and are energy efficient.

Read More

MainSqueeze: The 52 parameter model that drives in the Udacity simulator

Introduction

What a time to be alive! The year is 2017, Donald Trump is president of the United States of America, and autonomous vehicles are all the rage. The field is still in its infancy, and the race for the winning approach that will dominate mass production of autonomous vehicles is ongoing. The two main factions currently are the robotics approach and the end-to-end neural networks approach. Like the four seasons, the AI winter has come and gone. It’s Spring, and this is the story of one man’s attempt to explore the pros and cons of the end-to-end neural networks faction in a controlled environment. The hope is to draw some conclusions that will help the greater community advance as a whole.

The Controlled Environment




The Udacity simulator, which is open source, will be our controlled environment for this journey. It has two modes: training and autonomous. Training mode is for the human to drive and record/collect the driving; the result is a directory of images from three cameras (left, center, right) and a driving log CSV file that records each image along with the steering angle, speed, etc. Autonomous mode requires a model that can send steering angle predictions to the simulator.
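As a quick illustration, here is a minimal sketch of loading that driving log for training. I’m assuming the usual column layout (center, left, right image paths, then steering, throttle, brake, speed); check your own driving_log.csv before relying on it.

```python
# A minimal sketch of loading the simulator's driving log.
# The column layout below is an assumption; verify it against your own CSV.
import csv
import cv2
import numpy as np

samples = []
with open("driving_log.csv") as f:
    for row in csv.reader(f):
        center_path, steering = row[0], float(row[3])
        samples.append((center_path, steering))

# Center-camera images as inputs, steering angles as labels for Keras.
X = np.array([cv2.imread(path) for path, _ in samples])
y = np.array([angle for _, angle in samples])
```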




The goals of this project are:

  1. Use the simulator to collect data of good driving behavior.
  2. Construct a convolutional neural network in Keras that predicts steering angles from images.
  3. Train and validate the model with a training and validation set.
  4. Test that the model successfully drives around track one without leaving the road!
  5. Draw conclusions for future work.
Read More