CSCI 5561: Assignment #2 Registration


1 Submission

  • Assignment due: Oct 11 (11:55pm)
  • Individual assignment
  • Submission through Canvas.
  • You will complete Registration.py that contains the following functions:
    • find_match
    • align_image_using_feature
    • warp_image
    • align_image
    • track_multi_frames
The code can be downloaded from
https://www-users.cs.umn.edu/~hspark/csci5561_F2019/Registration.zip.
  • Any function that does not comply with its specification will not be graded (no credit).
  • The code must be run with a Python 3 interpreter.
  • You are not allowed to use computer vision related package functions unless explicitly mentioned here. Please consult with the TA if you are not sure about the list of allowed functions.
  • Place the code and a two-page summary write-up with the resulting visualizations (in pdf format; more than 2 pages will be automatically returned) into a folder, compress it, and submit.

Registration

2 SIFT Feature Extraction

(a) Image (b) SIFT
Figure 1: Given an image (a), you will extract SIFT features using OpenCV.

One of the key skills to learn in computer vision is the ability to use other open source code, which allows you to avoid re-inventing the wheel. We will use the OpenCV library for SIFT extraction from your images.

(Note) You will use this library only for SIFT feature extraction and its visualization. All subsequent visualizations and algorithms must be implemented in your own code. Using OpenCV, you can extract keypoints and their associated descriptors as shown in Figure 1.

(Note) The function for SIFT feature extraction lies in the contrib module of the OpenCV library, so you need to additionally install the opencv-contrib-python package. Also, in newer versions of OpenCV, the SIFT module is not available due to patent issues, but you can easily reinstall the opencv-python and opencv-contrib-python packages at the earlier version 3.4.2.16 with the following two commands:

  • pip3 install opencv-python==3.4.2.16
  • pip3 install opencv-contrib-python==3.4.2.16

(SIFT visualization) Use OpenCV to visualize SIFT features with scale and orientation as shown in Figure 1 (OpenCV may use different colors for its visualization). You may want to follow this tutorial: https://docs.opencv.org/3.4.2/da/df5/tutorial_py_sift_intro.html
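
For reference, a minimal sketch of the extraction and visualization might look like the following (assuming opencv-contrib-python==3.4.2.16 is installed; the image file names are placeholders):

import cv2

# Read the input image in gray scale and extract SIFT keypoints/descriptors.
img = cv2.imread('template.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.xfeatures2d.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)  # descriptors: n x 128

# Draw keypoints with scale and orientation (the rich-keypoints flag draws the
# circle radius and the orientation line), then save the visualization.
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('sift_keypoints.jpg', vis)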


3 SIFT Feature Matching

(a) Template (b) Target (c) SIFT matches with ratio test

Figure 2: You will match points between the template and target image using SIFT features.

A SIFT feature is composed of a scale, an orientation, and a 128-dimensional local descriptor (integer valued), f ∈ Z^128. You will use the SIFT features to match points between two images, I1 and I2. Using the two sets of descriptors from the template and target, find the matches by nearest-neighbor search with the ratio test. You may use the NearestNeighbors function imported from sklearn.neighbors (you can install the sklearn package easily with "pip install -U scikit-learn").

def find_match(img1, img2): ... return x1, x2
Input: two input gray-scale images in uint8 format.
Output: x1 and x2 are n×2 matrices that specify the correspondences.
Description: Each row of x1 and x2 contains the (x, y) coordinate of a point correspondence in I1 and I2, respectively, i.e., x1(i,:) ↔ x2(i,:).

(Note) You can only use the SIFT module of OpenCV for the SIFT descriptor extraction. Matching with the ratio test needs to be implemented by yourself.
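
One possible sketch of find_match is shown below; the 0.7 ratio threshold is an assumption (not part of the specification), and a bidirectional consistency check is omitted for brevity:

import numpy as np
import cv2
from sklearn.neighbors import NearestNeighbors

def find_match(img1, img2, ratio=0.7):
    # SIFT extraction (the only allowed use of OpenCV in this function).
    sift = cv2.xfeatures2d.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # For each descriptor in img1, find its two nearest neighbors in img2.
    nbrs = NearestNeighbors(n_neighbors=2).fit(des2)
    dist, idx = nbrs.kneighbors(des1)

    x1, x2 = [], []
    for i in range(des1.shape[0]):
        # Ratio test: keep the match only if the best neighbor is clearly
        # better than the second best.
        if dist[i, 0] < ratio * dist[i, 1]:
            x1.append(kp1[i].pt)
            x2.append(kp2[idx[i, 0]].pt)
    return np.array(x1), np.array(x2)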


4 Feature-based Image Alignment

Figure 3: You will compute an affine transform using SIFT matches filtered by RANSAC. Blue: outliers; Orange: inliers; Red: the boundary of the transformed template.

(Note) From this point, you cannot use any function provided by OpenCV, except for purely visualization purposes.

The noisy SIFT matches can be filtered by RANSAC with an affine transformation as shown in Figure 3.

def align_image_using_feature(x1, x2, ransac_thr, ransac_iter): ... return A
Input: x1 and x2 are the correspondence sets (n×2 matrices). ransac_thr and ransac_iter are the error threshold and the number of iterations for RANSAC.
Output: A is the 3×3 affine transformation.
Description: The affine transform will map x1 to x2, i.e., x2 = A x1. You may visualize the inliers and the boundary of the transformed template to validate your implementation.
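
A possible sketch of the RANSAC loop is given below; the least-squares setup and the inlier test are my own choices, consistent with x2 = A x1:

import numpy as np

def align_image_using_feature(x1, x2, ransac_thr, ransac_iter):
    n = x1.shape[0]
    best_A, best_inliers = None, -1
    for _ in range(ransac_iter):
        # Hypothesize an affine transform from 3 random correspondences by
        # solving the 6x6 linear system M a = b for the 6 affine parameters.
        sample = np.random.choice(n, 3, replace=False)
        M, b = [], []
        for i in sample:
            u, v = x1[i]
            M.append([u, v, 1, 0, 0, 0])
            M.append([0, 0, 0, u, v, 1])
            b.extend(x2[i])
        a, *_ = np.linalg.lstsq(np.array(M), np.array(b), rcond=None)
        A = np.vstack([a.reshape(2, 3), [0, 0, 1]])

        # Count inliers: transform all points in x1 and compare against x2.
        x1_h = np.hstack([x1, np.ones((n, 1))])        # homogeneous coordinates
        proj = (A @ x1_h.T).T[:, :2]
        err = np.linalg.norm(proj - x2, axis=1)
        num_inliers = np.sum(err < ransac_thr)
        if num_inliers > best_inliers:
            best_inliers, best_A = num_inliers, A
    return best_A

A common refinement, not required by the specification, is to refit A to all inliers of the best hypothesis before returning it.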


5 Image Warping

(a) Image (b) Warped image (c) Template (d) Error map

Figure 4: You will use the affine transform to warp the target image to the template using the inverse mapping. Using the warped image, the error map |I_tpl − I_wrp| can be computed to validate the correctness of the transformation, where I_tpl and I_wrp are the template and warped images.

Given an affine transform A, you will write code to warp an image I(x) → I(Ax).

def warp_image(img, A, output_size): ... return img_warped
Input: img is the image to warp, A is the affine transformation from the original coordinates to the warped coordinates, and output_size = [h, w] is the size of the warped image, where w and h are its width and height.
Output: img_warped is the warped image with the size output_size.
Description: The inverse mapping method needs to be applied to make sure the warped image does not contain empty pixels. You are allowed to use the interpn function imported from scipy.interpolate for bilinear interpolation (the scipy package can be installed with "pip3 install scipy" if you have not installed it yet).

(Validation) Using the warped image, the error map |I_tpl − I_wrp| can be computed to validate the correctness of the transformation, where I_tpl and I_wrp are the template and warped images.
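
A minimal sketch of the inverse mapping with interpn is shown below; it assumes, following I(x) → I(Ax), that A sends output (template) coordinates to coordinates in img, and it fills out-of-bounds samples with zero:

import numpy as np
from scipy.interpolate import interpn

def warp_image(img, A, output_size):
    h, w = output_size
    # Grid of (x, y, 1) homogeneous coordinates over the output domain.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # 3 x (h*w)

    # Inverse mapping: send every output pixel through A to find where to
    # sample the input image, so no output pixel is left empty.
    src = A @ pts
    sample = np.stack([src[1], src[0]], axis=1)                # (row, col) order

    # Bilinear interpolation over the input image grid.
    grid = (np.arange(img.shape[0]), np.arange(img.shape[1]))
    img_warped = interpn(grid, img.astype(np.float64), sample,
                         bounds_error=False, fill_value=0)
    return img_warped.reshape(h, w)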


6 Inverse Compositional Image Alignment

(a) Template (b) Initialization (c) Aligned image

Figure 5: You will use the initial estimate of the affine transform to align (i.e., track) the next image. (a) Template image from the first frame. (b) The second frame with the initialization of the affine transform. (c) The second frame with the optimized affine transform using the inverse compositional image alignment.

Given the initial estimate of the affine transform A from the feature-based image alignment (Section 4), as shown in Figure 5(b), you will track the next frame image using the inverse compositional method (Figure 5(c)). You will parametrize the affine transform with 6 parameters p = (p1, p2, p3, p4, p5, p6), i.e.,

$$
W(\mathbf{x};\mathbf{p}) =
\begin{bmatrix}
p_1 + 1 & p_2 & p_3 \\
p_4 & p_5 + 1 & p_6 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= A(\mathbf{p})\mathbf{x} \qquad (1)
$$

where W(x; p) is the warping function from the template patch to the target image, x = [u, v, 1]^T is the coordinate of a point before warping, and A(p) is the affine transform parametrized by p.
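
Since the rest of this section switches between the matrix A and the parameter vector p, a small helper pair relating the two (per Eq. (1)) can be convenient; the function names below are my own, not part of the required interface:

import numpy as np

def A_from_p(p):
    # Build A(p) from p = (p1, ..., p6) as defined in Eq. (1).
    return np.array([[1 + p[0], p[1],     p[2]],
                     [p[3],     1 + p[4], p[5]],
                     [0,        0,        1]])

def p_from_A(A):
    # Inverse of the packing above: read the 6 parameters back from A.
    return np.array([A[0, 0] - 1, A[0, 1], A[0, 2],
                     A[1, 0], A[1, 1] - 1, A[1, 2]])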

def align_image(template, target, A): ... return A_refined
Input: gray-scale template image template and target image target; A is the initialization of the 3×3 affine transform, i.e., x_tgt = A x_tpl where x_tgt and x_tpl are points in the target and template images, respectively.
Output: A_refined is the refined affine transform based on inverse compositional image alignment.
Description: You will refine the affine transform using inverse compositional image alignment, i.e., A → A_refined. The pseudo-code can be found in Algorithm 1.
Tip: You can validate your algorithm by visualizing the error map as shown in Figure 6(d) and 6(h). You can also plot the error over iterations; the error must decrease as shown in Figure 6(i).


(a) Template (b) Initial warp (c) Overlay (d) Error map
(e) Template (f) Opt. warp (g) Overlay (h) Error map
(i) Error plot: ‖I_tgt − I_tpl‖ vs. iteration

Figure 6: (a,e) Template images of the first frame. (b) Warped image based on the initialization of the affine parameters. (c) Template image is overlaid by the initialization. (d) Error map of the initialization. (f) Optimized warped image using the inverse compositional image alignment. (g) Template image is overlaid by the optimized warped image. (h) Error map of the optimization. (i) An error plot over iterations.

Algorithm 1: Inverse Compositional Image Alignment
1: Initialize p = p0 from the input A.
2: Compute the gradient of the template image, ∇I_tpl.
3: Compute the Jacobian ∂W/∂p at (x; 0).
4: Compute the steepest descent images ∇I_tpl ∂W/∂p.
5: Compute the 6×6 Hessian H = Σ_x [∇I_tpl ∂W/∂p]^T [∇I_tpl ∂W/∂p].
6: while ‖Δp‖ > ε do
7:   Warp the target to the template domain, I_tgt(W(x; p)).
8:   Compute the error image I_err = I_tgt(W(x; p)) − I_tpl.
9:   Compute F = Σ_x [∇I_tpl ∂W/∂p]^T I_err.
10:  Compute Δp = H^(−1) F.
11:  Update W(x; p) ← W(x; p) ∘ W^(−1)(x; Δp) = W(W^(−1)(x; Δp); p).
12: end while
13: Return A_refined built from p.
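
A sketch of align_image following Algorithm 1 is given below; the gradient computation, the convergence threshold, and the iteration cap are assumptions, and it reuses warp_image and the A_from_p helper sketched earlier. It tracks A directly rather than p, which is equivalent to steps 1 and 13:

import numpy as np

def align_image(template, target, A, tol=1e-3, max_iter=500):
    tpl = template.astype(np.float64)
    h, w = tpl.shape

    # Step 2: gradient of the template (np.gradient returns d/drow, d/dcol).
    Iy, Ix = np.gradient(tpl)

    # Steps 3-4: steepest descent images, one 6-vector per pixel, using the
    # Jacobian dW/dp at (x; 0) = [[u, v, 1, 0, 0, 0], [0, 0, 0, u, v, 1]].
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    sd = np.stack([Ix * xs, Ix * ys, Ix, Iy * xs, Iy * ys, Iy], axis=-1)

    # Step 5: 6x6 Hessian, summed over all template pixels.
    H = np.einsum('hwi,hwj->ij', sd, sd)

    for _ in range(max_iter):
        # Steps 7-8: warp the target into the template frame, compute the error.
        warped = warp_image(target, A, (h, w))
        err = warped - tpl

        # Steps 9-10: F = sum_x sd^T * error, then dp = H^-1 F.
        F = np.einsum('hwi,hw->i', sd, err)
        dp = np.linalg.solve(H, F)

        # Step 11: inverse compositional update, A <- A * A(dp)^-1.
        A = A @ np.linalg.inv(A_from_p(dp))

        # Step 6: stop when the update becomes small.
        if np.linalg.norm(dp) < tol:
            break
    return A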


7 Putting Things Together: Multiframe Tracking

(a) Frame 1 (b) Frame 2
(c) Frame 3 (d) Frame 4

Figure 7: You will use the inverse compositional image alignment to track 4 frames of images.

Given a template and a set of consecutive images, you will (1) initialize the affine transform using the feature-based alignment and then (2) track over frames using the inverse compositional image alignment.

def track_multi_frames(template, img_list): ... return A_list
Input: template is the gray-scale template. img_list is a list of consecutive image frames, i.e., img_list[i] is the i-th frame.
Output: A_list is the set of affine transforms from the template to each image frame, i.e., A_list[i] is the affine transform from the template to the i-th image.
Description: You will apply the inverse compositional image alignment sequentially to track over frames as shown in Figure 7. Note that the template image needs to be updated at every frame, i.e., template ← warp_image(img, A, template.shape).
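
Putting everything together, the multiframe tracker might be outlined as below; the RANSAC threshold and iteration count are placeholders, not values prescribed by the assignment:

def track_multi_frames(template, img_list):
    A_list = []
    # (1) Initialize the affine transform on the first frame with SIFT + RANSAC.
    x1, x2 = find_match(template, img_list[0])
    A = align_image_using_feature(x1, x2, ransac_thr=3, ransac_iter=1000)

    for img in img_list:
        # (2) Refine the transform with inverse compositional image alignment.
        A = align_image(template, img, A)
        A_list.append(A)
        # Update the template for the next frame, as required by the spec.
        template = warp_image(img, A, template.shape)
    return A_list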