**16-726 Assignment 2:
Gradient Domain Fusion**
**Eileen Li (chenyil)**
Submission Date: February 28, 2023
Overview
===============================================================================

This assignment explores techniques and applications of gradient domain blending, focusing primarily on "Poisson blending". The objective is to seamlessly blend an object from a source image into a target image while preserving the features of the source object. The key insight is that people are often far more sensitive to the gradients of an image than to its overall intensity. The method therefore blends a source image into a target image by keeping the gradients of the source image while anchoring on the intensity values of the target image.

Lessons Learned
-------------------------------------------------------------------------------

- Choosing a source image such that the **mask border intensities are not too far off from the target image intensities** is key to more "seamless" blends.
- I used both `scipy.sparse.linalg.lsqr` and `scipy.sparse.linalg.spsolve` in this assignment, though the latter is **much** faster.
- With `scipy.sparse.linalg.spsolve`, my code can handle matrices with up to 1M unknowns within seconds, though I still downsized large target images for efficiency.

Toy Problem
===============================================================================

As expected, the toy problem reconstruction returns an image identical to the input.
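A minimal sketch of that toy reconstruction, matching all x- and y-gradients of the input and pinning the top-left pixel so the solution is unique (the helper name and structure here are illustrative, not my actual assignment code):

```python
import numpy as np
import scipy.sparse
from scipy.sparse.linalg import lsqr

def reconstruct(img):
    """Least-squares reconstruction of `img` from its gradients + one pixel."""
    h, w = img.shape
    rows, cols, data, rhs = [], [], [], []
    r = 0
    def index(y, x):
        return y * w + x
    # x-gradients: v(y, x+1) - v(y, x) should equal the input's x-gradient
    for y in range(h):
        for x in range(w - 1):
            rows += [r, r]
            cols += [index(y, x + 1), index(y, x)]
            data += [1.0, -1.0]
            rhs.append(img[y, x + 1] - img[y, x])
            r += 1
    # y-gradients: v(y+1, x) - v(y, x) should equal the input's y-gradient
    for y in range(h - 1):
        for x in range(w):
            rows += [r, r]
            cols += [index(y + 1, x), index(y, x)]
            data += [1.0, -1.0]
            rhs.append(img[y + 1, x] - img[y, x])
            r += 1
    # pin one intensity so the least-squares solution is unique
    rows.append(r); cols.append(index(0, 0)); data.append(1.0)
    rhs.append(img[0, 0]); r += 1
    A = scipy.sparse.csr_matrix((data, (rows, cols)), shape=(r, h * w))
    v = lsqr(A, np.asarray(rhs))[0]
    return v.reshape(h, w)
```

Since the constraints are consistent, the solver recovers the input exactly (up to numerical tolerance), which is the sanity check the toy problem is designed to provide.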
Poisson Blending
===============================================================================

In Poisson blending, we formulate the objective as a least-squares problem:
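Writing $N_i$ for the 4-neighborhood of pixel $i$, the standard formulation of this objective is:

$$
\mathbf{v} = \operatorname*{arg\,min}_{\mathbf{v}} \sum_{i \in S}\;\sum_{j \in N_i \cap S} \bigl((v_i - v_j) - (s_i - s_j)\bigr)^2 \;+\; \sum_{i \in S}\;\sum_{j \in N_i \setminus S} \bigl((v_i - t_j) - (s_i - s_j)\bigr)^2
$$

The first term matches source gradients between pairs of unknowns inside $S$; the second anchors pixels on the mask border to the known target intensities $t_j$ just outside $S$.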

Given the pixel intensities of the source image *s* and of the target image *t*, we want to solve for new intensity values *v* within the source region *S*. We take the following steps:

1. Preprocess the source, target, and mask images so that they are all the same size. This can be done using the provided `masking_code.py` tool.
2. Generate a sparse matrix and solve using `scipy.sparse.linalg.spsolve`. For pixels at the mask border, we use the target image intensities as "anchors". For pixels inside the masked area, we match the source image gradients by comparing each pixel with its 4 neighboring pixels:
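Setting the derivative of the least-squares objective with respect to each $v_i$ to zero yields one linear equation per masked pixel:

$$
4\,v_i \;-\; \sum_{j \in N_i \cap S} v_j \;=\; \sum_{j \in N_i} (s_i - s_j) \;+\; \sum_{j \in N_i \setminus S} t_j
$$

Neighbors inside the mask stay on the left as unknowns, while border neighbors move to the right-hand side as target-intensity anchors.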
3. Blend each color channel separately and combine the results.

Instead of computing over the entire image, I compute only the rectangular region affected by the mask and construct the matrix from those pixels alone. This is useful for large images where the blending occurs in only one small area.

Results
-------------------------------------------------------------------------------

Below are some example images from our hiking / backpacking trips.
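A minimal single-channel sketch of the sparse construction and solve described in the steps above (the helper name and details are illustrative, not my actual assignment code):

```python
import numpy as np
import scipy.sparse
from scipy.sparse.linalg import spsolve

def poisson_blend_channel(source, target, mask):
    """Blend one channel of `source` into `target` inside boolean `mask`.

    Assumes `mask` does not touch the image border, so every masked
    pixel has 4 in-bounds neighbors.
    """
    n = int(mask.sum())
    idx = -np.ones(mask.shape, dtype=int)       # pixel -> unknown index
    idx[mask] = np.arange(n)
    A = scipy.sparse.lil_matrix((n, n))
    b = np.zeros(n)
    ys, xs = np.nonzero(mask)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] += source[y, x] - source[ny, nx]   # match source gradient
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0            # neighbor is an unknown
            else:
                b[k] += target[ny, nx]              # border: anchor on target
    out = target.astype(float).copy()
    out[mask] = spsolve(A.tocsr(), b)
    return out
```

Running this once per color channel and clipping to the valid intensity range gives the blended image; `lsqr` can be swapped in for `spsolve`, but as noted above it is much slower.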
This image was taken in Yosemite, where we always miss our dog Aspen since pets aren't allowed in National Parks. I see her in every rock face!
This image was taken during our backpacking trip on the Colorado Trail, where we had a scary encounter with a wild moose. The moment wasn't actually captured on camera, but I imagine it looked something like this. While the result looks pretty good, there are still some artifacts: since the mask border lies on the mountain, the mountain colors bleed into the source region, obscuring the lake that should be visible behind the moose.
Here we have another image from a snow camping trip in Yosemite (Sentinel Dome, highly recommend!). You can see Half Dome in the frame. I tried to add a black hole in the sky, but using Poisson blending (and matching source image gradients) turns the black hole into a blue hole + golden ring in the sky.
Understanding a Failure Mode ------------------------------------------------------------------------------- Let's dig deeper into a failure mode. I want to put the house from the Pixar film "Up" into this beautiful photo of Crater Lake.

In my first attempt, the source image turned dark post-blend because the pixels surrounding the source mask are *white*, meaning that to match the gradients of the source image, all interior pixels need to become darker.
In my second attempt, I found a source image that also has a blue background, but the shade of blue is still lighter than in the target image. The result looks better, but the source content still turned a weird shade darker.
For my third attempt, I picked a source image with a similar / slightly *darker* shade of blue than my target image. *Voila!* It works so much better, and even brightens the source image a little bit.
Comparing these three examples helped me gain intuition regarding how mismatch in background color between the source and target images can affect blending quality.
[Extra Credit] Bells & Whistles
===============================================================================

Mixed Gradients
-------------------------------------------------------------------------------

In this section, rather than always taking the source image gradients as the guide for the masked area, we take the **dominant** (larger-magnitude) gradient between the source and target images. The effect is that some of the target image shows through, which is useful for applications where we want the target image's texture to remain visible (e.g., drawing on a brick wall).
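A sketch of the dominant-gradient rule (`mixed_gradient` is an illustrative helper, not a function from the assignment code):

```python
import numpy as np

def mixed_gradient(s_i, s_j, t_i, t_j):
    """Keep whichever of the source or target gradient is larger in magnitude."""
    ds = s_i - s_j                      # source gradient
    dt = t_i - t_j                      # target gradient
    return np.where(np.abs(ds) >= np.abs(dt), ds, dt)
```

This value simply replaces the source gradient `s_i - s_j` on the right-hand side of the blending equations; the sparse matrix itself is unchanged.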
Color2Gray
-------------------------------------------------------------------------------

In this section, we explore an alternative to rgb2gray using gradient blending. Below I display the H (hue), S (saturation), and V (value) channels:

To help the numbers "pop out" more, I use the V channel as target and S channel as source in mixed gradient blending (taking the dominant gradient). The results are shown below:
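The channel plumbing for this setup can be sketched as follows (using `matplotlib.colors.rgb_to_hsv`; the random image and names are placeholders, and the mixed-gradient solve itself is elided):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

rgb = np.random.rand(32, 32, 3)            # placeholder RGB image in [0, 1]
hsv = rgb_to_hsv(rgb)
s, v = hsv[..., 1], hsv[..., 2]            # saturation and value channels
# target = v, source = s: feed both into the mixed-gradient Poisson solve
# (with the whole image masked) to get the final grayscale result.
```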