*the link to the sketch via GitHub here.
*the link to the full code
For my final project, as outlined in the post titled “Project B Proposal,” I hope to include a sketch that uses the webcam: when users look into it, they will see a distorted image of themselves, like looking into a carnival mirror. To begin thinking about how to create that much more complex sketch, I decided to first build a sketch that uses static images, since they are easier to handle. The sketch takes two pre-made images and morphs/distorts them together into a new image. Below are the sample photos I used:

The final product, though unique every time since it utilizes noise(), creates an image like the following:
Let’s take a look at the code.
The first step is defining some key variables that are used throughout the sketch. 
The variable distortion sets the scale fed into noise() and is assigned inside setup(). A large distortion value makes the image messier and patchier, since the range of inputs the noise() function receives becomes wider; a very small distortion value leaves the image looking almost unchanged. The amount variable is a scalar that controls how strongly the two images are blended together and distorted. Finally, portraits[] is an array that stores the two pre-existing images.
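These globals might look something like the following p5.js sketch. The exact values and the image filenames are my assumptions, not the original code:

```javascript
// Hypothetical reconstruction of the sketch's globals; values and
// filenames are placeholders, not the author's originals.
let distortion;        // noise scale, assigned inside setup()
let amount = 0.5;      // scalar controlling blend/distortion strength
let portraits = [];    // the two pre-existing source images

function preload() {
  // filenames are placeholders
  portraits[0] = loadImage('portrait1.jpg');
  portraits[1] = loadImage('portrait2.jpg');
}
```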
Inside the setup() function, we create a new canvas and a new, empty image. Then the actual work of manipulating the two images and building the third one happens, as shown below:
The first for loop iterates over the x and y position of each pixel of the new image, and then the variables noiseVal and imgDistort are created. For clarification, although both noiseVal and imgDistort use the noise() function and work to create the “distorted” effect, noiseVal controls the morphing between the two images, while imgDistort controls the actual distortion of the new image, causing features to be pulled apart slightly, scrunched, and so on. Lines 29-30 also contribute to this by slightly offsetting the x and y positions.
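A hedged sketch of that pixel loop is below. The variable names follow the post, but the noise scales, the offset formula, and the getColorFromPortrait helper signature are my assumptions:

```javascript
// Hedged reconstruction of the pixel loop described in the post;
// exact formulas in the original code may differ.
function buildMorph(img, portraits, distortion, amount) {
  for (let x = 0; x < img.width; x++) {
    for (let y = 0; y < img.height; y++) {
      // noiseVal drives the blend between the two portraits
      let noiseVal = noise(x * distortion, y * distortion);
      // imgDistort drives the warp that pulls/scrunches features
      let imgDistort = noise((x + 1000) * distortion, y * distortion);
      // slightly offset the sampling position (the post's lines 29-30)
      let sx = x + (imgDistort - 0.5) * amount;
      let sy = y + (imgDistort - 0.5) * amount;
      // sample a color from each portrait (the post's lines 33-34)
      let c1 = getColorFromPortrait(portraits[0], sx, sy, img);
      let c2 = getColorFromPortrait(portraits[1], sx, sy, img);
      // blend the two colors into the new pixel (the post's line 35)
      img.set(x, y, lerpColor(c1, c2, noiseVal));
    }
  }
  img.updatePixels();
}
```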
In lines 33-34, a color value is sampled for each pixel from each image respectively, using a function called getColorFromPortrait. The code for this function is shown below:
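A plausible p5.js version of getColorFromPortrait, reconstructed from the description (the clamping and the use of color() are my assumptions; the real function may differ):

```javascript
// Remap canvas-space coordinates into the portrait's own dimensions,
// then sample that pixel. Clamping to the image bounds is an assumption.
function getColorFromPortrait(portrait, x, y, img) {
  let px = floor(constrain(map(x, 0, img.width, 0, portrait.width), 0, portrait.width - 1));
  let py = floor(constrain(map(y, 0, img.height, 0, portrait.height), 0, portrait.height - 1));
  return color(portrait.get(px, py));
}
```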
Because the pre-existing images are not exactly the size of the canvas, we need to remap the x and y values. Finally, in line 35, we get a new color by using lerpColor() to blend the two colors from the portraits together. Then the pixels of the new image are set, and the image is complete.