Final Project Reflection: “Who Are You to Judge?” By Ashley Zhu (Rudi)

My partner Megan See and I created an interactive project called “Who Are You to Judge?” that teaches users about the U.S. criminal justice system: players take on the role of a judge, navigate various real cases, and hand down final verdicts on whether or not to convict and punish each defendant.

Final Code in Github: https://github.com/ashlelayy/IXLFinalProject/blob/master/WhoAreYouToJudge

Brainstorm & Questions: https://docs.google.com/document/d/1jjoqQw6s-It60DUhyJQ-1kpc4E0fh_xQ_T_JP1SYoU4/edit?usp=sharing

CONCEPTION AND DESIGN:

For this project, we were inspired by The Marshall Project‘s online journalism about criminal justice: the site tells the stories of incarcerated people and educates readers about ongoing issues, in hopes of changing the system and inspiring people to go out and vote and change public policy. We hoped to create a similar push for people to learn more about U.S. criminal justice and to inspire change, especially in light of the upcoming 2020 election.

Initially, my partner and I wanted to create a game centered around who is most likely to become a criminal, to test users’ judgments. We planned quizzes and a drag-and-drop game that let users decide whether to incarcerate an individual based on a specific act they performed or on their background description. Another idea was to have users decide their own fate through an interactive audiovisual game and punish themselves, as if they were the person who had committed a crime. We also considered strapping users to a chair with velcro and a solenoid motor, so that a wrong decision would earn them a light ‘buzz’ or ‘shock’ on the chair, simulating the electric chair.

However, after talking to our professor, we tossed those ideas and kept the quiz bowl concept. Instead, we made a judging audiovisual game that simulates a judge delivering a verdict in court. We used a gavel and striking block to mimic a real courtroom experience, as users are essentially playing the role of a judge and making decisions. Users navigate the game by delivering verdicts on 9 real cases. For instance, we used famous U.S. court cases such as the Central Park Five, the Troy Davis case, and the Rodney Alcala case (the Dating Game Killer). Our cases range from innocent defendants who received wrongful convictions to serial killer and school shooting cases. The choices users make include giving life or death sentences, convicting a defendant, and determining whether the defendant is guilty or innocent.

When users select the correct answer (the real verdict), they move on to the next case; when they select the wrong choice (a wrongful conviction), the game stops and the screen displays text that reads “Check your assumptions. Here’s what actually happened…” along with the real verdict and an explanation of the case and its implications. This way, while interacting with the project, users are pushed to think about the cases and how their decisions would affect a person’s life forever.
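The branching described above boils down to a small piece of game logic. Here is a minimal sketch of it in plain C++ (the real project was written in Processing; the struct fields, names, and case data below are hypothetical, for illustration only):

```cpp
#include <string>
#include <vector>

// Hypothetical case record: a prompt, the real verdict, and the
// explanation shown to the player on a wrong answer.
struct CourtCase {
    std::string prompt;
    bool guiltyVerdict;      // the real outcome of the case
    std::string explanation; // "Here's what actually happened..."
};

// Returns the index of the next case on a correct answer,
// or -1 to signal "stop the game and show the real verdict".
int judge(const std::vector<CourtCase>& cases, int current, bool playerSaysGuilty) {
    if (playerSaysGuilty == cases[current].guiltyVerdict) {
        return current + 1; // correct: advance to the next case
    }
    return -1;              // wrong: stop and display the explanation
}
```

In the actual game this decision runs every time the gavel strikes a sensor pad, with the explanation screen shown whenever the player’s choice disagrees with the recorded verdict.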

For the design aspect, we wanted to keep the project minimal and sleek, so we used black and white as the main colors. In addition, we chose the “TravellingTypewriter” font to match the typewriter background music, which plays in time with the text appearing on screen.

We decided to center only on U.S. laws and cases because Megan and I were both especially interested in the U.S. criminal justice system, and we wanted a project about how preconceived notions can cloud our interpretation of a scenario or case. Given the 10-second countdown, users are forced to make a quick decision based on the given information and what they believe is the correct call. We intentionally set the countdown to 10 seconds to make users nervous: they are rushed into a decision that determines a person’s fate, much like real scenarios in which a judge must deliver a final verdict under limited evidence and time pressure.
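The 10-second pressure mechanic is just clock arithmetic on top of a millisecond timer (Processing’s and Arduino’s millis() both return one). A minimal sketch of that logic in plain C++, with the 10-second limit as a parameter:

```cpp
// Seconds left on the countdown, given the millisecond timestamp when the
// case was shown and the current millisecond timestamp.
int secondsLeft(long startMs, long nowMs, int limitSeconds) {
    long elapsed = (nowMs - startMs) / 1000; // whole seconds elapsed
    long left = limitSeconds - elapsed;
    return left > 0 ? (int)left : 0;         // clamp at 0: time is up
}

// When this flips to true, the game treats it like a wrong answer:
// red LED, buzzer, and the real verdict on screen.
bool timeIsUp(long startMs, long nowMs, int limitSeconds) {
    return secondsLeft(startMs, nowMs, limitSeconds) == 0;
}
```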

FABRICATION AND PRODUCTION:

For fabrication, we used a 3D printer to print our interactive countdown timer, plus LED lights to display feedback from the system (for example, the red LED lights up with a buzzer sound effect when the user selects the wrong choice, and also when the user runs out of time). The timer was used to create a nervous, rushed feeling; when users run out of their 10 seconds, they lose the game.

During user testing, we originally used the keyPressed() function for interaction: users navigated the game by pressing the left and right arrow keys. We then switched to sensors for more interactivity. Some users also suggested using Chinese laws in the cases, but since my partner and I had limited knowledge of the Chinese criminal justice system, and we wanted to concentrate on U.S. criminology, we ended up not following that suggestion. Users did suggest adding music for correct and wrong answers, and we adopted that recommendation later on.

CONCLUSIONS:

In conclusion, we accomplished our original goal of educating people about the U.S. criminal justice system. Through interaction with our project, users act as judges and decide the outcomes of individual cases, seeing the implications their choices could have on someone else’s life. In addition, we included the actual verdicts of the real cases so users could understand more about how and why a certain decision was made. In terms of interactivity, the project met my expectations and aligns with my definition of interaction: input, processing, and output. Users communicate with the project by hitting the gavel on the sensor pads to deliver a verdict after reading a case, while thinking about the right judgment to make. The system then processes that input and displays text output on screen, presenting new questions and case outcomes so users learn more about the U.S. criminal justice system. One inconsistency with my definition of interactivity is that users can only answer up to 9 questions/cases. I wanted the project to be more interactive, with users able to keep interacting in a continuous loop, but we were limited by time.

Users interacted with our project with a lot of concentration; many people took a long time to make a final decision and judge a case. Sometimes they would run out of time and start over to get the answers correct. Many users also liked the headphone component, which let them immerse themselves in the game and focus on their decisions, since they could only hear music and sounds from the project itself.

If we had more time, we would code more scenarios for users to interact with, and add motors to ‘buzz’ users as a form of punishment for wrong answers. We would also add more media (e.g. pictures, documents, videos) of the actual cases for a more concrete and complete explanation. Finally, we would build a box to store all the wires, making the project more visually pleasing and easier to transport.

In terms of accomplishments, I learned a lot about UI/UX: how users interact with the actual product, and how to revise the product to make it more user-friendly and satisfying. I also learned more about interaction using Arduino and Processing, and how the two collaborate. One value I took from the project is to think outside the box: we had many different project ideas at first, scattered and all over the place, and I learned how to take one idea, develop the concept in a more focused way, and bring that ideation to life.

Overall, I am satisfied with our project, considering how much work and effort we put in throughout the process, as well as the final feedback from our peers. We hope this project helped more people understand the impact of quick judgments on people, and to never judge a book by its cover. Furthermore, I hope our project allowed our peers to learn more about the U.S. criminal justice system, as well as the changes that should be made in the future, to spark activism and get people to go out and vote in the upcoming 2020 election.

Works Cited:

https://blog.oup.com/2016/08/criminal-justice-10-facts/

https://www.naacp.org/criminal-justice-fact-sheet/

https://www.sentencingproject.org/criminal-justice-facts/

https://deathpenaltyinfo.org/policy-issues/innocence/description-of-innocence-cases

https://www.justice.gov/civil/current-and-recent-cases

https://theintercept.com/2019/01/13/misdemeanor-justice-system-alexandra-natapoff/

Inspiration: The Marshall Project https://www.themarshallproject.org/

Recitation 5 Documentation – Jackson Simon

This recitation asked us to find a piece of image art and copy or transform it. I decided to use Vera Molnar’s Untitled from 1952:

I enjoy the simplicity of it. It makes me think that everyone is the same, yet originality is still in all of us. It is quite thought provoking for such a simple image.

I decided to transform it a tiny bit. I gave the non-rotated squares a random fill color (though they all share the same one), and I made the rotated square a white circle. I first tried to rotate the rectangle, but ran into multiple issues (every time I rotated it, it would disappear from the screen), and even when I tried translate() with the square’s width and height, it would not work. Still, I think making the rotated square a different shape can have the same effect as Vera Molnar’s piece of art; it could also evoke the thought of being totally different from everything else, without any similarities (even though they are the same thing: both are shapes).
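The disappearing rectangle is a common pitfall: Processing’s rotate() rotates around the origin (the top-left corner of the canvas), so the square swings off screen. The usual fix is translate(cx, cy), then rotate(angle), then draw the square centered at (0, 0). The underlying math can be checked in plain C++ (the coordinates below are made up for illustration):

```cpp
#include <cmath>

struct Point { double x, y; };

// Rotate point p by `angle` radians around center c: shift so c becomes
// the origin, rotate, then shift back. This mirrors the Processing idiom
//   translate(cx, cy); rotate(angle); rect(-w/2, -h/2, w, h);
Point rotateAround(Point p, Point c, double angle) {
    double dx = p.x - c.x, dy = p.y - c.y;
    return { c.x + dx * std::cos(angle) - dy * std::sin(angle),
             c.y + dx * std::sin(angle) + dy * std::cos(angle) };
}
```

Rotating about the shape’s own center keeps it in place on screen, which is exactly what rotating about the canvas origin fails to do.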

My rendering:

Recitation 4 Documentation – Jackson Simon

This recitation was a little more tedious to execute. The goal, again with my partner Ruben, was to create a machine (using stepper motors) that could draw, while being controlled by potentiometers.

The first step was to complete the circuit. With so many wires it was a little taxing, but not too hard (it really was just a lot of wires). The stepper motor was simple to connect, since it comes with a set of colored wires indicating what goes where; the tougher part was connecting the potentiometer and making sure it controlled the motor’s movements. Still, it was not too difficult.

After completing the circuits, the next step was to attach Ruben’s and my circuits to what can only be described as stepper motor holders, and attach the pincers that would hold the pens!

Video of stepper motor and code:

In the end it was quite fun to be able to make a machine that controlled the movement of the pens, even though we were the ones controlling the machines!

Question 1:

When thinking about what type of machine I would like to make using actuators that manipulate art, my first thought was of the massive 3D printers being developed to print whole cities: http://theconversation.com/print-your-city-3d-printing-is-revolutionizing-urban-futures-112365 . However, this does not seem quite appropriate, since the actuator is a machine itself and is less controlled by humans. A simpler, maybe unusual, example I came up with was to use the same stepper motors to control puppets. All over the world, puppets are used in theater, especially string puppets. I think it would be interesting to connect them to these types of motors (though the number needed would be much higher than just two!). Maybe then the dynamic of the scenes would change, and the movements would be more fluid.

Question 2:

I quite enjoy the Mechanical Mirrors: Wooden Mirror project by Daniel Rozin. It seems to me that a sensor drives the actuators, spinning small motors that tilt each piece of wood (changing its shade for contrast). Compared to the recitation exercise, it is interesting that the machine reads the human in a very different way than the potentiometer: the sensor in the art exhibit discovers its values on its own, while the potentiometer on the drawing machine uses the values we force upon it.

Recitation 3 Documentation – Jackson Simon

For this recitation the goal was to successfully use a sensor and have the values displayed on the Arduino’s Serial Monitor. Working with Ruben, we decided to use an infrared sensor (to detect the distance between the sensor and the first object it senses in its path).

The code was relatively simple: all that was needed was to tell Arduino which pin the sensor was connected to. We then mapped its values to a smaller interval for easier reading. Since we finished this part relatively quickly, we decided to hook up a buzzer and have it activate if someone was close enough to the sensor!
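The two steps described, rescaling the raw reading to a smaller interval and switching the buzzer past a threshold, can be sketched in plain C++. The function below follows the integer formula of Arduino’s map(); the ranges and the threshold are made-up illustrative values, not the ones we actually used:

```cpp
// Arduino's map(): linearly rescale x from [inMin, inMax] to
// [outMin, outMax] with integer arithmetic (results truncate).
long remap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Decide whether to sound the buzzer for a raw 10-bit analog reading.
// (Hypothetical: scale 0-1023 down to 0-100 for easy reading in the
// Serial Monitor, then buzz when the object is close enough.)
bool shouldBuzz(long rawReading) {
    long scaled = remap(rawReading, 0, 1023, 0, 100);
    return scaled > 80; // close enough: turn the buzzer on
}
```

On the Arduino itself, rawReading would come from analogRead() on the sensor pin, and shouldBuzz() would gate a digitalWrite() (or tone()) to the buzzer pin.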

Our code:

The values displayed:

The sensor turning on the buzzer (image, so no sound):

Question 1:

It was interesting to work with this type of sensor, because it can be quite useful in everyday life. The goal was to make the sensor activate something once the distance requirement was met, which is just like how some automatic doors work. These doors, in every Family Mart, let the consumer walk in and feel welcomed, since the sensor activates not only the doors but also the lovely Family Mart jingle that I am sure we all know at this point.

Question 2:

I think code is compared to a recipe or tutorial because the first time you use it, you have to follow the steps exactly to understand why it is the way it is and what each line of code actually does. Once you understand that, you can start making your own recipes from previous experience!

Question 3:

Computers are now used every single day by most humans on the planet. A smartphone is basically just a tiny computer. Phones influence our behavior in a drastic way: people are always glued to them, looking at other people’s social media, updating their own, or just doing random distracting things with no real purpose. I would say computers have drastically changed the way we communicate with each other. Instead of talking face to face, we talk through FaceTime; instead of a lively debate over serious matters, we debate who has the most likes. I am not saying that computers are necessarily bad, but I do believe that overusing them leads to a sort of ‘dumbing’ (I use this word for lack of a better term, and sparingly) of the general population.

Recitation 10: Workshops – Ariana Alvarez

For this week’s recitation, after the map() function workshop, I chose to attend the media manipulation workshop, as it aligned most closely with my project. What I wanted to work on in this workshop was learning how to manipulate webcam pixels.

Initially, I attempted to change the webcam’s RGB colors directly, since during the workshop I was told it might not be possible to add a filter to a webcam feed (the way you can with an image). As this wasn’t producing the negative-image effect I wanted, I did some research and found that filters can be applied to the webcam with the cam.filter() function.

After adding an inverted black-and-white filter effect to the webcam, I also attempted to make the image brighter and darker by manipulating the HSB values of the pixels. It was quite challenging, but this media manipulation workshop gave me a better head start on my project and let me explore further ways pixels can be manipulated in a webcam feed through Processing.

The code was the following:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  colorMode(HSB); // so brightness()/saturation()/hue() match color(h, s, b)
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    // Black-and-white negative effect, applied with cam.filter()
    cam.filter(GRAY);
    cam.filter(INVERT);
  }

  cam.loadPixels();

  // Redraw the frame as a grid of rectangles, brightening each sampled
  // pixel according to the mouse's horizontal position
  noStroke();
  int rectSize = 10;
  int w = cam.width;
  int h = cam.height;
  for (int y = 0; y < h; y += rectSize) {
    for (int x = 0; x < w; x += rectSize) {
      int i = x + y * w; // index of pixel (x, y) in the pixels[] array

      float b = brightness(cam.pixels[i]);
      float s = saturation(cam.pixels[i]);
      float u = hue(cam.pixels[i]);
      float ch = map(mouseX, 0, width, 0, 255); // brightness boost from mouseX
      cam.pixels[i] = color(u, s, b + ch);

      fill(cam.pixels[i]);
      rect(x, y, rectSize, rectSize);
    }
  }
  cam.updatePixels(); // write the modified pixels back once per frame
}