Desert of the Real: Final Progress 1

Our final project, The Exquisite Corpse, is a collaborative virtual sculpting experience in which players embody the avatars of two sculptors, portrayed by ourselves, Sofía and Itay. So the first thing we needed to do was get 3D scans of ourselves. For this, we used a Structure Sensor and the program ItSeez3D.

We tried several times, but the scans were far from perfect, so we decided to work with the best one we had of each of us and fix the texture errors later. Itay's scan came out decent enough, but Sofía's was not very good, so we decided to use another scan of Sofía in which she was wearing all-black clothes. That meant more work on the textures so that both scans would end up wearing the same attire: jeans and a white shirt.

Itay’s original scan:

Sofía's scan attempt:

 

The process of fixing the scans was this: first, the .obj file was resized in Maya, because the .obj downloaded from ItSeez3D is very small and usually comes in slanted. After resizing, we used Wrap3 to morph the scans onto a base model, so the polygons get cleaned up and we can mend some volume errors.

This is how the polygons looked before the wrapping in Wrap3:

And this is the process in Wrap3, where a pipeline system adjusts the scan to the base model's mesh. The result is a cleaner scan made of organized polygons.

After the wrapping process, you also get a cleaner texture file, which is what we needed to fix the details and actually know which part of the body we were working on.

This is the raw texture that comes with the .obj file downloaded from ItSeez3D:

And this is how Wrap3 processes it:

Even though it is more organized, a lot of parts are missing information or are super messy, so some serious Photoshop work had to be done to solve this. Here you can see the process of fixing the textures of Itay's and Sofía's scans.

After fixing the textures, we added boots and an apron to our avatars so they would look more like sculptors.

Once our avatars were ready, we rigged them in Mixamo so we would have a skeletal mesh associated with our .obj. That way the .fbx file would come into the game, and we could map its rigged skeleton to the VR inverse kinematics script.

Now that our avatars were ready and rigged, we placed them in the 3D environment. For now, we have a basic version of what we designed for the environment.

This is the design we are aiming for. We don't have the sculptures made from body parts yet, but we do have a pedestal in the middle, the decorative sculptures in the back, and the mirrors:

We still need to create some human sculptures to place in the environment. We are also missing some other objects, like unfinished sculptures, sculpting tools, and some objets trouvés to give the scene a more surreal, Dadaist look. We also plan to create a big hourglass on the back wall.

But we have a basic environment where we could try out our avatars. The first thing we did was test the VR inverse kinematics to see if we could get a first-person perspective for our avatars. It is working, but we still need to fix some offsets and the movement range so that the motion looks natural.

Once we had that going, we tried instantiating objects through collisions. We started with just a cube that instantiated more cubes when it collided with the player's hand. Once this was working, we added box colliders and rigid bodies to each part of the avatar's body and tagged them, so a script could recognize which body part the player's controller was colliding with and instantiate the corresponding part. So if the player's controller collided with a head, a head is instantiated. A sketch of that dispatch logic is below, and here is a video of how this is working:
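Our real implementation is a Unity C# script reacting to collision events, but the core idea is simple enough to sketch outside the engine. Here is a minimal, hypothetical TypeScript sketch of the tag-to-prefab lookup; the `spawn` stub stands in for Unity's `Instantiate`, and the tag and prefab names are made up for illustration:

```typescript
// Hypothetical sketch of our collision-dispatch logic. The real version is a
// Unity C# script using OnCollisionEnter; all names here are illustrative.

type BodyPartTag = "head" | "torso" | "arm" | "leg";

// Map each collider tag to the prefab that should be instantiated.
const prefabForTag: Record<BodyPartTag, string> = {
  head: "HeadPrefab",
  torso: "TorsoPrefab",
  arm: "ArmPrefab",
  leg: "LegPrefab",
};

// Stand-in for Unity's Instantiate; here it just logs what would be spawned.
function spawn(prefab: string, position: [number, number, number]): void {
  console.log(`spawning ${prefab} at`, position);
}

// Called whenever the player's controller collides with a tagged body part:
// look up the matching prefab and instantiate it at the collision point.
function onControllerCollision(tag: BodyPartTag, point: [number, number, number]): void {
  spawn(prefabForTag[tag], point);
}

// Example: colliding with a head spawns a new head.
onControllerCollision("head", [0, 1.5, 0]);
```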

Machine learning bias in Google's Quick, Draw!

Machine learning systems are increasingly influencing many aspects of everyday life and are used in both the hardware and software products that serve people globally. As such, researchers and designers seeking to create products that are useful and accessible for everyone often face the challenge of finding datasets that reflect the variety and backgrounds of users around the world. To train these machine learning systems, open, global, and growing datasets are needed.

Google's latest approach to helping wide, international audiences understand how neural networks work is a fun drawing project called Quick, Draw!. In short, the system tries to guess what you are drawing within 20 seconds. While the goal of Quick, Draw! was simply to create a fun game that runs on machine learning, it has resulted in 800 million drawings from twenty million people in 100 nations, from Brazil to Japan to the U.S. to South Africa.

In an article posted in 2017, Google shared the inherent bias in the Quick, Draw! database it had collected. One example that stands out is the shoe example: when 115,000+ drawings of shoes in the Quick, Draw! dataset were analyzed, it was discovered that a single style of shoe, one resembling a sneaker, was overwhelmingly represented. Because it was so frequently drawn, the neural network learned to recognize only this style as a “shoe.”

 

This bias may be rooted in the Quick, Draw! user base (could it be mostly men?), or is it even more primal than that? Could this bias be rooted in us, in how we (women and men) think of a shoe? This, of course, varies between cultures and contexts, and relates to how we know what we know – an epistemological question.

Of course, when we build machines that aspire to know something, we want them to know all the possibilities of that specific something, to keep learning, and to stay flexible to change and a wide variety of possibilities.

This, from my perspective, is an enormous challenge, both philosophically and technically. Machine learning has to adapt the way humans do, because change is the only constant.

 

What’s good bot 👍

We live in an era of media explosion. Our minds are bombarded with information from countless media channels on a daily basis. We can't really control it; it's everywhere. With data becoming more accessible every day, it seems that with every passing day more and more bad news is reported everywhere.

It isn't that these are the only things that happen. Perhaps readers have trained journalists to focus on these things, and we are fed what we actually need in order to progress and survive. Bad news sells.

But what if there were a channel that surfaced only good news? Would we watch it?

I chose to focus on this in my bot project.

Using News API, I'm querying a list of news headlines from different news channels around the world and from different topics, such as business, entertainment, health, science, technology, and more.

After fetching the results from the API and pushing them to an array, I'm checking the score of each headline with the sentiment analysis library sentiment.js, filtering for positive results only, and picking the highest score from a batch of 20 headlines.
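Here is a rough TypeScript sketch of that step, assuming the npm `sentiment` package (the AFINN-based library referred to above as sentiment.js) and the newsapi.org top-headlines endpoint; `NEWS_API_KEY` is a placeholder for a real key:

```typescript
import Sentiment from "sentiment";

const NEWS_API_KEY = process.env.NEWS_API_KEY; // placeholder credential
const sentiment = new Sentiment();

async function pickBestHeadline(category: string): Promise<string | null> {
  // Fetch a batch of 20 top headlines for the given category.
  const res = await fetch(
    `https://newsapi.org/v2/top-headlines?category=${category}&pageSize=20&apiKey=${NEWS_API_KEY}`
  );
  const data = await res.json();
  const titles: string[] = data.articles.map((a: { title: string }) => a.title);

  // Score every headline and keep only the positive ones.
  const scored = titles
    .map((title) => ({ title, score: sentiment.analyze(title).score }))
    .filter((h) => h.score > 0);

  if (scored.length === 0) return null;

  // Return the headline with the highest sentiment score in the batch.
  scored.sort((a, b) => b.score - a.score);
  return scored[0].title;
}
```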

Posting to Mastodon:

Once every 5 minutes, I push my results to Mastodon via the Mastodon API, creating a feed of filtered good-news results – the What's Good bot 👍.
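A minimal sketch of the posting loop, using Mastodon's standard REST endpoint for creating statuses and the `pickBestHeadline` helper from the sketch above; `MASTODON_URL` and `MASTODON_TOKEN` are placeholders:

```typescript
const MASTODON_URL = process.env.MASTODON_URL;     // e.g. "https://mastodon.social"
const MASTODON_TOKEN = process.env.MASTODON_TOKEN; // placeholder access token

async function postStatus(text: string): Promise<void> {
  // Mastodon's REST API creates a new post via POST /api/v1/statuses.
  await fetch(`${MASTODON_URL}/api/v1/statuses`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${MASTODON_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ status: text }),
  });
}

// Every 5 minutes, pick the most positive headline and post it.
setInterval(async () => {
  const headline = await pickBestHeadline("science"); // from the sketch above
  if (headline) await postStatus(`${headline} 👍`);
}, 5 * 60 * 1000);
```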

 

 

Tree music – generative music derived from NYC tree census data

I stumbled upon this database containing NYC tree information. Over 60,000 trees are documented in it, including each tree's type, diameter, health condition, GPS location, and more.

I thought it would be interesting to build an experiment that sonifies the tree information, to see what kinds of weird sounds and patterns I could get from different neighborhoods, streets, and parks.

I started building this experiment using the Mapbox API as the core element to visualize the tree instances on a map. With its help, I was able to filter, search, and gather the information I needed to pass to the auditory part of the system – tone.js.

Using Tone, I'm sequencing sounds in correlation with each tree's diameter variable from the database. These are sent to three different synthesizers built in tone.js that produce the sound. I am mapping the diameters of the trees to a C major scale over three octaves, and the sequence is determined by the amount of trees in the area.
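A minimal TypeScript sketch of the diameter-to-pitch mapping with tone.js, using a single Tone.Synth for simplicity where the project uses three; the 50-inch maximum diameter and the timing values are assumptions for illustration, not values from the census data:

```typescript
import * as Tone from "tone";

const SCALE = ["C", "D", "E", "F", "G", "A", "B"];
const OCTAVES = [3, 4, 5]; // three octaves of the C major scale
const NOTES = OCTAVES.flatMap((oct) => SCALE.map((n) => `${n}${oct}`)); // 21 notes
const MAX_DIAMETER = 50; // assumed upper bound on trunk diameter, in inches

// Map a tree diameter onto one of the 21 scale notes: thin trees land low
// on the scale, thick trees land high.
function noteForDiameter(diameter: number): string {
  const t = Math.min(diameter, MAX_DIAMETER) / MAX_DIAMETER;
  const index = Math.min(Math.floor(t * NOTES.length), NOTES.length - 1);
  return NOTES[index];
}

// Play one note per tree in the queried area, so the sequence length follows
// the number of trees found there.
const synth = new Tone.Synth().toDestination();
function playTrees(diameters: number[]): void {
  const now = Tone.now();
  diameters.forEach((d, i) => {
    synth.triggerAttackRelease(noteForDiameter(d), "8n", now + i * 0.25);
  });
}
```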

 

Upending the Uncanny Valley

Reading Upending the Uncanny Valley made me think of the growth and progression of human consciousness over the years. There have been physiological and cultural reasons (among numerous others) for the formation, development, and progression of human consciousness. In the beginning, some thousands of years ago, it was limited, as humans did not have the technological tool of language, not to mention written text. The formation of these tools over the years, along with other advances in human history, broadened our minds, and we are now able to grasp a wide array of ideas, metaphors, and other concepts that humans thousands of years ago simply couldn't.

When the art of cinema was in its early days, people actually believed that what they saw on the screen actually existed. On Jan. 25, 1896, the Lumière brothers screened their short film “L’Arrivée d’un Train en Gare de La Ciotat” (“Arrival of a Train at La Ciotat Station”) at the Salon Indien du Grand Café in Paris. Only a handful of people in the world had seen a movie at this point, and it was probably the first time for everyone in this particular audience. And, so the story goes, they did not take it sitting down. On the contrary, they broke into a panic, screaming and running for their lives as the locomotive on the screen seemingly headed straight toward them.

~100 years later, we can mock these people in the movie theatre, for it is rare that we mistake a visual effect in the cinema for reality. Technology has introduced us to new ways of knowing what is real and what is not.

I believe that the uncanny valley is an ongoing continuum that keeps appearing as a side effect of what we know vs. what we expect. In my opinion, as our ways of knowing progress and broaden, upending it through hyper-resemblance is bound to fail.