

Week 11: Case study - sneaker style translation + Final project: iSneaker

April 17, 2020 by Haoquan Wang

Background:

Inspired by image-to-image translation, I was thinking about exploring the possibility of translating sneaker design language across different brands. Hence I kept researching image-to-image translation models on the Internet and found a useful tool: the Pix2Pix model.

Reference link: https://ml4a.github.io/guides/Pix2Pix/

Pix2Pix is shorthand for an implementation of a generic image-to-image translation using conditional adversarial networks, originally introduced by Phillip Isola et al. Given a training set which contains pairs of related images (“A” and “B”), a pix2pix model learns how to convert an image of type “A” into an image of type “B”, or vice-versa. 

Why it is powerful: “The nice thing about pix2pix is that it is generic; it does not require pre-defining the relationship between the two types of images. It makes no assumptions about the relationship and instead learns the objective during training, by comparing the defined inputs and outputs during training and inferring the objective. This makes pix2pix highly flexible and adaptable to a wide variety of situations, including ones where it is not easy to verbally or explicitly define the task we want to model.”

In general, we don't need to define the relationship between input and output images, which makes the approach very flexible in real production.
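To make the idea of "pairs of related images" concrete, here is a minimal sketch of how aligned A/B training pairs are often packed side by side for pix2pix-style implementations. The folder paths and the 256x256 size are my assumptions, not part of the original guide.

    # Sketch: pack matching "A" and "B" images side by side, the layout many
    # pix2pix implementations expect for aligned training pairs.
    # data/A, data/B and the 256x256 size are assumptions for illustration.
    import os
    from PIL import Image

    A_DIR, B_DIR, OUT_DIR = "data/A", "data/B", "data/AB"
    os.makedirs(OUT_DIR, exist_ok=True)

    for name in sorted(os.listdir(A_DIR)):
        a = Image.open(os.path.join(A_DIR, name)).convert("RGB").resize((256, 256))
        b = Image.open(os.path.join(B_DIR, name)).convert("RGB").resize((256, 256))
        pair = Image.new("RGB", (512, 256))
        pair.paste(a, (0, 0))    # input image on the left
        pair.paste(b, (256, 0))  # target image on the right
        pair.save(os.path.join(OUT_DIR, name))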

Research Project

On the same website, there is a project called "Invisible Cities". In Invisible Cities, a collection of map tiles and their corresponding satellite images from multiple cities was downloaded from the Mapbox API, and a pix2pix model was trained to convert the map tiles into satellite images.
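As a rough sketch of how such paired tiles could be gathered, the key step is converting a latitude/longitude into slippy-map tile coordinates; the Mapbox URLs and style/tileset IDs below are my assumptions and should be checked against the current Mapbox documentation.

    # Sketch: fetch a street-map tile and its matching satellite tile for one spot.
    # The Mapbox endpoints and IDs are assumptions; check the current API docs.
    import math
    import urllib.request

    MAPBOX_TOKEN = "YOUR_TOKEN_HERE"  # placeholder access token

    def deg2tile(lat, lon, zoom):
        """Convert latitude/longitude to slippy-map tile x/y at a zoom level."""
        lat_rad = math.radians(lat)
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    z = 15
    x, y = deg2tile(40.7128, -74.0060, z)  # example location: New York City

    tiles = {
        "map.png": f"https://api.mapbox.com/styles/v1/mapbox/streets-v11/tiles/{z}/{x}/{y}?access_token={MAPBOX_TOKEN}",
        "satellite.png": f"https://api.mapbox.com/v4/mapbox.satellite/{z}/{x}/{y}.png?access_token={MAPBOX_TOKEN}",
    }
    for filename, url in tiles.items():
        urllib.request.urlretrieve(url, filename)  # download both tiles of the pair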

Example:

Below are examples from the training dataset:

After the training process, we can translate maps into AI-generated satellite images.

We can also translate a human-created input image:

Taking this idea, I think it is possible for me to do sneaker style transfer, and also hand-sketch translation projects.

My plan

Dataset

The first thing is to prepare the dataset. Since we need a huge dataset, we need a lot of sneaker images, so I decided to focus only on brand design language translation: Adidas & Nike. Sounds interesting!

So my plan is to collect Nike and Adidas sneaker side-view pictures to train a model. Here are some examples:
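Once side-view photos like these are collected, they need to be normalized before training. A minimal preprocessing sketch, assuming per-brand folders and white padding to 256x256 (both are my choices, not fixed requirements), could look like this:

    # Sketch: normalize collected side-view photos into 256x256 images, one folder
    # per brand. Folder names and the square white padding are assumptions.
    import os
    from PIL import Image

    RAW_DIR = "raw_photos"  # e.g. raw_photos/nike/..., raw_photos/adidas/...
    OUT_DIR = "dataset"

    for brand in ("nike", "adidas"):
        os.makedirs(os.path.join(OUT_DIR, brand), exist_ok=True)
        src = os.path.join(RAW_DIR, brand)
        for i, name in enumerate(sorted(os.listdir(src))):
            img = Image.open(os.path.join(src, name)).convert("RGB")
            img.thumbnail((256, 256))  # shrink while keeping the aspect ratio
            canvas = Image.new("RGB", (256, 256), "white")
            canvas.paste(img, ((256 - img.width) // 2, (256 - img.height) // 2))
            canvas.save(os.path.join(OUT_DIR, brand, f"{brand}_{i:04d}.jpg"))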

Model

The model should be trainable on two datasets: a Nike dataset and an Adidas dataset. After training, users should be able to upload a sneaker image and then choose a design language (it could be just Nike or Adidas, or even a mixed mode). The output should be a sneaker image with a relatively clear design. I am supposed to train my own model and run it in RunwayML, but for now I need to test the idea, so I chose to test the pre-trained Pix2Pix model in RunwayML.

Pix2Pix model in RunwayML

In RunwayML, the Pix2Pix model only provides two styles. For testing purposes, it is ideal to get a clear output image. I uploaded this sneaker picture:

And here is what I got from the model:

I think these are not the ideal results, but if we change the model, it will work better. They do look amazing, though.
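If I later want to drive a Runway-hosted model from code instead of the app interface, a rough sketch like the one below might work. The port, endpoint, and JSON field names are my assumptions, since each model exposes its own network spec inside Runway, so they would need to be adjusted.

    # Hypothetical sketch of querying a model hosted locally by RunwayML over HTTP.
    # The port, endpoint, and field names are assumptions, not the documented API.
    import base64
    import json
    import urllib.request

    def image_to_data_uri(path):
        """Read an image file and encode it as a base64 data URI."""
        with open(path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("ascii")
        return "data:image/jpeg;base64," + encoded

    payload = json.dumps({"image": image_to_data_uri("sneaker.jpg")}).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:8000/query",  # assumed local Runway endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())

    # Assume the translated image comes back as a base64 data URI as well.
    output = result.get("image", "")
    if output.startswith("data:image"):
        with open("translated.jpg", "wb") as f:
            f.write(base64.b64decode(output.split(",", 1)[1]))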

Final Project: iSneaker

Background:

In recent years, sneaker culture has been spreading throughout the whole world, and more and more people wear sneakers as part of their daily outfit. Inspired by the Nike ID service (you can design your shoe colorway by yourself on NIKE.com), I am going to build a project for designing a sneaker: not just the colorway, but the whole design language. The iSneaker project is supposed to make sneaker design possible for sneaker lovers and for people who have no design background. Everyone can design their own shoes by drawing sketches or by combining different existing sneakers.

Potential AI Models

For sketch-based design, I am considering using the same plan I mentioned above, using Pix2Pix to achieve this function. For combining different sneakers, I will consider training my own model (something similar to style transfer). The available model for this right now is StyleGAN.

Datasets

For datasets, I will try to collect as many side-view pictures of sneakers as possible: Nike, Adidas, Puma, Converse, Under Armour, etc.
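As a starting point for the collection step, a small downloader that pulls images from a hand-collected list of URLs would be enough; the file name and output folder below are my assumptions.

    # Sketch: download sneaker side-view images from a hand-collected URL list.
    # "urls.txt" (one image URL per line) and the output folder are assumptions.
    import os
    import urllib.request

    OUT_DIR = "sneakers_raw"
    os.makedirs(OUT_DIR, exist_ok=True)

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    for i, url in enumerate(urls):
        try:
            urllib.request.urlretrieve(url, os.path.join(OUT_DIR, f"{i:05d}.jpg"))
        except Exception as err:
            print(f"skipped {url}: {err}")  # skip broken links and keep going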

 
