Week 02 Assignment: Case Study Research – Style2Paints – Abdullah Zameek

Style2Paints – An AI-driven lineart colorization tool

One of the biggest “bottlenecks” in the comic/manga industry is the time it takes artists to find the perfect color schemes for their drawings and to actually color them in. This makes creating a single chapter or volume of a work a long, tedious process.
Developed as a collaboration between four students at The Chinese University of Hong Kong and Soochow University, Style2Paints is one of the first systems to colorize lineart in a “real-life human workflow”. What this essentially means is that it tries to follow the same process a human goes through when coloring a picture. As the authors of the project describe it, the human workflow can be summed up as follows:

sketching -> color filling/flattening -> gradients/adding details -> shading

The Style2Paints library mimics the same process and generates four different, independent layers in the form of PSD files (a small merging sketch follows the list). The layers are:

  • Lineart Layers
  • Flat Color Layers 
  • Gradient Layers
  • Shading Layers
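
Because the four layers are delivered as independent files, assembling the final picture is a straightforward alpha-composite. Below is a minimal sketch using Pillow, assuming each layer has been exported as its own RGBA image under hypothetical file names (the actual export naming depends on Style2Paints):

    from PIL import Image

    # Hypothetical file names; stacked bottom-up, flat colors first, lineart on top.
    LAYER_FILES = [
        "flat_color.png",
        "gradient.png",
        "shading.png",
        "lineart.png",
    ]

    def merge_layers(paths):
        """Alpha-composite the independent layers into one final image."""
        base = Image.open(paths[0]).convert("RGBA")
        for path in paths[1:]:
            layer = Image.open(path).convert("RGBA")
            base = Image.alpha_composite(base, layer)
        return base

    merge_layers(LAYER_FILES).save("final.png")

Keeping the stack order explicit (flat colors at the bottom, lineart on top) mirrors how an artist would arrange the same layers by hand.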

Style2Paints was inspired by past projects such as PaintsChainer [TaiZan 2016] and Comicolorization [Furusawa et al.]. The outputs of these two models, however, often contained artifacts and coloring mistakes. This is solved to some extent by the separate-layer model that Style2Paints uses.
Having separate layers allows an artist to adjust and fine-tune each layer before merging them to form the final picture. But, as described by the authors, the Style2Paints model is able to do most of that fine-tuning for the user. The user inputs the lineart image and three optional parameters: hints (which colors to emphasize where, etc.), color style reference images, and light location and color.
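
To make that input contract concrete, here is a hypothetical sketch of how the lineart and the three optional parameters could be bundled. Style2Paints itself ships as a GUI tool, so these names are my own illustration, not the project’s API:

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class ColorHint:
        x: int                       # pixel position of the user's click
        y: int
        rgb: Tuple[int, int, int]    # the color the user wants near that point

    @dataclass
    class PaintRequest:
        lineart_path: str                                   # the only required input
        hints: List[ColorHint] = field(default_factory=list)
        style_reference: Optional[str] = None               # path to a reference image
        light_position: Optional[Tuple[int, int]] = None    # (x, y) of the light source
        light_color: Optional[Tuple[int, int, int]] = None  # RGB of the light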

The results generated by the model are classified as follows (summed up in the small helper after the list):

Fully Automatic Result – When there is absolutely no human intervention.
Semi-Automatic Result – When the result needs some color correction, the user can put in color hints (clicks) to guide the model.
Almost Automatic Result – A semi-automatic result that required fewer than 10 human corrections.
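
Restated as a tiny hypothetical helper (the threshold of 10 comes from the definitions above; the function itself is my own illustration):

    def classify_result(num_corrections: int) -> str:
        """Label a colorization run by how much the user intervened."""
        if num_corrections == 0:
            return "fully automatic"
        if num_corrections < 10:
            return "almost automatic"   # a few guiding clicks
        return "semi automatic"         # heavier color correction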

The underlying technology behind this project is a two-stage convolutional neural network framework for colorization. The first stage (called the drafting stage) involves an aggressive splash of color across the canvas to create a vivid color composition. This stage may contain color mistakes and blurry textures, which are fixed in the second stage, where the blurry textures are refined to smooth the final output. Splitting the complicated task of coloring into two smaller tasks allows for more effective learning, and the refinement stage can even be used to clean up the output of other models such as PaintsChainer.
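
This two-stage split maps naturally onto two chained networks. Below is a minimal PyTorch sketch of the idea, with tiny placeholder generators standing in for the paper’s much deeper architectures; the channel layout is an assumption for illustration:

    import torch
    import torch.nn as nn

    def tiny_net(in_ch: int, out_ch: int) -> nn.Module:
        # Placeholder generator; the real model uses deep U-Net-style networks.
        return nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    class TwoStagePainter(nn.Module):
        def __init__(self):
            super().__init__()
            # Stage 1 sees the lineart (1 channel) plus sparse color hints (3 channels).
            self.draft_net = tiny_net(1 + 3, 3)
            # Stage 2 sees the lineart plus the (possibly blurry) color draft.
            self.refine_net = tiny_net(1 + 3, 3)

        def forward(self, lineart, hints):
            # Drafting stage: aggressive, possibly mistake-ridden color splash.
            draft = self.draft_net(torch.cat([lineart, hints], dim=1))
            # Refinement stage: clean up mistakes and blur in the draft.
            refined = self.refine_net(torch.cat([lineart, draft], dim=1))
            return draft, refined

    # Example: a 256x256 grayscale sketch with an empty hint map.
    model = TwoStagePainter()
    draft, final = model(torch.rand(1, 1, 256, 256), torch.zeros(1, 3, 256, 256))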

A few images from the model have been attached below for reference. (It was reported that all the images below were achieved with fewer than 15 clicks)

I find this project interesting for multiple reasons. From a technical point of view, I like that they approached the problem in as “human” a way as possible, i.e., focusing on how a human would do it rather than on what would be most efficient for a computer to handle. Secondly, this model gives artists the freedom to experiment with different colors at a faster pace. For example, they can try out different schemes using the model and pick the one that works best in a short amount of time, as opposed to manually filling in the colors. This would certainly help artists create more exciting content in less time, which would ultimately benefit the industry as a whole.

Sources:
The project repository, which links to the published paper, can be found here:
https://github.com/lllyasviel/style2paints

[P] Style2Paints V4 finally released: Help artists in standard human coloring workflow! – posted by u/paintstransfer in r/MachineLearning
