Wednesday, August 02, 2023

 

AniFaceDrawing: Delivering generative AI-powered high-quality anime portraits for beginners


Researchers use a generative artificial intelligence framework to create high-quality anime portraits from incomplete freehand sketches, removing creative barriers for general users


Reports and Proceedings

JAPAN ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY

AniFaceDrawing system: Generating High-Quality Anime Portraits using AI 

IMAGE: Image generative AI faces inherent difficulties in generating images from incomplete line drawings with small areas missing, and sometimes even from complete sketches. The proposed AniFaceDrawing system can generate high-quality results that consistently match the input sketch throughout the sketching process. The image depicts (a) the final user sketches, (b) the guidance in detail mode (colored lines represent the semantically segmented parts), and (c) the generated color drawings from (a) after the final reference image selection.

Credit: Haoran Xie from JAIST.




Ishikawa, Japan -- Anime, the Japanese art of animation, comprises hand-drawn sketches in an abstract form, with unique characteristics and exaggerations of real-life subjects. While generative artificial intelligence (AI) has found use in content creation such as anime portraits, using it to augment human creativity and guide freehand drawing remains challenging. The primary challenge lies in generating suitable reference images that correspond to the incomplete, abstract strokes made during freehand drawing; such strokes often offer too little information for generative AI to predict the final shape of the drawing.

To tackle this problem, a research team from the Japan Advanced Institute of Science and Technology (JAIST) and Waseda University in Japan sought to develop a novel generative AI tool that offers progressive drawing assistance and helps generate anime portraits from freehand sketches. The tool is based on a sketch-to-image (S2I) deep learning framework that matches raw sketches with latent vectors of the generative model. It employs a two-stage training strategy built on the pre-trained Style Generative Adversarial Network (StyleGAN), a state-of-the-art generative model that uses adversarial training to generate new images.
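For readers who want a concrete picture of such a pipeline, the sketch below shows, in PyTorch, how a sketch encoder of this kind could map a line drawing into StyleGAN's extended latent space before a frozen generator synthesizes the portrait. The encoder architecture, the SketchEncoder name, and the latent dimensions are illustrative assumptions, not the authors' implementation; the pre-trained generator is only referenced in a comment.

    # Minimal sketch-to-image inference outline (illustrative only, not the published code).
    import torch
    import torch.nn as nn

    class SketchEncoder(nn.Module):
        """Maps a one-channel line drawing to a stack of W+ latent codes (assumed layout)."""
        def __init__(self, num_ws=18, w_dim=512):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, num_ws * w_dim)
            self.num_ws, self.w_dim = num_ws, w_dim

        def forward(self, sketch):
            w_plus = self.head(self.backbone(sketch))
            return w_plus.view(-1, self.num_ws, self.w_dim)

    encoder = SketchEncoder()
    sketch = torch.rand(1, 1, 256, 256)       # placeholder for an incomplete freehand sketch
    w_plus = encoder(sketch)                  # latent codes matched to the sketch
    # portrait = stylegan_generator.synthesis(w_plus)  # a pre-trained generator would be used here
    print(w_plus.shape)                       # torch.Size([1, 18, 512])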

The team, led by Dr. Zhengyu Huang from JAIST and including Associate Professor Haoran Xie, Professor Kazunori Miyata, and Lecturer Tsukasa Fukusato from Waseda University, proposed a novel “stroke-level disentanglement”, a strategy that associates the input strokes of a freehand sketch with edge-related attributes in the latent structural code of StyleGAN. This approach allows users to manipulate the attribute parameters, giving them greater autonomy over the properties of the generated images. Dr. Huang says, “We introduced an unsupervised training strategy for stroke-level disentanglement in StyleGAN, which enables the automatic matching of rough sketches with sparse strokes to the corresponding local parts in anime portraits, all without the need for semantic labels.”
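As a toy illustration of what such disentanglement makes possible, the snippet below swaps a subset of the extended latent codes, standing in for the layers tied to one facial part, while leaving the rest untouched. The layer indices and the idea that they govern eye contours are hypothetical assumptions for illustration.

    import torch

    num_ws, w_dim = 18, 512
    w_plus = torch.randn(1, num_ws, w_dim)      # latent codes for the current sketch
    eye_codes = torch.randn(1, num_ws, w_dim)   # codes inferred from newly drawn eye strokes (hypothetical)

    edited = w_plus.clone()
    edited[:, 4:8, :] = eye_codes[:, 4:8, :]    # hypothetical layer range tied to eye contours
    # Re-synthesizing from `edited` would alter only the corresponding facial region.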

This study will be presented at ACM SIGGRAPH 2023, the premier conference for computer graphics and interactive techniques and the only conference in these research fields with a CORE A* ranking.

Regarding the development of the tool, Prof. Xie adds, “We first trained an image encoder using a pre-trained StyleGAN model as a teacher encoder. In the second stage, we simulated the drawing process of generated images without additional data to train the sketch encoder for incomplete progressive sketches. This helped us generate high-quality portrait images that align with the disentangled representations of the teacher encoder.”
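A rough outline of that second training stage, under the assumption of a StyleGAN-like generator exposing mapping and synthesis calls, might look as follows. The helper callables extract_lines and simulate_partial, like the function itself, are hypothetical placeholders for illustration, not the published training code.

    import torch
    import torch.nn.functional as F

    def train_sketch_encoder(sketch_encoder, teacher_encoder, generator,
                             extract_lines, simulate_partial, optimizer,
                             w_dim=512, batch_size=4, steps=1000):
        """Stage-2 distillation sketch: all helper callables are placeholders."""
        teacher_encoder.eval()                              # frozen image encoder from stage 1
        for _ in range(steps):
            z = torch.randn(batch_size, w_dim)
            w_plus = generator.mapping(z)                   # assumed StyleGAN-style mapping API
            image = generator.synthesis(w_plus)             # synthetic anime portrait as training data
            full_sketch = extract_lines(image)              # line drawing extracted from the portrait
            partial = simulate_partial(full_sketch)         # drop strokes to mimic drawing progress

            with torch.no_grad():
                target = teacher_encoder(image)             # disentangled latent target from the teacher
            pred = sketch_encoder(partial)
            loss = F.mse_loss(pred, target)                 # align the incomplete sketch with the teacher

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()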

To further highlight the effectiveness and usability of AniFaceDrawing in aiding users with anime portrait creation, the team conducted a user study. They invited 15 graduate students to draw digital freehand anime-style portraits using the AniFaceDrawing tool, with the option to switch between rough and detailed guidance modes for the line art. While the former provided prompts for specific facial parts, the latter provided prompts for the full-face portrait based on the user’s drawing progress. Participants could pin the generated guidance once it matched their expectations and then further refine their input sketch. The tool also allowed participants to select a reference image to generate a color portrait from their input sketch. The team then evaluated user satisfaction and guidance matching through a survey.
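The interaction flow described above can be summarized in a short outline; everything in it, including the event names and helper functions, is a hypothetical simplification of the study setup rather than the actual interface code.

    def drawing_session(next_event, encode_sketch, render_guidance, pick_reference, render_color):
        """Hypothetical outline of the assisted drawing loop; every callable is a placeholder."""
        strokes, guidance, pinned = [], None, None
        while True:
            event = next_event()                    # new strokes, 'pin', or 'finish'
            if event == 'finish':
                break
            if event == 'pin':
                pinned = guidance                   # freeze guidance once it matches expectations
            else:
                strokes.extend(event)
                latent = encode_sketch(strokes)     # map the incomplete sketch to latent codes
                guidance = render_guidance(latent, pinned)
        reference = pick_reference(guidance)        # user chooses a reference image
        return render_color(strokes, reference)     # final colored anime portrait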

The team noted that the system consistently provided high-quality facial guidance and effectively supported the creation of anime-style portraits, not only enhancing user sketches but also generating the desired corresponding colored images. Prof. Fukusato remarks, “Our system could successfully transform the user’s rough sketches into high-quality anime portraits. The user study indicated that even novices could make reasonable sketches with the help of the system and end up with high-quality color art drawings.”

“Our generative AI framework enables users, regardless of their skill level and experience, to create professional anime portraits even from incomplete drawings. Our approach consistently produces high-quality image generation results throughout the creation process, regardless of the drawing order or how poor the initial sketches are,” summarizes Prof. Miyata.

In the long run, these findings can help democratize AI technology and assist users with creative tasks, thereby augmenting their creative capacity without technological barriers.

###

 

Reference

Title of original paper:

AniFaceDrawing: Anime Portrait Exploration during Your Sketching

Authors:

Zhengyu Huang, Haoran Xie, Tsukasa Fukusato, Kazunori Miyata

Conference:

ACM SIGGRAPH 2023

Project:

http://www.jaist.ac.jp/~xie/AniFaceDrawing.html

Video:

https://youtu.be/GcL67h8QEOY

DOI:

https://doi.org/10.1145/3588432.3591548

                                   

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate school in Japan. After 30 years of steady progress, JAIST has become one of Japan’s top-ranking universities. JAIST has multiple satellite campuses and strives to foster capable leaders with a state-of-the-art education system where diversity is key; about 40% of its alumni are international students. The university has a unique style of graduate education based on a carefully designed coursework-oriented curriculum to ensure that its students have a solid foundation to conduct cutting-edge research. JAIST also works closely with both local and overseas communities by promoting industry–academia collaborative research.

 

About Associate Professor Haoran Xie from Japan Advanced Institute of Science and Technology, Japan

Dr. Haoran Xie is an Associate Professor at the Japan Advanced Institute of Science and Technology (JAIST), Japan. With a research career spanning over a decade, Dr. Xie has over 100 publications to his credit and holds a Ph.D. in Computer Graphics from JAIST. His research interests focus on User Interfaces for Augmented Intelligence, especially for content creation, machine learning, human augmentation, and other artificial intelligence (AI)-related applications. Prof. Xie’s work has garnered him many academic awards, including several Best Paper Awards at international conferences and the FUNAI Research Award for Young Scientists. His work has been reported by various media outlets, including Tech Xplore, China Science Daily, Nikkan Kogyo Shimbun, and ITmedia NEWS.

 

Funding information

This research was supported by the JAIST Research Fund, the Kayamori Foundation of Informational Science Advancement, and JSPS KAKENHI Grants JP20K19845 and JP19K20316.
