Learning to Shadow Hand-drawn Sketches

Qingyuan Zheng*1, Zhuoru Li*2, Adam W. Bargteil1

1University of Maryland, Baltimore County   2ProjectHAT

1{qing3, adamb}@umbc.edu

*Equal contribution

CVPR 2020 (Oral paper)

[code] [demo] [bibtex]

paper [full size] [small size]

Abstract

We present a fully automatic method to generate detailed and accurate artistic shadows from pairs of line drawing sketches and lighting directions. We also contribute a new dataset of one thousand examples of pairs of line drawings and shadows that are tagged with lighting directions. Remarkably, the generated shadows quickly communicate the underlying 3D structure of the sketched scene. Consequently, the shadows generated by our approach can be used directly or as an excellent starting point for artists. We demonstrate that the deep learning network we propose takes a hand-drawn sketch, builds a 3D model in latent space, and renders the resulting shadows. The generated shadows respect the hand-drawn lines and underlying 3D space and contain sophisticated and accurate details, such as self-shadowing effects. Moreover, the generated shadows contain artistic effects, such as rim lighting or halos appearing from back lighting, that would not be achievable with traditional 3D rendering methods.


Overview

The cube shows how we denote 26 lighting directions.

The user specifies the light position with a three-digit string. The first digit corresponds to the lighting direction (1-8), the second to the depth plane (1-3), and the third digit is 0, except for two special codes: 001 (light directly in front) and 002 (light directly behind). This results in 8 * 3 + 2 = 26 lighting directions.
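The encoding above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name `code_to_direction` and the exact angles and depth offsets assigned to each digit are assumptions for clarity, not the paper's definition, which places the 26 directions on a cube around the subject.

```python
import math

def code_to_direction(code: str):
    """Map a three-digit lighting code to a unit 3D direction vector.

    The angle and depth values below are illustrative assumptions;
    the paper defines 26 discrete directions on a cube.
    """
    if code == "001":                      # light directly in front
        return (0.0, 0.0, 1.0)
    if code == "002":                      # light directly behind
        return (0.0, 0.0, -1.0)
    d, plane = int(code[0]), int(code[1])
    assert 1 <= d <= 8 and 1 <= plane <= 3 and code[2] == "0"
    theta = (d - 1) * math.pi / 4          # 8 directions, 45 degrees apart
    z = {1: 0.5, 2: 0.0, 3: -0.5}[plane]  # assumed depth per plane
    x, y = math.cos(theta), math.sin(theta)
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

# Enumerate every valid code: 8 directions x 3 planes + 2 special codes.
codes = [f"{d}{p}0" for d in range(1, 9) for p in range(1, 4)] + ["001", "002"]
print(len(codes))  # 26
```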

Network Architecture.


Gallery

Example results. Our system is able to produce binary shadows and soft shadows.

Though our network is trained with a discrete set of 26 lighting directions, the lighting direction is provided to the network as floating-point values in [-1, 1]³, allowing for the generation of shadows from arbitrary light locations. Intuitively, our network learns a continuous representation of lighting direction from the discrete set of examples.
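Because the network accepts any direction in [-1, 1]³, one way to query in-between light positions is to interpolate between two of the discrete directions and renormalize. The snippet below is a minimal sketch of that idea; the helper names (`normalize`, `lerp_direction`) and the example vectors are illustrative assumptions, not part of the paper's code.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lerp_direction(a, b, t):
    """Linearly blend two light directions, then renormalize so the
    result is again a valid unit direction in [-1, 1]^3."""
    return normalize(tuple((1 - t) * x + t * y for x, y in zip(a, b)))

# Example: blend a side light with a front light to get an
# intermediate direction the network was never explicitly trained on.
side = normalize((-1.0, 0.0, 0.5))
front = (0.0, 0.0, 1.0)
mid = lerp_direction(side, front, 0.5)
```

The blended `mid` vector can then be fed to the network exactly like one of the 26 discrete directions.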


Dataset

[Download] (available soon)

Samples from our dataset.

Our dataset comprises 1,160 sketch/shadow pairs covering a variety of lighting directions and subjects. By lighting direction, there are 372 front-lighting, 506 side-lighting, 111 back-lighting, 85 center-back, and 86 center-front pairs. By subject, there are 867 single-person, 56 multi-person, 177 body-part, and 60 mecha pairs.


Artistic Control

Combining our shadows with color. (a) Input sketch. (b) Our shadow with lighting direction 710. (c) Our shadows under complex lighting conditions, created by compositing shadows from 001, 730, and 210. (d) Our shadows composited from lighting directions 001, 210, and 220 with dots and soft shadow to produce a manga style. (e) Sketch colorized with commercial software. (f) Composite of (e) and (b). (g) Composite of (e) and (c). (h) Original artist's image. © nico-opendata


Virtual Lighting

Examples of our shadowing system applied to artistic line drawings.

Antoine Thomeguex

Kabuki Actor Segawa Kikunojo III as the Shirabyoshi Hisakata Disguised as Yamato Manzai

Jardin de Paris


Results from 26 discrete lighting directions.


Note: an online demo is coming soon.