3D human pose estimation from sketches has broad applications in computer animation and film production. Unlike traditional human pose estimation, this task presents unique challenges due to the abstract and disproportionate nature of sketches. Previous sketch-to-pose methods, constrained by the lack of large-scale sketch-3D pose annotations, primarily relied on optimization with heuristic rules, an approach that is both time-consuming and limited in generalizability. To address these challenges, we propose a novel approach leveraging a "learn from synthesis" strategy. First, a diffusion model is trained to synthesize sketch images from 2D poses projected from 3D human poses, mimicking the disproportionate human structures found in sketches. This process enables the creation of a synthetic dataset, SKEP-120K, consisting of 120K accurate sketch-3D pose annotation pairs across various sketch styles. Building on this synthetic dataset, we introduce an end-to-end data-driven framework for estimating human poses and shapes from diverse sketch styles. Our framework combines existing 2D pose detectors and generative diffusion priors for sketch feature extraction with a feed-forward neural network for efficient 3D pose estimation. Multiple heuristic loss functions are incorporated to guarantee geometric coherence between the derived 3D poses and the detected 2D poses while preserving accurate self-contacts. Qualitative, quantitative, and subjective evaluations collectively show that our model substantially surpasses previous methods in both estimation accuracy and speed for sketch-to-pose tasks.
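The geometric-coherence term mentioned above can be illustrated with a weak-perspective 2D reprojection loss. The snippet below is a minimal PyTorch sketch under assumed names (reprojection_loss, cam_scale, cam_trans) and an assumed weak-perspective camera model; it is not the paper's exact loss formulation.

    import torch

    def reprojection_loss(joints_3d, keypoints_2d, cam_scale, cam_trans, conf=None):
        # joints_3d: (B, J, 3) predicted SMPL joints; keypoints_2d: (B, J, 2) detected 2D pose.
        # cam_scale: (B, 1) and cam_trans: (B, 2) form an assumed weak-perspective camera.
        # conf: optional (B, J) keypoint confidences from the 2D detector.
        projected = cam_scale.unsqueeze(-1) * joints_3d[..., :2] + cam_trans.unsqueeze(1)
        err = (projected - keypoints_2d).norm(dim=-1)  # per-joint pixel error, shape (B, J)
        if conf is not None:
            err = err * conf  # down-weight uncertain detections
        return err.mean()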
Overall Method. Given a sketch image as input, the network predicts 3D human poses represented by SMPL parameters. The overall network consists of three modules: (i) the 2D guidance extractor, (ii) the sketch feature extractor, and (iii) the SMPL regressor.
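A minimal sketch of how these three modules could be wired together is shown below, assuming a PyTorch implementation; the module internals, the output parameterization (6D joint rotations plus shape and camera), and all identifiers are illustrative assumptions rather than the actual architecture.

    import torch
    import torch.nn as nn

    class SketchToSMPL(nn.Module):
        # Hypothetical wiring of the three modules; internals are placeholders.
        def __init__(self, guidance_extractor, feature_extractor, feat_dim=1024, num_kpts=24):
            super().__init__()
            self.guidance_extractor = guidance_extractor  # (i) off-the-shelf 2D pose detector
            self.feature_extractor = feature_extractor    # (ii) diffusion-prior sketch encoder
            self.regressor = nn.Sequential(               # (iii) feed-forward SMPL regressor
                nn.Linear(feat_dim + num_kpts * 2, 1024), nn.ReLU(),
                nn.Linear(1024, 24 * 6 + 10 + 3),         # assumed: 6D rotations, shape betas, camera
            )

        def forward(self, sketch):
            kpts_2d = self.guidance_extractor(sketch)     # (B, num_kpts, 2) detected 2D joints
            feats = self.feature_extractor(sketch)        # (B, feat_dim) sketch features
            out = self.regressor(torch.cat([feats, kpts_2d.flatten(1)], dim=1))
            pose, betas, cam = out[:, :144], out[:, 144:154], out[:, 154:]
            return pose, betas, cam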
Dataset Creation Pipeline. Four stages are involved: (i) generating diverse 3D poses (as SMPL); (ii) adding random biases to bone lengths and projecting them to 2D poses; (iii) generating diverse text guidance; and (iv) training a text-conditioned image generator for sketch synthesis.
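Stage (ii) can be illustrated with the short Python sketch below, which perturbs bone lengths along a kinematic tree and projects the result with a pinhole camera; the function names, the perturbation range, and the camera parameters are assumptions, not the pipeline's exact implementation.

    import numpy as np

    def bias_bone_lengths(joints_3d, parents, scale_range=(0.7, 1.3), rng=np.random):
        # joints_3d: (J, 3) joint positions; parents[j] is the parent index (-1 for the root).
        # Assumes parents appear before their children in the joint ordering (as in SMPL).
        out = joints_3d.copy()
        for j, p in enumerate(parents):
            if p < 0:
                continue
            bone = joints_3d[j] - joints_3d[p]
            # Randomly rescale each bone to mimic the disproportionate anatomy of sketches,
            # then re-attach the child to its (already updated) parent.
            out[j] = out[p] + rng.uniform(*scale_range) * bone
        return out

    def project_to_2d(joints_3d, focal=1000.0, center=(256.0, 256.0), depth=5.0):
        # Simple pinhole projection of (J, 3) joints placed `depth` meters in front of the camera.
        z = joints_3d[:, 2:3] + depth
        return focal * joints_3d[:, :2] / z + np.asarray(center)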
@article{wang2025sketch2posenet,
author = {Li Wang and Yiyu Zhuang and Yanwen Wang and Xun Cao and Chuan Guo and Xinxin Zuo and Hao Zhu},
title = {Sketch2PoseNet: Efficient and Generalized Sketch to 3D Human Pose Prediction},
journal = {SIGGRAPH Asia},
year = {2025},
}