Abstract
A Chinese calligraphy copybook typically contains only a limited number of characters, far fewer than the full character set needed for typesetting. There is therefore a need to build complete calligraphy font libraries in the styles of well-known calligraphers. This paper proposes an end-to-end network that generates characters in a specific calligraphy style. Specifically, a style transfer network is designed to transfer the style of the characters, and a content supplement network is designed to capture the details of the stylized strokes. Our model can generate high-quality calligraphy images without manually annotated data. To evaluate the generated calligraphy styles, a new dataset is constructed for experimental comparison between our method and two baseline methods. Moreover, a user study evaluates the generated calligraphy from a visual perspective: when participants were asked to distinguish real calligraphy from generated samples, their accuracy was 53.5%, close to the 50% chance level of random guessing. The results show that the calligraphy generated by our model is almost indistinguishable from the original works.
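The abstract does not spell out the architecture, but the two-branch idea it describes can be illustrated with a minimal PyTorch sketch: a style transfer branch and a content supplement branch each encode the input glyph, and a shared decoder fuses them into the stylized output. All layer sizes, the fusion by channel concatenation, and the `TwoBranchGenerator` name are illustrative assumptions, not the authors' published design.

```python
# Minimal sketch of a two-branch generator: a style transfer branch plus a
# content supplement branch, fused by a shared decoder. The exact layer
# configuration is a hypothetical stand-in for the paper's architecture.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Downsampling conv + instance norm + ReLU, a common style-transfer motif.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


def deconv_block(in_ch, out_ch):
    # Upsampling transposed conv + instance norm + ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoBranchGenerator(nn.Module):
    """Hypothetical generator: style branch + content branch -> shared decoder."""

    def __init__(self):
        super().__init__()
        # Style transfer branch: encodes the source glyph (1-channel image).
        self.style_enc = nn.Sequential(conv_block(1, 64), conv_block(64, 128))
        # Content supplement branch: same shape, focuses on stroke detail.
        self.content_enc = nn.Sequential(conv_block(1, 64), conv_block(64, 128))
        # Decoder fuses the two branches (concatenated along channels).
        self.decoder = nn.Sequential(
            deconv_block(256, 64),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # output in [-1, 1], matching a normalized glyph image
        )

    def forward(self, x):
        fused = torch.cat([self.style_enc(x), self.content_enc(x)], dim=1)
        return self.decoder(fused)


if __name__ == "__main__":
    g = TwoBranchGenerator()
    glyph = torch.randn(1, 1, 64, 64)  # one 64x64 grayscale character image
    print(g(glyph).shape)  # torch.Size([1, 1, 64, 64])
```

In a GAN setup like the one the abstract implies, this generator would be trained against a discriminator on pairs of source glyphs and calligraphy samples; that training loop is omitted here.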
Cite this article
Zhou, P., Zhao, Z., Zhang, K. et al. An end-to-end model for Chinese calligraphy generation. Multimed Tools Appl 80, 6737–6754 (2021). https://doi.org/10.1007/s11042-020-09709-5