
An end-to-end model for Chinese calligraphy generation

Published in: Multimedia Tools and Applications

Abstract

A Chinese calligraphy copybook usually contains only a limited number of characters, far fewer than the full character set needed for typesetting. There is therefore a need to build complete calligraphy font libraries in the styles of well-known calligraphers. This paper proposes an end-to-end network that generates characters in a specified calligraphy style. Specifically, a style transfer network transfers the style of the characters, and a content supplement network captures the fine details of the stylized strokes. The model can generate high-quality calligraphy images without manually annotated data. To evaluate the generated styles, a new dataset is constructed for experimental comparison between our method and two baseline methods. A user study is also conducted to assess the generated calligraphy from a visual perspective: when participants were asked to distinguish real calligraphy from generated samples, the correct rate was 53.5%. These results show that the calligraphy styles generated by our model are almost indistinguishable from the original works.
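To give a rough sense of what the 53.5% figure implies: in a two-class real-vs-generated forced choice, pure guessing yields 50% correct, so the question is whether 53.5% is distinguishable from chance. The sketch below checks this with a one-sample z-test for a proportion; the number of judgments `n` is an assumption for illustration (the abstract does not report the study's sample size), not a figure from the paper.

```python
import math

# Back-of-the-envelope significance check (n is assumed, not from the paper).
# Under pure guessing, the correct-answer proportion has mean p0 = 0.5 and
# standard error sqrt(p0 * (1 - p0) / n).
p_hat = 0.535  # observed correct rate from the user study
p0 = 0.5       # chance level for a two-class forced choice
n = 200        # assumed number of judgments (illustrative only)

se = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se  # one-sample z-statistic for a proportion

print(f"standard error under chance: {se:.4f}")
print(f"z-score of 53.5% vs. chance: {z:.2f}")
# |z| < 1.96 means the rate is not significantly different from guessing at
# the 5% level, which is consistent with the "almost indistinguishable" claim.
```

With the assumed n = 200, z ≈ 0.99, well inside the ±1.96 band; the observed rate would remain within sampling noise of chance for any sample size up to several hundred judgments.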



Author information


Corresponding author

Correspondence to Changbo Wang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

(ZIP 12.7 MB)

About this article

Cite this article

Zhou, P., Zhao, Z., Zhang, K. et al. An end-to-end model for chinese calligraphy generation. Multimed Tools Appl 80, 6737–6754 (2021). https://doi.org/10.1007/s11042-020-09709-5

