Elegantly Written: Disentangling Writer and Character Styles for Enhancing Online Chinese Handwriting

  • Conference paper
  • Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Electronic writing tools enhance convenience but often sacrifice the readability of handwritten content; balancing high writing efficiency with legible handwriting remains a challenging research task. In this paper, we propose a sequence-based method to beautify users' handwritten traces. Unlike most existing methods, which treat Chinese handwriting as images and therefore cannot reflect the human writing process, we capture individual writing characteristics from a small number of user handwriting trajectories and beautify the user's traces by mimicking both their writing style and their writing process. We explicitly model the styles of the radicals and components shared between the content and reference glyphs, assigning appropriate fine-grained styles to the strokes of the content glyphs through a cross-attention module. Additionally, we find that many style features contribute little to the final stylized result. We therefore decompose the style features into the Cartesian product of single-dimensional variable sets, effectively removing redundant features with limited impact on the stylization while preserving the key style information. Both qualitative and quantitative experiments demonstrate the superiority of our approach.
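To make the two mechanisms in the abstract concrete, below is a minimal sketch (not the authors' released code) of (i) a cross-attention module in which each content-glyph stroke attends over radical/component style features extracted from the user's few reference glyphs, and (ii) a variance-based pruning step standing in, illustratively, for the Cartesian-product decomposition that discards redundant style dimensions. All names, tensor shapes, and the use of `nn.MultiheadAttention` are assumptions for illustration only.

```python
# Illustrative sketch of per-stroke style assignment via cross-attention and
# dimension-wise style pruning; shapes and module choices are assumptions.
import torch
import torch.nn as nn


class StrokeStyleAttention(nn.Module):
    """Content strokes (queries) attend over radical/component style
    features from the user's reference glyphs (keys/values)."""

    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, content_strokes: torch.Tensor, ref_styles: torch.Tensor):
        # content_strokes: (B, n_strokes, d_model); ref_styles: (B, n_refs, d_model)
        styled, weights = self.attn(content_strokes, ref_styles, ref_styles)
        return styled, weights  # per-stroke style features + attention map


def prune_redundant_dims(style: torch.Tensor, keep_ratio: float = 0.5):
    """Keep only the highest-variance style dimensions across references,
    assuming low-variance dimensions carry little writer-specific signal."""
    var = style.var(dim=(0, 1))                    # variance per feature dim
    k = max(1, int(keep_ratio * style.shape[-1]))
    idx = var.topk(k).indices                      # most informative dims
    return style[..., idx], idx


if __name__ == "__main__":
    B, S, R, D = 2, 12, 6, 128                     # batch, strokes, refs, dim
    module = StrokeStyleAttention(D)
    styled, attn = module(torch.randn(B, S, D), torch.randn(B, R, D))
    pruned, kept = prune_redundant_dims(torch.randn(B, R, D))
    print(styled.shape, attn.shape, pruned.shape)  # (2,12,128) (2,12,6) (2,6,64)
```

In this reading, the attention map plays the role of the fine-grained style assignment described in the abstract, while the pruning step mirrors the intuition that many style dimensions contribute minimally and can be dropped without hurting the stylization.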



Acknowledgements

This research was supported by the Dalian Science and Technology Innovation Fund (No. 2023JJGX026) and the Key Laboratory of Informatization of National Education, Ministry of Education (No. EIN2024B002).

Author information

Corresponding author

Correspondence to Cunrui Wang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 64394 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, Y., Khalid, F.B., Wang, L., Zhang, Y., Wang, C. (2025). Elegantly Written: Disentangling Writer and Character Styles for Enhancing Online Chinese Handwriting. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15066. Springer, Cham. https://doi.org/10.1007/978-3-031-73242-3_23

  • DOI: https://doi.org/10.1007/978-3-031-73242-3_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73241-6

  • Online ISBN: 978-3-031-73242-3

  • eBook Packages: Computer Science (R0)
