Dual Discriminator GAN: Restoring Ancient Yi Characters

Published: 14 March 2022

Abstract

In China, many ancient Yi books are seriously damaged, and because experts in ancient Yi are scarce, their restoration is progressing very slowly. Artificial intelligence has proven successful for image and text processing, which makes the automatic restoration of ancient books feasible. In this article, a generative adversarial network with a dual discriminator (DDGAN) is designed to restore incomplete characters in ancient Yi literature. The DDGAN combines a deep convolutional generative adversarial network with an ancient Yi comparison discriminator. Over two training stages, it iteratively optimizes the character generation network to obtain a text generator, and the DDGAN model is further optimized according to the loss of the comparison discriminator. The trained model can generate characters that restore missing strokes in ancient Yi text. Experiments show that the proposed method achieves a restoration rate of 77.3% when no more than one third of a character is missing. This work is effective for the protection of ancient Yi books.
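The abstract describes a generator trained against two discriminators: a standard adversarial (real/fake) discriminator and a comparison discriminator that scores a generated glyph against its reference character. As a rough illustration of how such a dual objective can be combined, here is a minimal Python sketch; the function name, the argument names, and the weighting scheme (`lam`) are assumptions for illustration, not the authors' implementation.

```python
import math

def generator_loss(adv_scores, cmp_scores, lam=0.5):
    """Combined generator loss for a dual-discriminator GAN (illustrative sketch).

    adv_scores: adversarial discriminator outputs for generated glyphs, each in (0, 1).
    cmp_scores: comparison discriminator outputs measuring how closely each
                generated glyph matches its reference character, each in (0, 1).
    lam:        hypothetical weight balancing the two terms.
    """
    eps = 1e-8  # guard against log(0)
    # Standard non-saturating GAN generator term: push adversarial scores toward 1.
    adv = -sum(math.log(s + eps) for s in adv_scores) / len(adv_scores)
    # Comparison term: push similarity scores toward 1 as well.
    cmp_term = -sum(math.log(s + eps) for s in cmp_scores) / len(cmp_scores)
    return adv + lam * cmp_term
```

In this formulation, both discriminators contribute a cross-entropy-style penalty, so glyphs that fool the adversarial discriminator but do not resemble the target character still incur a loss, which is the role the comparison discriminator plays in the described pipeline.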



• Published in

ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 4
July 2022, 464 pages
ISSN: 2375-4699
EISSN: 2375-4702
DOI: 10.1145/3511099

            ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 14 March 2022
            • Accepted: 1 October 2021
            • Revised: 1 September 2021
            • Received: 1 June 2021
Published in TALLIP Volume 21, Issue 4

            Qualifiers

            • research-article
            • Refereed
