Abstract
This article considers the task of text style transfer: transforming a sentence from one style into another while preserving its style-independent content. A dominant approach to text style transfer is to learn a good content representation of the text, define a fixed vector for each style, and recombine the two to generate text in the target style. In practice, however, a given style can be conveyed by many different words and from many different aspects. Representing a style with a single fixed vector is therefore inefficient: it weakens the representation power of the style vector and limits the diversity of generated text within the same style. To address this problem, we propose a novel neural generative model called Adversarial Separation Network (ASN), which learns the content and style vectors jointly; the learnt vectors have strong representation power and good interpretability. In our method, adversarial learning is used to strengthen the model's ability to disentangle the two factors. To evaluate our method, we conduct experiments on two benchmark datasets. Experimental results show that our method performs style transfer better than strong baseline systems. We also demonstrate the strong interpretability of the learnt latent vectors.
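The core idea of the abstract, disentangling a content vector from a style vector with an adversarial objective, can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: the linear "encoders", the discriminator, and all dimensions are hypothetical stand-ins. A style discriminator tries to predict the style label from the content vector alone, and the encoder's objective subtracts that adversarial loss (alongside a reconstruction term), so that a well-trained content vector carries no style information.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Hypothetical linear encoder standing in for ASN's content/style encoders.
    return np.tanh(x @ W)

def style_discriminator_loss(z_content, labels, v):
    # Binary cross-entropy of a linear discriminator that tries to
    # recover the style label from the content vector alone.
    logits = z_content @ v
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))

# Toy batch: 8 "sentence embeddings" of dimension 16 with binary style labels.
x = rng.normal(size=(8, 16))
labels = rng.integers(0, 2, size=(8,)).astype(float)

W_content = rng.normal(scale=0.1, size=(16, 4))  # content encoder weights
W_style = rng.normal(scale=0.1, size=(16, 4))    # style encoder weights
v_disc = rng.normal(scale=0.1, size=(4,))        # style discriminator weights
W_dec = rng.normal(scale=0.1, size=(8, 16))      # decoder weights

z_content = encode(x, W_content)
z_style = encode(x, W_style)

# Reconstruction from the recombined [content; style] representation.
recon_loss = np.mean((np.concatenate([z_content, z_style], axis=1) @ W_dec - x) ** 2)
adv_loss = style_discriminator_loss(z_content, labels, v_disc)

# The encoder MINIMIZES reconstruction but MAXIMIZES the discriminator's
# loss on z_content (hence the minus sign), pushing style out of content.
encoder_objective = recon_loss - 1.0 * adv_loss
```

In a full model the discriminator and encoder would be updated alternately (the discriminator minimizing `adv_loss`, the encoder the combined objective above); only the loss composition is shown here.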