Abstract
Both voice conversion and hidden Markov model (HMM)-based speech synthesis can be used to produce artificial voices of a target speaker, and both pose a serious threat to speaker verification (SV) systems. To enhance the security of SV systems, techniques for detecting converted or synthesized speech must be considered. During voice conversion and HMM-based synthesis, a speech reconstruction step transforms a set of acoustic parameters back into a waveform; identifying reconstructed speech can therefore distinguish converted or synthesized speech from natural human speech. Several works on such identification have been reported, achieving equal error rates (EERs) below 5%. However, in cross-database evaluations over different speech databases, we find that the EERs of several test cases exceed 10%, so the robustness of detection algorithms across databases needs to be improved. In this article, we propose an algorithm to identify reconstructed speech. Three speech databases and two reconstruction methods are considered in our work, a setting not addressed in previously reported studies. A high-dimensional data visualization approach is used to analyze the effect of speech reconstruction on the Mel-frequency cepstral coefficients (MFCCs) of speech signals, and Gaussian mixture model (GMM) supervectors of MFCCs are used as acoustic features. A set of commonly used classification algorithms is then applied to identify reconstructed speech; based on a comparison among these methods, linear discriminant analysis (LDA) ensemble classifiers are chosen for our algorithm. Extensive experimental results show that the proposed algorithm achieves EERs below 1% in most cases, outperforming the reported state-of-the-art identification techniques.
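GMM supervectors of the kind mentioned above are commonly built by MAP-adapting the means of a universal background model (UBM) to each utterance's MFCC frames and stacking the adapted means into one long vector. The following is a minimal sketch of that idea using scikit-learn's `GaussianMixture`; the component count, relevance factor, and random stand-in features are illustrative assumptions, not the configuration used in the article:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for MFCC frames pooled from background utterances (frames x 13 coefficients).
background = rng.normal(size=(2000, 13))

# Train a small universal background model (UBM); real systems use hundreds of components.
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(background)

def gmm_supervector(mfcc, ubm, relevance=16.0):
    """MAP-adapt the UBM means to one utterance and stack them into a supervector."""
    post = ubm.predict_proba(mfcc)             # frame-level posteriors (T x K)
    n_k = post.sum(axis=0)                     # zeroth-order (soft count) statistics
    f_k = post.T @ mfcc                        # first-order statistics (K x D)
    alpha = (n_k / (n_k + relevance))[:, None] # data-dependent adaptation weights
    ex = np.divide(f_k, n_k[:, None],
                   out=np.zeros_like(f_k), where=n_k[:, None] > 0)
    adapted = alpha * ex + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                     # K*D-dimensional supervector

utterance = rng.normal(size=(300, 13))         # stand-in for one utterance's MFCCs
sv = gmm_supervector(utterance, ubm)
print(sv.shape)                                # (104,) = 8 components * 13 coefficients
```

In a real detection system, the UBM would be trained on MFCCs from a large background corpus, and the resulting per-utterance supervectors would be fed to the classifiers being compared (e.g., the LDA ensemble chosen in the article).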