DOI: 10.1145/3319535.3354261
Research article, CCS Conference Proceedings

Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment

Published: 06 November 2019

Abstract

The wide application of deep learning techniques has raised new security concerns about training data and test data. In this work, we investigate the model inversion problem in adversarial settings, where the adversary aims to infer information about the target model's training data and test data from the model's prediction values. We develop a solution that trains a second neural network to act as the inverse of the target model and perform the inversion. The inversion model can be trained with only black-box access to the target model. We propose two main techniques for training the inversion model in adversarial settings. First, we leverage the adversary's background knowledge to compose an auxiliary set for training the inversion model, which does not require access to the original training data. Second, we design a truncation-based technique that aligns the inversion model to enable effective inversion of the target model from the partial predictions the adversary obtains on the victim user's data. We systematically evaluate our approach across machine learning tasks and model architectures on multiple image datasets. We also confirm our results on Amazon Rekognition, a commercial prediction API that offers "machine learning as a service". We show that even with partial knowledge of the black-box model's training data, and with only partial prediction values, our inversion approach can still perform accurate inversion of the target model and outperforms previous approaches.
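To make the two techniques concrete, the sketch below illustrates the general idea in PyTorch: a decoder-style inversion network is trained on an auxiliary dataset drawn from the adversary's background knowledge, the target classifier is queried only as a black box, and its prediction vectors are truncated to the top-k entries before being fed to the inversion model so that training matches the partial predictions available at attack time. This is a minimal illustration under stated assumptions, not the authors' implementation; the architecture, image size, truncation size, and helper names such as InversionNet and truncate_predictions are illustrative.

```python
# Minimal sketch (not the paper's released code): train an inversion network that
# maps a target classifier's truncated prediction vector back to an input image.
# Architecture, 32x32 image size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def truncate_predictions(probs, k):
    """Keep only the top-k probabilities (what the adversary observes), zero the
    rest, and renormalize, so training inputs match the partial predictions
    returned at attack time."""
    vals, idx = probs.topk(k, dim=1)
    truncated = torch.zeros_like(probs).scatter_(1, idx, vals)
    return truncated / truncated.sum(dim=1, keepdim=True)


class InversionNet(nn.Module):
    """Transposed-convolution decoder: prediction vector -> reconstructed image."""

    def __init__(self, num_classes=10, img_channels=1):
        super().__init__()
        self.fc = nn.Linear(num_classes, 128 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 4x4  -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),    # 8x8  -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, pred_vec):
        x = self.fc(pred_vec).view(-1, 128, 4, 4)
        return self.deconv(x)


def train_inversion(target_model, aux_loader, num_classes=10, top_k=3,
                    epochs=10, device="cpu"):
    """Train the inversion model on an auxiliary set drawn from the adversary's
    background knowledge; the target model is queried only as a black box."""
    inv = InversionNet(num_classes).to(device)
    opt = torch.optim.Adam(inv.parameters(), lr=1e-3)
    target_model.eval()
    for _ in range(epochs):
        for x, _ in aux_loader:              # auxiliary images; their labels are unused
            x = x.to(device)
            with torch.no_grad():            # black-box query to the target classifier
                probs = F.softmax(target_model(x), dim=1)
            probs = truncate_predictions(probs, top_k)
            x_hat = inv(probs)
            loss = F.mse_loss(x_hat, x)      # pixel-wise reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return inv
```

At attack time, the adversary would feed the (possibly truncated) prediction vector obtained on a victim's input to the trained inversion model to reconstruct that input.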

Supplementary Material

WEBM File (p225-yang.webm)




Published In

CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
November 2019
2755 pages
ISBN:9781450367479
DOI:10.1145/3319535
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. deep learning
  2. model inversion
  3. neural networks
  4. privacy
  5. security

Qualifiers

  • Research-article


Conference

CCS '19

Acceptance Rates

CCS '19 paper acceptance rate: 149 of 934 submissions (16%)
Overall acceptance rate: 1,261 of 6,999 submissions (18%)


