Identifying Privacy Vulnerabilities in Key Stages of Computer Vision, Natural Language Processing, and Voice Processing Systems

Authors

  • Shivansh Khanna, School of Information Sciences, University of Illinois at Urbana-Champaign

Keywords:

Artificial Intelligence, Computer Vision, Data Collection, Natural Language Processing, Privacy Concerns, Privacy Risks, Voice Processing

Abstract

The core of many Artificial Intelligence algorithms lies in their requirement for extensive datasets, often comprising personal information, to function effectively. This necessity raises immediate concerns about potential infringements on individual privacy. This research analyzes the privacy concerns and risks associated with three major subdomains in the field of Artificial Intelligence (AI): Computer Vision, Natural Language Processing (NLP), and Voice Processing Systems. Each subdomain was broken down into multiple stages to scrutinize the privacy vulnerabilities inherent in each. In Computer Vision, risks range from unauthorized image acquisition to the potential misuse of visual data when integrated with larger platforms. Particular attention is paid to the feature extraction and object detection stages, which can lead to unauthorized profiling or tracking. In the NLP workflow, unauthorized data collection and the risk of data leakage through feature extraction are highlighted. The potential for adversarial attacks during the deployment stage and risks associated with post-deployment monitoring are also examined. Finally, in Voice Processing Systems, the risks tied to unauthorized data collection and the potential identification of individuals through data preprocessing are discussed. Concerns related to human annotators in data annotation and the unintended memorization of specific voice inputs during model training are also explored. Each stage was analyzed in terms of whether it presents a new type of privacy risk or amplifies existing risks. The objective is to provide a structured framework that comprehensively categorizes privacy risks in these AI subdomains, thereby facilitating future research and the development of more secure and privacy-preserving AI technologies.

Author Biography

Shivansh Khanna, School of Information Sciences, University of Illinois at Urbana-Champaign


Published

2021-01-05

How to Cite

Khanna, S. (2021). Identifying Privacy Vulnerabilities in Key Stages of Computer Vision, Natural Language Processing, and Voice Processing Systems. International Journal of Business Intelligence and Big Data Analytics, 4(1), 1–11. Retrieved from https://research.tensorgate.org/index.php/IJBIBDA/article/view/66