SwePub

Results list for search "WFRF:(Torra Vicenç Professor) srt2:(2024)"


  • Results 1-2 of 2
1.
  • Minh-Ha, Le, 1989- (author)
  • Beyond Recognition : Privacy Protections in a Surveilled World
  • 2024
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis addresses the need to balance the use of facial recognition systems with the need to protect personal privacy in machine learning and biometric identification. As advances in deep learning accelerate their evolution, facial recognition systems enhance security capabilities but also risk invading personal privacy. Our research identifies and addresses critical vulnerabilities inherent in facial recognition systems, and proposes innovative privacy-enhancing technologies that anonymize facial data while maintaining its utility for legitimate applications.

      Our investigation centers on the development of methodologies and frameworks that achieve k-anonymity in facial datasets; leverage identity disentanglement to facilitate anonymization; exploit the vulnerabilities of facial recognition systems to underscore their limitations; and implement practical defenses against unauthorized recognition systems. We introduce novel contributions such as AnonFACES, StyleID, IdDecoder, StyleAdv, and DiffPrivate, each designed to protect facial privacy through advanced adversarial machine learning techniques and generative models. These solutions not only demonstrate the feasibility of protecting facial privacy in an increasingly surveilled world, but also highlight the ongoing need for robust countermeasures against the ever-evolving capabilities of facial recognition technology.

      Continuous innovation in privacy-enhancing technologies is required to safeguard individuals from the pervasive reach of digital surveillance and to protect their fundamental right to privacy. By providing open-source, publicly available tools and frameworks, this thesis contributes to the collective effort to ensure that advancements in facial recognition serve the public good without compromising individual rights. Our multi-disciplinary approach bridges the gap between biometric systems, adversarial machine learning, and generative modeling to pave the way for future research in the domain and to support AI innovation where technological advancement and privacy are balanced.
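The k-anonymity goal named in the abstract above can be illustrated with a minimal sketch. This is an assumption for illustration only, not the thesis's AnonFACES method: identities are grouped into clusters of size at least k, so that an anonymized face could plausibly belong to any of the k identities in its cluster.

```python
# Minimal sketch of k-anonymity over identities (illustrative only, not AnonFACES):
# partition identity labels into clusters of size >= k, so any anonymized face
# is attributable to at least k source identities.

def k_anonymous_clusters(identities, k):
    """Partition a list of identity labels into clusters of size >= k."""
    clusters = [identities[i:i + k] for i in range(0, len(identities), k)]
    # A too-small trailing cluster would break k-anonymity; merge it backwards.
    if len(clusters) > 1 and len(clusters[-1]) < k:
        clusters[-2].extend(clusters.pop())
    return clusters

ids = ["alice", "bob", "carol", "dave", "erin"]
k_anonymous_clusters(ids, k=2)
# → [['alice', 'bob'], ['carol', 'dave', 'erin']]
```

Real systems cluster by facial-feature similarity rather than list order; the invariant that matters is that every cluster retains at least k members.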
2.
  • Varshney, Ayush K., 1998- (author)
  • Exploring privacy-preserving models in model space
  • 2024
  • Licentiate thesis (other academic/artistic) abstract
    • Privacy-preserving techniques have become increasingly essential in the rapidly advancing era of artificial intelligence (AI), particularly in areas such as deep learning (DL). A key architecture in DL is the Multilayer Perceptron (MLP) network, a type of feedforward neural network. MLPs consist of at least three layers of nodes: an input layer, hidden layers, and an output layer. Each node, except for the input nodes, is a neuron with a nonlinear activation function. MLPs are capable of learning complex models due to their deep structure and non-linear processing layers. However, the extensive data requirements of MLPs, often including sensitive information, make privacy a crucial concern. Several types of privacy attacks are specifically designed to target DL models like MLPs, potentially leading to information leakage. Therefore, implementing privacy-preserving approaches is crucial to prevent such leaks. Most privacy-preserving methods focus either on protecting privacy at the database level or during inference (output) from the model. Both approaches have practical limitations. In this thesis, we explore a novel privacy-preserving approach for DL models which focuses on choosing anonymous models, i.e., models that can be generated by a set of different datasets. This privacy approach is called Integral Privacy (IP). IP provides a sound defense against Membership Inference Attacks (MIA), which aim to determine whether a sample was part of the training set.

      Considering the vast number of parameters in DL models, searching the model space for recurring models can be computationally intensive and time-consuming. To address this challenge, we present a relaxed variation of IP called Δ-Integral Privacy (Δ-IP), where two models are considered equivalent if their difference is within some threshold Δ. We also highlight the challenge of comparing two deep neural networks (DNNs), particularly when similar layers in different networks may contain neurons that are permutations or combinations of one another. This adds complexity to the concept of IP, as identifying equivalences between such models is not straightforward. In addition, we present a methodology, along with its theoretical analysis, for generating a set of integrally private DL models.

      In practice, data often arrives rapidly and in large volumes, and its statistical properties can change over time. Detecting and adapting to such drifts is crucial for maintaining a model's reliable predictions over time. Many approaches for detecting drift rely on acquiring true labels, which is often infeasible. Simultaneously, this exposes the model to privacy risks, necessitating that drift detection be conducted using privacy-preserving models. We present a methodology that detects drifts based on uncertainty in predictions from an ensemble of integrally private MLPs. This approach can detect drifts even without access to true labels, although it assumes they are available upon request. Furthermore, the thesis also addresses the membership inference concern in federated learning for computer vision models. Federated Learning (FL) was introduced as a privacy-preserving paradigm in which users collaborate to train a joint model without sharing their data. However, recent studies have indicated that the shared weights in FL models encode the data they are trained on, leading to potential privacy breaches. As a solution to this problem, we present a novel integrally private aggregation methodology for federated learning along with its convergence analysis.
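The Δ-IP relaxation described in the abstract above can be sketched in a few lines. This is an illustrative assumption, not the thesis implementation: two models count as equivalent when every corresponding parameter differs by at most Δ, which makes recurring models far easier to find in model space than requiring exact equality.

```python
# Illustrative sketch of Δ-equivalence between models (not the thesis code):
# two parameter vectors are Δ-equivalent if they differ by at most delta elementwise.

def delta_equivalent(params_a, params_b, delta):
    """Return True if two flat parameter lists are within delta, elementwise."""
    if len(params_a) != len(params_b):
        return False  # different architectures cannot be compared this way
    return all(abs(a - b) <= delta for a, b in zip(params_a, params_b))

# Two hypothetical models trained on different datasets:
m1 = [0.51, -0.20, 1.30]
m2 = [0.50, -0.21, 1.29]

delta_equivalent(m1, m2, delta=0.05)  # → True: the models are Δ-equivalent
delta_equivalent(m1, [0.90, -0.21, 1.29], delta=0.05)  # → False
```

Note that, as the abstract points out, real DNN comparison is harder than this elementwise check, since neurons in corresponding layers may be permutations or combinations of one another.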
