SwePub
Search the SwePub database


Results list for the search "WFRF:(Torra Vicenç Professor)"

Search: WFRF:(Torra Vicenç Professor)

  • Results 1-6 of 6
1.
  • Minh-Ha, Le, 1989- (author)
  • Beyond Recognition : Privacy Protections in a Surveilled World
  • 2024
  • Doctoral thesis (other academic/artistic), abstract:
    • This thesis addresses the need to balance the use of facial recognition systems against the protection of personal privacy in machine learning and biometric identification. As advances in deep learning accelerate their evolution, facial recognition systems enhance security capabilities but also risk invading personal privacy. Our research identifies and addresses critical vulnerabilities inherent in facial recognition systems, and proposes innovative privacy-enhancing technologies that anonymize facial data while maintaining its utility for legitimate applications. Our investigation centers on the development of methodologies and frameworks that achieve k-anonymity in facial datasets; leverage identity disentanglement to facilitate anonymization; exploit the vulnerabilities of facial recognition systems to underscore their limitations; and implement practical defenses against unauthorized recognition systems. We introduce novel contributions such as AnonFACES, StyleID, IdDecoder, StyleAdv, and DiffPrivate, each designed to protect facial privacy through advanced adversarial machine learning techniques and generative models. These solutions not only demonstrate the feasibility of protecting facial privacy in an increasingly surveilled world, but also highlight the ongoing need for robust countermeasures against the ever-evolving capabilities of facial recognition technology. Continuous innovation in privacy-enhancing technologies is required to safeguard individuals from the pervasive reach of digital surveillance and protect their fundamental right to privacy. By providing open-source, publicly available tools and frameworks, this thesis contributes to the collective effort to ensure that advancements in facial recognition serve the public good without compromising individual rights. Our multi-disciplinary approach bridges the gap between biometric systems, adversarial machine learning, and generative modeling to pave the way for future research in the domain and support AI innovation where technological advancement and privacy are balanced.
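The abstract above mentions achieving k-anonymity in facial datasets. As a rough illustration of that general idea only (not the AnonFACES or StyleID methods from the thesis), the hypothetical Python sketch below applies a simplified MDAV-style microaggregation to face embeddings: records are grouped into clusters of at least k and each is replaced by its cluster centroid, so any released vector is shared by at least k individuals. All names and data are invented for illustration.

```python
import numpy as np

def microaggregate(embeddings: np.ndarray, k: int) -> np.ndarray:
    """Simplified MDAV-style microaggregation: partition the rows into groups
    of at least k and replace each row by its group centroid, so every
    released vector is shared by at least k records."""
    X = embeddings.copy()
    out = np.empty_like(X)
    unassigned = list(range(len(X)))
    while len(unassigned) >= 2 * k:
        centroid = X[unassigned].mean(axis=0)
        # the record farthest from the current centroid seeds the next group
        far = max(unassigned, key=lambda i: np.linalg.norm(X[i] - centroid))
        by_dist = sorted(unassigned, key=lambda i: np.linalg.norm(X[i] - X[far]))
        group = by_dist[:k]
        out[group] = X[group].mean(axis=0)
        unassigned = [i for i in unassigned if i not in group]
    if unassigned:  # the remaining (fewer than 2k) records form one last group
        out[unassigned] = X[unassigned].mean(axis=0)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.normal(size=(20, 8))          # stand-in for 20 face embeddings
    anon = microaggregate(faces, k=5)
    # each anonymized vector is now shared by at least 5 of the 20 "faces"
    print(len(np.unique(anon.round(6), axis=0)), "distinct vectors remain")
```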
2.
  • Anjomshoae, Sule, 1985- (author)
  • Context-based explanations for machine learning predictions
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality from information systems to prevent unintended side effects. For example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed regarding machine-generated decisions. Individuals affected by these decisions can question, confront and challenge the inferences automatically produced by machine learning models. Consequently, such matters necessitate AI systems to be transparent and explainable for various practical applications. Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. Important as this is, existing studies mainly focus on creating mathematically interpretable models or explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users to evaluate the correctness of a model and are often hard to interpret by general users. Given the critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of the existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots. These are known for their ability to communicate their decisions in human-understandable terms. Influenced by that, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction. Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions or as a baseline to compare to other explanation techniques, enabling interpretation indicators for experts and non-technical users. The findings would also be of interest to domains using machine learning models for high-stakes decision-making to investigate the practical utility of proposed explanation methods.
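The thesis above concerns explanations of black-box predictions for non-technical users. Purely as a generic illustration of model-agnostic explanation (not the context-based methods proposed in the thesis), the hypothetical sketch below estimates per-feature importance for a single prediction by perturbing each feature and measuring how much the black-box output moves on average.

```python
import numpy as np

def perturbation_importance(predict, x, n_samples=200, noise=0.5, seed=0):
    """Crude, model-agnostic explanation of one prediction from a black-box
    `predict` function: perturb each feature with noise and record how much
    the predicted score shifts on average. Larger shift = more important."""
    rng = np.random.default_rng(seed)
    baseline = predict(x.reshape(1, -1))[0]
    importance = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] += rng.normal(scale=noise, size=n_samples)
        importance[j] = np.mean(np.abs(predict(perturbed) - baseline))
    return importance

if __name__ == "__main__":
    # toy black box: a fixed linear scorer standing in for any opaque model
    weights = np.array([2.0, 0.0, -1.0, 0.5])
    black_box = lambda X: X @ weights
    x = np.array([1.0, 1.0, 1.0, 1.0])
    print(perturbation_importance(black_box, x))  # feature 0 should dominate
```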
3.
  • Khan, Md Sakib Nizam, 1990- (author)
  • Towards Privacy Preserving Intelligent Systems
  • 2023
  • Doctoral thesis (other academic/artistic), abstract:
    • Intelligent systems, i.e., digital systems containing smart devices that can gather, analyze, and act in response to the data they collect from their surrounding environment, have progressed from theory to application, especially in the last decade, thanks to the recent technological advances in sensors and machine learning. These systems can take decisions on users' behalf dynamically by learning their behavior over time. The number of such smart devices in our surroundings is increasing rapidly. Since these devices in most cases handle privacy-sensitive data, privacy concerns are also increasing at a similar rate. However, privacy research has not been in sync with these developments. Moreover, the systems are heterogeneous in nature (e.g., in terms of form factor, energy, processing power, use case scenarios, etc.) and continuously evolving, which makes the privacy problem even more challenging. In this thesis, we identify open privacy problems of intelligent systems and later propose solutions to some of the most prominent ones. We first investigate privacy concerns in the context of data stored on a single smart device. We identify that ownership change of a smart device can leak privacy-sensitive information stored on the device. To solve this, we propose a framework to enhance the privacy of owners during ownership change of smart devices based on context detection and data encryption. Moving from the single-device setting to more complex systems involving multiple devices, we conduct a systematic literature review and a review of commercial systems to identify the unique privacy concerns of home-based health monitoring systems. From the review, we distill a common architecture covering most commercial and academic systems, including an inventory of what concerns they address, their privacy considerations, and how they handle the data. Based on this, we then identify potential privacy intervention points of such systems. For the publication of collected data or a machine-learning model trained on such data, we explore the potential of synthetic data as a tool for achieving a better trade-off between privacy and utility compared to traditional privacy-enhancing approaches. We perform a thorough assessment of the utility of synthetic tabular data. Our investigation reveals that none of the commonly used utility metrics for assessing how well synthetic data corresponds to the original data can predict whether synthetic data achieves utility similar to the original data for any given univariate or multivariate statistical analysis (when the analysis is not known beforehand). For machine learning-based classification tasks, however, the metric Confidence Interval Overlap shows a strong correlation with how similarly the machine learning models (i.e., trained on synthetic vs. original data) perform. Concerning privacy, we explore membership inference attacks against machine learning models, which aim at finding out whether some (or someone's) particular data was used to train the model. We find from our exploration that training on synthetic data instead of original data can significantly reduce the effectiveness of membership inference attacks. For image data, we propose a novel methodology to quantify, improve, and tune the privacy-utility trade-off of the synthetic image data generation process compared to the traditional approaches. Overall, our exploration in this thesis reveals that there are several open research questions regarding privacy at different phases of the data lifespan of intelligent systems, such as privacy-preserving data storage, possible inferences due to data aggregation, and the quantification and improvement of the privacy-utility trade-off for achieving better utility at an acceptable level of privacy in a data release. The identified privacy concerns and their corresponding solutions presented in this thesis will help the research community to recognize and address remaining privacy concerns in the domain. Solving these concerns will encourage end-users to adopt the systems and enjoy the benefits without having to worry about privacy.
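The abstract above reports that Confidence Interval Overlap (CIO) correlated with how similarly models trained on synthetic versus original data perform. As a minimal sketch assuming one common definition of CIO (the average fraction of each confidence interval covered by their intersection), the hypothetical example below computes the overlap for the mean of a single column; the data and column are made up and this is not the thesis's evaluation code.

```python
import numpy as np

def ci_mean(sample, z=1.96):
    """Approximate 95% confidence interval for the mean of a sample."""
    m = sample.mean()
    half = z * sample.std(ddof=1) / np.sqrt(len(sample))
    return m - half, m + half

def ci_overlap(ci_orig, ci_syn):
    """Confidence Interval Overlap: average fraction of each interval covered
    by their intersection; 1.0 = identical intervals, <= 0 = disjoint."""
    lo = max(ci_orig[0], ci_syn[0])
    hi = min(ci_orig[1], ci_syn[1])
    return 0.5 * ((hi - lo) / (ci_orig[1] - ci_orig[0])
                  + (hi - lo) / (ci_syn[1] - ci_syn[0]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.normal(50, 10, size=1000)    # e.g. an "age" column
    synthetic = rng.normal(51, 11, size=1000)   # the same column, synthesized
    print(ci_overlap(ci_mean(original), ci_mean(synthetic)))
```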
4.
  • Varshney, Ayush K., 1998- (author)
  • Exploring privacy-preserving models in model space
  • 2024
  • Licentiate thesis (other academic/artistic), abstract:
    • Privacy-preserving techniques have become increasingly essential in the rapidly advancing era of artificial intelligence (AI), particularly in areas such as deep learning (DL). A key architecture in DL is the Multilayer Perceptron (MLP) network, a type of feedforward neural network. MLPs consist of at least three layers of nodes: an input layer, hidden layers, and an output layer. Each node, except for input nodes, is a neuron with a nonlinear activation function. MLPs are capable of learning complex models due to their deep structure and non-linear processing layers. However, the extensive data requirements of MLPs, often including sensitive information, make privacy a crucial concern. Several types of privacy attacks are specifically designed to target DL models like MLPs, potentially leading to information leakage. Therefore, implementing privacy-preserving approaches is crucial to prevent such leaks. Most privacy-preserving methods focus either on protecting privacy at the database level or during inference (output) from the model. Both approaches have practical limitations. In this thesis, we explore a novel privacy-preserving approach for DL models which focuses on choosing anonymous models, i.e., models that can be generated by a set of different datasets. This privacy approach is called Integral Privacy (IP). IP provides a sound defense against Membership Inference Attacks (MIA), which aim to determine whether a sample was part of the training set. Considering the vast number of parameters in DL models, searching the model space for recurring models can be computationally intensive and time-consuming. To address this challenge, we present a relaxed variation of IP called Δ-Integral Privacy (Δ-IP), where two models are considered equivalent if their difference is within some threshold Δ. We also highlight the challenge of comparing two DNNs, particularly when similar layers in different networks may contain neurons that are permutations or combinations of one another. This adds complexity to the concept of IP, as identifying equivalences between such models is not straightforward. In addition, we present a methodology, along with its theoretical analysis, for generating a set of integrally private DL models. In practice, data often arrives rapidly and in large volumes, and its statistical properties can change over time. Detecting and adapting to such drifts is crucial for maintaining the model's reliable predictions over time. Many approaches for detecting drift rely on acquiring true labels, which is often infeasible. Simultaneously, this exposes the model to privacy risks, necessitating that drift detection be conducted using privacy-preserving models. We present a methodology that detects drifts based on uncertainty in predictions from an ensemble of integrally private MLPs. This approach can detect drifts even without access to true labels, although it assumes they are available upon request. Furthermore, the thesis also addresses the membership inference concern in federated learning for computer vision models. Federated Learning (FL) was introduced as a privacy-preserving paradigm in which users collaborate to train a joint model without sharing their data. However, recent studies have indicated that the shared weights in FL models encode the data they are trained on, leading to potential privacy breaches. As a solution to this problem, we present a novel integrally private aggregation methodology for federated learning along with its convergence analysis.
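The abstract above defines Δ-IP: two models count as the same point in model space when their parameters differ by at most Δ. As a loose, hypothetical sketch of that relaxation only (not the thesis's actual search or aggregation procedure), the example below groups flattened weight vectors into Δ-equivalent clusters and reports how often each recurring model appears; function names and the Euclidean distance choice are assumptions for illustration.

```python
import numpy as np

def delta_equivalent(params_a, params_b, delta):
    """Two flattened parameter vectors are Delta-equivalent if their
    Euclidean distance is at most delta (the Delta-IP relaxation)."""
    return np.linalg.norm(params_a - params_b) <= delta

def recurrence_groups(models, delta):
    """Greedily group models (flattened weight vectors) so every model in a
    group is Delta-equivalent to the group's representative. A selection
    favoring representatives of large groups prefers models that many
    different training sets could have produced."""
    groups = []   # list of (representative, members)
    for m in models:
        for rep, members in groups:
            if delta_equivalent(rep, m, delta):
                members.append(m)
                break
        else:
            groups.append((m, [m]))
    return groups

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # stand-ins for MLPs trained on different subsamples of the data
    models = [rng.normal(size=100) + (0.0 if i % 2 == 0 else 5.0)
              for i in range(10)]
    groups = recurrence_groups(models, delta=20.0)
    print([len(members) for _, members in groups])   # expect two recurring models
```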
5.
  • Nanni, Mirco, et al. (author)
  • Give more data, awareness and control to individual citizens, and they will help COVID-19 containment
  • 2020
  • In: Transactions on Data Privacy. - Institut d'Investigació en Intel·ligència Artificial. - ISSN 1888-5063, E-ISSN 2013-1631 ; 23, pp. 1-6
  • Journal article (peer-reviewed), abstract:
    • The rapid dynamics of COVID-19 call for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the "phase 2" of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where data sensed by the app are all sent to a nation-wide server, raises concerns about citizens' privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens' "personal data stores", to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering for infected people in a privacy-preserving fashion; and, in turn, this enables both contact tracing and the early detection of outbreak hotspots on a more finely granulated geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and allow the user to share spatio-temporal aggregates - if and when they want and for specific aims - with health authorities, for instance. Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good in the measure they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society.
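The article above argues for keeping contact and location data in each citizen's personal data store and releasing only privacy-preserving spatio-temporal aggregates, voluntarily, after a positive test. The sketch below is a hypothetical toy rendering of that flow; the class, method names, and grid-cell granularity are invented for illustration and are not code from the paper.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Device-local store: raw visits never leave the phone by default."""
    visits: list = field(default_factory=list)   # (lat, lon, hour) tuples

    def record_visit(self, lat, lon, hour):
        self.visits.append((lat, lon, hour))

    def share_aggregates(self, tested_positive: bool, cell_size=0.01):
        """Only if the user tested positive and consents: release counts per
        coarse grid cell and hour, never the raw trajectory."""
        if not tested_positive:
            return None
        coarse = Counter(
            (round(lat / cell_size) * cell_size,
             round(lon / cell_size) * cell_size,
             hour)
            for lat, lon, hour in self.visits)
        return dict(coarse)

if __name__ == "__main__":
    pds = PersonalDataStore()
    pds.record_visit(59.3293, 18.0686, hour=9)    # Stockholm, morning
    pds.record_visit(59.3291, 18.0689, hour=9)    # nearby point, same hour
    print(pds.share_aggregates(tested_positive=False))  # None: nothing leaves
    print(pds.share_aggregates(tested_positive=True))   # coarse cell counts
```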
6.
  • Senavirathne, Navoda (author)
  • Towards Privacy Preserving Micro-data Analysis : A machine learning based perspective under prevailing privacy regulations
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • Machine learning (ML) has been employed in a wide variety of domains where micro-data (i.e., personal data) are used in the training process. In recent research, it has been shown that ML models are vulnerable to privacy attacks that exploit their observable predictions and optimization information in order to extract sensitive information about the underlying data subjects. Therefore, models trained on micro-data pose a distinct threat to the privacy of the data subjects. To mitigate these risks, privacy-preserving machine learning (PPML) techniques are proposed in the literature. Existing PPML techniques are mainly based on differential privacy or cryptography-based techniques. However, using these techniques for privacy preservation either results in poor predictive accuracy of the derived ML models or a high computational cost. Also, they operate under the assumption that raw data are available for training the ML models. Due to stringent requirements for data protection and data publishing, it is plausible that the micro-data are anonymized by the data controllers before releasing them for analysis. In the event that anonymized data are available for ML model training, it is vital to understand its impact on ML utility and privacy aspects. In the literature on data privacy, anonymization and PPML are often studied as two disconnected fields. But we argue that a natural synergy exists between these two fields that results in a myriad of benefits for the data controllers as well as for the data subjects, in the light of new privacy regulations, business requirements, and privacy risk factors. When anonymized data are used to train the ML models, there is an intrinsic requirement to re-think the existing privacy-preserving mechanisms used in both data anonymization and PPML. One of the main contributions of this thesis is understanding the opportunities and challenges presented by data anonymization in an ML setting. During this exploration, we highlight how certain provisions of the General Data Protection Regulation (GDPR) could be in direct conflict with the interest of ML utility and privacy. Inspired by these findings, we then propose a novel anonymization technique based on probabilistic k-anonymity that comprises amenable characteristics for ML utility and privacy. Next, we introduce a privacy-preserving technique for ML model selection based on integral privacy that can inhibit the inferences drawn by adversaries about the training data or their transformations over time, by means of selecting models with certain characteristics that can improve the adversary's uncertainty. Moreover, we provide a rigorous characterization of a well-known privacy attack targeting ML models (i.e., membership inference), and then identify the limitations of the existing methods that can easily be manipulated in order to overstate or understate the particular privacy risk. Finally, we present a new membership inference attack model, based on activation-pattern-based anomaly detection, that overcomes these limitations while providing greater accuracy in identifying membership. Together, we believe these contributions will broaden the understanding of the research community, not only concerning the technical aspects of preserving privacy in ML but also highlighting its interplay with existing privacy regulations such as GDPR. It is hoped such findings will shape our journey for knowledge discovery in the era of big data.
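The abstract above characterizes membership inference attacks and their evaluation. As a deliberately simple, generic baseline (not the activation-pattern-based attack proposed in the thesis), the hypothetical sketch below guesses membership purely from the target model's confidence on a record, exploiting the usual train/test confidence gap; the confidence values are simulated.

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Simplest membership-inference baseline: guess 'member' whenever the
    target model's confidence on the true label exceeds a threshold."""
    return confidences >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # simulated confidences: models are usually more confident on training data
    member_conf = np.clip(rng.normal(0.95, 0.05, 500), 0, 1)
    nonmember_conf = np.clip(rng.normal(0.75, 0.15, 500), 0, 1)
    guesses = confidence_attack(np.concatenate([member_conf, nonmember_conf]))
    truth = np.array([True] * 500 + [False] * 500)
    print("attack accuracy:", (guesses == truth).mean())
```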
Publication type
doctoral thesis (4)
journal article (1)
licentiate thesis (1)
Content type
other academic/artistic (5)
peer-reviewed (1)
Author/editor
Torra, Vicenç, Profe ... (4)
Torra, Vicenç (2)
Buchegger, Sonja, Pr ... (2)
Lambrix, Patrick, Pr ... (1)
Dignum, Frank (1)
Dignum, Virginia, Pr ... (1)
Riveiro, Maria, 1978 ... (1)
Andrienko, Gennady (1)
Anjomshoae, Sule, 19 ... (1)
Jiang, Lili, Associa ... (1)
Riveiro, Maria, Prof ... (1)
Lehmann, Sune (1)
Lukowicz, Paul (1)
Nanni, Mirco (1)
Pedreschi, Dino (1)
van den Hoven, Jeroe ... (1)
Domingo-Ferrer, Jose ... (1)
Giannotti, Fosca (1)
Morik, Katharina (1)
Bonchi, Francesco (1)
Passerini, Andrea (1)
Oliver, Nuria (1)
Barabasi, Albert-Las ... (1)
Chiaromonte, Frances ... (1)
Khan, Md Sakib Nizam ... (1)
Monreale, Anna (1)
Minh-Ha, Le, 1989- (1)
Carlsson, Niklas, As ... (1)
Gurtov, Andrei, Prof ... (1)
Boldrini, Chiara (1)
Cattuto, Ciro (1)
Comande, Giovanni (1)
Conti, Marco (1)
Coté, Mark (1)
Ferragina, Paolo (1)
Guidotti, Riccardo (1)
Helbing, Dirk (1)
Kaski, Kimmo (1)
Kertesz, Janos (1)
Lepri, Bruno (1)
Matwin, Stan (1)
Jiménez, David Megía ... (1)
Passarella, Andrea (1)
Pentland, Alex (1)
Pianesi, Fabio (1)
Pratesi, Francesca (1)
Rinzivillo, Salvator ... (1)
Ruggieri, Salvatore (1)
Siebes, Arno (1)
Trasarti, Roberto (1)
Higher education institution
Umeå universitet (3)
Högskolan i Skövde (2)
Kungliga Tekniska Högskolan (1)
Linköpings universitet (1)
Language
English (6)
Research subject (UKÄ/SCB)
Natural sciences (5)
Engineering and technology (1)

Year
