SwePub
Search the SwePub database

Result list for search "WFRF:(Matwin Stan Professor) srt2:(2020)"

  • Result 1-2 of 2
1.
  • Nanni, Mirco, et al. (author)
  • Give more data, awareness and control to individual citizens, and they will help COVID-19 containment
  • 2020
  • In: Transactions on Data Privacy. - Institut d'Investigació en Intel·ligència Artificial. - ISSN 1888-5063, 2013-1631. ; 23, pp. 1-6
  • Journal article (peer-reviewed). Abstract:
    • The rapid dynamics of COVID-19 call for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the "phase 2" of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where all data sensed by the app are sent to a nation-wide server, raises concerns about citizens' privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and to avoid location tracking. We advocate the conceptual advantage of a decentralized approach (sketched below), where both contact and location data are collected exclusively in individual citizens' "personal data stores", to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering for infected people in a privacy-preserving fashion; in turn, this enables both contact tracing and the early detection of outbreak hotspots on a more finely granulated geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and to allow the user to share spatio-temporal aggregates - if and when they want and for specific aims - with health authorities, for instance. Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good in the measure they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society.
  •  
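A minimal sketch of the decentralized idea described in the abstract above, assuming a hypothetical PersonalDataStore class: raw contact/location events stay on the citizen's device, and only coarse, opt-in aggregates are shared after a positive test. The class, its method names, and the cell-count aggregation are illustrative stand-ins, not the authors' implementation.

    # Hypothetical sketch of an on-device personal data store (not from the paper).
    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class PersonalDataStore:
        """Keeps raw events local; exports only opt-in, coarsened aggregates."""
        events: list = field(default_factory=list)  # (hour_bucket, cell_id) pairs, never uploaded

        def record(self, hour_bucket: int, cell_id: str) -> None:
            # Sensing happens locally; nothing is sent to any server here.
            self.events.append((hour_bucket, cell_id))

        def share_aggregates(self, tested_positive: bool, consent: bool) -> dict:
            # Data leave the device only if the citizen tested positive AND consents.
            if not (tested_positive and consent):
                return {}
            # Privacy-preserving granularity: visit counts per spatial cell, no raw trajectory.
            return dict(Counter(cell for _, cell in self.events))

    store = PersonalDataStore()
    store.record(437000, "cell_A")
    store.record(437001, "cell_A")
    store.record(437002, "cell_B")
    print(store.share_aggregates(tested_positive=True, consent=True))  # {'cell_A': 2, 'cell_B': 1}

Because only positive, consenting citizens ever export anything, a central backend handles just those aggregates, which is the scalability property the abstract points to.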
2.
  • Vu, Xuan-Son, 1988- (author)
  • Privacy-guardian : the vital need in machine learning with big data
  • 2020
  • Doctoral thesis (other academic/artistic). Abstract:
    • Social Network Sites (SNS) such as Facebook and Twitter play a great role in our lives. On one hand, they help to connect people who would not otherwise be connected. Many recent breakthroughs in AI, such as facial recognition [Kow+18], were achieved thanks to the amount of data available on the Internet via SNS (hereafter Big Data). On the other hand, many people have tried to avoid SNS to protect their privacy [Sti+13]. However, Machine Learning (ML), the core of AI, was not designed with privacy in mind. For instance, one of the most popular supervised machine learning algorithms, the Support Vector Machine (SVM), solves a quadratic optimization problem in which the data of the people involved in the training process are also published within the resulting model (illustrated below). Similarly, many other ML applications (e.g., ClearView) compromise the privacy of individuals present in the data, especially as the big data era accelerates data federation. Thus, in the context of machine learning with big data, it is important to (1) protect sensitive information (privacy protection) while (2) preserving the quality of the output of algorithms (i.e., data utility). Given this vital need for privacy in machine learning with big data, this thesis studies: (1) how to construct information infrastructures for data federation with privacy guarantees in the big data era; and (2) how to protect privacy while learning ML models with a good trade-off between data utility and privacy. For the first point, we proposed different frameworks empowered by privacy-aware algorithms. For the second point, we proposed different neural architectures to capture the sensitivities of user data, from which the algorithms themselves decide how much they should learn from user data to protect privacy while achieving good performance on downstream tasks. The current outcomes of the thesis are: (a) a privacy-guaranteed data federation infrastructure for data analysis on sensitive data; (b) privacy utilities for privacy-concern analysis; and (c) privacy-aware algorithms for learning on personal data. For each outcome, extensive experimental studies were conducted on real-life social network datasets to evaluate aspects of the proposed approaches. Insights and outcomes from this thesis can be used by both academia and industry to provide privacy-guaranteed data analysis and data learning on big data containing personal information. They also have the potential to facilitate relevant research in privacy-aware learning and its related evaluation methods.
  •  
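The abstract's claim that SVM models publish training data can be checked directly: a fitted support vector machine stores some training examples verbatim as support vectors. The sketch below uses scikit-learn on synthetic data and illustrates only this privacy concern, not any method proposed in the thesis.

    # Minimal illustration (synthetic data) that an SVM model embeds training records:
    # scikit-learn's fitted SVC exposes support_vectors_, which are verbatim training rows.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))               # stand-in for personal feature data
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # simple synthetic labels

    model = SVC(kernel="linear").fit(X, y)

    # Every support vector is an exact copy of a training example, so releasing the
    # model also releases those individuals' records.
    assert all(any(np.allclose(sv, row) for row in X) for sv in model.support_vectors_)
    print(f"{len(model.support_vectors_)} of {len(X)} training rows are stored inside the model")

Privacy-aware learning of the kind the thesis describes aims to limit exactly this kind of leakage while keeping the model useful for downstream tasks.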
