SwePub
Search the SwePub database


Result list for the search "WFRF:(Mumtaz Shahzad)"

Search: WFRF:(Mumtaz Shahzad)

  • Results 1-3 of 3
1.
  • Abbasi, Abdul Ghafoor, et al. (authors)
  • Security extensions of windows environment based on FIPS 201 (PIV) smart card
  • 2011
  • In: World Congr. Internet Secur., WorldCIS. - IEEE. - ISBN 9780956426376, pp. 86-92
  • Conference paper (peer-reviewed), abstract (see the code sketch after this record):
    • This paper describes security extensions of various Windows components based on usage of FIPS 201 (PIV) smart cards. Compared to some other similar solutions, this system has two significant advantages: first, smart cards are based on FIPS 201 standard and not on some proprietary technology; second, smart card security extensions represent an integrated solution, so the same card is used for security of several Microsoft products. Furthermore, our smart card system uses FIPS 201 applet and middleware with smart card APIs, so it can also be used by other developers to extend their own applications with smart card functions in a Windows environment. We support the following security features with smart cards: start-up authentication (based on PIN and/or fingerprint), certificate-based domain authentication, strong authentication, and protection of local resources. We also integrated our middleware and smart cards with MS Outlook and MS Internet Explorer.
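The record above describes middleware and smart card APIs built on the FIPS 201 (PIV) standard rather than a proprietary applet. As a rough illustration of what talking to a PIV card looks like at the APDU level, and not the paper's middleware or its Windows integration, the Python sketch below uses the pyscard library to select the standard PIV application over PC/SC and verify the PIV application PIN; the reader choice and the PIN value are placeholders.

```python
from smartcard.System import readers

# PIV Card Application AID defined by NIST SP 800-73 (FIPS 201).
PIV_AID = [0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00, 0x10, 0x00, 0x01, 0x00]

def select_piv(connection):
    # ISO 7816-4 SELECT by AID: CLA=00 INS=A4 P1=04 P2=00 Lc=len(AID).
    apdu = [0x00, 0xA4, 0x04, 0x00, len(PIV_AID)] + PIV_AID
    _, sw1, sw2 = connection.transmit(apdu)
    return (sw1, sw2) == (0x90, 0x00)

def verify_pin(connection, pin):
    # PIV application PIN (key reference 0x80), ASCII digits padded to 8 bytes with 0xFF.
    pin_bytes = [ord(c) for c in pin] + [0xFF] * (8 - len(pin))
    apdu = [0x00, 0x20, 0x00, 0x80, 0x08] + pin_bytes
    _, sw1, sw2 = connection.transmit(apdu)
    return (sw1, sw2) == (0x90, 0x00)

if __name__ == "__main__":
    available = readers()                 # all PC/SC readers visible to the host
    if not available:
        raise SystemExit("no smart card reader found")
    connection = available[0].createConnection()  # assumption: first reader holds the PIV card
    connection.connect()
    print("PIV applet selected:", select_piv(connection))
```

The SELECT and VERIFY commands follow ISO 7816-4 and NIST SP 800-73; the certificate-based domain logon and the MS Outlook and Internet Explorer integrations described in the abstract sit several layers above this level.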
2.
  • Arshed, Muhammad Asad, et al. (authors)
  • Chem2Side : A Deep Learning Model with Ensemble Augmentation (Conventional + Pix2Pix) for COVID-19 Drug Side-Effects Prediction from Chemical Images
  • 2023
  • In: Information (Switzerland). - ISSN 2078-2489 ; 14:12
  • Journal article (peer-reviewed), abstract (see the code sketch after this record):
    • Drug side effects (DSEs) or adverse drug reactions (ADRs) are a major concern in the healthcare industry, accounting for a significant number of annual deaths in Europe alone. Identifying and predicting DSEs early in the drug development process is crucial to mitigate their impact on public health and reduce the time and costs associated with drug development. Objective: In this study, our primary objective is to predict multiple drug side effects using 2D chemical structures, especially for COVID-19, departing from the conventional approach of relying on 1D chemical structures. We aim to develop a novel model for DSE prediction that leverages the CNN-based transfer learning architecture of ResNet152V2. Motivation: The motivation behind this research stems from the need to enhance the efficiency and accuracy of DSE prediction, enabling the pharmaceutical industry to identify potential drug candidates with fewer adverse effects. By utilizing 2D chemical structures and employing data augmentation techniques, we seek to revolutionize the field of drug side-effect prediction. Novelty: This study introduces several novel aspects. The proposed study is the first of its kind to use 2D chemical structures for predicting drug side effects, departing from the conventional 1D approaches. Secondly, we employ data augmentation with both conventional and diffusion-based models (Pix2Pix), a unique strategy in the field. These innovations set the stage for a more advanced and accurate approach to DSE prediction. Results: Our proposed model, named CHEM2SIDE, achieved an impressive average training accuracy of 0.78. Moreover, the average validation and test accuracy, precision, and recall were all at 0.73. When evaluated for COVID-19 drugs, our model exhibited an accuracy of 0.72, a precision of 0.79, a recall of 0.72, and an F1 score of 0.73. Comparative assessments against established transfer learning and machine learning models (VGG16, MobileNetV2, DenseNet121, and KNN) showcased the exceptional performance of CHEM2SIDE, marking a significant advancement in drug side-effect prediction. Conclusions: Our study introduces a groundbreaking approach to predicting drug side effects by using 2D chemical structures and incorporating data augmentation. The CHEM2SIDE model demonstrates remarkable accuracy and outperforms existing models, offering a promising solution to the challenges posed by DSEs in drug development. This research holds great potential for improving drug safety and reducing the associated time and costs.
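The abstract above names ResNet152V2 as the transfer-learning backbone for predicting drug side effects from 2D chemical structure images. The Python (Keras) sketch below shows a generic version of that setup under stated assumptions: the number of side-effect labels, the input size, the sigmoid multi-label head, and the light on-the-fly augmentation are all placeholders, and the paper's Pix2Pix-based augmentation is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SIDE_EFFECTS = 27      # placeholder: number of side-effect labels in the dataset
IMG_SHAPE = (224, 224, 3)  # assumed input size for the rendered 2D structures

# Frozen ResNet152V2 backbone pretrained on ImageNet, global-average pooled.
backbone = tf.keras.applications.ResNet152V2(
    include_top=False, weights="imagenet",
    input_shape=IMG_SHAPE, pooling="avg")
backbone.trainable = False

# Simple augmentation standing in for the paper's "conventional" augmentation;
# diffusion/Pix2Pix augmentation would be a separate offline data-generation step.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
])

inputs = layers.Input(shape=IMG_SHAPE)
x = augment(inputs)
x = tf.keras.applications.resnet_v2.preprocess_input(x)
x = backbone(x, training=False)
x = layers.Dropout(0.3)(x)
# Sigmoid head: a single drug image can carry several side-effect labels at once.
outputs = layers.Dense(NUM_SIDE_EFFECTS, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])
model.summary()
```

Treating the task as multi-label (sigmoid plus binary cross-entropy) is an assumption; if each image carries exactly one side-effect class, a softmax head with categorical cross-entropy would be the usual choice instead.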
3.
  • Arshed, Muhammad Asad, et al. (authors)
  • Multiclass AI-Generated Deepfake Face Detection Using Patch-Wise Deep Learning Model
  • 2024
  • In: Computers. - ISSN 2073-431X ; 13:1
  • Journal article (peer-reviewed), abstract (see the code sketch after this record):
    • In response to the rapid advancements in facial manipulation technologies, particularly facilitated by Generative Adversarial Networks (GANs) and Stable Diffusion-based methods, this paper explores the critical issue of deepfake content creation. The increasing accessibility of these tools necessitates robust detection methods to curb potential misuse. In this context, this paper investigates the potential of Vision Transformers (ViTs) for effective deepfake image detection, leveraging their capacity to extract global features. Objective: The primary goal of this study is to assess the viability of ViTs in detecting multiclass deepfake images compared to traditional Convolutional Neural Network (CNN)-based models. By framing the deepfake problem as a multiclass task, this research introduces a novel approach, considering the challenges posed by Stable Diffusion and StyleGAN2. The objective is to enhance understanding and efficacy in detecting manipulated content within a multiclass context. Novelty: This research distinguishes itself by approaching the deepfake detection problem as a multiclass task, introducing new challenges associated with Stable Diffusion and StyleGAN2. The study pioneers the exploration of ViTs in this domain, emphasizing their potential to extract global features for enhanced detection accuracy. The novelty lies in addressing the evolving landscape of deepfake creation and manipulation. Results and Conclusion: Through extensive experiments, the proposed method exhibits high effectiveness, achieving impressive detection accuracy, precision, and recall, and an F1 rate of 99.90% on a multiclass-prepared dataset. The results underscore the significant potential of ViTs in contributing to a more secure digital landscape by robustly addressing the challenges posed by deepfake content, particularly in the presence of Stable Diffusion and StyleGAN2. The proposed model outperformed when compared with state-of-the-art CNN-based models, i.e., ResNet-50 and VGG-16.
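The abstract above frames deepfake detection as a multiclass task handled by a Vision Transformer. The Python (PyTorch) sketch below fine-tunes an ImageNet-pretrained ViT-B/16 from torchvision for a three-class version of that task (real, StyleGAN2-generated, Stable-Diffusion-generated); the class set, learning rate, and backbone variant are assumptions, and the paper's patch-wise modeling details are not reproduced.

```python
import torch
from torch import nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CLASSES = 3  # placeholder: real, StyleGAN2-generated, Stable-Diffusion-generated

# ImageNet-pretrained ViT-B/16 backbone; the paper's exact ViT variant may differ.
weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)

# Replace the classification head for the multiclass deepfake task.
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

# Preprocessing transforms matching the pretrained weights (resize, crop, normalize).
preprocess = weights.transforms()

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed learning rate

def train_step(images, labels):
    """One gradient step on a batch of preprocessed face crops."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)              # shape: (batch, NUM_CLASSES)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In use, batches of real and generated face crops would be passed through preprocess and fed to train_step with integer class labels; evaluation would then report the per-class precision, recall, and F1 figures the abstract refers to.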
