SwePub
Search the SwePub database

  Extended search

Hit list for the search "WFRF:(Khan Rasheed)"

Search: WFRF:(Khan Rasheed)

  • Result 1-10 of 32
1.
  • Ahmad, Shafqat, et al. (author)
  • Physical activity, smoking, and genetic predisposition to obesity in people from Pakistan : the PROMIS study
  • 2015
  • In: BMC Medical Genetics. - : BioMed Central. - 1471-2350. ; 16
  • Journal article (peer-reviewed), abstract:
    • Background: Multiple genetic variants have been reliably associated with obesity-related traits in Europeans, but little is known about their associations and interactions with lifestyle factors in South Asians. Methods: In 16,157 Pakistani adults (8232 controls; 7925 diagnosed with myocardial infarction [MI]) enrolled in the PROMIS Study, we tested whether: a) BMI-associated loci, individually or in aggregate (as a genetic risk score, GRS), are associated with BMI; b) physical activity and smoking modify the association of these loci with BMI. Analyses were adjusted for age, age², sex, MI (yes/no), and population substructure. Results: Of 95 SNPs studied here, 73 showed directionally consistent effects on BMI as reported in Europeans. Each additional BMI-raising allele of the GRS was associated with 0.04 (SE = 0.01) kg/m² higher BMI (P = 4.5 × 10⁻¹⁴). We observed nominal evidence of interactions of CLIP1 rs11583200 (P-interaction = 0.014), CADM2 rs13078960 (P-interaction = 0.037) and GALNT10 rs7715256 (P-interaction = 0.048) with physical activity, and PTBP2 rs11165643 (P-interaction = 0.045), HIP1 rs1167827 (P-interaction = 0.015), C6orf106 rs205262 (P-interaction = 0.032) and GRID1 rs7899106 (P-interaction = 0.043) with smoking on BMI. Conclusions: Most BMI-associated loci have directionally consistent effects on BMI in Pakistanis and Europeans. There were suggestive interactions of established BMI-related SNPs with smoking or physical activity. (An illustrative analysis sketch follows this record.)
  •  
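The analysis described in this record (an unweighted genetic risk score summing BMI-raising alleles, plus SNP-by-lifestyle interaction tests) can be illustrated with a minimal, hedged sketch. This is not the PROMIS analysis code: the simulated data, column names and the statsmodels ordinary-least-squares models are assumptions for illustration, and the adjustment for population substructure used in the study is omitted.

```python
# Hedged sketch: an unweighted genetic risk score (GRS) and a GRS x lifestyle
# interaction test, loosely following the analysis described in the abstract.
# Data layout and column names are hypothetical, not from the PROMIS study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def unweighted_grs(genotypes: pd.DataFrame) -> pd.Series:
    """Sum BMI-raising alleles (each SNP coded 0/1/2) across all loci."""
    return genotypes.sum(axis=1)

# Simulated stand-in data (the real study analysed 16,157 adults and 95 SNPs).
rng = np.random.default_rng(0)
n, n_snps = 1000, 95
geno = pd.DataFrame(rng.binomial(2, 0.3, size=(n, n_snps)),
                    columns=[f"snp{i}" for i in range(n_snps)])
df = pd.DataFrame({
    "grs": unweighted_grs(geno),
    "age": rng.integers(30, 70, n),
    "sex": rng.integers(0, 2, n),
    "mi": rng.integers(0, 2, n),
    "physical_activity": rng.integers(0, 2, n),
})
df["bmi"] = 22 + 0.04 * df["grs"] + rng.normal(0, 3, n)

# Main-effect model: BMI ~ GRS, adjusted for age, age^2, sex and MI status.
main = smf.ols("bmi ~ grs + age + I(age**2) + sex + mi", data=df).fit()

# Interaction model: does physical activity modify the GRS-BMI association?
inter = smf.ols("bmi ~ grs * physical_activity + age + I(age**2) + sex + mi",
                data=df).fit()
print(main.params["grs"], inter.pvalues["grs:physical_activity"])
```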
2.
  • Khan, Yusra Habib, et al. (author)
  • Barriers and facilitators of childhood COVID-19 vaccination among parents : A systematic review
  • 2022
  • In: Frontiers in Pediatrics. - : Frontiers Media S.A.. - 2296-2360. ; 10
  • Research review (peer-reviewed), abstract:
    • Background: The acceptance of vaccination against COVID-19 among parents of young children plays a significant role in controlling the current pandemic. A wide range of factors that influence vaccine hesitancy in adults has been reported worldwide, but less attention has been given to COVID-19 vaccination in children. Vaccine hesitancy is considered a major challenge in achieving herd immunity, and it is more challenging among parents, who remain deeply concerned about their child's health. In this context, a systematic review of the current literature is needed to assess vaccine hesitancy among parents of young children and support the ongoing vaccination program. Method: A systematic search of peer-reviewed English literature indexed in Google Scholar, PubMed, Embase, and Web of Science was performed using developed keywords between 1 January 2020 and August 2022. This systematic review included only those studies that focused on parental concerns about COVID-19 vaccines in children up to 12 years without a diagnosis of COVID-19. Following PRISMA guidelines, a total of 108 studies were included. Study quality was appraised using the Newcastle-Ottawa Scale (NOS). Results: The 108 studies show that vaccine hesitancy rates differed globally, with a considerably large number of associated factors. The highest vaccine hesitancy rates among parents were reported in a study from the USA (86.1%) and two studies from Saudi Arabia (> 85%) and Turkey (89.6%). Conversely, the lowest vaccine hesitancy rates, 0.69% and 2%, were found in two studies from South Africa and Switzerland, respectively. The largest study (n = 227,740) was conducted in Switzerland, while the smallest sample size (n = 12) was in a study conducted in the USA. The most commonly reported barriers to childhood vaccination were mothers' lower education level (N = 46/108, 43%), followed by financial instability (N = 19/108, 18%), low confidence in new vaccines (N = 13/108, 12%), and unmonitored social media platforms (N = 5/108, 4.6%). These factors were significantly associated with vaccine refusal among parents. The potential facilitators of vaccine uptake among respondents who intended to have their children vaccinated were higher education level (N = 12/108, 11%), followed by information obtained through healthcare professionals (N = 9/108, 8.3%) and strong confidence in preventive measures taken by the government (N = 5/81, 4.6%). Conclusion: This review underscores that parents around the globe are hesitant to vaccinate their children against COVID-19. The spectrum of factors associated with vaccine hesitancy and uptake varies across the globe. There is a pressing need to address concerns about the efficacy and safety of approved vaccines, and programs to reduce vaccine hesitancy must take local context into account. Strategies to address hesitancy among parents should be informed by the contributing factors identified here.
  •  
3.
  • Khattak, Muhammad Uzair, et al. (author)
  • MaPLe: Multi-modal Prompt Learning
  • 2023
  • In: 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR). - : IEEE COMPUTER SOC. - 9798350301298 - 9798350301304 ; , s. 19113-19122
  • Conference paper (peer-reviewed), abstract:
    • Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning. (A simplified prompt-coupling sketch follows this record.)
  •  
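The prompt coupling described in the MaPLe record above (learnable prompts in the language branch that are projected into the vision branch so the two sets of prompts stay coupled) can be sketched roughly as below. This is a simplified PyTorch illustration, not the authors' implementation; the dimensions, the single linear coupling layer and the way prompts are prepended to the token sequences are assumptions. See the repository linked in the record for the actual code.

```python
# Hedged sketch of coupled vision-language prompts in the spirit of MaPLe.
# Dimensions and the coupling function are illustrative assumptions only.
import torch
import torch.nn as nn

class CoupledPrompts(nn.Module):
    def __init__(self, n_prompts=4, text_dim=512, vision_dim=768):
        super().__init__()
        # Learnable prompts live in the text branch ...
        self.text_prompts = nn.Parameter(torch.randn(n_prompts, text_dim) * 0.02)
        # ... and are projected into the vision branch, so the two sets of
        # prompts cannot drift into independent uni-modal solutions.
        self.couple = nn.Linear(text_dim, vision_dim)

    def forward(self, text_tokens, image_tokens):
        b = text_tokens.size(0)
        t_p = self.text_prompts.unsqueeze(0).expand(b, -1, -1)
        v_p = self.couple(t_p)  # vision prompts are a function of text prompts
        # Prepend prompts to each branch's token sequence (simplified; the
        # paper inserts prompts at several early transformer stages).
        return (torch.cat([t_p, text_tokens], dim=1),
                torch.cat([v_p, image_tokens], dim=1))

# Toy usage with random token embeddings.
prompts = CoupledPrompts()
text = torch.randn(2, 77, 512)    # batch of tokenized-text embeddings
image = torch.randn(2, 196, 768)  # batch of image patch embeddings
text_with_p, image_with_p = prompts(text, image)
print(text_with_p.shape, image_with_p.shape)  # (2, 81, 512) (2, 200, 768)
```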
4.
  • Maaz, Muhammad, et al. (author)
  • Class-Agnostic Object Detection with Multi-modal Transformer
  • 2022
  • In: COMPUTER VISION, ECCV 2022, PT X. - Cham : SPRINGER INTERNATIONAL PUBLISHING AG. - 9783031200793 - 9783031200809 ; , s. 512-531
  • Conference paper (peer-reviewed), abstract:
    • What constitutes an object? This has been a long-standing question in computer vision. Towards this goal, numerous learning-free and learning-based approaches have been developed to score objectness. However, they generally do not scale well across new domains and novel objects. In this paper, we advocate that existing methods lack a top-down supervision signal governed by human-understandable semantics. For the first time in the literature, we demonstrate that Multi-modal Vision Transformers (MViT) trained with aligned image-text pairs can effectively bridge this gap. Our extensive experiments across various domains and novel objects show the state-of-the-art performance of MViTs in localizing generic objects in images. Based on the observation that existing MViTs do not include multi-scale feature processing and usually require longer training schedules, we develop an efficient MViT architecture using multi-scale deformable attention and late vision-language fusion. We show the significance of MViT proposals in a diverse range of applications including open-world object detection, salient and camouflaged object detection, and supervised and self-supervised detection tasks. Further, MViTs can adaptively generate proposals given a specific language query and thus offer enhanced interactability. (A rough query-to-proposals sketch follows this record.)
  •  
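The high-level flow in the record above (an image plus a generic, human-understandable language query, fused late in the network to produce class-agnostic box proposals) can be illustrated with a toy sketch. Every module below is hypothetical; it is not the paper's MViT architecture (which uses multi-scale deformable attention), only a minimal stand-in for the image-and-text-query-to-proposals idea.

```python
# Hedged sketch: class-agnostic proposals from an image plus a generic text
# query, with late vision-language fusion. This toy model is hypothetical and
# only mirrors the high-level flow described in the abstract.
import torch
import torch.nn as nn

class ToyLateFusionDetector(nn.Module):
    def __init__(self, dim=256, n_queries=100, vocab=1000):
        super().__init__()
        self.backbone = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # patchify
        self.text_embed = nn.EmbeddingBag(vocab, dim)                 # query-encoder stub
        self.obj_queries = nn.Parameter(torch.randn(n_queries, dim))
        self.fusion = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.box_head = nn.Linear(dim, 4)      # (cx, cy, w, h) per proposal
        self.score_head = nn.Linear(dim, 1)    # objectness w.r.t. the query

    def forward(self, image, query_token_ids):
        feats = self.backbone(image).flatten(2).transpose(1, 2)  # B x HW x dim
        text = self.text_embed(query_token_ids).unsqueeze(1)     # B x 1 x dim
        # Late fusion: object queries, conditioned on the language query,
        # attend to the image features.
        q = self.obj_queries.unsqueeze(0).expand(image.size(0), -1, -1) + text
        fused, _ = self.fusion(q, feats, feats)
        return self.box_head(fused).sigmoid(), self.score_head(fused).sigmoid()

# Toy usage: a generic query such as "all objects" (token ids are made up).
model = ToyLateFusionDetector()
boxes, scores = model(torch.randn(1, 3, 224, 224), torch.tensor([[7, 42]]))
print(boxes.shape, scores.shape)  # (1, 100, 4) (1, 100, 1)
```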
5.
  • Rasheed, Hanoona, et al. (author)
  • Fine-tuned CLIP Models are Efficient Video Learners
  • 2023
  • In: 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR. - : IEEE COMPUTER SOC. - 9798350301298 - 9798350301304 ; , s. 6545-6554
  • Conference paper (peer-reviewed), abstract:
    • Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP model. Since training at a similar scale for videos is infeasible, recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships, which requires meticulous design effort. Furthermore, when the resulting models are learned on videos, they tend to overfit to the given task distribution and lack generalization. This begs the following question: how can image-level CLIP representations be transferred effectively to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level processing by the CLIP image encoder, followed by feature pooling and similarity matching with corresponding text embeddings, helps in implicitly modeling the temporal cues within ViFi-CLIP. Such fine-tuning helps the model focus on scene dynamics, moving objects and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a bridge-and-prompt approach that first uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to adapt CLIP representations. We extensively evaluate this simple yet strong baseline on zero-shot, base-to-novel generalization, few-shot and fully supervised settings across five video benchmarks. Our code and pre-trained models are available at https://github.com/muzairkhattak/ViFi-CLIP. (A minimal frame-pooling sketch follows this record.)
  •  
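The ViFi-CLIP recipe summarized above (encode each frame independently with the CLIP image encoder, pool the frame features over time, and match the pooled video embedding against text embeddings) can be sketched as follows. The encoder below is a stand-in, not the real CLIP; only the frame-level encoding, temporal pooling and cosine-similarity matching follow the abstract.

```python
# Hedged sketch of a ViFi-CLIP-style video pipeline: frame-wise image encoding
# -> temporal average pooling -> cosine similarity with text embeddings.
# The encoder is a stand-in module, not the real CLIP image encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StubImageEncoder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.proj = nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                  nn.LazyLinear(dim))
    def forward(self, frames):                 # frames: (B*T, 3, H, W)
        return self.proj(frames)               # (B*T, dim)

def video_text_scores(video, text_emb, image_encoder):
    """video: (B, T, 3, H, W); text_emb: (C, dim) class/text embeddings."""
    b, t = video.shape[:2]
    frame_feats = image_encoder(video.flatten(0, 1))        # (B*T, dim)
    frame_feats = frame_feats.view(b, t, -1)
    video_emb = frame_feats.mean(dim=1)                     # temporal pooling
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return video_emb @ text_emb.t()                         # (B, C) similarities

# Toy usage: 2 videos of 8 frames each, scored against 5 text embeddings.
scores = video_text_scores(torch.randn(2, 8, 3, 224, 224),
                           torch.randn(5, 512),
                           StubImageEncoder())
print(scores.shape)  # (2, 5)
```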
6.
  • Shaker, Abdelrahman, et al. (author)
  • SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications
  • 2023
  • In: 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023). - : IEEE COMPUTER SOC. - 9798350307184 - 9798350307191 ; , s. 17379-17390
  • Conference paper (peer-reviewed), abstract:
    • Self-attention has become a de facto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster than MobileViT-v2. Our code and models: https://tinyurl.com/5ft8v46w (An additive-attention sketch follows this record.)
  •  
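The efficient additive attention described in the SwiftFormer record above (per-token scalar attention scores, a pooled global query, and element-wise query-key interaction followed by a linear layer, instead of an N x N attention matrix) can be sketched roughly as below. This follows the general idea stated in the abstract; the exact projections, normalization and residual wiring of the published model may differ (see the code link in the record).

```python
# Hedged sketch of an efficient additive-attention block in the spirit of
# SwiftFormer: per-token scalar attention weights, a pooled global query, and
# element-wise query-key interaction instead of an N x N attention matrix.
# Details of the published model may differ.
import torch
import torch.nn as nn

class EfficientAdditiveAttention(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_a = nn.Parameter(torch.randn(dim, 1))  # learned scoring vector
        self.scale = dim ** -0.5
        self.proj = nn.Linear(dim, dim)  # replaces the key-value interaction

    def forward(self, x):                      # x: (B, N, dim), N tokens
        q, k = self.to_q(x), self.to_k(x)
        # One scalar score per token, softmax over tokens: O(N), not O(N^2).
        attn = torch.softmax(q @ self.w_a * self.scale, dim=1)   # (B, N, 1)
        global_q = (attn * q).sum(dim=1, keepdim=True)           # (B, 1, dim)
        # Element-wise interaction between the pooled query and every key.
        return self.proj(global_q * k) + q                       # (B, N, dim)

# Toy usage on a sequence of 196 tokens (e.g. 14 x 14 image patches).
block = EfficientAdditiveAttention(dim=256)
y = block(torch.randn(2, 196, 256))
print(y.shape)  # (2, 196, 256)
```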
7.
  •  
8.
  •  
9.
  •  
10.
  •  
