SwePub

Results for the search "WFRF:(Fahad Shah)"

Search: WFRF:(Fahad Shah)

  • Results 1-10 of 21
1.
  • Ademuyiwa, Adesoji O., et al. (author)
  • Determinants of morbidity and mortality following emergency abdominal surgery in children in low-income and middle-income countries
  • 2016
  • In: BMJ Global Health. - BMJ Publishing Group Ltd. - 2059-7908. ; 1:4
  • Journal article (peer-reviewed), abstract:
    • Background: Child health is a key priority on the global health agenda, yet the provision of essential and emergency surgery in children is patchy in resource-poor regions. This study aimed to determine the mortality risk for emergency abdominal paediatric surgery in low-income countries globally. Methods: Multicentre, international, prospective cohort study. Self-selected surgical units performing emergency abdominal surgery submitted prespecified data for consecutive children aged <16 years during a 2-week period between July and December 2014. The United Nations' Human Development Index (HDI) was used to stratify countries. The main outcome measure was 30-day postoperative mortality, analysed by multilevel logistic regression. Results: This study included 1409 patients from 253 centres in 43 countries; 282 children were under 2 years of age. Among them, 265 (18.8%) were from low-HDI, 450 (31.9%) from middle-HDI and 694 (49.3%) from high-HDI countries. The most common operations performed were appendectomy, small bowel resection, pyloromyotomy and correction of intussusception. After adjustment for patient and hospital risk factors, child mortality at 30 days was significantly higher in low-HDI (adjusted OR 7.14 (95% CI 2.52 to 20.23), p<0.001) and middle-HDI (4.42 (1.44 to 13.56), p=0.009) countries compared with high-HDI countries, translating to 40 excess deaths per 1000 procedures performed. Conclusions: Adjusted mortality in children following emergency abdominal surgery may be as high as 7 times greater in low-HDI and middle-HDI countries compared with high-HDI countries. Effective provision of emergency essential surgery should be a key priority for global child health agendas.
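The analysis above hinges on a multilevel logistic regression of 30-day mortality with adjustment for patient and hospital factors. A minimal sketch of that kind of model in Python with statsmodels, using a synthetic toy data set and hypothetical column names (died30, hdi_group, age, asa_grade, centre) and a random intercept per centre; this illustrates the technique only, not the study's actual code or covariates:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Toy data standing in for the study's patient-level records.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "died30": rng.integers(0, 2, n),                       # 30-day mortality (0/1)
    "hdi_group": rng.choice(["low", "middle", "high"], n),
    "age": rng.integers(0, 16, n),                         # children aged <16 years
    "asa_grade": rng.integers(1, 5, n),
    "centre": rng.integers(0, 20, n).astype(str),
})

# Two-level logistic model: fixed effects for HDI group and patient
# covariates, plus a random intercept for each participating centre.
model = BinomialBayesMixedGLM.from_formula(
    "died30 ~ C(hdi_group, Treatment('high')) + age + asa_grade",
    vc_formulas={"centre": "0 + C(centre)"},
    data=df,
)
fit = model.fit_vb()  # variational Bayes fit

# Coefficients are on the log-odds scale; exponentiating the fixed-effect
# posterior means reads them as adjusted odds ratios relative to the
# high-HDI reference group.
print(np.exp(fit.fe_mean))
```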
2.
  • Munsif, Fazal, et al. (author)
  • Dual-purpose wheat technology: a tool for ensuring food security and livestock sustainability in cereal-based cropping pattern
  • 2021
  • In: Archives of Agronomy and Soil Science. - Taylor & Francis Group. - 0365-0340 .- 1476-3567. ; 67:13, pp. 1889-1900
  • Journal article (peer-reviewed), abstract:
    • Wheat cultivation under a dual-purpose (DP) system holds great potential to provide additional fodder for livestock with marginal grain reduction. This study explores the potential of wheat as a DP crop for improving both the forage and grain cropping system by identifying optimal sowing dates and cultivars suitable for DP cropping. Field experiments with four cultivars (Saleem-2000, Bathoor-2007, Fakhre Sarhad-99 (FS-99) and Siran-2008), three sowing dates (October 15, October 30 and November 15) and two cutting treatments (cut and no-cut) determined the effects on wheat yield and physiology. Wheat sown in mid- or late October gave 11% and 8% higher grain yield, and 13% and 9% higher biological yield, respectively, than mid-November sowing. This increase in yield was due to higher grains spike⁻¹, chlorophyll content, transpiration rate and relative water content. The cultivars Siran-2008 and Saleem-2000 had higher biological and grain yields than the other cultivars across cutting and sowing-date treatments. Biological and grain yields were reduced by 4% and 3%, respectively, under DP wheat compared with the no-cut treatment, but grain N content was unaffected. Conclusively, the DP wheat system (cut treatment) had higher profitability (11.2%) than wheat sown for grain alone.
3.
  • Acsintoae, Andra, et al. (author)
  • UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection
  • 2022
  • In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022). - IEEE Computer Society. - 9781665469463 - 9781665469470 ; pp. 20111-20121
  • Conference paper (peer-reviewed), abstract:
    • Detecting abnormal events in video is commonly framed as a one-class classification task, where training videos contain only normal events, while test videos encompass both normal and abnormal events. In this scenario, anomaly detection is an open-set problem. However, some studies assimilate anomaly detection to action recognition. This is a closed-set scenario that fails to test the capability of systems to detect new anomaly types. To this end, we propose UBnormal, a new supervised open-set benchmark composed of multiple virtual scenes for video anomaly detection. Unlike existing data sets, we introduce abnormal events annotated at the pixel level at training time, for the first time enabling the use of fully-supervised learning methods for abnormal event detection. To preserve the typical open-set formulation, we make sure to include disjoint sets of anomaly types in our training and test collections of videos. To our knowledge, UBnormal is the first video anomaly detection benchmark to allow a fair head-to-head comparison between one-class open-set models and supervised closed-set models, as shown in our experiments. Moreover, we provide empirical evidence showing that UBnormal can enhance the performance of a state-of-the-art anomaly detection framework on two prominent data sets, Avenue and ShanghaiTech. Our benchmark is freely available at https://github.com/lilygeorgescu/UBnormal.
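A small illustration of the open-set split constraint the abstract describes: the anomaly types available at training time must be disjoint from those reserved for testing. The type names below are hypothetical, not UBnormal's actual label set:

```python
# Open-set evaluation only makes sense if no test anomaly type was
# seen during training; a benchmark loader can enforce this directly.
train_anomaly_types = {"running", "falling", "fighting", "sleeping"}
test_anomaly_types = {"stealing", "jaywalking", "car crash", "dancing"}

assert train_anomaly_types.isdisjoint(test_anomaly_types), \
    "test anomaly types must be unseen at training time"
print("open-set split is valid")
```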
4.
  • Barbalau, Antonio, et al. (author)
  • SSMTL++: Revisiting self-supervised multi-task learning for video anomaly detection
  • 2023
  • In: Computer Vision and Image Understanding. - Academic Press Inc Elsevier Science. - 1077-3142 .- 1090-235X. ; 229
  • Journal article (peer-reviewed), abstract:
    • A self-supervised multi-task learning (SSMTL) framework for video anomaly detection was recently introduced in the literature. Due to its highly accurate results, the method attracted the attention of many researchers. In this work, we revisit the self-supervised multi-task learning framework, proposing several updates to the original method. First, we study various detection methods, e.g. based on detecting high-motion regions using optical flow or background subtraction, since we believe the currently used pre-trained YOLOv3 is suboptimal, e.g. objects in motion or objects from unknown classes are never detected. Second, we modernize the 3D convolutional backbone by introducing multi-head self-attention modules, inspired by the recent success of vision transformers. As such, we alternatively introduce both 2D and 3D convolutional vision transformer (CvT) blocks. Third, in our attempt to further improve the model, we study additional self-supervised learning tasks, such as predicting segmentation maps through knowledge distillation, solving jigsaw puzzles, estimating body pose through knowledge distillation, predicting masked regions (inpainting), and adversarial learning with pseudo-anomalies. We conduct experiments to assess the performance impact of the introduced changes. Upon finding more promising configurations of the framework, dubbed SSMTL++v1 and SSMTL++v2, we extend our preliminary experiments to more data sets, demonstrating that our performance gains are consistent across all data sets. In most cases, our results on Avenue, ShanghaiTech and UBnormal raise the state-of-the-art performance bar to a new level.
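One of the updates described above inserts multi-head self-attention into the 3D convolutional backbone, in the spirit of convolutional vision transformer (CvT) blocks. A minimal PyTorch sketch of such a block, with illustrative channel and head counts rather than the SSMTL++ configuration:

```python
import torch
import torch.nn as nn

class Conv3dSelfAttentionBlock(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        # convolutional projection (CvT-style) instead of a linear patch embedding
        self.proj = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        b, c, t, h, w = x.shape
        tokens = self.proj(x).flatten(2).transpose(1, 2)  # (b, t*h*w, c)
        attended, _ = self.attn(tokens, tokens, tokens)   # global self-attention
        tokens = self.norm(tokens + attended)             # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, t, h, w)

block = Conv3dSelfAttentionBlock(channels=32)
out = block(torch.randn(2, 32, 4, 8, 8))
print(out.shape)  # torch.Size([2, 32, 4, 8, 8])
```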
5.
  • Bhunia, Ankan Kumar, et al. (author)
  • Handwriting Transformers
  • 2021
  • In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021). - IEEE. - 9781665428125 - 9781665428132 ; pp. 1066-1074
  • Other publication (other academic/artistic), abstract:
    • We propose a novel transformer-based styled handwritten text image generation approach, HWT, that strives to learn both style-content entanglement as well as global and local writing style patterns. The proposed HWT captures the long- and short-range relationships within the style examples through a self-attention mechanism, thereby encoding both global and local style patterns. Further, the proposed transformer-based HWT comprises an encoder-decoder attention that enables style-content entanglement by gathering the style representation of each query character. To the best of our knowledge, we are the first to introduce a transformer-based generative network for styled handwritten text generation. Our proposed HWT generates realistic styled handwritten text images and significantly outperforms the state-of-the-art, as demonstrated through extensive qualitative, quantitative and human-based evaluations. The proposed HWT can handle text of arbitrary length and any desired writing style in a few-shot setting. Further, our HWT generalizes well to the challenging scenario where both words and writing style are unseen during training, generating realistic styled handwritten text images.
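The encoder-decoder attention the abstract describes lets each query character gather a style representation from the encoded style examples. A minimal PyTorch sketch of that idea using stock transformer modules; dimensions and tensors are illustrative, not the HWT architecture:

```python
import torch
import torch.nn as nn

d_model = 64
layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=4, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=2)

style_memory = torch.randn(1, 15, d_model)  # encoded handwriting style examples
char_queries = torch.randn(1, 7, d_model)   # one query embedding per character to render

# Cross-attention inside the decoder lets every character query attend to
# the style memory, entangling content with writing style before generation.
stylized = decoder(tgt=char_queries, memory=style_memory)
print(stylized.shape)  # torch.Size([1, 7, 64])
```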
6.
  • Bhunia, Ankan Kumar, et al. (author)
  • Person Image Synthesis via Denoising Diffusion Model
  • 2023
  • In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR. - IEEE Computer Society. - 9798350301298 - 9798350301304 ; pp. 5968-5976
  • Conference paper (peer-reviewed), abstract:
    • The pose-guided person image generation task requires synthesizing photorealistic images of humans in arbitrary poses. Existing approaches use generative adversarial networks, which do not necessarily maintain realistic textures, or rely on dense correspondences that struggle to handle complex deformations and severe occlusions. In this work, we show how denoising diffusion models can be applied for high-fidelity person image synthesis with strong sample diversity and enhanced mode coverage of the learnt data distribution. Our proposed Person Image Diffusion Model (PIDM) disintegrates the complex transfer problem into a series of simpler forward-backward denoising steps. This helps in learning plausible source-to-target transformation trajectories that result in faithful textures and undistorted appearance details. We introduce a texture diffusion module based on cross-attention to accurately model the correspondences between appearance and pose information available in source and target images. Further, we propose disentangled classifier-free guidance to ensure close resemblance between the conditional inputs and the synthesized output in terms of both pose and appearance information. Our extensive results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios. We also show how our generated images can help in downstream tasks. Code is available at https://github.com/ankanbhunia/PIDM.
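Disentangled classifier-free guidance, as described above, corrects the unconditional noise estimate separately toward the pose condition and the appearance condition. A minimal sketch with a stand-in noise predictor; the weights and function names are assumptions, not PIDM's implementation:

```python
import torch

def eps(x_t, t, pose=None, style=None):
    # Stand-in noise predictor; a real model would be a conditional UNet
    # trained with conditions randomly dropped to enable guidance.
    out = x_t * 0.1
    if pose is not None:
        out = out + 0.01 * pose
    if style is not None:
        out = out + 0.01 * style
    return out

def guided_eps(x_t, t, pose, style, w_pose=2.0, w_style=2.0):
    e_uncond = eps(x_t, t)           # both conditions dropped
    e_pose = eps(x_t, t, pose=pose)  # pose condition only
    e_style = eps(x_t, t, style=style)  # appearance condition only
    # Each guidance term is weighted independently ("disentangled"),
    # steering the sample toward pose and appearance fidelity separately.
    return (e_uncond
            + w_pose * (e_pose - e_uncond)
            + w_style * (e_style - e_uncond))

x_t = torch.randn(1, 3, 64, 64)
pose = torch.randn(1, 3, 64, 64)
style = torch.randn(1, 3, 64, 64)
print(guided_eps(x_t, t=10, pose=pose, style=style).shape)
```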
7.
  • Cao, Jiale, et al. (author)
  • PSTR: End-to-End One-Step Person Search With Transformers
  • 2022
  • In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). - IEEE Computer Society. - 9781665469463 - 9781665469470 ; pp. 9448-9457
  • Conference paper (peer-reviewed), abstract:
    • We propose a novel one-step transformer-based person search framework, PSTR, that jointly performs person detection and re-identification (re-id) in a single architecture. PSTR comprises a person search-specialized (PSS) module that contains a detection encoder-decoder for person detection along with a discriminative re-id decoder for person re-id. The discriminative re-id decoder utilizes a multi-level supervision scheme with a shared decoder for discriminative re-id feature learning and also comprises a part attention block to encode the relationship between different parts of a person. We further introduce a simple multi-scale scheme to support re-id across person instances at different scales. PSTR jointly achieves the diverse objectives of object-level recognition (detection) and instance-level matching (re-id). To the best of our knowledge, we are the first to propose an end-to-end one-step transformer-based person search framework. Experiments are performed on two popular benchmarks: CUHK-SYSU and PRW. Our extensive ablations reveal the merits of the proposed contributions. Further, the proposed PSTR sets a new state-of-the-art on both benchmarks. On the challenging PRW benchmark, PSTR achieves a mean average precision (mAP) score of 56.5%.
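The one-step design described above feeds decoded query features into both a detection head and a re-id embedding head within a single architecture. A minimal PyTorch sketch of that idea; layer sizes and the normalization choice are assumptions, not the PSTR design:

```python
import torch
import torch.nn as nn

class DetectReidHead(nn.Module):
    def __init__(self, d_model: int = 256, reid_dim: int = 128):
        super().__init__()
        self.box_head = nn.Linear(d_model, 4)          # box regression (cx, cy, w, h)
        self.cls_head = nn.Linear(d_model, 1)          # person vs. background score
        self.reid_head = nn.Linear(d_model, reid_dim)  # instance-matching embedding

    def forward(self, decoded_queries: torch.Tensor):
        boxes = self.box_head(decoded_queries).sigmoid()
        scores = self.cls_head(decoded_queries).sigmoid()
        # L2-normalized embeddings so re-id matching reduces to cosine similarity
        embeds = nn.functional.normalize(self.reid_head(decoded_queries), dim=-1)
        return boxes, scores, embeds

decoded = torch.randn(2, 100, 256)  # (batch, queries, features) from a transformer decoder
boxes, scores, embeds = DetectReidHead()(decoded)
print(boxes.shape, scores.shape, embeds.shape)
```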
8.
  • Gupta, Akshita, et al. (author)
  • OW-DETR: Open-world Detection Transformer
  • 2022
  • In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). - IEEE Computer Society. - 9781665469463 - 9781665469470 ; pp. 9225-9234
  • Conference paper (peer-reviewed), abstract:
    • Open-world object detection (OWOD) is a challenging computer vision problem, where the task is to detect a known set of object categories while simultaneously identifying unknown objects. Additionally, the model must incrementally learn new classes that become known in the next training episodes. Distinct from standard object detection, the OWOD setting poses significant challenges for generating quality candidate proposals on potentially unknown objects, separating the unknown objects from the background and detecting diverse unknown objects. Here, we introduce a novel end-to-end transformer-based framework, OW-DETR, for open-world object detection. The proposed OW-DETR comprises three dedicated components, namely attention-driven pseudo-labeling, novelty classification and objectness scoring, to explicitly address the aforementioned OWOD challenges. Our OW-DETR explicitly encodes multi-scale contextual information, possesses less inductive bias, enables knowledge transfer from known classes to the unknown class and can better discriminate between unknown objects and background. Comprehensive experiments are performed on two benchmarks: MS-COCO and PASCAL VOC. The extensive ablations reveal the merits of our proposed contributions. Further, our model outperforms the recently introduced OWOD approach, ORE, with absolute gains ranging from 1.8% to 3.3% in terms of unknown recall on MS-COCO. In the case of incremental object detection, OW-DETR outperforms the state-of-the-art for all settings on PASCAL VOC. Our code is available at https://github.com/akshitac8/OW-DETR.
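Attention-driven pseudo-labeling, as outlined in the abstract, scores query boxes that are not matched to known-class ground truth by the backbone activation they cover and promotes the top-scoring ones to pseudo-labeled "unknown" objects. A minimal illustration; the scoring proxy, box format and top-k value are assumptions, not OW-DETR's exact procedure:

```python
import torch

def unknown_pseudo_labels(feat_map, boxes, matched, top_k=5):
    """feat_map: (H, W) mean backbone activation; boxes: (N, 4) integer
    (x1, y1, x2, y2); matched: (N,) bool, True if matched to a known class.
    Returns indices of unmatched boxes with the highest mean activation."""
    scores = torch.full((boxes.shape[0],), float("-inf"))
    for i, (x1, y1, x2, y2) in enumerate(boxes.tolist()):
        if not matched[i]:
            # objectness proxy: mean feature activation inside the box
            scores[i] = feat_map[y1:y2, x1:x2].mean()
    return torch.topk(scores, k=min(top_k, len(scores))).indices

feat_map = torch.rand(64, 64)
boxes = torch.tensor([[0, 0, 16, 16], [10, 10, 40, 40], [30, 5, 60, 25]])
matched = torch.tensor([True, False, False])
print(unknown_pseudo_labels(feat_map, boxes, matched, top_k=2))
```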
9.
  • Hanif, Asif, et al. (author)
  • Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation
  • 2023
  • In: Medical Image Computing and Computer Assisted Intervention, MICCAI 2023, Part II. - Springer International Publishing AG. - 9783031438943 - 9783031438950 ; pp. 457-467
  • Conference paper (peer-reviewed), abstract:
    • It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare. While recent advances in deep learning have improved the performance of volumetric medical image segmentation models, these models cannot be deployed for real-world applications immediately due to their vulnerability to adversarial attacks. We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models and demonstrate its advantages over conventional input or voxel domain attacks. Using our proposed attack, we introduce a novel frequency domain adversarial training approach for optimizing a robust model against voxel and frequency domain attacks. Moreover, we propose a frequency consistency loss to regulate our frequency domain adversarial training, achieving a better trade-off between the model's performance on clean and adversarial samples. Code is available at https://github.com/asif-hanif/vafa.
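A frequency domain attack perturbs the spectrum of the input volume rather than its voxels. A minimal sketch of the general idea with torch.fft; a real attack such as the one proposed here would optimize the perturbation against the segmentation loss, which this sketch omits:

```python
import torch

def frequency_perturb(volume: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    # volume: (D, H, W) voxel intensities; delta: same shape, small values.
    spectrum = torch.fft.fftn(volume)
    adv_spectrum = spectrum * (1.0 + delta)  # multiplicative spectral perturbation
    # invert back to the voxel domain; the imaginary residue is discarded
    return torch.fft.ifftn(adv_spectrum).real

volume = torch.rand(32, 64, 64)
delta = 0.05 * torch.randn(32, 64, 64)
adv = frequency_perturb(volume, delta)
print((adv - volume).abs().mean())  # average voxel-space change induced
```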
10.
  • Khan, Salman, et al. (author)
  • Guest Editorial Introduction to the Special Section on Transformer Models in Vision
  • 2023
  • In: IEEE Transactions on Pattern Analysis and Machine Intelligence. - IEEE Computer Society. - 0162-8828 .- 1939-3539. ; 45:11, pp. 12721-12725
  • Journal article (other academic/artistic), abstract:
    • Transformer models have achieved outstanding results on a variety of language tasks, such as text classification, machine translation, and question answering. This success in the field of Natural Language Processing (NLP) has sparked interest in the computer vision community to apply these models to vision and multi-modal learning tasks. However, visual data has a unique structure, requiring network designs and training methods to be rethought. As a result, Transformer models and their variations have been successfully used for image recognition, object detection, segmentation, image super-resolution, video understanding, image generation, text-image synthesis, and visual question answering, among other applications.
Publication type
conference paper (10)
journal article (8)
other publication (2)
research review (1)
Content type
peer-reviewed (18)
other academic/artistic (3)
Author/editor
Shah, Mubarak (16)
Khan, Fahad (14)
Ionescu, Radu Tudor (4)
Georgescu, Mariana-I ... (2)
Ismail, Mohammed (1)
Rahmani, Amir Masoud (1)
Arif, Muhammad (1)
Dalal, Koustuv (1)
McKee, Martin (1)
Mohammed, Ahmed (1)
Liu, Ke (1)
Salah, Omar (1)
Abolhassani, Hassan (1)
Koyanagi, Ai (1)
Harapan, Harapan (1)
Acsintoae, Andra (1)
Florescu, Andrei (1)
Mare, Tudor (1)
Sumedrea, Paul (1)
Khan, S (1)
Gunnarsson, Ulf (1)
Sheikh, Aziz (1)
Ademuyiwa, Adesoji O ... (1)
Arnaud, Alexis P. (1)
Drake, Thomas M. (1)
Fitzgerald, J. Edwar ... (1)
Poenaru, Dan (1)
Bhangu, Aneel (1)
Harrison, Ewen M. (1)
Fergusson, Stuart (1)
Glasbey, James C. (1)
Khatri, Chetan (1)
Mohan, Midhun (1)
Nepogodiev, Dmitri (1)
Soreide, Kjetil (1)
Gobin, Neel (1)
Freitas, Ana Vega (1)
Hall, Nigel (1)
Kim, Sung-Hee (1)
Negeida, Ahmed (1)
Khairy, Hosni (1)
Jaffry, Zahra (1)
Chapman, Stephen J. (1)
Tabiri, Stephen (1)
Recinos, Gustavo (1)
Amandito, Radhian (1)
Shawki, Marwan (1)
Hanrahan, Michael (1)
Pata, Francesco (1)
Zilinskas, Justas (1)
University
Linköpings universitet (17)
Karolinska Institutet (2)
Umeå universitet (1)
Uppsala universitet (1)
Stockholms universitet (1)
Mittuniversitetet (1)
Linnéuniversitetet (1)
Language
English (21)
Research subject (UKÄ/SCB)
Natural sciences (17)
Medicine and health sciences (2)
Agricultural sciences (2)
Engineering and technology (1)

