SwePub

Result list for search "WFRF:(Haris Khan Muhammad)"

Search: WFRF:(Haris Khan Muhammad)

  • Results 1-10 of 11
1.
  • Khan, Sabih Ahmad, et al. (author)
  • Investigation of the mechanical behavior of FDM processed CFRP/Al hybrid joint at elevated temperatures
  • 2023
  • In: Thin-walled structures. - : Elsevier BV. - 0263-8231 .- 1879-3223. ; 192
  • Journal article (peer-reviewed), abstract:
    • This research investigates the mechanical behavior of Fused Deposition Modeling (FDM) processed CFRP/Al hybrid riveted joints at elevated temperatures. A two-pronged approach was adopted, spanning experimental and computational domains. In the experimental thrust, the developed joint was evaluated for its mechanical behavior using Digital Image Correlation, micro-XCT, and fractographic analysis. Tensile testing was performed at four temperatures: Room Temperature (RT), 50 °C, 75 °C, and 100 °C. At RT, the joint experienced net-sectioning in the CFRP sheet along with minute secondary bending. Distinct failure modes were observed for each ply orientation, with the inherent porosity/voids emerging as the governing factor for damage progression. Novel constitutive models based on accrued strain and change in energy dissipation were developed to estimate the damage progression. Damage accumulation was found to be more uniform in the 0° layer than in the 90° layer; moreover, the 90° layer exhibited a more catastrophic damage pattern toward final failure. At elevated temperatures, a significant reduction in mechanical properties, along with non-uniform warping/bending of the plies, was observed due to changes in viscoelastic behavior. A hierarchical computational analysis was performed to validate the experimental results, and the two were found to be in good agreement.
2.
  • Munir, Muhammad Akhtar, et al. (author)
  • Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection
  • 2023
  • In: 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR). - : IEEE COMPUTER SOC. - 9798350301298 - 9798350301304 ; pp. 11474-11483
  • Conference paper (peer-reviewed), abstract:
    • Deep neural networks (DNNs) have enabled astounding progress on several vision-based problems. Despite their high predictive accuracy, several recent works have revealed that DNNs tend to provide overconfident predictions and are thus poorly calibrated. The majority of work addressing the miscalibration of DNNs falls under the scope of classification and considers only in-domain predictions. However, there has been little to no progress in studying the calibration of DNN-based object detection models, which are central to many vision-based safety-critical applications. In this paper, inspired by train-time calibration methods, we propose a novel auxiliary loss formulation that explicitly aims to align the class confidence of bounding boxes with the accuracy of predictions (i.e., precision). Since the original formulation of the loss depends on the counts of true positives and false positives in a mini-batch, we develop a differentiable proxy that can be used during training alongside other application-specific loss functions. We perform extensive experiments on challenging in-domain and out-domain scenarios with six benchmark datasets, including MS-COCO, Cityscapes, Sim10k, and BDD100k. Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-domain scenarios. Source code and pre-trained models are available at https://github.com/akhtarvision/bpc_calibration
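A minimal Python sketch of the core idea in the abstract above: aligning the mean confidence of predicted boxes with a differentiable proxy for precision, since hard true-positive/false-positive counts are not differentiable. This is an editor's illustration, not the paper's exact loss; the soft TP indicator, the IoU threshold, and the sharpness constant are illustrative assumptions.

    import torch

    def calibration_aux_loss(confidences, ious, iou_thresh=0.5, sharpness=20.0):
        """confidences: (N,) class scores of boxes predicted in a mini-batch.
        ious: (N,) IoU of each prediction with its best-matching ground truth."""
        # Hard TP/FP counts are not differentiable; a steep sigmoid on
        # (IoU - threshold) acts as a soft per-prediction TP indicator.
        soft_tp = torch.sigmoid(sharpness * (ious - iou_thresh))
        precision_proxy = soft_tp.mean()            # soft fraction of TPs = precision
        mean_conf = confidences.mean()
        return (mean_conf - precision_proxy).abs()  # align confidence with precision

    # Used as an auxiliary term next to the detector's task losses, e.g.:
    # total_loss = detection_loss + lambda_cal * calibration_aux_loss(conf, ious)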
3.
  • Thawakar, Omkar, et al. (author)
  • Video Instance Segmentation via Multi-Scale Spatio-Temporal Split Attention Transformer
  • 2022
  • In: COMPUTER VISION, ECCV 2022, PT XXIX. - Cham : SPRINGER INTERNATIONAL PUBLISHING AG. - 9783031198175 - 9783031198182 ; pp. 666-681
  • Conference paper (peer-reviewed), abstract:
    • State-of-the-art transformer-based video instance segmentation (VIS) approaches typically utilize either single-scale spatio-temporal features or per-frame multi-scale features during the attention computations. We argue that such an attention computation ignores the multi-scale spatio-temporal feature relationships that are crucial for tackling target appearance deformations in videos. To address this issue, we propose a transformer-based VIS framework, named MS-STS VIS, that comprises a novel multi-scale spatio-temporal split (MS-STS) attention module in the encoder. The proposed MS-STS module effectively captures spatio-temporal feature relationships at multiple scales across frames in a video. We further introduce an attention block in the decoder to enhance the temporal consistency of the detected instances across frames of a video. Moreover, an auxiliary discriminator is introduced during training to ensure better foreground-background separability within the multi-scale spatio-temporal feature space. We conduct extensive experiments on two benchmarks: Youtube-VIS (2019 and 2021). Our MS-STS VIS achieves state-of-the-art performance on both benchmarks. With a ResNet50 backbone, MS-STS achieves a mask AP of 50.1%, outperforming the best previously reported results by 2.7% overall and by 4.8% at the higher overlap threshold of AP75, while being comparable in model size and speed, on the Youtube-VIS 2019 val. set. With a Swin Transformer backbone, MS-STS VIS achieves a mask AP of 61.0% on the Youtube-VIS 2019 val. set.
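To make the attention computation described above concrete, here is a toy Python/PyTorch sketch of the general idea: per-frame features from several scales are flattened into one token sequence so that attention relates positions across both scales and frames. It is a simplification for illustration only, not the actual MS-STS module; the class name and tensor shapes are assumptions.

    import torch
    import torch.nn as nn

    class ToyMultiScaleSpatioTemporalAttention(nn.Module):
        def __init__(self, dim=256, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, feats_per_scale):
            """feats_per_scale: list of tensors (B, T, C, H_s, W_s), one per scale."""
            tokens = []
            for x in feats_per_scale:
                b, t, c, h, w = x.shape
                # every (frame, position) pair at this scale becomes one token
                tokens.append(x.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c))
            seq = torch.cat(tokens, dim=1)     # joint multi-scale, multi-frame sequence
            out, _ = self.attn(seq, seq, seq)  # attention across scales AND frames
            return out

    # feats = [torch.randn(2, 3, 256, 16, 16), torch.randn(2, 3, 256, 8, 8)]
    # out = ToyMultiScaleSpatioTemporalAttention()(feats)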
4.
  • Naseer, Muzammal, et al. (author)
  • Cross-Domain Transferability of Adversarial Perturbations
  • 2019
  • In: ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019). - : NEURAL INFORMATION PROCESSING SYSTEMS (NIPS).
  • Conference paper (peer-reviewed), abstract:
    • Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker is forbidden to access the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is the direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing a common adversarial space across different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks, crafting adversarial patterns that mislead networks trained on entirely different domains. For instance, an adversarial function learned on Paintings, Cartoons, or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as ~99% (ℓ∞ ≤ 10). The core of our proposed adversarial function is a generative network trained using a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets the new state of the art for fooling rates, under both white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms the conventionally much stronger instance-specific attack methods.
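The abstract above reports attack success under an ℓ∞ budget of 10 (in 8-bit pixel units). A small Python sketch of the standard way such a budget is enforced around a perturbation generator follows; the generator itself and the paper's relativistic training objective are omitted, and the function names are the editor's own.

    import torch

    def project_linf(x_adv, x_clean, eps=10.0 / 255.0):
        """Clamp so that ||x_adv - x_clean||_inf <= eps and pixels stay in [0, 1]."""
        delta = torch.clamp(x_adv - x_clean, -eps, eps)   # enforce the l_inf budget
        return torch.clamp(x_clean + delta, 0.0, 1.0)     # keep a valid image

    # x_adv = generator(x_clean)            # generator trained on e.g. Paintings
    # x_adv = project_linf(x_adv, x_clean)  # budget still holds on any target domain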
5.
  • Pang, Yanwei, et al. (author)
  • Mask-Guided Attention Network for Occluded Pedestrian Detection
  • 2019
  • In: 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019). - : IEEE COMPUTER SOC. - 9781728148038 ; pp. 4966-4974
  • Conference paper (peer-reviewed), abstract:
    • Pedestrian detection relying on deep convolutional neural networks has made significant progress. Though promising results have been achieved on standard pedestrians, performance on heavily occluded pedestrians remains far from satisfactory. The main culprits are intra-class occlusions involving other pedestrians and inter-class occlusions caused by other objects, such as cars and bicycles, which result in a multitude of occlusion patterns. We propose an approach for occluded pedestrian detection with the following contributions. First, we introduce a novel mask-guided attention network that fits naturally into popular pedestrian detection pipelines. Our attention network emphasizes visible pedestrian regions while suppressing the occluded ones by modulating full-body features. Second, we empirically demonstrate that coarse-level segmentation annotations provide a reasonable approximation to their dense pixel-wise counterparts. Experiments are performed on the CityPersons and Caltech datasets. Our approach sets a new state of the art on both datasets, with an absolute gain of 9.5% in log-average miss rate over the best reported results [31] on the heavily occluded (HO) pedestrian set of the CityPersons test set, and an absolute gain of 5.0% over the best reported results [13] on the HO pedestrian set of the Caltech dataset. Code and models are available at: https://github.com/Leotju/MGAN.
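A minimal Python sketch of the feature-modulation idea described above: a small branch predicts a spatial attention map (in the paper, guided by coarse visible-region masks) and re-weights the full-body features so occluded regions are suppressed. Layer sizes and the class name are illustrative assumptions, not the paper's.

    import torch
    import torch.nn as nn

    class ToyMaskGuidedAttention(nn.Module):
        def __init__(self, channels=512):
            super().__init__()
            self.mask_branch = nn.Sequential(
                nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, 1, kernel_size=1),
                nn.Sigmoid(),                  # per-pixel attention in [0, 1]
            )

        def forward(self, feats):
            attn = self.mask_branch(feats)     # (B, 1, H, W) visibility attention
            return feats * attn                # emphasize visible, suppress occluded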
6.
  • Shamshad, Fahad, et al. (author)
  • Transformers in medical imaging: A survey
  • 2023
  • In: Medical Image Analysis. - : ELSEVIER. - 1361-8415 .- 1361-8423. ; 88
  • Journal article (peer-reviewed), abstract:
    • Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, restoration, synthesis, registration, clinical report generation, and other tasks. For each of these applications, we develop a taxonomy, identify application-specific challenges, provide insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, identifying key challenges and open problems and outlining promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development of this field, we intend to regularly update the relevant latest papers and their open-source implementations at https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging.
7.
  • Iqbal, Sajid, et al. (author)
  • Essential oils of four wild plants inhibit the blood-seeking behaviour of female Aedes aegypti
  • 2023
  • In: Experimental parasitology. - : Elsevier BV. - 0014-4894 .- 1090-2449. ; 244
  • Journal article (peer-reviewed), abstract:
    • The Aedes aegypti (Diptera: Culicidae) mosquito is an important vector of many disease-causing pathogens. An effective way to escape these mosquito-borne diseases is to prevent mosquito bites. In the current study, essential oils of Lepidium pinnatifidum, Mentha longifolia, Origanum vulgare, and Agrimonia eupatoria were evaluated for their repellent potential against Ae. aegypti females. Essential oils were extracted by steam distillation from freshly collected aerial parts of the plants and tested against 4–5-day-old Ae. aegypti females using the human bait technique in repellency and repellent-longevity assays. The chemical composition of the extracted essential oils was explored by gas chromatography coupled with mass spectrometry (GC-MS). At a dose of 33 μg/cm², the essential oils of L. pinnatifidum, M. longifolia, O. vulgare, and A. eupatoria showed 100%, 94%, 87%, and 83% mosquito repellent activity, respectively. Furthermore, M. longifolia and O. vulgare essential oils exhibited 100% repellency at a dose of 165 μg/cm², whereas A. eupatoria essential oil showed 100% repellency only at 330 μg/cm². In the time-span bioassay, M. longifolia and O. vulgare essential oils provided protection against Ae. aegypti bites for 90 and 75 min, respectively, whereas both A. eupatoria and L. pinnatifidum were active for 45 min. Phenylacetonitrile (94%), piperitone oxide (34%), carvacrol (20%), and α-pinene (62%) were the most abundant compounds in L. pinnatifidum, M. longifolia, O. vulgare, and A. eupatoria essential oils, respectively. The current study demonstrates that M. longifolia and O. vulgare essential oils have the potential to be used as an alternative to synthetic chemicals for protecting humans from mosquito bites.
8.
  • Javed, Sajid, et al. (author)
  • Visual Object Tracking With Discriminative Filters and Siamese Networks: A Survey and Outlook
  • 2023
  • In: IEEE Transactions on Pattern Analysis and Machine Intelligence. - : IEEE COMPUTER SOC. - 0162-8828 .- 1939-3539. ; 45:5, pp. 6552-6574
  • Journal article (peer-reviewed), abstract:
    • Accurate and robust visual object tracking is one of the most challenging and fundamental computer vision problems. It entails estimating the trajectory of the target in an image sequence, given only its initial location and segmentation, or a rough approximation in the form of a bounding box. Discriminative Correlation Filters (DCFs) and deep Siamese Networks (SNs) have emerged as the dominant tracking paradigms and have led to significant progress. Following the rapid evolution of visual object tracking in the last decade, this survey presents a systematic and thorough review of more than 90 DCF and Siamese trackers, based on results on nine tracking benchmarks. First, we present the background theory of both the DCF and Siamese tracking core formulations. Then, we distinguish and comprehensively review the shared as well as paradigm-specific open research challenges. Furthermore, we thoroughly analyze the performance of DCF and Siamese trackers on nine benchmarks, covering different experimental aspects of visual tracking: datasets, evaluation metrics, performance, and speed comparisons. We finish the survey by presenting recommendations and suggestions for the distinguished open challenges based on our analysis.
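For readers unfamiliar with the DCF core formulation referenced in the abstract above, a minimal single-channel sketch in Python/NumPy follows (MOSSE-style): the filter minimizing a ridge-regression objective has a closed form per frequency in the Fourier domain. Conventions (conjugation, normalization) vary between DCF trackers; this is only an illustration.

    import numpy as np

    def train_dcf(x, y, lam=1e-2):
        """x: (H, W) feature patch; y: (H, W) desired (e.g. Gaussian) response.
        Solves min_f ||f * x - y||^2 + lam ||f||^2 element-wise in Fourier space."""
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        return (np.conj(X) * Y) / (np.conj(X) * X + lam)  # closed-form filter

    def detect(F, z):
        """Apply the filter to a search patch z; the response peak gives the
        estimated new target location."""
        response = np.real(np.fft.ifft2(F * np.fft.fft2(z)))
        return np.unravel_index(np.argmax(response), response.shape)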
9.
  • Kristan, Matej, et al. (author)
  • The Visual Object Tracking VOT2015 challenge results
  • 2015
  • In: Proceedings 2015 IEEE International Conference on Computer Vision Workshops ICCVW 2015. - : IEEE. - 9780769557205 ; pp. 564-586
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset, twice as large as VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) an extension of the VOT2014 evaluation methodology by introducing a new performance measure. The dataset, the evaluation kit, and the results are publicly available at the challenge website.
10.
  • Kristan, Matej, et al. (author)
  • The Visual Object Tracking VOT2016 Challenge Results
  • 2016
  • In: COMPUTER VISION - ECCV 2016 WORKSHOPS, PT II. - Cham : SPRINGER INT PUBLISHING AG. - 9783319488813 - 9783319488806 ; pp. 777-823
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of the trackers having been published at major computer vision conferences and in journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground-truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment.