SwePub
Search the SwePub database


Result list for search "WFRF:(Dubois Mathieu)"

Search: WFRF:(Dubois Mathieu)

  • Result 1-9 of 9
1.
  • Acharya, B. S., et al. (author)
  • Introducing the CTA concept
  • 2013
  • In: Astroparticle physics. - : Elsevier BV. - 0927-6505 .- 1873-2852. ; 43, s. 3-18
  • Journal article (other academic/artistic), abstract:
    • The Cherenkov Telescope Array (CTA) is a new observatory for very high-energy (VHE) gamma rays. CTA has ambitious science goals, for which it is necessary to achieve full-sky coverage, to improve the sensitivity by about an order of magnitude, and to span about four decades of energy, from a few tens of GeV to above 100 TeV, with enhanced angular and energy resolution over existing VHE gamma-ray observatories. An international collaboration has formed with more than 1000 members from 27 countries in Europe, Asia, Africa and North and South America. In 2010 the CTA Consortium completed a Design Study and started a three-year Preparatory Phase intended to bring CTA to production readiness in 2014. In this paper we introduce the science goals and the concept of CTA, and provide an overview of the project.
  •  
2.
  • Bardel, Emilie, et al. (author)
  • Intradermal immunisation using the TLR3-ligand Poly (I:C) as adjuvant induces mucosal antibody responses and protects against genital HSV-2 infection
  • 2016
  • In: npj Vaccines. - : Springer Science and Business Media LLC. - 2059-0105. ; 1
  • Journal article (peer-reviewed), abstract:
    • Development of vaccines able to induce mucosal immunity in the genital and gastrointestinal tracts is a major challenge in countering sexually transmitted pathogens such as HIV-1 and HSV-2. Herein, we showed that intradermal (ID) immunisation with sub-unit vaccine antigens (i.e., HIV-1 gp140 and HSV-2 gD) delivered with Poly(I:C) or CpG1668 as adjuvant induces long-lasting virus-specific immunoglobulin (Ig)-G and IgA antibodies in the vagina and feces. Poly(I:C)-supplemented sub-unit viral vaccines caused minimal skin reactogenicity, in contrast to those containing CpG1668, promoted a delayed-type hypersensitivity (DTH) to the vaccine, and protected mice from genital and neurological symptoms after a lethal vaginal HSV-2 challenge. Interestingly, Poly(I:C12U) (Ampligen), a Poly(I:C) structural analogue that binds to TLR3 but not MDA-5, promoted robust mucosal and systemic IgG antibodies and a weak skin DTH to the vaccine, but not IgA responses, and failed to confer protection against HSV-2 infection. Moreover, Poly(I:C) was far superior to Poly(I:C12U) at inducing prompt and robust upregulation of IFN-β transcripts in lymph nodes draining the injection site. These data illustrate that ID vaccination with glycoproteins and Poly(I:C) as adjuvant promotes long-lasting mucosal immunity and protection from genital HSV-2 infection, with an acceptable skin reactogenicity profile. The ID route thus appears to be an unexpected inductive site for mucosal immunity and anti-viral protection, suitable for sub-unit vaccines. This work further highlights that TLR3/MDA5 agonists such as Poly(I:C) may be valuable adjuvants for ID vaccination against sexually transmitted diseases.
  •  
3.
  • Buatois, Alexis, 1989, et al. (author)
  • A comparative analysis of foraging route development by bumblebees and honey bees
  • 2024
  • In: Behavioral Ecology and Sociobiology. - 0340-5443 .- 1432-0762. ; 78:1
  • Journal article (peer-reviewed), abstract:
    • Many pollinators, such as bees, hummingbirds and bats, use multi-destination routes (traplines) to exploit familiar plant resources. However, it is not clear to what extent the mechanisms underpinning trapline development and optimisation are comparable across species. Here we compared route formation, repeatability and efficiency by foragers of two social bee species, the solo-foraging bumblebee Bombus terrestris and the mass-foraging honey bee Apis mellifera, under the same laboratory conditions. In a simple routing task (with four artificial flowers), all bumblebees and honey bees developed a route, although honey bees were slower to do so. In a more complex routing task (with six flowers), however, only bumblebees developed a route between all six flowers; honey bees took longer to discover all the flowers and developed routes between fewer of them. Comparing bumblebees and honey bees using the same experimental paradigm thus revealed key behavioural differences, likely resulting from their contrasting collective foraging strategies.
  •  
4.
  • de Pierrefeu, Amicie, et al. (author)
  • Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty
  • 2018
  • In: IEEE Transactions on Medical Imaging. - : IEEE. - 0278-0062 .- 1558-254X. ; 37:2, s. 396-407
  • Journal article (peer-reviewed), abstract:
    • Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., l1 and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution; a sketch of the resulting optimization problem follows this record. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or structured approaches (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.
  •  
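
A hedged sketch of the single-component problem behind SPCA-TV, reconstructed from the abstract above (the symbols X, u, v and the weights \lambda_1, \lambda_2, \lambda_{TV} are illustrative, not necessarily the paper's exact notation):

    \min_{u,\,v} \; \|X - u v^{\top}\|_F^2 + \lambda_1 \|v\|_1 + \lambda_2 \|v\|_2^2 + \lambda_{\mathrm{TV}} \, \mathrm{TV}(v) \quad \text{s.t. } \|u\|_2 \le 1,

where X is the n x p data matrix, v the loading vector, and \mathrm{TV}(v) = \sum_i \|(\nabla v)_i\|_2 sums the Euclidean norms of the spatial gradient of the loading map. The l1 term drives sparsity, while the TV term favors piecewise-constant, spatially contiguous loadings, which is what makes the recovered components readable as brain regions.
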
5.
  • Dubois, Mathieu, et al. (author)
  • Predictive support recovery with TV-Elastic Net penalty and logistic regression: An application to structural MRI
  • 2014
  • Conference paper (peer-reviewed), abstract:
    • The use of machine learning in neuroimaging offers new perspectives for the early diagnosis and prognosis of brain diseases. Although such multivariate methods can capture complex relationships in the data, traditional approaches provide irregular (l2 penalty) or scattered (l1 penalty) predictive patterns with very limited relevance. A penalty like total variation (TV), which exploits the natural 3D structure of the images, can increase the spatial coherence of the weight map. However, TV penalization leads to non-smooth optimization problems that are hard to minimize. We propose an optimization framework that minimizes any combination of l1, l2, and TV penalties while preserving the exact l1 penalty (see the sketch after this record). This algorithm uses Nesterov's smoothing technique to approximate the TV penalty with a smooth function, such that the loss and the penalties are minimized with an exact accelerated proximal gradient algorithm. We propose an original continuation algorithm that uses successively smaller values of the smoothing parameter to reach a prescribed precision while achieving the best possible convergence rate. This algorithm can be used with other losses or penalties. The algorithm is applied to a classification problem on the ADNI dataset. We observe that the TV penalty does not necessarily improve the prediction but provides a major breakthrough in terms of support recovery of the predictive brain regions.
  •  
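
Under assumed notation (x_i the vectorized images, y_i in {-1, +1} the labels, \beta the weight map), the penalized classification problem described in this abstract can be sketched as:

    \min_{\beta} \; \sum_{i=1}^{n} \log\big(1 + e^{-y_i x_i^{\top} \beta}\big) + \lambda_1 \|\beta\|_1 + \frac{\lambda_2}{2} \|\beta\|_2^2 + \lambda_{\mathrm{TV}} \, \mathrm{TV}(\beta).

Only the TV term is smoothed via Nesterov's technique, \mathrm{TV}_\mu(\beta) = \max_{\alpha \in K} \langle \alpha, D\beta \rangle - \frac{\mu}{2} \|\alpha\|_2^2, where D is the spatial finite-difference operator and K a product of unit balls; the l1 term is kept exact and handled through its proximal operator inside the accelerated proximal gradient loop.
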
6.
  • Festari, Cristina, et al. (author)
  • European consensus for the diagnosis of MCI and mild dementia : Preparatory phase
  • 2023
  • In: Alzheimer's and Dementia. - : Wiley. - 1552-5260 .- 1552-5279. ; 19:5, s. 1729-1741
  • Journal article (peer-reviewed), abstract:
    • Introduction: Etiological diagnosis of neurocognitive disorders of middle-old age relies on biomarkers, although evidence for their rational use is incomplete. A European task force is defining a diagnostic workflow where expert experience fills evidence gaps for biomarker validity and prioritization. We report methodology and preliminary results. Methods: Using a Delphi consensus method supported by a systematic literature review, 22 delegates from 11 relevant scientific societies defined workflow assumptions. Results: We extracted diagnostic accuracy figures from literature on the use of biomarkers in the diagnosis of main forms of neurocognitive disorders. Supported by this evidence, panelists defined clinical setting (specialist outpatient service), application stage (MCI-mild dementia), and detailed pre-assessment screening (clinical-neuropsychological evaluations, brain imaging, and blood tests). Discussion: The Delphi consensus on these assumptions set the stage for the development of the first pan-European workflow for biomarkers’ use in the etiological diagnosis of middle-old age neurocognitive disorders at MCI-mild dementia stages. Highlights: Rational use of biomarkers in neurocognitive disorders lacks consensus in Europe. A consensus of experts will define a workflow for the rational use of biomarkers. The diagnostic workflow will be patient-centered and based on clinical presentation. The workflow will be updated as new evidence accrues.
  •  
7.
  • Frisoni, Giovanni B., et al. (author)
  • European intersocietal recommendations for the biomarker-based diagnosis of neurocognitive disorders
  • 2024
  • In: The Lancet Neurology. - 1474-4422 .- 1474-4465. ; 23:3, s. 302-312
  • Research review (peer-reviewed), abstract:
    • The recent commercialisation of the first disease-modifying drugs for Alzheimer's disease emphasises the need for consensus recommendations on the rational use of biomarkers to diagnose people with suspected neurocognitive disorders in memory clinics. Most available recommendations and guidelines are either disease-centred or biomarker-centred. A European multidisciplinary taskforce consisting of 22 experts from 11 European scientific societies set out to define the first patient-centred diagnostic workflow that aims to prioritise testing for available biomarkers in individuals attending memory clinics. After an extensive literature review, we used a Delphi consensus procedure to identify 11 clinical syndromes, based on clinical history and examination, neuropsychology, blood tests, structural imaging, and, in some cases, EEG. We recommend first-line and, if needed, second-line testing for biomarkers according to the patient's clinical profile and the results of previous biomarker findings. This diagnostic workflow will promote consistency in the diagnosis of neurocognitive disorders across European countries.
  •  
8.
  • Hadj-Selem, Fouad, et al. (author)
  • Continuation of Nesterov's Smoothing for Regression With Structured Sparsity in High-Dimensional Neuroimaging
  • 2018
  • In: IEEE Transactions on Medical Imaging. - : IEEE. - 0278-0062 .- 1558-254X. ; 37:11, s. 2403-2413
  • Journal article (peer-reviewed), abstract:
    • Predictive models can be used on high-dimensional brain images to decode cognitive states or the diagnosis/prognosis of a clinical condition or its evolution. Spatial regularization through structured sparsity offers new perspectives in this context and reduces the risk of overfitting the model while providing interpretable neuroimaging signatures by forcing the solution to adhere to domain-specific constraints. Total variation (TV) is a promising candidate for structured penalization: it enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known) and a wide range of possible complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are either limited in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov's smoothing technique can be used to minimize a large number of non-smooth convex structured penalties. However, reasonable precision requires a small smoothing parameter, which slows down the convergence speed to unacceptable levels. To benefit from the versatility of Nesterov's smoothing technique, we propose a first-order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed toward any globally desired precision. Our main contributions are: to propose an expression of the duality gap to probe the current distance to the global optimum in order to adapt the smoothing parameter and the convergence speed. This expression is applicable to many penalties and can be used with solvers other than CONESTA. We also propose an expression for the particular smoothing parameter that minimizes the number of iterations required to reach a given precision. Furthermore, we provide a convergence proof and its rate, which is an improvement over classical proximal gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers in regard to convergence speed and precision. (A Python sketch of the continuation idea follows this record.)
  •  
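
A minimal Python sketch of the continuation idea behind CONESTA (assumptions: a quadratic loss, a 1-D total-variation penalty, and a geometric schedule for the smoothing parameter standing in for the paper's duality-gap-driven schedule; the function names are illustrative, not the authors' implementation):

import numpy as np

def tv_smoothed_grad(beta, mu):
    """Gradient of the Nesterov-smoothed 1-D TV penalty.

    TV(beta) = sum_i |beta[i+1] - beta[i]| is smoothed as
    TV_mu(beta) = max_{|a_i| <= 1} <a, D beta> - (mu / 2) * ||a||^2,
    whose gradient is D^T a* with a* = clip(D beta / mu, -1, 1).
    """
    a = np.clip(np.diff(beta) / mu, -1.0, 1.0)  # optimal dual variable a*
    g = np.zeros_like(beta)
    g[:-1] -= a  # accumulate D^T a entry by entry
    g[1:] += a
    return g

def fista_smoothed(X, y, lam_tv, mu, beta, n_iter=500):
    """FISTA on the smoothed problem 0.5*||X b - y||^2 + lam_tv * TV_mu(b)."""
    # Step size from the Lipschitz constant ||X||^2 + lam_tv * ||D||^2 / mu,
    # where ||D||^2 <= 4 for the 1-D finite-difference operator D.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam_tv * 4.0 / mu)
    z, t = beta.copy(), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y) + lam_tv * tv_smoothed_grad(z, mu)
        beta_new = z - step * grad
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = beta_new + (t - 1.0) / t_new * (beta_new - beta)
        beta, t = beta_new, t_new
    return beta

def conesta_like(X, y, lam_tv, mu0=1.0, shrink=0.5, n_outer=8):
    """Continuation: re-solve with successively smaller smoothing parameters,
    warm-starting each smoothed problem at the previous solution."""
    beta = np.zeros(X.shape[1])
    mu = mu0
    for _ in range(n_outer):
        beta = fista_smoothed(X, y, lam_tv, mu, beta)
        mu *= shrink  # CONESTA instead derives mu from the current duality gap
    return beta

# Example: recover a piecewise-constant signal from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 100))
truth = np.zeros(100)
truth[30:60] = 2.0
y = X @ truth + 0.1 * rng.standard_normal(80)
beta_hat = conesta_like(X, y, lam_tv=5.0)

The point mirrored here is the warm start combined with a decreasing smoothing parameter: a large mu makes the smoothed problem easy (large steps), a small mu makes it faithful, and CONESTA automates this trade-off through the duality gap.
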
9.
  • Rubio-Magnieto, Jenifer, et al. (author)
  • Self-assembly and hybridization mechanisms of DNA with cationic polythiophene
  • 2015
  • In: Soft Matter. - : Royal Society of Chemistry. - 1744-683X .- 1744-6848. ; 11:2, s. 6460-6471
  • Journal article (peer-reviewed), abstract:
    • The combination of DNA and pi-conjugated polyelectrolytes (CPEs) represents a promising approach to develop DNA hybridization biosensors, with potential applications, for instance, in the detection of DNA lesions and single-nucleotide polymorphisms. Here we exploit the remarkable optical properties of a cationic poly[3-(6'-(trimethylphosphonium)hexyl)thiophene-2,5-diyl] (CPT) to decipher the self-assembly of DNA and CPT. The ssDNA/CPT complexes have chiroptical signatures in the CPT absorption region that are strongly dependent on the DNA sequence, which we relate to differences in supramolecular interactions between the thiophene monomers and the various nucleobases. By studying DNA-DNA hybridization and melting processes on preformed ssDNA/CPT complexes, we observe sequence-dependent mechanisms that can yield DNA-condensed aggregates. Heating-cooling cycles show that non-equilibrium mixtures can form, noticeably depending on the working sequence of the hybridization experiment. These results are of high importance for the use of pi-conjugated polyelectrolytes in DNA hybridization biosensors and in polyplexes.
  •  