SwePub
Search the SwePub database

Search: WFRF:(Maggio Martina)

  • Result 1-50 of 131
2.
  • Adam, R., et al. (author)
  • Planck intermediate results XLVII. Planck constraints on reionization history
  • 2016
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 596
  • Journal article (peer-reviewed) abstract
    • We investigate constraints on cosmic reionization extracted from the Planck cosmic microwave background (CMB) data. We combine the Planck CMB anisotropy data in temperature with the low-multipole polarization data to fit Lambda CDM models with various parameterizations of the reionization history. We obtain a Thomson optical depth tau = 0.058 +/- 0.012 for the commonly adopted instantaneous reionization model. This confirms, with data solely from CMB anisotropies, the low value suggested by combining Planck 2015 results with other data sets, and also reduces the uncertainties. We reconstruct the history of the ionization fraction using either a symmetric or an asymmetric model for the transition between the neutral and ionized phases. To determine better constraints on the duration of the reionization process, we also make use of measurements of the amplitude of the kinetic Sunyaev-Zeldovich (kSZ) effect using additional information from the high-resolution Atacama Cosmology Telescope and South Pole Telescope experiments. The average redshift at which reionization occurs is found to lie between z = 7.8 and 8.8, depending on the model of reionization adopted. Using kSZ constraints and a redshift-symmetric reionization model, we find an upper limit to the width of the reionization period of Delta z < 2.8. In all cases, we find that the Universe is ionized at less than the 10% level at redshifts above z similar or equal to 10. This suggests that an early onset of reionization is strongly disfavoured by the Planck data. We show that this result also reduces the tension between CMB-based analyses and constraints from other astrophysical sources.
  •  
3.
  • Aghanim, N., et al. (author)
  • Planck 2015 results XI. CMB power spectra, likelihoods, and robustness of parameters
  • 2016
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 594
  • Journal article (peer-reviewed) abstract
    • This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (l < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the Lambda CDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n(s), is confirmed smaller than unity at more than 5 sigma from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck's wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline Lambda CDM cosmology this requires tau = 0.078 +/- 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the mu K(2) level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the Lambda CDM model, showing consistency with those established independently from temperature information alone.
  •  
4.
  • Aghanim, N., et al. (author)
  • Planck 2018 results I. Overview and the cosmological legacy of Planck
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • The European Space Agency's Planck satellite, which was dedicated to studying the early Universe and its subsequent evolution, was launched on 14 May 2009. It scanned the microwave and submillimetre sky continuously between 12 August 2009 and 23 October 2013, producing deep, high-resolution, all-sky maps in nine frequency bands from 30 to 857 GHz. This paper presents the cosmological legacy of Planck, which currently provides our strongest constraints on the parameters of the standard cosmological model and some of the tightest limits available on deviations from that model. The 6-parameter Lambda CDM model continues to provide an excellent fit to the cosmic microwave background data at high and low redshift, describing the cosmological information in over a billion map pixels with just six parameters. With 18 peaks in the temperature and polarization angular power spectra constrained well, Planck measures five of the six parameters to better than 1% (simultaneously), with the best-determined parameter (theta (*)) now known to 0.03%. We describe the multi-component sky as seen by Planck, the success of the Lambda CDM model, and the connection to lower-redshift probes of structure formation. We also give a comprehensive summary of the major changes introduced in this 2018 release. The Planck data, alone and in combination with other probes, provide stringent constraints on our models of the early Universe and the large-scale structure within which all astrophysical objects form and evolve. We discuss some lessons learned from the Planck mission, and highlight areas ripe for further experimental advances.
  •  
5.
  • Aghanim, N., et al. (author)
  • Planck 2018 results III. High Frequency Instrument data processing and frequency maps
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • This paper presents the High Frequency Instrument (HFI) data processing procedures for the Planck 2018 release. Major improvements in mapmaking have been achieved since the previous Planck 2015 release, many of which were used and described already in an intermediate paper dedicated to the Planck polarized data at low multipoles. These improvements enabled the first significant measurement of the reionization optical depth parameter using Planck-HFI data. This paper presents an extensive analysis of systematic effects, including the use of end-to-end simulations to facilitate their removal and characterize the residuals. The polarized data, which presented a number of known problems in the 2015 Planck release, are very significantly improved, especially the leakage from intensity to polarization. Calibration, based on the cosmic microwave background (CMB) dipole, is now extremely accurate and in the frequency range 100-353 GHz reduces intensity-to-polarization leakage caused by calibration mismatch. The Solar dipole direction has been determined in the three lowest HFI frequency channels to within one arc minute, and its amplitude has an absolute uncertainty smaller than 0.35 mu K, an accuracy of order 10(-4). This is a major legacy from the Planck HFI for future CMB experiments. The removal of bandpass leakage has been improved for the main high-frequency foregrounds by extracting the bandpass-mismatch coefficients for each detector as part of the mapmaking process; these values in turn improve the intensity maps. This is a major change in the philosophy of frequency maps, which are now computed from single detector data, all adjusted to the same average bandpass response for the main foregrounds. End-to-end simulations have been shown to reproduce very well the relative gain calibration of detectors, as well as drifts within a frequency induced by the residuals of the main systematic effect (analogue-to-digital convertor non-linearity residuals). Using these simulations, we have been able to measure and correct the small frequency calibration bias induced by this systematic effect at the 10(-4) level. There is no detectable sign of a residual calibration bias between the first and second acoustic peaks in the CMB channels, at the 10(-3) level.
  •  
6.
  • Aghanim, N., et al. (author)
  • Planck 2018 results XII. Galactic astrophysics using polarized dust emission
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • Observations of the submillimetre emission from Galactic dust, in both total intensity I and polarization, have received tremendous interest thanks to the Planck full-sky maps. In this paper we make use of such full-sky maps of dust polarized emission produced from the third public release of Planck data. As the basis for expanding on astrophysical studies of the polarized thermal emission from Galactic dust, we present full-sky maps of the dust polarization fraction p, polarization angle psi, and dispersion function of polarization angles S. The joint distribution (one-point statistics) of p and N-H confirms that the mean and maximum polarization fractions decrease with increasing N-H. The uncertainty on the maximum observed polarization fraction, p(max) = 22.0(-1.4)(+3.5)% at 353 GHz and 80' resolution, is dominated by the uncertainty on the Galactic emission zero level in total intensity, in particular towards diffuse lines of sight at high Galactic latitudes. Furthermore, the inverse behaviour between p and S found earlier is seen to be present at high latitudes. This follows the S proportional to p(-1) relationship expected from models of the polarized sky (including numerical simulations of magnetohydrodynamical turbulence) that include effects from only the topology of the turbulent magnetic field, but otherwise have uniform alignment and dust properties. Thus, the statistical properties of p, psi, and S for the most part reflect the structure of the Galactic magnetic field. Nevertheless, we search for potential signatures of varying grain alignment and dust properties. First, we analyse the product map S x p, looking for residual trends. While the polarization fraction p decreases by a factor of 3-4 between N-H = 10(20) cm(-2) and N-H = 2 x 10(22) cm(-2), out of the Galactic plane, this product S x p only decreases by about 25%. Because S is independent of the grain alignment efficiency, this demonstrates that the systematic decrease in p with N-H is determined mostly by the magnetic-field structure and not by a drop in grain alignment. This systematic trend is observed both in the diffuse interstellar medium (ISM) and in molecular clouds of the Gould Belt. Second, we look for a dependence of polarization properties on the dust temperature, as we would expect from the radiative alignment torque (RAT) theory. We find no systematic trend of S x p with the dust temperature T-d, whether in the diffuse ISM or in the molecular clouds of the Gould Belt. In the diffuse ISM, lines of sight with high polarization fraction p and low polarization angle dispersion S tend, on the contrary, to have colder dust than lines of sight with low p and high S. We also compare the Planck thermal dust polarization with starlight polarization data in the visible at high Galactic latitudes. The agreement in polarization angles is remarkable, and is consistent with what we expect from the noise and the observed dispersion of polarization angles in the visible on the scale of the Planck beam. The two polarization emission-to-extinction ratios, R-P/p and R-S/V, which primarily characterize dust optical properties, have only a weak dependence on the column density, and converge towards the values previously determined for translucent lines of sight. We also determine an upper limit for the polarization fraction in extinction, p(V)/E(B-V), of 13% at high Galactic latitude, compatible with the polarization fraction p approximate to 20% observed at 353 GHz. Taken together, these results provide strong constraints for models of Galactic dust in diffuse gas.
  •  
7.
  • Aghanim, N., et al. (author)
  • Planck intermediate results L. Evidence of spatial variation of the polarized thermal dust spectral energy distribution and implications for CMB B-mode analysis
  • 2017
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 599
  • Journal article (peer-reviewed) abstract
    • The characterization of the Galactic foregrounds has been shown to be the main obstacle in the challenging quest to detect primordial B-modes in the polarized microwave sky. We make use of the Planck-HFI 2015 data release at high frequencies to place new constraints on the properties of the polarized thermal dust emission at high Galactic latitudes. Here, we specifically study the spatial variability of the dust polarized spectral energy distribution (SED), and its potential impact on the determination of the tensor-to-scalar ratio, r. We use the correlation ratio of the C(l)(BB) angular power spectra between the 217 and 353 GHz channels as a tracer of these potential variations, computed on different high Galactic latitude regions, ranging from 80% to 20% of the sky. The new insight from Planck data is a departure of the correlation ratio from unity that cannot be attributed to a spurious decorrelation due to the cosmic microwave background, instrumental noise, or instrumental systematics. The effect is marginally detected on each region, but the statistical combination of all the regions gives more than 99% confidence for this variation in polarized dust properties. In addition, we show that the decorrelation increases when there is a decrease in the mean column density of the region of the sky being considered, and we propose a simple power-law empirical model for this dependence, which matches what is seen in the Planck data. We explore the effect that this measured decorrelation has on simulations of the BICEP2-Keck Array/Planck analysis and show that the 2015 constraints from these data still allow a decorrelation between the dust at 150 and 353 GHz that is compatible with our measured value. Finally, using simplified models, we show that either spatial variation of the dust SED or of the dust polarization angle are able to produce decorrelations between 217 and 353 GHz data similar to the values we observe in the data.
  •  
8.
  • Aghanim, N., et al. (author)
  • Planck intermediate results LI. Features in the cosmic microwave background temperature power spectrum and shifts in cosmological parameters
  • 2017
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 607
  • Journal article (peer-reviewed) abstract
    • The six parameters of the standard Lambda CDM model have best-fit values derived from the Planck temperature power spectrum that are shifted somewhat from the best-fit values derived from WMAP data. These shifts are driven by features in the Planck temperature power spectrum at angular scales that had never before been measured to cosmic-variance level precision. We have investigated these shifts to determine whether they are within the range of expectation and to understand their origin in the data. Taking our parameter set to be the optical depth of the reionized intergalactic medium tau, the baryon density omega(b), the matter density omega(m), the angular size of the sound horizon theta(*), the spectral index of the primordial power spectrum, n(s), and A(s)e(-2 tau) (where A(s) is the amplitude of the primordial power spectrum), we have examined the change in best-fit values between a WMAP-like large angular-scale data set (with multipole moment l < 800 in the Planck temperature power spectrum) and an all angular-scale data set (l < 2500 Planck temperature power spectrum), each with a prior on tau of 0.07 +/- 0.02. We find that the shifts, in units of the 1 sigma expected dispersion for each parameter, are {Delta tau, Delta A(s)e(-2 tau), Delta n(s), Delta omega(m), Delta omega(b), Delta theta(*)} = {-1.7, -2.2, 1.2, 2.0, 1.1, 0.9}, with a chi(2) value of 8.0. We find that this chi(2) value is exceeded in 15% of our simulated data sets, and that a parameter deviates by more than 2.2 sigma in 9% of simulated data sets, meaning that the shifts are not unusually large. Comparing l < 800 instead to l > 800, or splitting at a different multipole, yields similar results. We examined the l < 800 model residuals in the l > 800 power spectrum data and find that the features there that drive these shifts are a set of oscillations across a broad range of angular scales. Although they partly appear similar to the effects of enhanced gravitational lensing, the shifts in Lambda CDM parameters that arise in response to these features correspond to model spectrum changes that are predominantly due to non-lensing effects; the only exception is tau, which, at fixed A(s)e(-2 tau), affects the l > 800 temperature power spectrum solely through the associated change in A(s) and the impact of that on the lensing potential power spectrum. We also ask, what is it about the power spectrum at l < 800 that leads to somewhat different best-fit parameters than come from the full l range? We find that if we discard the data at l < 30, where there is a roughly 2 sigma downward fluctuation in power relative to the model that best fits the full l range, the l < 800 best-fit parameters shift significantly towards the l < 2500 best-fit parameters. In contrast, including l < 30, this previously noted low-l deficit drives n(s) up and impacts parameters correlated with n(s), such as omega(m) and H-0. As expected, the l < 30 data have a much greater impact on the l < 800 best fit than on the l < 2500 best fit. So although the shifts are not very significant, we find that they can be understood through the combined effects of an oscillatory-like set of high-l residuals and the deficit in low-l power, excursions consistent with sample variance that happen to map onto changes in cosmological parameters. Finally, we examine agreement between Planck TT data and two other CMB data sets, namely the Planck lensing reconstruction and the TT power spectrum measured by the South Pole Telescope, again finding a lack of convincing evidence of any significant deviations in parameters, suggesting that current CMB data sets give an internally consistent picture of the Lambda CDM model.
  •  
9.
  • Aghanim, N., et al. (author)
  • Planck intermediate results LIII. Detection of velocity dispersion from the kinetic Sunyaev-Zeldovich effect
  • 2018
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 617
  • Journal article (peer-reviewed) abstract
    • Using the Planck full-mission data, we present a detection of the temperature (and therefore velocity) dispersion due to the kinetic Sunyaev-Zeldovich (kSZ) effect from clusters of galaxies. To suppress the primary CMB and instrumental noise we derive a matched filter and then convolve it with the Planck foreground-cleaned 2D-ILC maps. By using the Meta Catalogue of X-ray detected Clusters of galaxies (MCXC), we determine the normalized rms dispersion of the temperature fluctuations at the positions of clusters, finding that this shows excess variance compared with the noise expectation. We then build an unbiased statistical estimator of the signal, determining that the normalized mean temperature dispersion of 1526 clusters is <(Delta T/T)(2)> = (1.64 +/- 0.48) x 10(-11). However, comparison with analytic calculations and simulations suggest that around 0.7 sigma of this result is due to cluster lensing rather than the kSZ effect. By correcting this, the temperature dispersion is measured to be <(Delta T/T)(2)> = (1.35 +/- 0.48) x 10(-11), which gives a detection at the 2.8 sigma level. We further convert uniform-weight temperature dispersion into a measurement of the line-of-sight velocity dispersion, by using estimates of the optical depth of each cluster (which introduces additional uncertainty into the estimate). We find that the velocity dispersion is <v(2)> = (123 000 +/- 71 000) (km s(-1))(2), which is consistent with findings from other large-scale structure studies, and provides direct evidence of statistical homogeneity on scales of 600 h(-1) Mpc. Our study shows the promise of using cross-correlations of the kSZ effect with large-scale structure in order to constrain the growth of structure.
  •  
10.
  • Aghanim, N., et al. (author)
  • Planck intermediate results XLIV. Structure of the Galactic magnetic field from dust polarization maps of the southern Galactic cap
  • 2016
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 596
  • Journal article (peer-reviewed) abstract
    • Using data from the Planck satellite, we study the statistical properties of interstellar dust polarization at high Galactic latitudes around the south pole (b < -60 degrees). Our aim is to advance the understanding of the magnetized interstellar medium (ISM), and to provide a modelling framework of the polarized dust foreground for use in cosmic microwave background (CMB) component-separation procedures. We examine the Stokes I, Q, and U maps at 353 GHz, and particularly the statistical distribution of the polarization fraction (p) and angle (Psi), in order to characterize the ordered and turbulent components of the Galactic magnetic field (GMF) in the solar neighbourhood. The Q and U maps show patterns at large angular scales, which we relate to the mean orientation of the GMF towards Galactic coordinates (l(0), b(0)) = (70 degrees +/- 5 degrees, 24 degrees +/- 5 degrees). The histogram of the observed p values shows a wide dispersion up to 25%. The histogram of Psi has a standard deviation of 12 degrees about the regular pattern expected from the ordered GMF. We build a phenomenological model that connects the distributions of p and Psi to a statistical description of the turbulent component of the GMF, assuming a uniform effective polarization fraction (p(0)) of dust emission. To compute the Stokes parameters, we approximate the integration along the line of sight (LOS) as a sum over a set of N independent polarization layers, in each of which the turbulent component of the GMF is obtained from Gaussian realizations of a power-law power spectrum. We are able to reproduce the observed p and Psi distributions using a p(0) value of 26%, a ratio of 0.9 between the strengths of the turbulent and mean components of the GMF, and a small value of N. The mean value of p (inferred from the fit of the large-scale patterns in the Stokes maps) is 12 +/- 1%. We relate the polarization layers to the density structure and to the correlation length of the GMF along the LOS. We emphasize the simplicity of our model (involving only a few parameters), which can be easily computed on the celestial sphere to produce simulated maps of dust polarization. Our work is an important step towards a model that can be used to assess the accuracy of component-separation methods in present and future CMB experiments designed to search for the B-mode CMB polarization from primordial gravitational waves.
  •  
11.
  • Aghanim, N., et al. (author)
  • Planck intermediate results XLIX. Parity-violation constraints from polarization data
  • 2016
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 596
  • Journal article (peer-reviewed) abstract
    • Parity-violating extensions of the standard electromagnetic theory cause in vacuo rotation of the plane of polarization of propagating photons. This effect, also known as cosmic birefringence, has an impact on the cosmic microwave background (CMB) anisotropy angular power spectra, producing non-vanishing T-B and E-B correlations that are otherwise null when parity is a symmetry. Here we present new constraints on an isotropic rotation, parametrized by the angle alpha, derived from Planck 2015 CMB polarization data. To increase the robustness of our analyses, we employ two complementary approaches, in harmonic space and in map space, the latter based on a peak stacking technique. The two approaches provide estimates for alpha that are in agreement within statistical uncertainties and are very stable against several consistency tests. Considering the T-B and E-B information jointly, we find alpha = 0.31 degrees +/- 0.05 degrees (stat.) +/- 0.28 degrees (syst.) from the harmonic analysis and alpha = 0.35 degrees +/- 0.05 degrees (stat.) +/- 0.28 degrees (syst.) from the stacking approach. These constraints are compatible with no parity violation and are dominated by the systematic uncertainty in the orientation of Planck's polarization-sensitive bolometers.
  •  
12.
  • Aghanim, N., et al. (author)
  • Planck intermediate results XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth
  • 2016
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 596
  • Journal article (peer-reviewed) abstract
    • This paper describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth tau using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain tau from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based tau posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 +/- 0.009. In a companion paper these results are discussed in the context of the best-fit Planck Lambda CDM cosmological model and recent models of reionization.
  •  
13.
  • Aghanim, N., et al. (author)
  • Planck intermediate results XLVIII. Disentangling Galactic dust emission and cosmic infrared background anisotropies
  • 2016
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 596
  • Journal article (peer-reviewed) abstract
    • Using the Planck 2015 data release (PR2) temperature maps, we separate Galactic thermal dust emission from cosmic infrared background (CIB) anisotropies. For this purpose, we implement a specifically tailored component-separation method, the so-called generalized needlet internal linear combination (GNILC) method, which uses spatial information (the angular power spectra) to disentangle the Galactic dust emission and CIB anisotropies. We produce significantly improved all-sky maps of Planck thermal dust emission, with reduced CIB contamination, at 353, 545, and 857 GHz. By reducing the CIB contamination of the thermal dust maps, we provide more accurate estimates of the local dust temperature and dust spectral index over the sky with reduced dispersion, especially at high Galactic latitudes above b = +/- 20 degrees. We find that the dust temperature is T = (19.4 +/- 1.3) K and the dust spectral index is beta = 1.6 +/- 0.1 averaged over the whole sky, while T = (19.4 +/- 1.5) K and beta = 1.6 +/- 0.2 on 21% of the sky at high latitudes. Moreover, subtracting the new CIB-removed thermal dust maps from the CMB-removed Planck maps gives access to the CIB anisotropies over 60% of the sky at Galactic latitudes |b| > 20 degrees. Because they are a significant improvement over previous Planck products, the GNILC maps are recommended for thermal dust science. The new CIB maps can be regarded as indirect tracers of the dark matter and they are recommended for exploring cross-correlations with lensing and large-scale structure optical surveys. The reconstructed GNILC thermal dust and CIB maps are delivered as Planck products.
  •  
14.
  • Akrami, Y., et al. (author)
  • Planck 2018 results II. Low Frequency Instrument data processing
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • We present a final description of the data-processing pipeline for the Planck Low Frequency Instrument (LFI), implemented for the 2018 data release. Several improvements have been made with respect to the previous release, especially in the calibration process and in the correction of instrumental features such as the effects of nonlinearity in the response of the analogue-to-digital converters. We provide a brief pedagogical introduction to the complete pipeline, as well as a detailed description of the important changes implemented. Self-consistency of the pipeline is demonstrated using dedicated simulations and null tests. We present the final version of the LFI full sky maps at 30, 44, and 70 GHz, both in temperature and polarization, together with a refined estimate of the solar dipole and a final assessment of the main LFI instrumental parameters.
  •  
15.
  • Akrami, Y., et al. (author)
  • Planck 2018 results IV. Diffuse component separation
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • We present full-sky maps of the cosmic microwave background (CMB) and polarized synchrotron and thermal dust emission, derived from the third set of Planck frequency maps. These products have significantly lower contamination from instrumental systematic effects than previous versions. The methodologies used to derive these maps follow closely those described in earlier papers, adopting four methods (Commander, NILC, SEVEM, and SMICA) to extract the CMB component, as well as three methods (Commander, GNILC, and SMICA) to extract astrophysical components. Our revised CMB temperature maps agree with corresponding products in the Planck 2015 delivery, whereas the polarization maps exhibit significantly lower large-scale power, reflecting the improved data processing described in companion papers; however, the noise properties of the resulting data products are complicated, and the best available end-to-end simulations exhibit relative biases with respect to the data at the few percent level. Using these maps, we are for the first time able to fit the spectral index of thermal dust independently over 3 degrees regions. We derive a conservative estimate of the mean spectral index of polarized thermal dust emission of beta (d)=1.55 +/- 0.05, where the uncertainty marginalizes both over all known systematic uncertainties and different estimation techniques. For polarized synchrotron emission, we find a mean spectral index of beta (s)=-3.1 +/- 0.1, consistent with previously reported measurements. We note that the current data processing does not allow for construction of unbiased single-bolometer maps, and this limits our ability to extract CO emission and correlated components. The foreground results for intensity derived in this paper therefore do not supersede corresponding Planck 2015 products. For polarization the new results supersede the corresponding 2015 products in all respects.
  •  
16.
  • Akrami, Y., et al. (author)
  • Planck 2018 results VII. Isotropy and statistics of the CMB
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • Analysis of the Planck 2018 data set indicates that the statistical properties of the cosmic microwave background (CMB) temperature anisotropies are in excellent agreement with previous studies using the 2013 and 2015 data releases. In particular, they are consistent with the Gaussian predictions of the Lambda CDM cosmological model, yet also confirm the presence of several so-called anomalies on large angular scales. The novelty of the current study, however, lies in being a first attempt at a comprehensive analysis of the statistics of the polarization signal over all angular scales, using either maps of the Stokes parameters, Q and U, or the E-mode signal derived from these using a new methodology (which we describe in an appendix). Although remarkable progress has been made in reducing the systematic effects that contaminated the 2015 polarization maps on large angular scales, it is still the case that residual systematics (and our ability to simulate them) can limit some tests of non-Gaussianity and isotropy. However, a detailed set of null tests applied to the maps indicates that these issues do not dominate the analysis on intermediate and large angular scales (i.e., l less than or similar to 400). In this regime, no unambiguous detections of cosmological non-Gaussianity, or of anomalies corresponding to those seen in temperature, are claimed. Notably, the stacking of CMB polarization signals centred on the positions of temperature hot and cold spots exhibits excellent agreement with the Lambda CDM cosmological model, and also gives a clear indication of how Planck provides state-of-the-art measurements of CMB temperature and polarization on degree scales.
  •  
17.
  • Akrami, Y., et al. (author)
  • Planck 2018 results X. Constraints on inflation
  • 2020
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 641
  • Journal article (peer-reviewed) abstract
    • We report on the implications for cosmic inflation of the 2018 release of the Planck cosmic microwave background (CMB) anisotropy measurements. The results are fully consistent with those reported using the data from the two previous Planck cosmological releases, but have smaller uncertainties thanks to improvements in the characterization of polarization at low and high multipoles. Planck temperature, polarization, and lensing data determine the spectral index of scalar perturbations to be n(s) = 0.9649 +/- 0.0042 at 68% CL. We find no evidence for a scale dependence of n(s), either as a running or as a running of the running. The Universe is found to be consistent with spatial flatness with a precision of 0.4% at 95% CL by combining Planck with a compilation of baryon acoustic oscillation data. The Planck 95% CL upper limit on the tensor-to-scalar ratio, r(0.002) < 0.10, is further tightened by combining with the BICEP2/Keck Array BK15 data to obtain r(0.002) < 0.056. In the framework of standard single-field inflationary models with Einstein gravity, these results imply that: (a) the predictions of slow-roll models with a concave potential, V''(phi) < 0, are increasingly favoured by the data; and (b) based on two different methods for reconstructing the inflaton potential, we find no evidence for dynamics beyond slow roll. Three different methods for the non-parametric reconstruction of the primordial power spectrum consistently confirm a pure power law in the range of comoving scales 0.005 Mpc(-1) less than or similar to k less than or similar to 0.2 Mpc(-1). A complementary analysis also finds no evidence for theoretically motivated parameterized features in the Planck power spectra. For the case of oscillatory features that are logarithmic or linear in k, this result is further strengthened by a new combined analysis including the Planck bispectrum data. The new Planck polarization data provide a stringent test of the adiabaticity of the initial conditions for the cosmological fluctuations. In correlated, mixed adiabatic and isocurvature models, the non-adiabatic contribution to the observed CMB temperature variance is constrained to 1.3%, 1.7%, and 1.7% at 95% CL for cold dark matter, neutrino density, and neutrino velocity, respectively. Planck power spectra plus lensing set constraints on the amplitude of compensated cold dark matter-baryon isocurvature perturbations that are consistent with current complementary measurements. The polarization data also provide improved constraints on inflationary models that predict a small statistically anisotropic quadrupolar modulation of the primordial fluctuations. However, the polarization data do not support physical models for a scale-dependent dipolar modulation. All these findings support the key predictions of the standard single-field inflationary models, which will be further tested by future cosmological observations.
  •  
18.
  • Akrami, Y., et al. (author)
  • Planck intermediate results LII. Planet flux densities
  • 2017
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 607
  • Journal article (peer-reviewed) abstract
    • Measurements of flux density are described for five planets, Mars, Jupiter, Saturn, Uranus, and Neptune, across the six Planck High Frequency Instrument frequency bands (100-857 GHz) and these are then compared with models and existing data. In our analysis, we have also included estimates of the brightness of Jupiter and Saturn at the three frequencies of the Planck Low Frequency Instrument (30, 44, and 70 GHz). The results provide constraints on the intrinsic brightness and the brightness time-variability of these planets. The majority of the planet flux density estimates are limited by systematic errors, but still yield better than 1% measurements in many cases. Applying data from Planck HFI, the Wilkinson Microwave Anisotropy Probe (WMAP), and the Atacama Cosmology Telescope (ACT) to a model that incorporates contributions from Saturn's rings to the planet's total flux density suggests a best-fit value for the spectral index of Saturn's ring system of beta(ring) = 2.30 +/- 0.03 over the 30-1000 GHz frequency range. Estimates of the polarization amplitude of the planets have also been made in the four bands that have polarization-sensitive detectors (100-353 GHz); this analysis provides a 95% confidence level upper limit on Mars's polarization of 1.8, 1.7, 1.2, and 1.7% at 100, 143, 217, and 353 GHz, respectively. The average ratio between the Planck-HFI measurements and the adopted model predictions for all five planets (excluding Jupiter observations for 353 GHz) is 1.004, 1.002, 1.021, and 1.033 for 100, 143, 217, and 353 GHz, respectively. Model predictions for planet thermodynamic temperatures are therefore consistent with the absolute calibration of Planck-HFI detectors at about the three-percent level. We compare our measurements with published results from recent cosmic microwave background experiments. In particular, we observe that the flux densities measured by Planck HFI and WMAP agree to within 2%. These results allow experiments operating in the mm-wavelength range to cross-calibrate against Planck and improve models of radiative transport used in planetary science.
  •  
19.
  • Akrami, Y., et al. (author)
  • Planck intermediate results LIV. The Planck multi-frequency catalogue of non-thermal sources
  • 2018
  • In: Astronomy and Astrophysics. - : EDP Sciences. - 0004-6361 .- 1432-0746. ; 619
  • Journal article (peer-reviewed) abstract
    • This paper presents the Planck Multi-frequency Catalogue of Non-thermal (i.e. synchrotron-dominated) Sources (PCNT) observed between 30 and 857 GHz by the ESA Planck mission. This catalogue was constructed by selecting objects detected in the full mission all-sky temperature maps at 30 and 143 GHz, with a signal-to-noise ratio (S/N) > 3 in at least one of the two channels after filtering with a particular Mexican hat wavelet. As a result, 29 400 source candidates were selected. Then, a multi-frequency analysis was performed using the Matrix Filters methodology at the position of these objects, and flux densities and errors were calculated for all of them in the nine Planck channels. This catalogue was built using a different methodology than the one adopted for the Planck Catalogue of Compact Sources (PCCS) and the Second Planck Catalogue of Compact Sources (PCCS2), although the initial detection was done with the same pipeline that was used to produce them. The present catalogue is the first unbiased, full-sky catalogue of synchrotron-dominated sources published at millimetre and submillimetre wavelengths and constitutes a powerful database for statistical studies of non-thermal extragalactic sources, whose emission is dominated by the central active galactic nucleus. Together with the full multi-frequency catalogue, we also define the Bright Planck Multi-frequency Catalogue of Non-thermal Sources (PCNTb), where only those objects with a S/N > 4 at both 30 and 143 GHz were selected. In this catalogue 1146 compact sources are detected outside the adopted Planck GAL070 mask; thus, these sources constitute a highly reliable sample of extragalactic radio sources. We also flag the high-significance subsample (PCNThs), a subset of 151 sources that are detected with S/N > 4 in all nine Planck channels, 75 of which are found outside the Planck mask adopted here. The remaining 76 sources inside the Galactic mask are very likely Galactic objects.
  •  
21.
  • Bartolini, Davide Basilio, et al. (author)
  • The Autonomic Operating System Research Project - Achievements and Future Directions
  • 2013
  • Conference paper (peer-reviewed) abstract
    • Traditionally, hypervisors, operating systems, and runtime systems have been providing an abstraction layer over the bare-metal hardware. Traditional abstractions, however, do not account for non-functional requirements such as system-level constraints or users' objectives. As these requirements are gaining increasing importance, researchers are looking into making user-specified and system-level objectives first-class citizens in the computer systems' realm. This paper describes the Autonomic Operating System (AcOS) project; AcOS enhances commodity operating systems with an autonomic layer that enables self-* properties through adaptive resource allocation. With AcOS, we investigate intelligent resource allocation to achieve user-specified service-level objectives on application performance and to respect system-level thresholds on CPU temperature. We give a broad overview of AcOS, elaborate on its achievements, and discuss research perspectives.
  •  
22.
  • Bini, Enrico, et al. (author)
  • Zero-Jitter Chains of Periodic LET Tasks via Algebraic Rings
  • 2023
  • In: IEEE Transactions on Computers. - 0018-9340. ; 72:11, pp. 3057-3071
  • Journal article (peer-reviewed) abstract
    • In embedded computing domains, including the automotive industry, complex functionalities are split across multiple tasks that form task chains. These tasks are functionally dependent and communicate partial computations through shared memory slots based on the Logical Execution Time (LET) paradigm. This paper introduces a model that captures the behavior of a producer-consumer pair of tasks in a chain, characterizing the timing of reading and writing events. Using ring algebra, the combined behavior of the pair can be modeled as a single periodic task. The paper also presents a lightweight mechanism to eliminate jitter in an entire chain of any size, resulting in a single periodic LET task with zero jitter. All presented methods are available in a public repository.
  •  
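The LET timing model behind this record can be made concrete in a few lines. Below is a minimal sketch, assuming a simplified LET semantics in which a job of a task with period T activated at k*T reads its inputs at k*T and publishes its outputs at (k+1)*T; the zero offsets and helper names are illustrative assumptions, not details from the paper.

```python
# A producer-consumer pair under a simplified LET semantics: reads at
# activation, outputs visible at the end of the period.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def publications(period, horizon):
    # instants at which the producer's outputs become visible
    return [(k + 1) * period for k in range(horizon // period)]

def chain_latencies(prod_period, cons_period, horizon):
    # latency from each producer publication to the publication of the
    # first consumer job activated at or after it
    lat = []
    for w in publications(prod_period, horizon):
        k = -(-w // cons_period)                 # ceil(w / cons_period)
        lat.append(k * cons_period + cons_period - w)
    return lat

print("hyperperiod:", lcm(2, 5))
print("latencies:", chain_latencies(2, 5, horizon=20))
```

With periods 2 and 5 the printed latencies vary from job to job; that variation is the chain jitter the paper removes by composing the pair, via its ring-algebra construction, into a single zero-jitter periodic LET task.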
23.
  • Brandenburg, Björn, et al. (author)
  • Message from the Chairs : RTAS 2019
  • 2019
  • In: 2019 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS). - 1545-3421. - 9781728106786 ; 2019-April
  • Journal article (other academic/artistic) abstract
    • Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
  •  
24.
  • Chasparis, Georgios, et al. (author)
  • Design and Implementation of Distributed Resource Management for Time Sensitive Applications
  • 2016
  • In: Automatica. - : Elsevier BV. - 0005-1098. ; 64, pp. 44-53
  • Journal article (peer-reviewed) abstract
    • In this paper, we address distributed convergence to fair allocations of CPU resources for time-sensitive applications. We propose a novel resource management framework where a centralized objective for fair allocations is decomposed into a pair of performance-driven recursive processes for updating: (a) the allocation of computing bandwidth to the applications (resource adaptation), executed by the resource manager, and (b) the computational demand of each application (service-level adaptation), executed by each application independently. We provide conditions under which the distributed recursive scheme exhibits convergence to solutions of the centralized objective (i.e., fair allocations). Contrary to prior work on centralized optimization schemes, the proposed framework exhibits adaptivity and robustness to changes both in the number and nature of applications, while it assumes minimum information available to both applications and the resource manager. We finally validate our framework with simulations using the TrueTime toolbox in MATLAB/Simulink.
  •  
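The decomposition this record describes, a resource-adaptation recursion at the manager plus a service-level recursion in each application, can be sketched briefly. The update laws, step sizes, and demand model below are illustrative assumptions, not the paper's actual equations.

```python
# Toy version of the paired recursions: the manager nudges CPU shares
# toward the applications' normalized demands, while each application
# nudges its service level toward what its share can sustain.
N = 3
share = [1.0 / N] * N          # CPU bandwidth allocated to each app
level = [0.5] * N              # each app's service level (demand knob)
EPS_RM, EPS_APP = 0.1, 0.05    # step sizes (assumed)

def demand(i):
    # hypothetical model: demand grows with the chosen service level
    return 0.2 + 0.6 * level[i]

for _ in range(500):
    d = [demand(i) for i in range(N)]
    total = sum(d)
    # (a) resource adaptation, run by the resource manager
    share = [v + EPS_RM * (di / total - v) for v, di in zip(share, d)]
    # (b) service-level adaptation, run by each application on its own
    level = [min(1.0, max(0.0, level[i] + EPS_APP * (share[i] - d[i])))
             for i in range(N)]

print([round(v, 3) for v in share])   # identical apps end at a fair split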
25.
  • Chasparis, Georgios, et al. (author)
  • Distributed Management of CPU Resources for Time-Sensitive Applications
  • 2012
  • Reports (other academic/artistic) abstract
    • The number of applications sharing the same embedded device is increasing dramatically. Very efficient mechanisms (resource managers) for assigning the CPU time to all demanding applications are needed. Unfortunately, existing optimization-based resource managers consume too much resource themselves. In this paper, we address the problem of distributed convergence to efficient CPU allocation for time-sensitive applications. We propose a novel resource management framework where both applications and the resource manager act independently trying to maximize their own performance measure according to a utility-based adjustment process. Contrary to prior work on centralized optimization schemes, the proposed framework exhibits adaptivity and robustness to changes both in the number and nature of applications, while it assumes minimum information available to both applications and the resource manager. It is shown analytically that efficient resource allocation can be achieved in a distributed fashion through the proposed adjustment process. Experiments using the TrueTime Matlab toolbox show the validity of our proposed approach.
  •  
26.
  • Chasparis, Georgios, et al. (author)
  • Distributed Management of CPU Resources for Time-Sensitive Applications
  • 2013
  • In: [Host publication title missing]. - 0743-1619. ; , pp. 5305-5312
  • Conference paper (peer-reviewed) abstract
    • The number of applications sharing the same embedded device is increasing dramatically. Very efficient mechanisms (resource managers) for assigning the CPU time to all demanding applications are needed. Unfortunately, existing optimization-based resource managers consume too much resource themselves. In this paper, we address the problem of distributed convergence to fair allocation of CPU resources for time-sensitive applications. We propose a novel resource management framework where both applications and the resource manager act independently trying to maximize their own performance measure according to a utility-based adjustment process. Contrary to prior work on centralized optimization schemes, the proposed framework exhibits adaptivity and robustness to changes both in the number and nature of applications, while it assumes minimum information available to both applications and the resource manager. It is shown analytically that fair resource allocation can be achieved through the proposed adjustment process when the CPU is overloaded. Experiments using the TrueTime Matlab toolbox show the validity of the proposed approach.
  •  
27.
  • Dürango, Jonas, et al. (author)
  • A control theoretical approach to non-intrusive geo-replication for cloud services
  • 2016
  • In: 2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC). - : IEEE. - 9781509018376 ; , pp. 1649-1656
  • Conference paper (peer-reviewed) abstract
    • Complete data center failures may occur due to disastrous events such as earthquakes or fires. To attain robustness against such failures and reduce the probability of data loss, data must be replicated in another data center sufficiently geographically separated from the original data center. Implementing geo-replication is expensive as every data update operation in the original data center must be replicated in the backup. Running the application and the replication service in parallel is cost-effective but creates a trade-off between potential replication consistency and data loss and reduced application performance due to network resource contention. We model this trade-off and provide a control-theoretical solution based on Model Predictive Control to dynamically allocate network bandwidth to accommodate the objectives of both replication and application data streams. We evaluate our control solution through simulations emulating the individual services, their traffic flows, and the shared network resource. The MPC solution is able to maintain a consistent performance over periods of persistent overload, and is quickly able to indiscriminately recover once the system returns to a stable state. Additionally, the MPC balances the two objectives of consistency and performance according to the proportions specified in the objective function.
  •  
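A minimal receding-horizon sketch of the bandwidth trade-off described above: the controller repeatedly minimizes a cost that weighs replication backlog against bandwidth taken from the application. The dynamics, weights, and single-move horizon are illustrative assumptions; the paper's MPC formulation is richer.

```python
# State: replication backlog q. Control: fraction u of link capacity C
# given to replication; the application keeps the rest.
C, ARRIVAL = 10.0, 6.0      # link capacity and data-update rate (assumed)
W_Q, W_APP = 1.0, 0.5       # weights: backlog vs. bandwidth taken (assumed)
HORIZON, GRID = 5, 21

def step_backlog(q, u):
    # backlog grows with new updates and drains at the replication rate
    return max(0.0, q + ARRIVAL - u * C)

def horizon_cost(q, u):
    # crude move blocking: hold the same u over the whole horizon
    c = 0.0
    for _ in range(HORIZON):
        q = step_backlog(q, u)
        c += W_Q * q + W_APP * u * C
    return c

q = 20.0                    # start with a large unreplicated backlog
for t in range(10):
    u = min((k / (GRID - 1) for k in range(GRID)),
            key=lambda x: horizon_cost(q, x))
    q = step_backlog(q, u)
    print(f"t={t}: u={u:.2f} backlog={q:.1f}")
```

Under these numbers the controller replicates at full rate while the backlog persists, then settles at the allocation that just matches the update rate, mirroring the consistency-versus-performance balance the abstract describes.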
28.
  • Dürango, Jonas, et al. (author)
  • Control-theoretical load-balancing for cloud applications with brownout
  • 2014
  • In: 2014 IEEE 53RD ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC). - 9781467360906 - 9781479977468 ; , pp. 5320-5327
  • Conference paper (peer-reviewed) abstract
    • Cloud applications are often subject to unexpected events like flash crowds and hardware failures. Without a predictable behaviour, users may abandon an unresponsive application. This problem has been partially solved on two separate fronts: first, by adding a self-adaptive feature called brownout inside cloud applications to bound response times by modulating user experience, and, second, by introducing replicas - copies of the applications having the same functionalities - for redundancy and adding a load-balancer to direct incoming traffic. However, existing load-balancing strategies interfere with brownout self-adaptivity. Load-balancers are often based on response times, that are already controlled by the self-adaptive features of the application, hence they are not a good indicator of how well a replica is performing. In this paper, we present novel load-balancing strategies, specifically designed to support brownout applications. They base their decision not on response time, but on user experience degradation. We implemented our strategies in a self-adaptive application simulator, together with some state-of-the-art solutions. Results obtained in multiple scenarios show that the proposed strategies bring significant improvements when compared to the state-of-the-art ones.
  •  
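The key idea of this record, balancing load on user experience degradation rather than response time, can be sketched as follows. The smoothing rule and the proportional routing policy are illustrative assumptions, not the exact strategies evaluated in the paper.

```python
# Route requests according to each replica's brownout "dimmer" (the
# fraction of responses still carrying optional content), rather than
# by response time.
import random

class Replica:
    def __init__(self, name):
        self.name = name
        self.dimmer = 1.0                    # 1.0 = no degradation

    def report(self, dimmer, alpha=0.2):
        # exponentially smooth the dimmer value the replica reports
        self.dimmer = (1 - alpha) * self.dimmer + alpha * dimmer

def pick(replicas):
    # traffic proportional to the smoothed dimmer: the replica that
    # degrades user experience less receives more requests
    return random.choices(replicas, weights=[r.dimmer for r in replicas])[0]

rs = [Replica("a"), Replica("b")]
for _ in range(20):
    rs[0].report(0.4)                        # "a" is browning out heavily
counts = {r.name: 0 for r in rs}
for _ in range(10_000):
    counts[pick(rs).name] += 1
print(counts)                                # "b" gets most of the traffic
```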
29.
  • Edpalm, Viktor, et al. (author)
  • Camera networks dimensioning and scheduling with quasi worst-case transmission time
  • 2018
  • In: 30th Euromicro Conference on Real-Time Systems, ECRTS 2018. - 9783959770750 ; 106
  • Conference paper (peer-reviewed) abstract
    • This paper describes a method to compute frame size estimates to be used in quasi Worst-Case Transmission Times (qWCTT) for cameras that transmit frames over IP-based communication networks. The precise determination of qWCTT allows us to model the network access scheduling problem as a multiframe problem and to re-use theoretical results for network scheduling. The paper presents a set of experiments, conducted in an industrial testbed, that validate the qWCTT estimation. We believe that a more precise estimation will lead to savings for network infrastructure and to better network utilization.
  •  
30.
  • Edpalm, Viktor, et al. (author)
  • H.264 Video Frame Size Estimation
  • 2018
  • Reports (other academic/artistic) abstract
    • This report describes a method to estimate the video bandwidth for IP cameras using the H.264 standard. The precise determination of bandwidth allows us to model the network access as a scheduling problem and/or estimate the amount of data that would traverse it during different periods. The paper is written to be as didactic as possible and presents a set of experiments, conducted in an industrial testbed, that validate the estimation. We believe that a more precise estimation will lead to savings for network infrastructure and to better network utilization.
  •  
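As a rough illustration of the kind of estimate such a report builds on, a mean H.264 bitrate can be derived from per-frame-type size estimates and the GOP structure. The formula and every number below are assumptions for illustration, not values or the model from the report.

```python
# Back-of-the-envelope H.264 bandwidth from frame sizes and GOP layout.
def h264_bandwidth_bps(i_frame_bytes, p_frame_bytes, gop_length, fps):
    """Mean bitrate of a stream whose GOP is one I-frame followed by
    (gop_length - 1) P-frames, repeated at a fixed frame rate."""
    bytes_per_gop = i_frame_bytes + (gop_length - 1) * p_frame_bytes
    return bytes_per_gop * 8 * fps / gop_length

# e.g., a surveillance-style stream (illustrative numbers only)
print(f"{h264_bandwidth_bps(90_000, 15_000, gop_length=30, fps=25) / 1e6:.2f} Mbit/s")
```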
31.
  • Filieri, Antonio, et al. (author)
  • Automated Design of Self-Adaptive Software with Control-Theoretical Formal Guarantees
  • 2014
  • Conference paper (peer-reviewed) abstract
    • Self-adaptation enables software to execute successfully in dynamic, unpredictable, and uncertain environments. Control theory provides a broad set of mathematically grounded techniques for adapting the behavior of dynamic systems. While it has been applied to specific software control problems, it has proved difficult to define methodologies allowing non-experts to systematically apply control techniques to create adaptive software. These difficulties arise because computer systems are usually non-linear, with varying workloads and heterogeneous components, making it difficult to model software as a dynamic system; i.e., by means of differential or difference equations. This paper proposes a broad scope methodology for automatically constructing both an approximate dynamic model of a software system and a suitable controller for managing its non-functional requirements. Despite its generality, this methodology provides formal guarantees concerning the system's dynamic behavior by keeping its model continuously updated to compensate for changes in the execution environment and effects of the initial approximation. We apply the methodology to three case studies, demonstrating its generality by tackling different domains (and different non-functional requirements) with the same approach. Being broadly applicable and fully automated, this methodology may allow the adoption of control theoretical solutions (and their formal properties) for a wide range of software adaptation problems.
  •  
32.
  • Filieri, Antonio, et al. (author)
  • Automated Design of Self-Adaptive Software with Control-Theoretical Formal Guarantees
  • 2015
  • In: Software Engineering and Management 2015 : Multikonferenz der GI-Fachbereiche Softwaretechnik (SWT) und Wirtschaftsinformatik (WI), FA WI-MAW - Multikonferenz der GI-Fachbereiche Softwaretechnik (SWT) und Wirtschaftsinformatik (WI), FA WI-MAW. - 1617-5468. - 9783885796336 ; P-239, s. 112-113
  • Conference paper (peer-reviewed)abstract
    • Self-adaptation enables software to execute successfully in dynamic, unpredictable, and uncertain environments. However, most current approaches lack formal guarantees on the effectiveness and dependability of the adaptation mechanisms, limiting their applicability in practice. Control theory has established a broad set of mathematically grounded techniques for the control of dynamic systems in several engineering fields. While control shares self-evident similarities with software adaptation, modeling software behavior as a system of differential or difference equations is not straightforward, nor is mastering the mathematical background needed for synthesizing a suitable controller. In this paper we focus on automatic modeling and controller synthesis for systems with a single knob affecting the satisfaction of a quantitative requirement. Effectiveness and performance of the controller are guaranteed by construction. The approach is fully automated and implemented in several programming languages, empowering non-experts to apply control principles to a wide range of software adaptation problems.
  •  
33.
  • Filieri, Antonio, et al. (author)
  • Automated Multi-Objective Control for Self-Adaptive Software Design
  • 2015
  • In: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. - New York, NY, USA : ACM. - 9781450336758 ; , s. 13-24
  • Conference paper (peer-reviewed)abstract
    • While software is becoming more complex every day, the requirements on its behavior are not getting any easier to satisfy. An application should offer a certain quality of service, adapt to the current environmental conditions, and withstand runtime variations that were simply unpredictable during the design phase. To tackle this complexity, control theory has been proposed as a technique for managing software's dynamic behavior, obviating the need for human intervention. Control-theoretical solutions, however, are either tailored to the specific application or do not handle the complexity of multiple interacting components and multiple goals. In this paper, we develop an automated control-synthesis methodology that takes, as input, the configurable software components (or knobs) and the goals to be achieved. Our approach automatically constructs a control system that manages the specified knobs and guarantees that the goals are met. These claims are backed up by experimental studies on three different software applications, where we show how the proposed automated approach handles the complexity of multiple knobs and objectives.
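    Assuming, purely for illustration, that the knobs' effect on the goals is locally linear, the multi-goal idea can be sketched as integral control through an inverted gain matrix; the matrix G and the goals below are invented.

        import numpy as np

        G = np.array([[2.0, 0.5],          # invented gains: effect of knob j
                      [0.3, 1.5]])         # on goal i
        goal = np.array([10.0, 6.0])

        u = np.zeros(2)
        for _ in range(20):
            y = G @ u                      # measured goal values (ideal model)
            u += 0.5 * np.linalg.solve(G, goal - y)   # integral action, all goals
        print((G @ u).round(2), "with knobs", u.round(2))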
  •  
34.
  • Filieri, Antonio, et al. (author)
  • Autotuning control structures for reliability-driven dynamic binding
  • 2012
  • In: IEEE 51st Annual Conference on Decision and Control (CDC), 2012. - 0743-1546. - 9781467320658 ; , s. 418-423
  • Conference paper (peer-reviewed)abstract
    • This paper explores a formally grounded approach to the problem of dynamic binding in service-oriented software architectures. Dynamic binding is a widely adopted means to automatically bind exposed software interfaces to actual implementations. Executing an operation on one or another implementation, though providing the same result, can yield different quality of service, e.g. due to failure occurrence. Dynamic binding is thus of primary importance for achieving what the Software Engineering domain calls "self-adaptiveness": the capability to preserve a desired quality of service, if this is feasible. It is important to reach this goal also in the presence of environmental fluctuations - such as a route congestion increase - or even abrupt variations - such as a server breakdown. A quite general dynamic binding problem is here reformulated as a discrete-time feedback control one, and the use of autotuning techniques is discussed, extending previous research, with a view to guaranteeing the desired quality of service without the need for computationally intensive optimisations.
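    A minimal sketch of the control formulation (not the paper's autotuning machinery) follows: the probability of binding to the more reliable implementation acts as the control signal, and an integral controller steers the measured success rate to the set point; all rates and gains are assumed.

        import random

        def measured_reliability(p, r_hi, r_lo, n=2000):
            """Empirical success rate when the reliable implementation is
            chosen with probability p (per-call success rates r_hi, r_lo)."""
            ok = sum(random.random() < (r_hi if random.random() < p else r_lo)
                     for _ in range(n))
            return ok / n

        setpoint, p, gain = 0.95, 0.5, 1.0
        for _ in range(30):
            r = measured_reliability(p, r_hi=0.99, r_lo=0.90)
            p = min(1.0, max(0.0, p + gain * (setpoint - r)))   # integral control
        print("binding probability ~", round(p, 2))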
  •  
35.
  • Filieri, Antonio, et al. (author)
  • Control Strategies for Self-Adaptive Software Systems
  • 2017
  • In: ACM Transactions on Autonomous and Adaptive Systems. - : Association for Computing Machinery (ACM). - 1556-4665 .- 1556-4703. ; 11:4
  • Journal article (peer-reviewed)abstract
    • The pervasiveness and growing complexity of software systems are challenging software engineering to design systems that can adapt their behavior to withstand unpredictable, uncertain, and continuously changing execution environments. Control-theoretical adaptation mechanisms have received growing interest from the software engineering community in the last few years for their mathematical grounding, which allows formal guarantees on the behavior of the controlled systems. However, most of these mechanisms are tailored to specific applications and can hardly be generalized into broadly applicable software design and development processes. This article discusses a reference control design process, from goal identification to the verification and validation of the controlled system. A taxonomy of the main control strategies is introduced, analyzing their applicability to software adaptation for both functional and nonfunctional goals. A brief discussion of how to deal with uncertainty complements the treatment. Finally, the article highlights a set of open challenges, both for the software engineering and the control theory research communities.
  •  
36.
  • Filieri, Antonio, et al. (author)
  • Control theory for software engineering : Technical briefing
  • 2016
  • In: ICSE '16 Proceedings of the 38th International Conference on Software Engineering Companion. - New York, NY, USA : ACM. - 9781450341615 - 9781450342056 ; , s. 908-910
  • Conference paper (peer-reviewed)abstract
    • Pervasiveness and complexity of modern software are challenging engineers to design applications able to guarantee the desired quality of service despite unpredictable runtime variations in their execution environment. A variety of techniques have been proposed in recent years for the design of self-adaptive applications; however, most of them are tailored to specific applications or can provide only limited guarantees of effectiveness and dependability. Control theory has, on the other hand, developed a wide set of mathematically grounded methods for many engineering domains that interact with the physical world. However, applying these methods to software systems is not straightforward. Software is rarely designed to be controllable and its behavior is hard to model in formalisms amenable to control. In this technical briefing we will recall a set of foundational concepts of control theory, explain how they can be transposed into the software engineering domain, and discuss some insights into the design of controllable software.
  •  
37.
  • Filieri, Antonio, et al. (author)
  • Discrete-time dynamic modeling for software and services composition as an extension of the Markov chain approach
  • 2012
  • In: IEEE International Conference on Control Applications (CCA), 2012. - 1085-1992. - 9781467345033 ; , s. 557-562
  • Conference paper (peer-reviewed)abstract
    • Discrete-Time Markov Chains (DTMCs) and Continuous-Time Markov Chains (CTMCs) are often used to model various types of phenomena, such as the behavior of software products. In that context, Markov chains are widely used to describe the possible time-varying behavior of "self-adaptive" software systems, where the transition from one state to another represents alternative choices at the software code level, taken according to a certain probability distribution. From a control-theoretical standpoint, some of these probabilities can be interpreted as control signals and others can just be observed. However, the translation between a DTMC or CTMC model and a corresponding first-principles model that can be used to design a control system is not immediate. This paper investigates a possible solution for translating a CTMC model into a dynamic system, with a focus on the control of computing-system components. Notice that DTMC models can be translated as well, providing additional information.
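    The translation idea lends itself to a short worked example: the state distribution of a DTMC evolves as a linear discrete-time system x(k+1) = P(u)^T x(k), where a controllable transition probability u makes the matrix depend on the control signal. The three-state chain below is invented for illustration.

        import numpy as np

        def step(x, u):
            """One step of an invented 3-state chain; u in [0, 1] is the
            controllable probability of leaving state 0."""
            P = np.array([[1 - u, u,   0.0],
                          [0.0,   0.5, 0.5],
                          [0.2,   0.0, 0.8]])
            return P.T @ x

        x = np.array([1.0, 0.0, 0.0])     # start in state 0 with certainty
        for _ in range(100):
            x = step(x, u=0.3)
        print(x.round(3))                 # approaches the stationary distribution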
  •  
38.
  • Filieri, Antonio, et al. (author)
  • Reliability-driven dynamic binding via feedback control
  • 2012
  • In: 2012 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). - 9781467317887 ; , s. 43-52
  • Conference paper (peer-reviewed)abstract
    • We are concerned with software that can self-adapt to satisfy certain reliability requirements, in spite of adverse changes affecting the environment in which it is embedded. Self-adapting software architectures rely heavily on dynamic binding: the bindings among components are dynamically set as the conditions that require a self-adaptation are discovered during the system's lifetime. By adopting a suitable modeling approach, the dynamic binding problem can be formulated as a discrete-time feedback control problem, and solved with very simple techniques based on linear blocks. In doing so, reliability objectives are in turn formulated as set-point tracking ones in the presence of disturbances, and attained without the need for optimization. At design time, the proposed formulation has the advantage of naturally providing system-sizing clues, while at operation time, the inherent computational simplicity of the obtained controllers results in a low overhead. Finally, the formulation allows for a rigorous assessment of the achieved results in both nominal and off-design conditions, for any desired operating point.
  •  
39.
  • Filieri, Antonio, et al. (author)
  • Software Engineering Meets Control Theory
  • 2015
  • In: 2015 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. - Piscataway, NJ, USA : IEEE Press. - 9780769555676 ; , s. 71-82
  • Conference paper (peer-reviewed)abstract
    • The software engineering community has proposed numerous approaches for making software self-adaptive. These approaches take inspiration from machine learning and control theory, constructing software that monitors and modifies its own behavior to meet goals. Control theory, in particular, has received considerable attention as it represents a general methodology for creating adaptive systems. Control-theoretical software implementations, however, tend to be ad hoc. While such solutions often work in practice, it is difficult to understand and reason about the desired properties and behavior of the resulting adaptive software and its controller. This paper discusses a control design process for software systems which enables automatic analysis and synthesis of a controller that is guaranteed to have the desired properties and behavior. The paper documents the process and illustrates its use in an example that walks through all necessary steps for self-adaptive controller synthesis.
  •  
40.
  • Gunnarsson, Martin, et al. (author)
  • Trusted Execution of Periodic Tasks for Embedded Systems
  • 2023
  • In: IFAC Proceedings Volumes (IFAC-PapersOnline). - 2405-8963. ; 56:2, s. 8845-8850
  • Journal article (peer-reviewed)abstract
    • Systems that interact with the environment around them generally run some periodic tasks. This class of systems includes, among others, embedded control systems. Embedded controllers have proven vulnerable to various security attacks, including attacks that alter sensor and actuator data and attacks that disrupt the calculation of the control signals. In this paper, we propose, and implement, a mechanism to execute a periodic task and its communication interfaces in a trusted execution environment. This allows us to execute an isolated controller, thus offering higher security guarantees. We analyse the overhead of switching between the regular (possibly compromised) execution environment and the trusted execution environment, and quantify the effect of this defence mechanism on the control performance.
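    The paper targets a real TEE with world switches on embedded hardware; the pure-Python sketch below, with a stand-in trusted_step function, only shows the shape of the experiment: run the task periodically and measure the per-period overhead around the protected call.

        import time

        PERIOD = 0.01                     # 10 ms control period (assumed)

        def trusted_step():
            """Stand-in for the controller running inside the TEE."""
            time.sleep(0.001)             # pretend 1 ms of computation

        next_release = time.monotonic()
        overheads = []
        for _ in range(100):
            t0 = time.monotonic()
            trusted_step()                # enter TEE + compute + leave TEE
            overheads.append(time.monotonic() - t0 - 0.001)
            next_release += PERIOD
            time.sleep(max(0.0, next_release - time.monotonic()))

        print("mean overhead ~ %.3f ms" % (1e3 * sum(overheads) / len(overheads)))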
  •  
41.
  •  
42.
  • Hoffmann, Henry, et al. (author)
  • A generalized software framework for accurate and efficient management of performance goals
  • 2013
  • In: 2013 Proceedings of the International Conference on Embedded Software, EMSOFT 2013. - 9781479914432
  • Conference paper (peer-reviewed)abstract
    • A number of techniques have been proposed to provide runtime performance guarantees while minimizing power consumption. One drawback of existing approaches is that they work only on a fixed set of components (or actuators) that must be specified at design time. If new components become available, these management systems must be redesigned and reimplemented. In this paper, we propose PTRADE, a novel performance management framework that is general with respect to the components it manages. PTRADE can be deployed to work on a new system with different components without redesign and reimplementation. PTRADE's generality is demonstrated through the management of performance goals for a variety of benchmarks on two different Linux/x86 systems and a simulated 128-core system, each with different components governing power and performance tradeoffs. Our experimental results show that PTRADE provides generality while meeting performance goals with low error and close to optimal power consumption.
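    PTRADE's actual interfaces are not reproduced here; the sketch below merely illustrates the generality argument, with actuators registered as plain data (invented speedup/power pairs) so that adding a component requires no redesign of the decision logic.

        from itertools import product

        def choose(actuators, perf_goal):
            """Lowest-power combination of per-actuator settings whose combined
            speedup meets the goal; actuators are plain (speedup, power) data."""
            best = None
            for combo in product(*actuators):
                speedup, power = 1.0, 0.0
                for s, p in combo:
                    speedup *= s
                    power += p
                if speedup >= perf_goal and (best is None or power < best[1]):
                    best = (combo, power)
            return best

        dvfs  = [(1.0, 10.0), (1.4, 18.0), (1.8, 30.0)]   # invented settings
        cores = [(1.0, 5.0), (1.9, 9.5), (3.5, 19.0)]
        print(choose([dvfs, cores], perf_goal=2.5))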
  •  
43.
  • Hoffmann, Henry, et al. (author)
  • A Generalized Software System for Accurate and Efficient Management of Application Performance Goals
  • 2013
  • Conference paper (peer-reviewed)abstract
    • A number of techniques have been proposed to provide runtime performance guarantees while minimizing power consumption. One drawback of existing approaches is that they work only on a fixed set of components (or actuators) that must be specified at design time. If new components become available, these management systems must be redesigned and reimplemented. In this paper, we propose PTRADE, a novel performance management framework that is general with respect to the components it manages. PTRADE can be deployed to work on a new system with different components without redesign and reimplementation. PTRADE's generality is demonstrated through the management of performance goals for a variety of benchmarks on two different Linux/x86 systems and a simulated 128-core system, each with different components governing power and performance tradeoffs. Our experimental results show that PTRADE provides generality while meeting performance goals and approaching optimal power consumption. PTRADE consumes only 7% more power than optimal on the Linux/x86 systems and 3% more power than optimal on the simulated many-core.
  •  
44.
  • Hoffmann, Henry, et al. (author)
  • PCP: A Generalized Approach to Optimizing Performance Under Power Constraints through Resource Management
  • 2014
  • Conference paper (peer-reviewed)abstract
    • Many computing systems are constrained by power budgets. While they could temporarily draw more power, doing so creates unsustainable temperatures and unwanted electricity consumption. Developing systems that operate within power budgets is a constrained optimization problem: configuring the components within the system to maximize performance while maintaining sustainable power consumption. This is a challenging problem because many different components within a system affect power/performance tradeoffs and they interact in complex ways. Prior approaches address these challenges by fixing a set of components and designing a power budgeting framework that manages only that one set of components. If new components become available, then this framework must be redesigned and reimplemented. This paper presents PCP, a general solution to the power budgeting problem that works with arbitrary sets of components, even if they are not known at design time or change during runtime. To demonstrate PCP we implement it in software and deploy it on a Linux/x86 platform.
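    PCP addresses the dual of the problem in the previous two entries; under the same simplifying assumptions (multiplicative speedups, additive power, invented numbers), the constrained optimization can be sketched as follows.

        from itertools import product

        def pcp(components, power_budget):
            """Best-performing combination of component settings whose total
            power stays within the budget; components may change at runtime."""
            best = None
            for combo in product(*components):
                speedup, power = 1.0, 0.0
                for s, p in combo:
                    speedup *= s
                    power += p
                if power <= power_budget and (best is None or speedup > best[1]):
                    best = (combo, speedup)
            return best

        print(pcp([[(1.0, 10), (1.5, 20)], [(1.0, 5), (2.0, 15)]], power_budget=30))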
  •  
45.
  • Hoffmann, Henry, et al. (author)
  • Self-aware computing in the Angstrom processor
  • 2012
  • In: Proceedings of the 49th Annual Design Automation Conference. - New York, NY, USA : ACM. - 9781450311991 ; , s. 259-264
  • Conference paper (peer-reviewed)abstract
    • Addressing the challenges of extreme-scale computing requires holistic design of new programming models and systems that support those models. This paper discusses the Angstrom processor, which is designed to support a new Self-aware Computing (SEEC) model. In SEEC, applications explicitly state goals, while other system components provide actions that the SEEC runtime system can use to meet those goals. Angstrom supports this model by exposing sensors and adaptations that traditionally would be managed independently by hardware. This exposure allows SEEC to coordinate hardware actions with actions specified by other parts of the system, and allows the SEEC runtime system to meet application goals while reducing costs (e.g., power consumption).
  •  
46.
  • Imes, Connor, et al. (author)
  • POET: A Portable Approach to Minimizing Energy Under Soft Real-time Constraints
  • 2015
  • In: Real-Time and Embedded Technology and Applications Symposium (RTAS), 2015 IEEE. - 1545-3421. - 9781479986040 - 9781479986033 ; , s. 75-86
  • Conference paper (peer-reviewed)abstract
    • Embedded real-time systems must meet timing constraints while minimizing energy consumption. To this end, many energy optimizations are introduced for specific platforms or specific applications. These solutions are not portable, however, and when the application or the platform change, these solutions must be redesigned. Portable techniques are hard to develop due to the varying tradeoffs experienced with different application/platform configurations. This paper addresses the problem of finding and exploiting general tradeoffs, using control theory and mathematical optimization to achieve energy minimization under soft real-time application constraints. The paper presents POET, an open-source C library and runtime system that takes a specification of the platform resources and optimizes the application execution. We test POET's ability to deliver portable energy reduction on two embedded systems with different tradeoff spaces - the first with a mobile Intel Haswell processor, and the second with an ARM big.LITTLE System on Chip. POET achieves the desired latency goals with small error while consuming, on average, only 1.3% more energy than the dynamic optimal oracle on the Haswell and 2.9% more on the ARM. We believe this open-source, library-based approach to resource management will simplify the process of writing portable, energy-efficient code for embedded systems.
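    POET is a real open-source C library, and its API is not reproduced here; the sketch below only conveys the underlying idea the abstract describes - meeting a soft performance constraint at minimal energy by time-interleaving two operating points - with invented rate/power figures.

        def schedule(configs, target_rate):
            """configs: invented (jobs/s, watts) operating points. Pick the pair
            and time split that meets target_rate on average at minimal power."""
            best = None
            for r1, p1 in configs:
                for r2, p2 in configs:
                    if not (r1 <= target_rate <= r2) or r1 == r2:
                        continue
                    f = (target_rate - r1) / (r2 - r1)   # fraction in fast config
                    power = (1 - f) * p1 + f * p2
                    if best is None or power < best[0]:
                        best = (power, (r1, p1), (r2, p2), round(f, 3))
            return best

        print(schedule([(50, 4.0), (80, 7.0), (120, 14.0)], target_rate=100))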
  •  
47.
  • Imes, Connor, et al. (author)
  • Portable Multicore Resource Management for Applications with Performance Constraints
  • 2016
  • In: IEEE 10th International Symposium on Embedded Multicore/Many-core Systems-on-Chip. - 9781509035304 ; , s. 305-312
  • Conference paper (peer-reviewed)abstract
    • Many modern software applications have performance requirements, like mobile and embedded systems that must keep up with sensor data, or web services that must return results to users within an acceptable latency bound. For such applications, the goal is not to run as fast as possible, but to meet their performance requirements with minimal resource usage, the key resource in most systems being energy. Heuristic solutions have been proposed to minimize energy under a performance constraint, but recent studies show that these approaches are not portable - heuristics that are near-optimal on one system can waste integer factors of energy on others. The POET library and runtime system provides a portable method for resource management that achieves near-optimal energy consumption while meeting soft real-time constraints across a range of devices. Although POET was originally designed and tested on embedded and mobile platforms, in this paper we evaluate it on a manycore server-class system. The larger scale of manycore systems adds some overhead to adjusting resource allocations, but POET still meets timing constraints and achieves near-optimal energy consumption. We demonstrate that POET achieves portable energy efficiency on platforms ranging from low-power ARM big.LITTLE architectures to powerful x86 server-class systems.
  •  
48.
  •  
49.
  • Josephrexon, Brindha Jeniefer, et al. (author)
  • Experimenting with networked control software subject to faults
  • 2022
  • In: 2022 IEEE 61st Conference on Decision and Control, CDC 2022. - 2576-2370 .- 0743-1546. - 9781665467612 ; 2022-December, s. 1547-1552
  • Conference paper (peer-reviewed)abstract
    • Faults and errors are common in the execution of digital controllers on top of embedded hardware. Researchers from the embedded system domain devised models to understand and bound the occurrence of these faults. Using these models, control researchers have demonstrated robustness properties of control systems, and of their corresponding digital implementations. In this paper, we build a framework to experiment with the injection of faults in a networked control system that regulates the behaviour of a Furuta pendulum. We use the software framework to experiment on computational problems that cause the control signals not to be available on time, and network faults that cause dropped packets during the transmission of sensor data and actuator commands.
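    The actual framework drives a physical Furuta pendulum; the self-contained sketch below substitutes an invented scalar plant and injects network faults as Bernoulli drops on the actuation channel, with the actuator holding the last received command.

        import random

        def simulate(drop_prob, steps=200, dt=0.01):
            x, u_applied = 1.0, 0.0            # invented scalar unstable plant
            for _ in range(steps):
                u = -3.0 * x                   # state-feedback controller
                if random.random() >= drop_prob:
                    u_applied = u              # command packet delivered
                # otherwise the actuator holds the last received command
                x += dt * (0.5 * x + u_applied)   # x' = 0.5 x + u
            return abs(x)

        for p in (0.0, 0.2, 0.5):
            print("drop=%.1f  |x| after 2 s ~ %.4f" % (p, simulate(p)))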
  •  
50.
  •  
  • Result 1-50 of 131
Type of publication
conference paper (67)
journal article (49)
book chapter (7)
reports (5)
licentiate thesis (2)
book (1)
Type of content
peer-reviewed (118)
other academic/artistic (13)
Author/Editor
Maggio, Martina (112)
Årzén, Karl-Erik (24)
Gerbino, Martina (18)
Gudmundsson, Jón E. (18)
de Bernardis, P. (18)
Bouchet, F. R. (18)
Calabrese, E. (18)
Delabrouille, J. (18)
Di Valentino, E. (18)
Finelli, F. (18)
Jaffe, A. H. (18)
Matarrese, S. (18)
Paoletti, D. (18)
Frailis, M. (18)
Maciás-Pérez, J. F. (18)
Aumont, J. (18)
Baccigalupi, C. (18)
Banday, A. J. (18)
Barreiro, R. B. (18)
Bartolo, N. (18)
Benabed, K. (18)
Bersanelli, M. (18)
Bielewicz, P. (18)
Bond, J. R. (18)
Borrill, J. (18)
Burigana, C. (18)
Crill, B. P. (18)
de Zotti, G. (18)
Diego, J. M. (18)
Ducout, A. (18)
Dupac, X. (18)
Elsner, F. (18)
Ensslin, T. A. (18)
Eriksen, H. K. (18)
Galeotta, S. (18)
Galli, S. (18)
Ganga, K. (18)
Gruppuso, A. (18)
Herranz, D. (18)
Keihanen, E. (18)
Keskitalo, R. (18)
Kunz, M. (18)
Kurki-Suonio, H. (18)
Lamarre, J. -M. (18)
Lasenby, A. (18)
Lattanzi, M. (18)
Levrier, F. (18)
Lilje, P. B. (18)
Lopez-Caniego, M. (18)
Maggio, G. (18)
University
Lund University (106)
Stockholm University (18)
Mälardalen University (15)
Umeå University (13)
Linnaeus University (7)
Royal Institute of Technology (2)
Uppsala University (1)
Linköping University (1)
RISE (1)
Karolinska Institutet (1)
Language
English (131)
Research subject (UKÄ/SCB)
Engineering and Technology (103)
Natural sciences (39)
Medical and Health Sciences (1)
