SwePub
Search the SwePub database


Result list for search: WFRF:(Schoonderwaldt Erwin)

  • Results 1-10 of 28
1.
  • Fober, D., et al. (author)
  • IMUTUS: an interactive music tuition system
  • 2004
  • In: Proc. of the Sound and Music Computing Conference (SMC 04), October 20-22, 2004, IRCAM, Paris, France, pp. 97-103
  • Conference paper (other academic/artistic)
2.
  • Friberg, Anders, et al. (author)
  • Automatic real-time extraction of musical expression
  • 2002
  • In: Proceedings of the International Computer Music Conference 2002, pp. 365-367
  • Conference paper (peer-reviewed)
    • Abstract: Previous research has identified a set of acoustical cues that are important in communicating different emotions in music performance. We have applied these findings in the development of a system that automatically predicts the expressive intention of the player. First, low-level cues of music performances are extracted from audio. Important cues include average and variability values of sound level, tempo, articulation, attack velocity, and spectral content. Second, linear regression models obtained from listening experiments are used to predict the intended emotion. Third, the prediction data can be visually displayed using, for example, color mappings in accordance with synesthesia research. Preliminary test results indicate that the system accurately predicts the intended emotion and is robust to minor errors in the cue extraction.
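A minimal sketch of the kind of pipeline this abstract describes, assuming hypothetical cue names and invented regression coefficients (the published models are not reproduced here): per-performance cues feed one linear model per emotion, and the best-scoring emotion is reported. In Python:

    import numpy as np

    # Hypothetical cue vector; names are illustrative, not the paper's exact set.
    CUES = ["level_mean", "level_var", "tempo_mean", "tempo_var",
            "articulation", "attack_velocity", "spectral_centroid"]

    # Placeholder (weights, intercept) per emotion, standing in for linear
    # regression models fitted to listener ratings. All numbers invented.
    MODELS = {
        "happy": (np.array([0.3, 0.1, 0.5, 0.1, 0.4, 0.3, 0.2]), 0.0),
        "sad":   (np.array([-0.4, -0.1, -0.5, 0.0, -0.5, -0.3, -0.2]), 0.1),
        "angry": (np.array([0.5, 0.3, 0.3, 0.2, 0.1, 0.5, 0.4]), -0.1),
    }

    def predict_emotion(cues):
        """Return the emotion whose linear model scores highest for these cues."""
        x = np.array([cues[name] for name in CUES])
        scores = {emo: float(w @ x + b) for emo, (w, b) in MODELS.items()}
        return max(scores, key=scores.get)

    # Example: standardized cue values extracted from one performance.
    performance = dict(zip(CUES, [0.8, 0.2, 0.9, 0.1, 0.7, 0.6, 0.5]))
    print(predict_emotion(performance))  # -> "happy" with these placeholder numbers

The predicted label could then drive a visual display such as a color mapping, as the abstract mentions.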
3.
  • Friberg, Anders, et al. (author)
  • CUEX: An algorithm for automatic extraction of expressive tone parameters in music performance from acoustic signals
  • 2007
  • In: Acta Acustica united with Acustica. ISSN 1610-1928, E-ISSN 1861-9959; 93:3, pp. 411-420
  • Journal article (peer-reviewed)
    • Abstract: CUEX is an algorithm that extracts, from recordings of solo music performances, the tone parameters tempo, sound level, articulation, onset velocity, spectrum, vibrato rate, and vibrato extent. The aim is to capture the expressive variations in a music performance, rather than to identify the musical notes played. The CUEX algorithm uses a combination of traditional methods to segment the audio stream into tones based on the fundamental frequency contour and the sound level envelope. From the resulting onset and offset positions, the different tone parameters are computed. CUEX has been evaluated using both synthesized performances and recordings of human performances. For the synthesized performances, a tone recognition rate of 98.7% was obtained on average. The onset and offset precision was 8 ms and 20 ms, respectively, and the sound level precision about 1 dB. For human performances, the recognition rate was 91.8% on average. Various applications of the CUEX algorithm are discussed.
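The segmentation step described above could be sketched as follows, assuming precomputed per-frame F0 and sound-level tracks; the onset rule, the thresholds, and the omitted offset detection make this a simplified stand-in, not the actual CUEX implementation:

    import numpy as np

    def segment_tones(f0_hz, level_db, level_thresh_db=-40.0, jump_cents=50.0):
        """Toy tone segmentation: a new tone starts when the level rises above
        the threshold after silence, or when F0 jumps by more than jump_cents
        while sounding. Offset detection is omitted; thresholds are illustrative."""
        onsets, sounding = [], False
        for i in range(1, len(level_db)):
            jump = 0.0
            if f0_hz[i] > 0 and f0_hz[i - 1] > 0:
                jump = abs(1200.0 * np.log2(f0_hz[i] / f0_hz[i - 1]))
            if level_db[i] >= level_thresh_db and (not sounding or jump > jump_cents):
                onsets.append(i)
                sounding = True
            elif level_db[i] < level_thresh_db:
                sounding = False
        return onsets

    def tone_parameters(f0_hz, level_db, onsets, hop_s=0.01):
        """Per-tone descriptors (onset time, mean level, mean F0) per segment."""
        bounds = onsets + [len(level_db)]
        return [{"onset_s": bounds[k] * hop_s,
                 "mean_level_db": float(np.mean(level_db[bounds[k]:bounds[k + 1]])),
                 "mean_f0_hz": float(np.mean(f0_hz[bounds[k]:bounds[k + 1]]))}
                for k in range(len(onsets))]

    # Example with synthetic frame tracks (10 ms hop): silence, then A4, then B4.
    f0 = [0, 440, 440, 440, 494, 494, 494]
    lvl = [-60, -20, -19, -20, -18, -19, -20]
    print(tone_parameters(f0, lvl, segment_tones(f0, lvl)))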
4.
  • Friberg, Anders, et al. (author)
  • Perceptual ratings of musical parameters
  • 2011
  • In: Gemessene Interpretation - Computergestützte Aufführungsanalyse im Kreuzverhör der Disziplinen. - Mainz: Schott, 2011 (Klang und Begriff 4), pp. 237-253
  • Book chapter (peer-reviewed)
5.
  • Friberg, Anders, et al. (author)
  • Using listener-based perceptual features as intermediate representations in music information retrieval
  • 2014
  • In: Journal of the Acoustical Society of America. - Acoustical Society of America (ASA). ISSN 0001-4966, E-ISSN 1520-8524; 136:4, pp. 1951-1963
  • Journal article (peer-reviewed)
    • Abstract: The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
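To illustrate the intermediate-representation idea, two ordinary-least-squares stages can be chained: computational audio features predict perceptual-feature ratings, which in turn predict emotion dimensions. The feature counts and data below are synthetic placeholders, not the paper's material:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 40 excerpts, 20 generic audio features, 9 perceptual
    # features (e.g. speed, dynamics, articulation), 2 emotion dimensions
    # (activity, valence). Real values would come from listening experiments.
    X_audio = rng.normal(size=(40, 20))
    Y_perceptual = rng.normal(size=(40, 9))
    Y_emotion = rng.normal(size=(40, 2))

    def fit_ols(X, Y):
        """Least-squares weights with an intercept column."""
        Xb = np.column_stack([X, np.ones(len(X))])
        W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
        return W

    def predict(W, X):
        return np.column_stack([X, np.ones(len(X))]) @ W

    W1 = fit_ols(X_audio, Y_perceptual)     # stage 1: audio -> perceptual
    W2 = fit_ols(Y_perceptual, Y_emotion)   # stage 2: perceptual -> emotion
    Y_hat = predict(W2, predict(W1, X_audio))

    # In-sample explained variance per emotion dimension (meaningless on random
    # data; the paper reports 75-93% on real ratings with proper evaluation).
    ss_res = ((Y_emotion - Y_hat) ** 2).sum(axis=0)
    ss_tot = ((Y_emotion - Y_emotion.mean(axis=0)) ** 2).sum(axis=0)
    print(1.0 - ss_res / ss_tot)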
6.
  • Friberg, Anders, et al. (author)
  • Using perceptually defined music features in music information retrieval
  • 2014
  • Other publication (other academic/artistic)
    • Abstract: In this study, the notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, in order to understand the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The selected perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic (MIDI) and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance up to 90%; (3) the perceptual features could only to a limited extent be modeled using existing audio features. The results also clearly indicated that a small number of dedicated features were superior to a 'brute force' model using a large number of general audio features.
7.
  • Juslin, Patrik N., 1969-, et al. (author)
  • Feedback learning of musical expressivity
  • 2004
  • In: Musical excellence - Strategies and techniques to enhance performance. - New York: Oxford University Press. ISBN 0198525346, pp. 247-270
  • Book chapter (peer-reviewed)
    • Abstract: Communication of emotion is of fundamental importance to the performance of music. However, recent research indicates that expressive aspects of performance are neglected in music education, with teachers spending more time and effort on technical aspects. Moreover, traditional strategies for teaching expressivity rarely provide informative feedback to the performer. In this chapter we explore the nature of expressivity in music performance and evaluate novel methods for teaching expressivity based on recent advances in musical science, psychology, technology, and acoustics. First, we provide a critical discussion of traditional views on expressivity, and dispel some of the myths that surround the concept of expressivity. Then, we propose a revised view of expressivity based on modern research. Finally, a new and empirically based approach to learning expressivity termed cognitive feedback is described and evaluated. The goal of cognitive feedback is to allow the performer to compare a model of his or her playing to an “optimal” model based on listeners’ judgments of expressivity. This method is being implemented in user-friendly software, which is evaluated in collaboration with musicians and music teachers.
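A toy illustration of the cognitive-feedback idea, assuming the performer's cue profile and a listener-based "optimal" profile have already been fitted as standardized regression weights (all cue names and numbers here are invented): the feedback simply ranks the cues by how far the performer's profile deviates from the listener model.

    # Hypothetical cue weights for communicating sadness: how strongly each cue
    # is varied, on a standardized scale. All numbers invented for illustration.
    CUES = ["tempo", "sound_level", "articulation"]
    listener_optimal = [-0.6, -0.5, -0.7]   # e.g. slower, softer, more legato
    performer_model = [-0.5, 0.1, -0.6]     # e.g. fitted from the player's attempts

    # Rank cues by deviation, largest first: this ordering is the feedback.
    gaps = sorted(zip(CUES, performer_model, listener_optimal),
                  key=lambda t: -abs(t[1] - t[2]))
    for cue, you, opt in gaps:
        print(f"{cue}: your weight {you:+.1f}, listener model {opt:+.1f}")

With these numbers, sound_level surfaces as the cue to work on first.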
8.
  • Juslin, Patrik N., et al. (author)
  • Play it again with feeling: Computer feedback in musical communication of emotions
  • 2006
  • In: Journal of Experimental Psychology: Applied. ISSN 1076-898X, E-ISSN 1939-2192; 12:2, pp. 79-95
  • Journal article (peer-reviewed)
    • Abstract: Communication of emotions is of crucial importance in music performance. Yet research has suggested that this skill is neglected in music education. This article presents and evaluates a computer program that automatically analyzes music performances and provides feedback to musicians in order to enhance their communication of emotions. Thirty-six semiprofessional jazz/rock guitar players were randomly assigned to one of three conditions: (1) feedback from the computer program, (2) feedback from music teachers, and (3) repetition without feedback. Performance measures revealed the greatest improvement in communication accuracy for the computer program, but usability measures indicated that certain aspects of the program could be improved. Implications for music education are discussed.
9.
  • Raptis, S., et al. (author)
  • IMUTUS – An effective practicing environment for music tuition
  • 2005
  • In: Proc. of International Computer Music Conference (ICMC 2005). - International Computer Music Association, pp. 383-386
  • Conference paper (peer-reviewed)
    • Abstract: This paper presents some major results from the IMUTUS project. IMUTUS was an RTD project that aimed at the development of an open platform for training students on the recorder. The paper focuses on one of the most important and innovative parts of the IMUTUS system, the practicing environment. This environment integrates technological tools for the automatic analysis and evaluation of student performances along with enhanced interaction schemes to provide an effective approach to music learning. Testing and validation activities that have been carried out indicate that the IMUTUS approach is appreciated by both students and teachers, and that it clearly has a strong potential.
10.