SwePub

Systematic benchmarking for reproducibility of computer vision algorithms for real-time systems: The example of optic flow estimation

Nguyen, Björnborg, 1992 (author)
Chalmers University of Technology
Berger, Christian, 1980 (author)
Gothenburg University, Department of Computer Science and Engineering (GU)
Benderius, Ola, 1985 (author)
Chalmers University of Technology
IEEE, 2019
English.
In: IEEE International Conference on Intelligent Robots and Systems. IEEE. ISSN 2153-0858, E-ISSN 2153-0866. pp. 5264-5269.
  • Conference paper (peer-reviewed)
Abstract
Until now there have been few formalized methods for conducting systematic benchmarking aimed at reproducible results for computer vision algorithms. This is evident from the lists of algorithms submitted to prominent datasets: authors of a novel method in many cases state the performance of their algorithm only alongside a shallow description of the hardware system on which it was evaluated. This non-systematic way of reporting performance causes significant problems, especially when comparing different approaches and when trying to reproduce claimed results. It also leaves open how to conduct retrospective performance analysis, such as assessing an algorithm's suitability for embedded real-time systems over time as the underlying hardware and software change. This paper proposes and demonstrates a systematic way of addressing such challenges by adopting containerization of software, aiming at formalization and reproducibility of benchmarks. Our results make the case that maintainers of broadly accepted datasets in the computer vision community should strive for systematic comparison and reproducibility of submissions, to increase the value and adoption of computer vision algorithms in the future.
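
The abstract's core technique, running a benchmark together with a machine-readable record of its software and hardware environment inside a pinned container, can be sketched briefly. The snippet below is not the authors' harness: the Farneback optic flow estimator from OpenCV, the synthetic frames, and the JSON report format are illustrative assumptions, and in the paper's setting such a script would be executed inside a container image with pinned library versions so that the recorded environment is fully reproducible.

# Minimal sketch (not from the paper): time an optic flow estimator and
# record the environment it ran in, in the spirit of containerized,
# reproducible benchmarking. Estimator choice and report format are
# illustrative assumptions.
import json
import platform
import time

import cv2
import numpy as np


def benchmark_optic_flow(frame_pairs, runs=10):
    """Time dense Farneback optic flow over grayscale frame pairs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        for prev, curr in frame_pairs:
            cv2.calcOpticalFlowFarneback(prev, curr, None,
                                         0.5, 3, 15, 3, 5, 1.2, 0)
        timings.append(time.perf_counter() - start)
    return {
        "runs": runs,
        "frame_pairs": len(frame_pairs),
        "mean_s": float(np.mean(timings)),
        "std_s": float(np.std(timings)),
        # Environment metadata: without it, the timings above are the kind of
        # shallow, hard-to-reproduce numbers the abstract argues against.
        "opencv_version": cv2.__version__,
        "python_version": platform.python_version(),
        "machine": platform.machine(),
        "platform": platform.platform(),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 640x480 grayscale frames stand in for a real dataset split.
    frames = [rng.integers(0, 256, (480, 640), dtype=np.uint8) for _ in range(6)]
    pairs = list(zip(frames[:-1], frames[1:]))
    print(json.dumps(benchmark_optic_flow(pairs), indent=2))

The mean and standard deviation are only comparable across submissions together with the recorded environment fields, which is the gap in current leaderboard reporting that the abstract points out.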

Subject headings

NATURAL SCIENCES -- Computer and Information Sciences -- Computer Engineering (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences -- Software Engineering (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences -- Computer Sciences (hsv//eng)
ENGINEERING AND TECHNOLOGY -- Electrical Engineering, Electronic Engineering, Information Engineering -- Computer Systems (hsv//eng)
NATURAL SCIENCES -- Computer and Information Sciences -- Computer Vision and Robotics (hsv//eng)

Keywords

benchmarking
reproducibility
virtualization
embedded systems
real-time computing
computer vision
run-time
containerization
optic flow
performance

Publication and Content Type

kon (conference paper)
ref (peer-reviewed)
