Constraining neural networks output by an interpolating loss function with region priors
- Bergkvist, Hannes (author)
- Malmö University, Department of Computer Science and Media Technology (DVMT); Sony, R&D Center Europe, Lund, Sweden
- Exner, Peter (author)
- Sony, R&D Center Europe, Lund, Sweden
- Davidsson, Paul (author)
- Malmö University, Department of Computer Science and Media Technology (DVMT)
- 2020
- English.
In: NeurIPS workshop on Interpretable Inductive Biases and Physically Structured Learning.
- Related links:
- https://inductive-bi...
- https://neurips.cc/v...
- https://mau.diva-por... (primary) (Raw object)
- https://urn.kb.se/re...
Abstract
- Deep neural networks have the ability to generalize beyond observed training data. However, for some applications they may produce output that is known a priori to be invalid. If prior knowledge of valid output regions is available, one way of imposing constraints on deep neural networks is to introduce these priors in a loss function. In this paper, we introduce a novel way of constraining neural network output by using encoded regions with a loss function based on gradient interpolation. We evaluate our method on a positioning task where a region map is used to reduce invalid position estimates. Results show that our approach is effective in decreasing invalid outputs for several geometrically complex environments.
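The abstract's core idea of encoding valid regions and using interpolation to obtain a smooth penalty can be sketched as follows. This is an illustrative assumption, not the authors' implementation: a brute-force distance map stands in for the encoded region prior, and manual bilinear interpolation stands in for the gradient-interpolation loss term.

```python
import numpy as np

def distance_map(valid):
    """Brute-force distance transform: for each grid cell, the distance
    (in cells) to the nearest valid cell. Zero inside valid regions."""
    vy, vx = np.nonzero(valid)
    pts = np.stack([vy, vx], axis=1)                  # (N, 2) valid cells
    h, w = valid.shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)  # (h*w, 2)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(h, w)

def region_penalty(pos, dmap):
    """Bilinearly interpolate the distance map at a continuous (y, x)
    position, giving a penalty that varies smoothly with the position
    estimate and is therefore usable in a gradient-based loss."""
    y, x = pos
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, dmap.shape[0] - 1)
    x1 = min(x0 + 1, dmap.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * dmap[y0, x0] + fx * dmap[y0, x1]
    bot = (1 - fx) * dmap[y1, x0] + fx * dmap[y1, x1]
    return (1 - fy) * top + fy * bot

# Toy region map: in a 6x6 grid, only the 2x2 block at rows/cols 2..3 is valid.
valid = np.zeros((6, 6), dtype=bool)
valid[2:4, 2:4] = True
dmap = distance_map(valid)

print(region_penalty((2.5, 2.5), dmap))  # inside the valid region -> 0.0
print(region_penalty((0.0, 0.0), dmap))  # outside -> positive penalty
```

In a training loop, this penalty would be added to the task loss so that position estimates falling outside the valid regions are pushed back toward them; because the interpolated map is piecewise smooth, its gradient with respect to the position is well defined almost everywhere.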
Subject headings
- NATURVETENSKAP -- Data- och informationsvetenskap -- Datavetenskap (hsv//swe)
- NATURAL SCIENCES -- Computer and Information Sciences -- Computer Sciences (hsv//eng)
Keywords
- Deep neural networks
- Loss function
- Constraining
- Adaptation
Publication and content type
- ref (subject category)
- kon (subject category)