Can the Transformer Learn Nested Recursion with Symbol Masking?
-
- Bernardy, Jean-Philippe, 1978 (author)
- Gothenburg University, Department of Philosophy, Linguistics and Theory of Science
-
- Ek, Adam, 1990 (author)
- Gothenburg University, Department of Philosophy, Linguistics and Theory of Science
-
- Maraev, Vladislav, 1986 (author)
- Gothenburg University, Department of Philosophy, Linguistics and Theory of Science
-
- Stroudsburg, PA : The Association for Computational Linguistics, 2021
- English.
-
Part of: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, August 1 - 6, 2021, Online / Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli (Editors). - Stroudsburg, PA : The Association for Computational Linguistics. - 9781954085541
- Related links:
-
https://aclanthology...
-
https://gup.ub.gu.se...
-
https://doi.org/10.1...
Abstract
- We investigate whether, given a simple symbol-masking strategy, self-attention models are capable of learning nested structures and of generalising over their depth. We do so in the simplest setting possible, namely languages consisting of nested parentheses of several kinds. We use encoder-only models, which we train to predict randomly masked symbols, in a BERT-like fashion. We find that accuracy is well above the random baseline, remaining consistently above 50% both as nesting depth increases and as the distance between training and testing grows. However, the predictions made correspond to a simple parenthesis-counting strategy rather than to a push-down automaton. This suggests that self-attention models are not suitable for tasks which require generalisation to more complex instances of recursive structures than those found in the training set.
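The task setup described in the abstract can be illustrated with a minimal sketch: generating well-nested strings over several kinds of parentheses (a multi-bracket Dyck language) and masking random symbols BERT-style. The bracket vocabulary, the 30% masking rate, and the names `gen_dyck`, `mask_symbols`, and `<mask>` are illustrative assumptions, not details taken from the paper.

```python
import random

# Assumption: three kinds of parentheses; the paper's exact vocabulary may differ.
BRACKETS = [("(", ")"), ("[", "]"), ("{", "}")]
MASK = "<mask>"

def gen_dyck(max_depth, depth=0):
    """Recursively generate a well-nested token list over several bracket kinds."""
    if depth >= max_depth or random.random() < 0.3:
        return []
    left, right = random.choice(BRACKETS)
    inner = gen_dyck(max_depth, depth + 1)   # nested content
    rest = gen_dyck(max_depth, depth)        # sibling content at the same depth
    return [left] + inner + [right] + rest

def mask_symbols(tokens, p=0.3):
    """BERT-style masking: hide each symbol with probability p.

    Returns the masked sequence and a parallel target list holding the
    original symbol at masked positions and None elsewhere.
    """
    masked, targets = [], []
    for t in tokens:
        if random.random() < p:
            masked.append(MASK)
            targets.append(t)
        else:
            masked.append(t)
            targets.append(None)
    return masked, targets

random.seed(0)
toks = gen_dyck(max_depth=4)
masked, targets = mask_symbols(toks)
```

An encoder-only model would then be trained to recover the original symbol at each `<mask>` position; generalisation is probed by testing on deeper nesting than was seen in training.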
Subject headings
- NATURAL SCIENCES -- Computer and Information Sciences -- Language Technology (hsv//eng)
Keywords
- Language Models
- Transformer
- Dyck Languages
- BERT
Publication and content type
- ref (peer-reviewed)
- kon (conference paper)