Grammatical illusions in BERT: Attraction effects of subject-verb agreement and reflexive-antecedent dependencies
- English Title
- Grammatical illusions in BERT: Attraction effects of subject-verb agreement and reflexive-antecedent dependencies
- Publisher
- 경희대학교 언어정보연구소 (Kyung Hee University)
- Author
- 조예은
- Publication Info
- 『언어연구』 Vol. 40, No. 2, pp. 317–352 (36 pages)
- Subject Area
- Humanities > Linguistics
- File Format
- Publication Date
- 2023.06.30
Abstract
The phenomenon of attraction effects, whereby a verb erroneously retrieves a syntactically inaccessible but feature-matching noun, is a type of grammatical illusion (Phillips, Wagers, and Lau 2011) that can occur in long-distance subject-verb agreement in human sentence processing (Wagers et al. 2009). In contrast, reflexive-antecedent dependencies have been claimed to lack attraction effects when the reflexive and the antecedent mismatch in features (Dillon et al. 2013). Yet other studies have observed attraction effects in reflexive-antecedent dependencies when the number of feature mismatches between the reflexive and the antecedent increases (Parker and Phillips 2017). These findings suggest that cues are weighted differently depending on the predictability of the dependency, and that they are combined according to different cue-combination schemes, such as a linear or a non-linear cue-combination rule (Parker 2019). These linguistic phenomena can be used to analyze how linguistic features are accessed and combined within the internal states of Deep Neural Network (DNN) language models. In the linguistic representations of BERT (Devlin et al. 2018), one of the pre-trained DNN language models, various types of linguistic information are encoded in each layer (Jawahar et al. 2019) and combined while passing through the layers. By measuring Masked Language Model (MLM) performance, this study finds that both subject-verb agreement and reflexive-antecedent dependencies show attraction effects and follow a linear cue-combination rule in BERT. These results, which diverge from human sentence processing, suggest that BERT's self-attention mechanism may not capture differences in the predictability of a dependency as effectively as human memory retrieval mechanisms do.
These findings have implications for developing more interpretable explainable-AI (xAI) systems that better capture the complexities of human language processing.
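The contrast between linear and non-linear cue-combination rules (Parker 2019) that the abstract describes can be illustrated with a minimal toy sketch. This is not the paper's actual model: the cue names, weights, and mismatch penalty below are all assumed purely for illustration of how the two rules score a target subject against a feature-matching attractor.

```python
# Toy illustration of linear vs. non-linear cue-combination rules
# for cue-based memory retrieval (cf. Parker 2019).
# All weights and the mismatch penalty are hypothetical.

def linear_score(matches, weights):
    """Linear rule: activation is the sum of the weights of matched cues."""
    return sum(w for cue, w in weights.items() if matches.get(cue, False))

def nonlinear_score(matches, weights, penalty=0.1):
    """Non-linear (conjunctive) rule: cues combine multiplicatively,
    so each mismatched cue sharply reduces the candidate's activation."""
    score = 1.0
    for cue, w in weights.items():
        score *= w if matches.get(cue, False) else penalty
    return score

# Hypothetical retrieval cues for a reflexive's antecedent.
weights = {"structural": 1.0, "number": 1.0, "gender": 1.0}

subject   = {"structural": True,  "number": False, "gender": False}  # target: 1 cue matched
attractor = {"structural": False, "number": True,  "gender": True}   # distractor: 2 cues matched

print(linear_score(subject, weights), linear_score(attractor, weights))
print(nonlinear_score(subject, weights), nonlinear_score(attractor, weights))
```

Under the linear rule, each matched cue adds independently, so a distractor matching two morphological cues outscores the structurally licensed subject, predicting graded attraction; under the conjunctive rule, partial matches are penalized multiplicatively, so attraction only emerges when enough cues converge on the distractor.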
Contents
1. Introduction
2. Background
3. Experiment
4. General discussion
5. Conclusion
References
Articles in This Issue
- Non-control aspects of the Korean yaksokha-construction
- Korean reformulative multiple accusative construction as vacuous reformulative apposition
- Grammatical illusions in BERT: Attraction effects of subject-verb agreement and reflexive-antecedent dependencies
- An analysis of multiple nominative constructions in Korean: Within LFG adopting Generative Lexicon
- On the nature of CNPC effects in Korean scrambling constructions
- Concessive although-stripping and its theoretical implications
- Lexical aspect and evidential meaning