Deep learning can contrast the minimal pairs of syntactic data

Publisher: 경희대학교 언어정보연구소
Authors: Kwonsik Park, Myung-Kwan Park, Sanghoun Song
Publication: 『언어연구』 Vol. 38, No. 2, pp. 395-424 (30 pages)
Subject: Humanities > Linguistics
File format: PDF
Publication date: 2021.06.30

Abstract

The present work aims to assess the feasibility of deep learning as a tool for investigating syntactic phenomena. To this end, the study addresses three research questions: (i) whether deep learning can detect syntactically inappropriate constructions, (ii) whether deep learning’s acceptability judgments are accountable, and (iii) whether deep learning’s acceptability judgments resemble human judgments. As a proxy for deep learning language models, this study uses BERT. The test materials comprise syntactically contrasted pairs of English sentences drawn from three existing test suites. The first consists of 196 grammatical-ungrammatical minimal pairs from DeKeyser (2000). The second consists of examples from four published syntax textbooks, excerpted from Warstadt et al. (2019). The last is extracted from Sprouse et al. (2013), which collects examples reported in the theoretical linguistics journal Linguistic Inquiry. Two BERT models, BERT-base and BERT-large, are assessed on the acceptability of items in the test suites using surprisal as the evaluation metric; surprisal measures how ‘surprised’ a model is when encountering a word in a sequence of words, i.e., a sentence. The results are analyzed within two frameworks: directionality and repulsion. The directionality results reveal that the two versions of BERT are, overall, competent at distinguishing ungrammatical sentences from grammatical ones. The statistical results for both directionality and repulsion also reveal that the two variants of BERT do not differ significantly. Regarding repulsion, correct and incorrect judgments differ significantly. Additionally, repulsion is higher for the first test suite, which is drawn from items designed to test learners’ grammaticality judgments, than for the other two, which are drawn from syntax textbooks and the published literature. To examine whether deep learning’s syntactic knowledge is akin to human knowledge, this study also compares BERT’s acceptability judgments with the magnitude estimation results reported in Sprouse et al. (2013). Error analyses of incorrectly judged items reveal certain syntactic constructions that both BERT models have trouble learning, indicating that BERT’s acceptability judgments are not randomly distributed.
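
For readers unfamiliar with surprisal-based scoring, the sketch below shows one way such scores might be computed from a masked language model: each token is masked in turn, its negative log-probability under the model is accumulated, and the grammatical member of a minimal pair is expected to receive the lower total surprisal. This is only an illustrative sketch; the Hugging Face transformers library, the bert-base-uncased checkpoint, the pseudo-log-likelihood scoring scheme, and the example sentence pair are assumptions made for illustration, not the authors' actual implementation or materials.

import torch
from transformers import BertForMaskedLM, BertTokenizer

# Assumption: pseudo-log-likelihood scoring with bert-base-uncased; the paper
# does not disclose its exact scoring code, so treat this as a sketch only.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def sentence_surprisal(sentence: str) -> float:
    """Total surprisal (-log p, in nats): mask each token in turn and sum."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, ids.size(0) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Hypothetical minimal pair (not taken from the paper's test suites).
good = "The key to the cabinets is on the table."
bad  = "The key to the cabinets are on the table."
s_good, s_bad = sentence_surprisal(good), sentence_surprisal(bad)
print(f"grammatical surprisal:   {s_good:.2f}")
print(f"ungrammatical surprisal: {s_bad:.2f}")
print("direction:", "correct" if s_good < s_bad else "incorrect")

Directionality, as described in the abstract, appears to correspond to whether this inequality holds for a given pair, while repulsion reflects how far apart the two scores are; the precise definitions should be checked against the full paper.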

Table of Contents

1. Introduction
2. Method
3. Results
4. Discussion
5. Conclusion
References

Cite

APA

Kwonsik Park, Myung-Kwan Park, & Sanghoun Song. (2021). Deep learning can contrast the minimal pairs of syntactic data. 언어연구, 38(2), 395-424.

MLA

Kwonsik Park, Myung-Kwan Park, and Sanghoun Song. "Deep learning can contrast the minimal pairs of syntactic data." 언어연구 38.2 (2021): 395-424.
