Paper Information
- Material type: Academic journal
- Author information
- Publication date: 2023.12
- Pages: 502-513 (12 pages)
- DOI: 10.7232/JKIIE.2023.49.6.502
Abstract · Keywords
Table Question Answering (QA) aims to answer questions based on semi-structured tables. Unlike text data, tables possess a unique two-dimensional structure, driving the exploration of specialized learning approaches to enhance language models' understanding of tables. However, while Table QA research is advancing rapidly in English, its development in Korean is still in its early stages. To mitigate this gap, we present KOreaPAS, specifically designed for Korean Table QA tasks. KOreaPAS is based on TAPAS's architecture, and its learning process consists of two stages: pre-training and fine-tuning. In the publicly available Korean tabular dataset for pre-training language models, approximately 36.5% of instances lack text information related to their tables, which can hinder the models' learning of correlations between text and tables during pre-training. To address this issue, we introduce a table-text mapping method that retrieves the most relevant text for each table from Wikipedia pages. Furthermore, we propose a multi-granularity fine-tuning strategy that utilizes three granularities of the table structure for both training and inference. Experimental results confirm the effectiveness of the proposed approaches in enhancing language models' comprehension of questions over tables. Specifically, KOreaPAS achieved the highest performance among currently published benchmark models on two Korean Table QA datasets, establishing a new standard for Korean Table QA tasks.
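The table-text mapping step described in the abstract can be viewed as a retrieval problem: given a table, find the most related passage on its source Wikipedia page. The snippet below is a minimal sketch of that idea, assuming a simple bag-of-words cosine similarity over flattened table cells; the paper's actual retrieval method is not specified here, and the function names are hypothetical.

```python
from collections import Counter
import math

def tokenize(text):
    """Split text into lowercase whitespace-delimited tokens."""
    return [t.lower() for t in text.split()]

def cosine(c1, c2):
    """Cosine similarity between two token-count vectors (Counters)."""
    common = set(c1) & set(c2)
    num = sum(c1[t] * c2[t] for t in common)
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

def map_table_to_text(table_cells, paragraphs):
    """Return the candidate paragraph most similar to the flattened table.

    table_cells: list of cell strings from one table.
    paragraphs:  candidate text passages (e.g. from the table's Wikipedia page).
    """
    table_vec = Counter(tokenize(" ".join(table_cells)))
    return max(paragraphs, key=lambda p: cosine(table_vec, Counter(tokenize(p))))
```

For example, a table containing the cells `["Seoul", "Population", "9.7 million"]` would be mapped to a page paragraph mentioning Seoul and its population rather than an unrelated one. A production system would likely use a stronger ranker (e.g. BM25 or a dense retriever), but the interface stays the same.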
Table of Contents
- 1. Introduction
- 2. Related Work
- 3. Proposed Methodology
- 4. Experiments and Results
- 5. Conclusion
- References