Paper Information
- Material type: Academic journal article
- Author information
- Publication year: 2023.4
- Pages: 133-141 (9 pages)
- DOI: 10.7232/JKIIE.2023.49.2.133
Abstract · Keywords
In the field of natural language processing, significant progress has been made with the advent of the Transformer and its self-attention mechanism. At the same time, the recent growth in model size makes it difficult to deploy such models for online serving, which requires fast inference. To address this issue, one can apply model compression techniques when the target domain is consistent with the training corpus (i.e., a general domain) of pre-trained models such as BERT. However, an additional domain adaptation step is required alongside model compression when such pre-trained models are leveraged for specialized target domains such as medicine, law, and finance. In this paper, we propose an Efficient Domain Adaptive Distillation (EDAD) method that efficiently produces a lightweight model capable of fast inference for a target domain by integrating knowledge distillation, one of the most widely used model compression methods, with the domain adaptation process. Experimental results demonstrate that, by merging the two individual processes of adaptation and compression into a single process, EDAD can train a compact model for a target domain at much lower computational cost, and that it achieves performance comparable to existing methods on named entity recognition (NER) tasks in the medical domain.
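The abstract does not spell out how the distillation and adaptation objectives are combined, so the following is only a minimal, hypothetical sketch of the general idea: a single training step in which a compact student is trained with a distillation loss against a frozen general-domain teacher while simultaneously optimizing a masked-language-modeling (MLM) loss on target-domain text. The model classes, hyperparameters (`temperature`, `alpha`), and the use of MLM as the adaptation objective are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch (not the paper's EDAD implementation): one training step
# combining knowledge distillation from a frozen general-domain teacher with
# domain-adaptive MLM on target-domain text, in a single objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN_T, HIDDEN_S, SEQ = 1000, 256, 128, 32


class TinyEncoder(nn.Module):
    """A minimal Transformer-encoder LM head, standing in for BERT-like models."""

    def __init__(self, hidden: int, layers: int):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        block = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.lm_head = nn.Linear(hidden, VOCAB)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.lm_head(self.encoder(self.embed(ids)))  # (B, SEQ, VOCAB)


teacher = TinyEncoder(HIDDEN_T, layers=4).eval()   # frozen general-domain teacher
student = TinyEncoder(HIDDEN_S, layers=2)          # compact student being adapted
optimizer = torch.optim.AdamW(student.parameters(), lr=3e-4)

# Toy batch of target-domain token ids; 15% of positions are masked for MLM.
ids = torch.randint(1, VOCAB, (8, SEQ))
mask = torch.rand(ids.shape) < 0.15
labels = ids.masked_fill(~mask, -100)              # only masked positions contribute
masked_ids = ids.masked_fill(mask, 0)              # token id 0 stands in for [MASK]

temperature, alpha = 2.0, 0.5                      # illustrative hyperparameters

with torch.no_grad():
    teacher_logits = teacher(masked_ids)

student_logits = student(masked_ids)

# Distillation term: match the teacher's softened token distributions.
kd_loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2

# Domain-adaptation term: standard MLM cross-entropy on target-domain text.
mlm_loss = F.cross_entropy(
    student_logits.view(-1, VOCAB), labels.view(-1), ignore_index=-100
)

# A single combined objective replaces separate adaptation and compression runs.
loss = alpha * kd_loss + (1 - alpha) * mlm_loss
loss.backward()
optimizer.step()
```

The point of the sketch is the single combined loss: instead of first adapting a full-size model to the target domain and then compressing it, the student is compressed and adapted in one pass, which is where the computational savings described in the abstract would come from.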
Table of Contents
- 1. Introduction
- 2. Related Work
- 3. Methodology
- 4. Experiments
- 5. Conclusion
- References