
Paper Information

Resource Type
Academic journal
Authors
(Seoul National University of Science and Technology) (Seoul National University of Science and Technology)
Journal
Journal of the Korean Institute of Industrial Engineers (Korean Institute of Industrial Engineers), Vol. 49, No. 2
Publication Year
2023
Pages
133-141 (9 pages)
DOI
10.7232/JKIIE.2023.49.2.133

Abstract · Keywords

In the field of natural language processing, considerable progress has been made with the advent of the Transformer and its self-attention mechanism. At the same time, ever-increasing model sizes make it difficult to deploy these models for online serving, which requires fast inference. To address this issue, model compression techniques can be applied directly when the target domain is coherent with the training corpus (i.e., a general domain) of pre-trained models such as BERT. However, an additional domain adaptation step is required alongside model compression when such pre-trained models are leveraged for specialized target domains such as medicine, law, or finance. In this paper, we propose Efficient Domain Adaptive Distillation (EDAD), a method that efficiently creates a lightweight model capable of fast inference in a target domain by integrating knowledge distillation, one of the most popular model compression methods, with the domain adaptation process. Experimental results demonstrate that EDAD trains a compact model for a target domain at much lower computational cost by merging the two individual processes, adaptation and compression, into a single process, while showing performance comparable to existing methods on named entity recognition (NER) tasks in the medical domain.
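The abstract does not spell out the exact EDAD training objective, but the general idea of folding domain adaptation (e.g., continued masked-language-model training on target-domain text) and knowledge distillation into a single training loop can be illustrated as below. This is a minimal sketch only: the model names (bert-base-uncased as a general-domain teacher, distilbert-base-uncased as a compact student), the MLM-plus-KL loss combination, and the hyperparameters (temperature, loss weight, masking rate) are assumptions for illustration, not the authors' actual EDAD procedure.

```python
# Illustrative sketch: one training loop that mixes an MLM objective on
# target-domain text (domain adaptation) with a soft-label distillation
# loss from a general-domain teacher (compression). All names and
# hyperparameters below are assumptions, not the published EDAD method.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

teacher_name = "bert-base-uncased"        # assumed general-domain teacher
student_name = "distilbert-base-uncased"  # assumed compact student (shared vocab)

tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForMaskedLM.from_pretrained(teacher_name).eval()
student = AutoModelForMaskedLM.from_pretrained(student_name)

optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)
temperature, alpha = 2.0, 0.5  # assumed distillation temperature / loss weight


def mask_tokens(input_ids, mask_prob=0.15):
    """Randomly replace tokens with [MASK] for the MLM objective (simplified:
    special tokens are not excluded from masking)."""
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape) < mask_prob
    mask &= input_ids != tokenizer.pad_token_id
    labels[~mask] = -100                      # ignore unmasked positions in the loss
    masked = input_ids.clone()
    masked[mask] = tokenizer.mask_token_id
    return masked, labels


def combined_step(domain_sentences):
    """One combined adaptation + distillation step on target-domain text."""
    batch = tokenizer(domain_sentences, padding=True, truncation=True,
                      return_tensors="pt")
    input_ids, labels = mask_tokens(batch["input_ids"])
    attention_mask = batch["attention_mask"]

    with torch.no_grad():
        t_logits = teacher(input_ids=input_ids,
                           attention_mask=attention_mask).logits
    s_out = student(input_ids=input_ids, attention_mask=attention_mask,
                    labels=labels)

    # Distillation loss: KL divergence between softened teacher and student outputs.
    kd_loss = F.kl_div(
        F.log_softmax(s_out.logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2

    # MLM loss on the target-domain corpus drives the domain adaptation.
    loss = alpha * kd_loss + (1 - alpha) * s_out.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Running such a loop over a medical-domain corpus would, under these assumptions, adapt and compress in one pass instead of adapting the large model first and distilling it afterwards, which is the cost saving the abstract attributes to integrating the two processes.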

Table of Contents

  1. Introduction
  2. Related Work
  3. Methodology
  4. Experiments
  5. Conclusion
  References
