Paper Information
- Material type: Academic journal
- Author information
- Publication date: June 2024
- Pages: 1,004 - 1,011 (8 pages)
- DOI: 10.5370/KIEE.2024.73.6.1004
Abstract · Keywords
The window-based self-attention vision transformer (ViT) reduces computational complexity by computing attention within a specific window. However, it is difficult to capture interactions between pixels from different windows. To address this issue, the Swin transformer, a representative window-based self-attention ViT, introduces shifted window multi-head self-attention (SW-MSA) to capture cross-window information. However, tokens that are distant from each other still cannot be grouped into one window. This paper proposes a method that clusters tokens based on similarity in the feature space and computes attention within each cluster. The proposed method serves as an alternative to the SW-MSA of the existing Swin transformer. Additionally, this paper adopts a method that refines the feature space using the convolutional block attention module (CBAM) to enhance the representational power of the model. Experimental results show that the proposed network outperforms existing convolutional neural networks and transformer-based backbones on the ImageNet-1K classification task.
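The core idea in the abstract — grouping tokens by feature-space similarity and restricting self-attention to each group — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple k-means clustering step and identity Q/K/V projections, and all function names (`kmeans_cluster`, `clustered_self_attention`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kmeans_cluster(tokens, k, iters=10, seed=0):
    """Assign each token to one of k clusters via plain k-means
    in feature space (a stand-in for the paper's clustering step)."""
    rng = np.random.default_rng(seed)
    centers = tokens[rng.choice(len(tokens), size=k, replace=False)]
    for _ in range(iters):
        # Squared Euclidean distance from every token to every center.
        dists = ((tokens[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        for c in range(k):
            members = tokens[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return assign

def clustered_self_attention(tokens, k=4):
    """Compute scaled dot-product self-attention only among tokens
    that fall in the same feature-space cluster. Identity projections
    are used for Q, K, V to keep the sketch short."""
    n, d = tokens.shape
    assign = kmeans_cluster(tokens, k)
    out = np.zeros_like(tokens)
    for c in range(k):
        idx = np.where(assign == c)[0]
        if idx.size == 0:
            continue
        x = tokens[idx]                      # tokens in this cluster
        attn = softmax(x @ x.T / np.sqrt(d)) # attention within cluster
        out[idx] = attn @ x                  # aggregate cluster values
    return out, assign
```

Unlike window partitioning, which groups tokens by spatial position, the cluster assignment here depends only on feature similarity, so spatially distant but semantically similar tokens can attend to each other directly.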
Table of Contents
- Abstract
- 1. Introduction
- 2. Related Work
- 3. Proposed Vision Transformer Model
- 4. Conclusion
- References
UCI (KEPA): I410-151-24-02-089831205