Article Information
- Document type
- Academic journal article
- Author information
- Publication year
- 2023.12
- Pages
- 1,015 - 1,023 (9 pages)
- DOI
- 10.5302/J.ICROS.2023.23.0152
Abstract · Keywords
Monocular depth estimation (MDE) is a computer vision task that estimates the depth value for each pixel in a single monocular RGB image. Given its inherently ill-posed nature, recent studies have focused on data-driven learning-based methods, particularly deep learning techniques based on convolutional neural networks. Building on the success of vision transformers (ViTs), ViT-based models have gained widespread attention for MDE applications. To enhance the performance of MDE, we introduce an intermediate bi-directional long short-term memory (BiLSTM) module, a common component in natural language processing, to learn correlations between the tokens in a sequence in both the forward and backward directions. Given that transformers naturally encode sequential data, leveraging BiLSTMs to learn the correlations between tokens in both directions is a reasonable approach. In addition, we enhance the weight initialization method for BiLSTMs by employing the CLS tokens of transformers, a mechanism originally developed for natural language processing, adapting it for processing two-dimensional image data. We then develop an MDE model by inserting a BiLSTM module initialized with our method between the encoder and decoder of a DPT-based architecture. Our experiments reveal notable results, with our model achieving 0.913 in 𝛿₁ and 0.097 in AbsRel on the NYU Depth V2 dataset. These results demonstrate a significant improvement over previous models.
#monocular depth estimation
#vision transformer
#LSTM
#transfer learning
#bi-directional long-short term memory
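The abstract's core idea, treating the transformer encoder's token sequence as a 1-D sequence and running a BiLSTM over it in both directions before the decoder, can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the shapes, the gate ordering, and in particular the optional seeding of the initial hidden states from a (projected) CLS token are assumptions, the last being only one plausible reading of the CLS-based initialization the abstract describes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gates stacked as [input, forget, cell, output]."""
    H = h.shape[0]
    z = W @ x + U @ h + b                        # all four gates at once, (4H,)
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])   # input / forget gates
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
    c = f * c + i * g                            # new cell state
    return o * np.tanh(c), c                     # new hidden state, cell state

def bilstm(tokens, fwd, bwd, H, h0=None):
    """Run a BiLSTM over a (T, D) token sequence from a transformer encoder.

    `fwd` and `bwd` are (W, U, b) parameter triples for each direction.
    `h0` optionally seeds both directions' initial hidden state (e.g. a
    projected CLS token -- an assumption, not the paper's exact scheme).
    Returns a (T, 2H) array of concatenated forward/backward hidden states.
    """
    T = tokens.shape[0]
    init = np.zeros(H) if h0 is None else h0
    h_f, c_f = init.copy(), np.zeros(H)
    h_b, c_b = init.copy(), np.zeros(H)
    out_f, out_b = [], [None] * T
    for t in range(T):                           # forward pass over tokens
        h_f, c_f = lstm_step(tokens[t], h_f, c_f, *fwd)
        out_f.append(h_f)
    for t in reversed(range(T)):                 # backward pass over tokens
        h_b, c_b = lstm_step(tokens[t], h_b, c_b, *bwd)
        out_b[t] = h_b
    return np.stack([np.concatenate(p) for p in zip(out_f, out_b)])
```

In a DPT-style model the (T, 2H) output would then be projected back to the decoder's channel width and reshaped to a 2-D feature map; the sketch only shows the bidirectional token-wise recurrence itself.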
Table of Contents
- Abstract
- Ⅰ. INTRODUCTION
- Ⅱ. RELATED WORKS
- Ⅲ. PROPOSED ARCHITECTURE
- Ⅳ. EXPERIMENTS
- Ⅴ. CONCLUSION
- REFERENCES