

Abstract · Keywords

A surveillance system deploys multiple cameras to monitor a wide area in real time and to detect abnormal situations such as crimes, traffic accidents, and natural disasters. An increased number of cameras, however, requires an equal number of monitors, which makes both human and automatic decision-making difficult. To solve this problem, smart surveillance schemes have recently been proposed, in which the system automatically detects objects and raises an alarm to the surveillant. In this paper, we present a metadata extraction method for object-based video summary. The proposed method adopts deep learning-based object detection and background elimination to correctly estimate the object region, and metadata extraction is then performed on the estimated object information. The proposed metadata consist of the representative color, size, aspect ratio, and patch of an object. The method extracts reliable metadata without motion features for both static and non-static cameras and can be applied to various object detection applications by combining the extracted metadata.

Keywords: Metadata, Object segmentation, Surveillance system
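The abstract describes a pipeline in which a deep learning detector and background elimination yield an object region, from which the four metadata fields (representative color, size, aspect ratio, patch) are computed. The sketch below illustrates only that final step under assumed inputs (a frame, a bounding box, and a binary foreground mask); the function and field names are hypothetical, and the paper's actual detector, background-elimination method, and color model are not reproduced here.

# Minimal sketch (not the authors' implementation) of computing the four
# metadata fields named in the abstract from one detected object region.
# The detection and background-elimination stages are assumed to have
# already produced the frame, bounding box, and foreground mask.

import numpy as np

def extract_object_metadata(frame, bbox, mask):
    """frame: HxWx3 uint8 image, bbox: (x, y, w, h), mask: HxW bool foreground."""
    x, y, w, h = bbox
    patch = frame[y:y + h, x:x + w].copy()      # object patch (thumbnail)
    obj_pixels = frame[mask]                    # foreground pixels only
    # Representative color: mean RGB over foreground pixels, a simple
    # stand-in for whatever color model the paper actually uses.
    rep_color = obj_pixels.mean(axis=0) if obj_pixels.size else np.zeros(3)
    size = int(mask.sum())                      # object size in pixels
    aspect_ratio = w / h if h else 0.0          # bounding-box aspect ratio
    return {
        "color": rep_color.astype(np.uint8).tolist(),
        "size": size,
        "aspect_ratio": aspect_ratio,
        "patch": patch,
    }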

Contents

Abstract
1. Introduction
2. Metadata Extraction of Static and Non-static Camera
3. Experimental Results
5. Conclusion
References
