Korea Maritime and Ocean University

Detailed Information


Radar-based Human Activity Recognition combining Range-Time-Doppler Map and Deep Neural Networks

Title
Radar-based Human Activity Recognition combining Range-Time-Doppler Map and Deep Neural Networks
Alternative Title
심층 신경망 및 Range-Time-Doppler Map을 결합한 레이더 기반 사람 행동 인식
Author(s)
김원열
Issued Date
2021
Publisher
Graduate School, Korea Maritime and Ocean University
URI
http://repository.kmou.ac.kr/handle/2014.oak/12774
http://kmou.dcollection.net/common/orgView/200000506448
Abstract
Human Activity Recognition (HAR) is a technology that recognizes behavior by collecting and interpreting human-motion information from various sensors. HAR can be used both to detect threats and to monitor health and behavior in daily life. In particular, radar-based HAR has been intensively researched because it combines advantages such as contactless sensing and privacy protection with rapidly developing deep learning (DL) technology.
Radar-based HAR using DL generally applies a preprocessing step that converts the radar signal into a two-dimensional (2D) Doppler spectrogram through the short-time Fourier transform (STFT) to express the speed of the human torso and limbs. However, in the conversion to a 2D Doppler spectrogram, information about how the time-Doppler (TD) signature varies with range is lost. In addition, recognition accuracy is unreliable in practice because performance degrades sharply when the geometric configuration or the subject differs from the training data.
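As a rough illustration of this preprocessing step, the sketch below forms a Doppler spectrogram with a fixed-window STFT. The pulse repetition frequency, window parameters, and the synthetic slow-time signal are all assumptions for demonstration, not values from the thesis; the fixed window length is precisely the limitation that motivates the CWT discussed later.

```python
import numpy as np
from scipy.signal import stft

fs = 1000  # assumed pulse repetition frequency (Hz)
t = np.arange(0, 2, 1 / fs)
# Synthetic slow-time radar return: a torso component at a constant
# Doppler shift plus a limb component whose Doppler oscillates
# (a crude stand-in for a micro-Doppler signature).
sig = np.exp(1j * 2 * np.pi * 60 * t) \
    + 0.5 * np.exp(1j * 2 * np.pi * 40 * np.sin(2 * np.pi * 1.5 * t))

# STFT with a fixed-length window: every time bin gets the same
# time/frequency resolution, unlike the variable-window CWT.
f, tau, Z = stft(sig, fs=fs, nperseg=128, noverlap=96,
                 return_onesided=False)
spectrogram = 20 * np.log10(np.abs(Z) + 1e-12)  # dB scale
print(spectrogram.shape)  # (Doppler bins, time frames)
```

Each column of `spectrogram` is the Doppler spectrum of one window position; stacking the columns over time yields the 2D TD map described above, with the range dimension already collapsed away.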
To overcome these problems, this thesis proposes a new radar-based HAR method that combines a range-time-Doppler (RTD) map with a range-distributed (RD) DL model to improve generalization performance and exploit the TD information that is otherwise lost. Unlike the TD maps commonly used in radar-based HAR, the proposed RTD map extends the radar information into 3D along the range dimension, providing detailed key features related to human activities. The RTD map also seeks to improve performance by applying the continuous wavelet transform (CWT) in the time domain when forming the TD maps. Because the CWT varies its window size, it can analyze discontinuous parts of the signal, and it therefore delivers more diverse features to the RD-DL model than the STFT, which analyzes the signal with a fixed window. However, since human activity is not determined by range alone, the extra range dimension of the 3D RTD map introduces a trade-off that can cause overfitting. The proposed RD-DL model therefore processes each range bin of the 3D RTD map with an independent neural network, so that learning concentrates only on the TD information. Because it infers human behavior from velocity changes alone, the RD-DL model is robust to changes in geometry and human information.
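The per-range structure described above can be sketched as follows. The tensor sizes, the tiny one-hidden-layer branches, the random weights, and the mean fusion are all illustrative assumptions, not the thesis's actual RD-DL architecture; the point is only that each range bin gets its own network over its TD slice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3D RTD map: (range bins, Doppler bins, time frames).
n_range, n_doppler, n_time, n_classes = 8, 64, 32, 6
rtd_map = rng.standard_normal((n_range, n_doppler, n_time))

def range_branch(x, w1, w2):
    """One small per-range network: flatten -> ReLU hidden -> scores."""
    h = np.maximum(x.reshape(-1) @ w1, 0.0)
    return h @ w2

# Each range bin gets its OWN weights, so every branch concentrates
# only on the time-Doppler pattern at its range (illustrative weights,
# untrained -- a real model would learn these).
branches = [(rng.standard_normal((n_doppler * n_time, 16)) * 0.05,
             rng.standard_normal((16, n_classes)) * 0.05)
            for _ in range(n_range)]

# Fuse the per-range class scores (here simply by averaging) into one
# prediction, so no branch can key on range-specific geometry alone.
scores = np.mean([range_branch(rtd_map[r], *branches[r])
                  for r in range(n_range)], axis=0)
pred = int(np.argmax(scores))
print(scores.shape, pred)
```

Keeping the branches independent is what prevents the model from memorizing at which range an activity happened to occur in the training data, which is the overfitting trade-off the abstract mentions.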
In addition, the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) is applied to generate TD maps that augment the training dataset. Once trained, the WGAN-GP can easily generate large amounts of synthetic data with the same distribution as the original data. These generated micro-Doppler (MD) images are used during training of the RD-DL model to improve learning performance.
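The defining ingredient of WGAN-GP is the gradient penalty on interpolated samples. The toy sketch below computes that penalty for a linear critic, whose input gradient is known analytically, so no autodiff framework is needed; the batch shapes, the linear critic, and the random "TD map" vectors are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 10.0  # gradient penalty weight, the value used in the WGAN-GP paper

# Toy "TD map" batches flattened to vectors (hypothetical shapes).
real = rng.standard_normal((16, 64))
fake = rng.standard_normal((16, 64))

# Linear critic f(x) = x @ w + b: its gradient w.r.t. x is simply w,
# which lets us write the penalty without automatic differentiation.
w = rng.standard_normal(64) * 0.1
b = 0.0

# Interpolate between real and fake samples with per-sample epsilon,
# as WGAN-GP prescribes for the points where the penalty is evaluated.
eps = rng.uniform(size=(16, 1))
x_hat = eps * real + (1 - eps) * fake

# Penalty pushes the critic's gradient norm at x_hat toward 1,
# softly enforcing the 1-Lipschitz constraint.
grad = np.broadcast_to(w, x_hat.shape)
grad_norm = np.linalg.norm(grad, axis=1)
gp = lam * np.mean((grad_norm - 1.0) ** 2)

# Full critic loss: Wasserstein estimate plus the gradient penalty.
critic_loss = np.mean(fake @ w + b) - np.mean(real @ w + b) + gp
print(float(gp) >= 0.0)
```

In a real setup the critic is a deep network and the gradient at `x_hat` comes from backpropagation, but the interpolation and the `(||grad|| - 1)^2` term are exactly what stabilizes training enough to mass-produce plausible MD images.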
To verify the performance of the proposed model, experiments were conducted on the University of Glasgow’s “Radar signatures of human activities”, an open dataset for radar-based HAR research. Compared with CNNs having the same number of parameters, the proposed model achieved higher recognition accuracy. Moreover, recognition error remained low under different geometric structures and subjects, showing that the proposed system is robust to changes in geometry and human information.
Appears in Collections:
Department of Electrical and Electronics Engineering > Thesis

