WANG Kejun, LIU Liangliang, DING Xinnan, et al. Gait period detection method based on convolutional neural networks[J]. Journal of Harbin Engineering University, 2021, 42(5): 656-663. [doi:10.11990/jheu.202101024]

Gait period detection method based on convolutional neural networks

Journal of Harbin Engineering University [ISSN:1006-6977/CN:61-1281/TN]

Volume:
42
Issue:
2021, No. 5
Pages:
656-663
Publication date:
2021-05-05

Article Information

Title:
Gait period detection method based on convolutional neural networks
Author(s):
WANG Kejun (王科俊), LIU Liangliang (劉亮亮), DING Xinnan (丁欣楠), HU Gang (胡鋼), XU Yibo (徐怡博)
College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
Keywords:
gait period detection; gait sequence; convolutional neural network; deep convolutional neural network; gait recognition; biometric recognition
CLC number:
TP391
DOI:
10.11990/jheu.202101024
Document code:
A
Abstract:
Gait period detection directly affects both the computational cost and the accuracy of gait recognition. To address this, this paper implements gait period detection with deep convolutional neural networks, modeling the gait period in two ways: by classifying the gait sequence according to its periodicity, and by fitting the gait sequence to a sine function. The key idea is to exploit the regularity of the gait cycle and model the rise and fall of the gait either as a classification problem or as a sine function, so that each frame of a gait video corresponds to a class label or a function value that represents its periodic characteristics. A convolutional neural network is used to extract the periodic features of each gait frame and locate the frame's position within the cycle, yielding a classification or regression result and thereby achieving gait period detection. The period-detection performance is verified with several network architectures on different views of the CASIA-B dataset. Experimental results show that the method detects gait periodicity with good accuracy and robustness.
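To make the two formulations in the abstract concrete, below is a minimal illustrative sketch (written in PyTorch, which the abstract does not specify) of a per-frame CNN that maps a gait silhouette either to a phase-class label or to a scalar value intended to be fitted against a sine function of the frame index. The class name GaitPhaseCNN, the layer sizes, the 64x64 input resolution, and the number of phase classes are assumptions for illustration only, not the authors' actual network.

# Minimal sketch of the two per-frame formulations described in the abstract.
# Everything below (architecture, input size, class count) is assumed, not taken
# from the paper.
import torch
import torch.nn as nn

class GaitPhaseCNN(nn.Module):
    def __init__(self, num_phase_classes: int = 8, mode: str = "classify"):
        super().__init__()
        assert mode in ("classify", "regress")
        self.mode = mode
        # Small convolutional backbone over single-channel silhouette frames.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        out_dim = num_phase_classes if mode == "classify" else 1
        self.head = nn.Linear(64, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        y = self.head(z)
        if self.mode == "regress":
            # Squash to [-1, 1] so the per-frame output can be fitted to a sine
            # curve over the frame index, as in the regression formulation.
            y = torch.tanh(y).squeeze(-1)
        return y

if __name__ == "__main__":
    frames = torch.rand(4, 1, 64, 64)      # a batch of silhouette frames
    clf = GaitPhaseCNN(mode="classify")    # per-frame phase classification
    reg = GaitPhaseCNN(mode="regress")     # per-frame sine-value regression
    print(clf(frames).shape)               # torch.Size([4, 8])
    print(reg(frames).shape)               # torch.Size([4])

In the classification variant, the predicted class sequence over consecutive frames indicates where each frame falls in the cycle; in the regression variant, the cycle length can be recovered from the period of the fitted sine curve.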

References:

[1] PHILLIPS P J. Human identification technical challenges[C]//Proceedings of 2002 International Conference on Image Processing. Rochester, NY, USA:IEEE, 2002:49-52.
[2] LI Xiang, MAKIHARA Y, XU Chi, et al. Gait recognition invariant to carried objects using alpha blending generative adversarial networks[J]. Pattern recognition, 2020, 105:107376.
[3] LIAO Rijun, YU Shiqi, AN Weizhi, et al. A model-based gait recognition method with body pose and human prior knowledge[J]. Pattern recognition, 2020, 98:107069.
[4] MAKIHARA Y, SAGAWA R, MUKAIGAWA Y, et al. Gait recognition using a view transformation model in the frequency domain[C]//Proceedings of the 9th European Conference on Computer Vision. Graz, Austria:Springer, 2006:151-163.
[5] SONG Chunfeng, HUANG Yongzhen, HUANG Yan, et al. GaitNet:an end-to-end network for gait based human identification[J]. Pattern recognition, 2019, 96:106988.
[6] COLLINS R T, GROSS R, SHI Jianbo. Silhouette-based human identification from body shape and gait[C]//Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition. Washington:IEEE, 2002:366-372.
[7] LEE C P, TAN A W C, TAN S C. Gait recognition with Transient Binary Patterns[J]. Journal of visual communication and image representation, 2015, 33:69-77.
[8] WANG Liang, TAN Tieniu, NING Huazhong, et al. Silhouette analysis-based gait recognition for human identification[J]. IEEE transactions on pattern analysis and machine intelligence, 2003, 25(12):1505-1518.
[9] WANG Chen, ZHANG Junping, WANG Liang, et al. Human identification using temporal information preserving gait template[J]. IEEE transactions on pattern analysis and machine intelligence, 2012, 34(11):2164-2176.
[10] SARKAR S, PHILLIPS P J, LIU Zongyi, et al. The humanID gait challenge problem:data sets, performance, and analysis[J]. IEEE transactions on pattern analysis and machine intelligence, 2005, 27(2):162-177.
[11] BEN Xianye, MENG Weixiao, YAN Rui. Dual-ellipse fitting approach for robust gait periodicity detection[J]. Neurocomputing, 2012, 79:173-178.
[12] LECUN Y, BOSER B, DENKER J S, et al. Handwritten digit recognition with a back-propagation network[M]//TOURETZKY D S. Advances in Neural Information Processing Systems 2. San Francisco:Morgan Kaufmann Publishers Inc., 1990:396-404.
[13] KRIZHEVSKY A, SUTSKEVER I, HINTON G. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe:ACM, 2012:1106-1114.
[14] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11):2278-2324.
[15] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2015-04-10)[2020-07-17]. https://arxiv.org/abs/1409.1556.
[16] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA:IEEE, 2015:1-9.
[17] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA:IEEE, 2016:770-778.
[18] HUANG Gao, LIU Zhuang, Van Der MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA:IEEE, 2017:4700-4708.
[19] CHOLLET F. Xception:deep learning with depthwise separable convolutions[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA:IEEE, 2017:1800-1807.
[20] NIXON M S, CARTER J N, CUNADO D, et al. Automatic gait recognition[M]//JAIN A K, BOLLE R, PANKANTI S. Biometrics:Personal Identification in Networked Society. Boston, MA, USA:Springer, 1996:231-249.
[21] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[EB/OL]. (2014-09-17)[2020-07-17]. https://arxiv.org/abs/1409.4842.
[22] YU Shiqi, TAN Daoliang, TAN Tieniu. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition[C]//Proceedings of the 18th International Conference on Pattern Recognition. Hong Kong, China:IEEE, 2006:441-444.

Memo:

Received: 2021-01-15.
Foundation item: National Natural Science Foundation of China (61573114).
Author biographies: WANG Kejun, male, professor and doctoral supervisor; LIU Liangliang, male, Ph.D. candidate.
Corresponding author: LIU Liangliang, E-mail: liuliangliang@hrbeu.edu.cn.
Last update: 2021-04-26