Multi-scale Feature Representation for Person Re-identification
Author:
Affiliation:

1. Xi'an Polytechnic University; 2. Nantong Normal College

CLC Number:

TP183

Fund Project:

National Natural Science Foundation of China under Grant 61971339 and Grant 61471161; Key Project of the Natural Science Foundation of Shaanxi Province under Grant 2018JZ6002

    Abstract:

    Person re-identification (Re-ID) methods for complex scenes often adopt a pedestrian representation strategy that combines global and local features to improve the discriminative ability of the model. However, extracting local features usually requires dedicated models designed for specific semantic regions, which increases the complexity of the algorithm. To address this problem, this paper proposes a person re-identification model based on multi-scale feature learning. By jointly representing local features of different granularities with global features, the model obtains multi-level, complementary discriminative information and performs person re-identification end to end. To capture highly discriminative information while retaining more detail, the features are downsampled with a combination of max pooling and average pooling. In addition, the TriHard loss is introduced to constrain the global features, and random erasing is used for data augmentation, further improving the model's adaptability to complex scenes. In comparative experiments on the Market-1501 and DukeMTMC-reID datasets, the Rank-1 accuracy reaches 94.9% and 87.1%, respectively, verifying the effectiveness of the proposed method.
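    As a rough illustration of the pooling strategy mentioned in the abstract, the following PyTorch-style sketch (module and variable names are assumptions, not the authors' released code) downsamples a backbone feature map with both global max pooling and global average pooling and sums the two results, so that the most discriminative responses and finer contextual detail are both retained:

```python
import torch
import torch.nn as nn

class MaxAvgPool(nn.Module):
    """Downsample a feature map with max pooling plus average pooling.

    Max pooling keeps the strongest (most discriminative) responses, while
    average pooling preserves more of the surrounding detail; summing the two
    pooled maps combines both kinds of information.
    """
    def __init__(self, output_size=1):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(output_size)
        self.avg_pool = nn.AdaptiveAvgPool2d(output_size)

    def forward(self, x):                       # x: (N, C, H, W) backbone features
        return self.max_pool(x) + self.avg_pool(x)   # (N, C, 1, 1)

# Hypothetical usage on a ResNet-style feature map
feat = torch.randn(32, 2048, 24, 8)
pooled = MaxAvgPool()(feat).flatten(1)          # (32, 2048) descriptor per image
```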

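    TriHard loss, as referenced above, commonly denotes the batch-hard variant of the triplet loss: within each mini-batch, every anchor is compared against its hardest (farthest) positive and hardest (closest) negative. A minimal sketch under that assumption (the function name, margin value, and batch shapes are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def trihard_loss(features, labels, margin=0.3):
    """Batch-hard triplet (TriHard) loss.

    For each anchor, select the hardest positive (farthest sample with the
    same identity) and the hardest negative (closest sample with a different
    identity) inside the batch, then apply a margin-based hinge loss.
    """
    dist = torch.cdist(features, features, p=2)            # (N, N) pairwise L2 distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)   # (N, N) True where identities match
    # Hardest positive: largest distance among same-identity pairs.
    hardest_pos = (dist * same_id.float()).max(dim=1).values
    # Hardest negative: smallest distance among different-identity pairs.
    dist_neg = dist.masked_fill(same_id, float('inf'))
    hardest_neg = dist_neg.min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

# Hypothetical usage with a PK-sampled batch (P identities x K images each)
feats = torch.randn(32, 2048)
ids = torch.randint(0, 8, (32,))
loss = trihard_loss(feats, ids)
```

    For the random-erasing augmentation, torchvision's transforms.RandomErasing (applied after ToTensor in the training transform) is one readily available implementation; the erasing parameters used by the authors are not specified here.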
History
  • Received: 2020-07-12
  • Revised: 2020-10-04
  • Accepted: 2020-10-12
  • Published online: 2020-12-01