Passive Forensics for Spatiotemporal Localization of Object Removal Tampering in Video

Two papers from my master's research, both published this year, deal with passive forensics for spatiotemporal localization of object removal tampering in video. Discussion and citation are welcome.

1. Journal: Journal on Communications (通信学报), July 2020 issue

Paper title: 《视频对象移除篡改的时空域定位被动取证》 (Passive Forensics for Spatiotemporal Localization of Object Removal Tampering in Video)

论文链接:https://kns.cnki.net/kcms/detail/detail.aspx?dbcode=CJFD&dbname=CJFDLAST2020&filename=TXXB202007011&v=LV%25mmd2B8dJInOQeSa1s%25mmd2FzOcR9tZe4FUcLv4itdj3LqmrJ%25mmd2FA%25mmd2BcBovO5Z2LWAP5VMACpw2

Abstract: To address the authentication of video content authenticity and integrity and the localization of tampered regions in passive video forensics, a deep-learning detection algorithm based on the video noise stream is proposed. First, a feature extractor based on the spatial rich model (SRM) and a 3D convolutional (C3D) neural network, a frame discriminator, and a spatial locator following the region proposal network (RPN) idea are constructed. Second, the feature extractor is combined with the frame discriminator and with the spatial locator to build two neural networks. Finally, two deep-learning models are trained on augmented data and used to locate the tampered regions in the temporal and spatial domains, respectively. Test results show that the accuracy of temporal localization rises to 98.5% and the mean intersection over union between the spatially located regions and the annotated tampered regions reaches 49%, so the method can effectively perform spatiotemporal localization of tampered regions for this type of forged video. (A minimal sketch of the SRM noise-residual preprocessing appears after the keyword list below.)

Keywords: video object removal tampering; spatiotemporal localization; video passive forensics; 3D convolutional object detection

  • Collection: Electronic Technology and Information Science

  • Topic: Computer Software and Computer Applications

  • CLC number: TP309.7
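
For reference, here is a minimal sketch of the kind of SRM noise-residual preprocessing the first paper builds on: a fixed, non-trainable high-pass convolution applied to each frame before the C3D feature extractor. The single 5×5 kernel and the `SRMNoiseExtractor` name are illustrative assumptions; the paper's actual SRM filter bank and network configuration are not given in this post.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One commonly cited SRM high-pass kernel (5x5, normalized by 1/12).
# The paper's actual filter bank is not specified in this post, so this
# single kernel is only an illustration.
SRM_KERNEL = torch.tensor([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=torch.float32) / 12.0


class SRMNoiseExtractor(nn.Module):
    """Fixed (non-trainable) high-pass filtering that turns RGB frames
    into noise residuals, the kind of input a C3D feature extractor
    would consume."""

    def __init__(self):
        super().__init__()
        # Apply the same kernel to each of the 3 colour channels (depthwise).
        kernel = SRM_KERNEL.expand(3, 1, 5, 5).clone()
        self.register_buffer("weight", kernel)

    def forward(self, clip):
        # clip: (batch, 3, num_frames, H, W) -> residuals of the same shape
        b, c, t, h, w = clip.shape
        frames = clip.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        residual = F.conv2d(frames, self.weight, padding=2, groups=3)
        return residual.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)


if __name__ == "__main__":
    clips = torch.rand(2, 3, 16, 112, 112)   # two 16-frame RGB clips
    print(SRMNoiseExtractor()(clips).shape)  # torch.Size([2, 3, 16, 112, 112])
```

Because the filter weights are registered as a buffer rather than parameters, they stay fixed during training, which is the usual way an SRM-style noise stream is fed into a learnable network.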

2. Journal: IEEE Transactions on Circuits and Systems for Video Technology, December 2020 issue

Paper title: "Spatiotemporal Trident Networks: Detection and Localization of Object Removal Tampering in Video Passive Forensics" (Early Access)

论文链接:https://ieeexplore.ieee.org/document/9301329

Abstract:

With the development of video and image processing technology, the field of video tampering forensics is facing enormous challenges. In particular, as a fundamental basis of judicial forensics, passive forensics for object removal video forgery is especially essential. To extract tampering traces from video more thoroughly, we proposed a spatiotemporal trident network based on the spatial rich model (SRM) and 3D convolution (C3D), which provides three branches and can theoretically improve the detection and localization accuracy of tampered regions. Based on the spatiotemporal trident network, a temporal detector and a spatial locator were designed to detect and locate the tampered regions in the temporal and spatial domains of videos. For the temporal detector, 3D CNNs were employed in the three branches as the encoders and a bidirectional long short-term memory (BiLSTM) network as the decoder. For the spatial locator, a backbone network named C3D-ResNet12 was designed as the encoder of the three branches, and region proposal networks (RPNs) were employed as the decoders of the three branches. In addition, we optimized the loss functions of the above two algorithms based on focal loss and GIoU loss. The experimental results demonstrate the effectiveness of the spatiotemporal detection and localization algorithms: for temporal forgery detection, the frame classification accuracy increased to over 99%; for spatial forgery localization, the successful localization rate of tampered regions in forged frames exceeded 96%, and the mean intersection over union between the located tampered regions and the real tampered regions exceeded 62%.
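
The abstract describes a temporal detector built from 3D-CNN encoders and a BiLSTM decoder that classifies each frame as tampered or authentic. Below is a minimal single-branch PyTorch sketch of that encoder-decoder structure; the layer sizes, the `C3DEncoder`/`TemporalDetector` names, and the reduction to one branch are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn


class C3DEncoder(nn.Module):
    """Minimal 3D-CNN encoder: keeps the temporal dimension so that the
    BiLSTM decoder can emit one prediction per frame. Layer sizes are
    illustrative, not the configuration used in the paper."""

    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),      # pool space, keep time
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((None, 1, 1)),       # (C, T, 1, 1)
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, clip):
        # clip: (batch, C, T, H, W) -> per-frame features (batch, T, feat_dim)
        x = self.features(clip).squeeze(-1).squeeze(-1)     # (batch, 64, T)
        return self.proj(x.transpose(1, 2))                 # (batch, T, feat_dim)


class TemporalDetector(nn.Module):
    """Single-branch sketch of a temporal detector: a C3D encoder as the
    feature extractor, a BiLSTM as the decoder, and a linear head for
    per-frame tampered/authentic classification."""

    def __init__(self, feat_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.encoder = C3DEncoder(feat_dim=feat_dim)
        self.decoder = nn.LSTM(feat_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, clip):
        feats = self.encoder(clip)          # (batch, T, feat_dim)
        seq, _ = self.decoder(feats)        # (batch, T, 2*hidden)
        return self.head(seq)               # per-frame logits (batch, T, 2)


if __name__ == "__main__":
    clips = torch.rand(2, 3, 16, 112, 112)
    print(TemporalDetector()(clips).shape)  # torch.Size([2, 16, 2])
```

Keeping the temporal dimension through the encoder is what lets the BiLSTM emit one logit per frame, matching the frame-level classification accuracy reported in the abstract.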

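The abstract also states that the loss functions were optimized based on focal loss and GIoU loss. The sketch below implements the standard formulations of those two terms (binary focal loss on logits and 1 − GIoU for corner-format boxes); the alpha/gamma defaults and the way the terms are weighted in the paper are not given in this post, so this is a generic illustration rather than the paper's loss.

```python
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard binary focal loss on raw logits (common defaults for
    alpha and gamma, not necessarily the paper's values)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # prob of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()


def giou_loss(pred, target):
    """Generalized IoU loss for boxes given as (x1, y1, x2, y2).
    Returns 1 - GIoU averaged over the batch."""
    # Intersection
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)

    # Smallest enclosing box
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    enclose = (ex2 - ex1) * (ey2 - ey1)

    giou = iou - (enclose - union) / enclose.clamp(min=1e-7)
    return (1.0 - giou).mean()


if __name__ == "__main__":
    pred = torch.tensor([[10., 10., 50., 50.]])
    gt = torch.tensor([[20., 20., 60., 60.]])
    print(giou_loss(pred, gt))   # > 0, shrinks as the boxes overlap more
```

Unlike plain IoU, GIoU still provides a gradient when predicted and ground-truth boxes do not overlap, which is why it is commonly preferred for box regression in RPN-style localizers.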