
[Tracker Attack] IoU Attack Code Walkthrough


Introduction

This work proposes the IoU attack, which sequentially generates perturbations based on the predicted IoU scores of the current frame and historical frames. By decreasing the IoU scores, the proposed attack degrades the accuracy of the temporally coherent bounding boxes (i.e., the object motion) accordingly. In addition, the learned perturbations are transferred to the next few frames to initialize a temporal motion attack. The proposed IoU attack is validated on state-of-the-art deep trackers (i.e., detection-based, correlation-filter-based, and long-term trackers). Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed IoU attack method. The source code is available at this https URL.
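Before walking through the individual functions, here is a minimal sketch of how a per-frame attack loop could be assembled from the helpers dissected below. The tracker.track(frame) interface, the parameter values, and the stopping rule are illustrative assumptions, not the repository's exact procedure; the idea is to raise the noise level step by step (a forward step toward a heavily noised copy of the frame) and, at each level, probe several orthogonal directions and keep the one that yields the lowest IoU against the temporally coherent box from the clean history.

import numpy as np

def iou_attack_frame(tracker, clean_frame, history_box,
                     epsilon=1000.0, delta=0.1, n_dirs=10,
                     iou_target=0.4, max_steps=20):
    # Hypothetical black-box interface: tracker.track(frame) -> [x, y, w, h].
    # overlap_ratio, orthogonal_perturbation and forward_perturbation are the
    # helpers explained in the sections below.
    noise_target = np.clip(clean_frame + np.random.randn(*clean_frame.shape) * 128, 0, 255)
    adv_frame = clean_frame.astype(np.float32)
    best_iou = 1.0
    for _ in range(max_steps):
        if best_iou <= iou_target:
            break
        # raise the noise level: step from the current sample toward the noise image
        adv_frame = np.clip(adv_frame + forward_perturbation(epsilon, adv_frame, noise_target), 0, 255)
        # at this noise level, probe several orthogonal directions and keep the lowest IoU
        candidates = [np.clip(adv_frame + orthogonal_perturbation(delta, adv_frame, noise_target), 0, 255)
                      for _ in range(n_dirs)]
        ious = [overlap_ratio(np.array(tracker.track(c)), np.array(history_box))[0]
                for c in candidates]
        best_idx = int(np.argmin(ious))
        adv_frame, best_iou = candidates[best_idx], ious[best_idx]
    return adv_frame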

Code

Loading the model

def load_pretrain(model, pretrained_path):
    logger.info('load pretrained model from {}'.format(pretrained_path))  # log where the weights come from
    device = torch.cuda.current_device()  # current CUDA device
    pretrained_dict = torch.load(pretrained_path,  # load the pretrained weights
        map_location=lambda storage, loc: storage.cuda(device))
    if "state_dict" in pretrained_dict.keys():
        pretrained_dict = remove_prefix(pretrained_dict['state_dict'], 'module.')  # strip the 'module.' prefix from the keys
    else:
        pretrained_dict = remove_prefix(pretrained_dict, 'module.')  # strip the 'module.' prefix from the keys
    try:
        check_keys(model, pretrained_dict)  # check that the loaded keys match the model
    except:
        logger.info('[Warning]: using pretrained weights as features. Adding "features." prefix')  # keys do not match, log a warning
        new_dict = {}
        for k, v in pretrained_dict.items():
            k = 'features.' + k  # prepend 'features.' to the key
            new_dict[k] = v
        pretrained_dict = new_dict
        check_keys(model, pretrained_dict)  # re-check the modified keys against the model
    model.load_state_dict(pretrained_dict, strict=False)  # strict=False allows partial loading
    return model  # model with the pretrained weights attached
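For context, this is the standard pysot loading path that the attack script relies on. A sketch of the usual call site, assuming the stock pysot package layout (module paths may differ in the attack fork):

import torch
from pysot.core.config import cfg
from pysot.models.model_builder import ModelBuilder
from pysot.utils.model_load import load_pretrain

cfg.merge_from_file('config.yaml')            # experiment configuration
model = ModelBuilder()                        # build the Siamese tracking network
model = load_pretrain(model, 'model.pth')     # attach the pretrained weights
model = model.cuda().eval()                   # move to GPU, inference mode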

Computing the IoU

def overlap_ratio(rect1, rect2):
    '''
    Compute overlap ratio between two rects
    - rect: 1d array of [x,y,w,h] or
            2d array of N x [x,y,w,h]
    '''
    rect1 = np.transpose(rect1)
    if rect1.ndim == 1:
        rect1 = rect1[None, :]
    if rect2.ndim == 1:
        rect2 = rect2[None, :]
    left = np.maximum(rect1[:, 0], rect2[:, 0])
    right = np.minimum(rect1[:, 0] + rect1[:, 2], rect2[:, 0] + rect2[:, 2])
    top = np.maximum(rect1[:, 1], rect2[:, 1])
    bottom = np.minimum(rect1[:, 1] + rect1[:, 3], rect2[:, 1] + rect2[:, 3])
    intersect = np.maximum(0, right - left) * np.maximum(0, bottom - top)
    union = rect1[:, 2] * rect1[:, 3] + rect2[:, 2] * rect2[:, 3] - intersect
    iou = np.clip(intersect / union, 0, 1)
    return iou
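As a quick sanity check, two 10×10 boxes offset by 5 pixels in each direction share a 5×5 intersection, so the expected IoU is 25 / (100 + 100 − 25) ≈ 0.143:

import numpy as np

rect1 = np.array([0, 0, 10, 10])      # [x, y, w, h]
rect2 = np.array([5, 5, 10, 10])
print(overlap_ratio(rect1, rect2))    # -> [0.14285714]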

orthogonal_perturbation: generates an orthogonal perturbation between two samples. Given the previous sample prev_sample and the target sample target_sample, the function computes the difference between them and produces a perturbation orthogonal to that difference vector, which is used to make a small adjustment to the previous sample while keeping its distance to the target sample roughly unchanged.

def orthogonal_perturbation(delta, prev_sample, target_sample):
    # Compute the working resolution
    size = int(max(prev_sample.shape[0] / 4, prev_sample.shape[1] / 4, 224))
    # Resize prev_sample and target_sample
    prev_sample_temp = np.resize(prev_sample, (size, size, 3))
    target_sample_temp = np.resize(target_sample, (size, size, 3))
    # Generate the perturbation
    perturb = np.random.randn(size, size, 3)
    perturb /= get_diff(perturb, np.zeros_like(perturb))  # normalize the perturbation
    perturb *= delta * np.mean(get_diff(target_sample_temp, prev_sample_temp))  # scale the perturbation
    # Project the perturbation onto the sphere around the target
    diff = (target_sample_temp - prev_sample_temp).astype(np.float32)  # difference between the target and the previous sample
    diff /= get_diff(target_sample_temp, prev_sample_temp)  # normalize the difference vector
    diff = diff.reshape(3, size, size)  # reshape the difference vector
    perturb = perturb.reshape(3, size, size)  # reshape the perturbation
    for i, channel in enumerate(diff):
        proj = np.dot(perturb[i], channel)  # dot product of the perturbation and the difference channel
        perturb[i] -= proj * channel  # project the perturbation onto the orthogonal complement of the difference channel
    perturb = perturb.reshape(size, size, 3)  # reshape the perturbation back
    perturb_temp = np.resize(perturb, (prev_sample.shape[0], prev_sample.shape[1], 3))  # resize to match prev_sample
    return perturb_temp
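A quick way to see what this helper produces is to call it on two synthetic frames; the step size is governed by delta, scaled by the current mean per-channel distance between the two samples. The 271×271 resolution below is only an illustrative stand-in for a cropped search region:

import numpy as np

np.random.seed(0)
clean_frame = np.random.rand(271, 271, 3) * 255                          # stand-in clean frame
adv_frame = np.clip(clean_frame + np.random.randn(271, 271, 3) * 20, 0, 255)

perturb = orthogonal_perturbation(0.1, adv_frame, clean_frame)
candidate = np.clip(adv_frame + perturb, 0, 255)                         # next probe at the same noise level
print(perturb.shape)                                                     # (271, 271, 3)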

forward_perturbation: computes a step of size epsilon from the previous sample toward the target sample along the per-channel-normalized difference direction.

def forward_perturbation(epsilon, prev_sample, target_sample):
    perturb = (target_sample - prev_sample).astype(np.float32)
    perturb /= get_diff(target_sample, prev_sample)
    perturb *= epsilon
    return perturb
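In contrast to the orthogonal step, this helper simply returns a step of roughly length epsilon (in the per-channel distance measured by get_diff) pointing from the previous sample toward the target sample, so adding it shrinks the remaining distance. A small illustrative check:

import numpy as np

np.random.seed(0)
clean_frame = np.random.rand(271, 271, 3) * 255
noisy_frame = np.clip(clean_frame + np.random.randn(271, 271, 3) * 20, 0, 255)

step = forward_perturbation(100.0, noisy_frame, clean_frame)   # move noisy_frame toward clean_frame
closer = noisy_frame + step
print(get_diff(noisy_frame, clean_frame))                      # per-channel distances before the step
print(get_diff(closer, clean_frame))                           # slightly smaller after the step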

get_diff: computes the per-channel L2 distance (np.linalg.norm) between two samples.

def get_diff(sample_1, sample_2):
    sample_1 = sample_1.reshape(3, sample_1.shape[0], sample_1.shape[1])
    sample_2 = sample_2.reshape(3, sample_2.shape[0], sample_2.shape[1])
    sample_1 = np.resize(sample_1, (3, 271, 271))
    sample_2 = np.resize(sample_2, (3, 271, 271))
    diff = []
    for i, channel in enumerate(sample_1):
        diff.append(np.linalg.norm((channel - sample_2[i]).astype(np.float32)))
    return np.array(diff)
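Note that the helper always resizes both inputs to a fixed 3×271×271 layout before measuring, so it returns one L2 norm per channel regardless of the input resolution. A small worked example:

import numpy as np

a = np.zeros((100, 100, 3))
b = np.ones((100, 100, 3))
print(get_diff(a, b))   # -> [271. 271. 271.]: every pixel differs by 1, so each norm is sqrt(271*271)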

Evaluation

cd pysot/experiments/siamrpn_r50_l234_dwxcorr
python -u ../../tools/test_IoU_attack.py \
    --snapshot model.pth \    # model path
    --dataset VOT2018 \       # dataset name
    --config config.yaml      # config file