Object detection algorithms usually sample a large number of regions in the input image, determine whether these regions contain objects of interest, and adjust the region boundaries so as to predict the ground-truth bounding boxes of the objects more accurately. Different models may adopt different region sampling schemes. Here we introduce one such method: it generates multiple bounding boxes with varying scales and aspect ratios centered on each pixel. These bounding boxes are called anchor boxes. We will design an object detection model based on anchor boxes in Section 14.7.
First, let's modify the printing precision for more concise outputs.
%matplotlib inline
import torch
from d2l import torch as d2l

torch.set_printoptions(2)  # Simplify printing accuracy
%matplotlib inline
from mxnet import gluon, image, np, npx
from d2l import mxnet as d2l

np.set_printoptions(2)  # Simplify printing accuracy
npx.set_np()
14.4.1. Generating Multiple Anchor Boxes
Suppose that the input image has height $h$ and width $w$. We generate anchor boxes with different shapes centered on each pixel of the image. Let the scale be $s \in (0, 1]$ and the aspect ratio (ratio of width to height) be $r > 0$. Then the width and height of the anchor box are $hs\sqrt{r}$ and $hs/\sqrt{r}$, respectively. Note that when the center position is given, an anchor box with known width and height is determined.
To generate multiple anchor boxes with different shapes, let's set a series of scales $s_1, \ldots, s_n$ and a series of aspect ratios $r_1, \ldots, r_m$. When using all combinations of these scales and aspect ratios with each pixel as the center, the input image will have a total of $whnm$ anchor boxes. Although these anchor boxes may cover all the ground-truth bounding boxes, the computational complexity easily becomes too high. In practice, we only consider those combinations containing $s_1$ or $r_1$:
$$(s_1, r_1), (s_1, r_2), \ldots, (s_1, r_m), (s_2, r_1), (s_3, r_1), \ldots, (s_n, r_1). \tag{14.4.1}$$
That is, the number of anchor boxes centered on the same pixel is $n + m - 1$. For the entire input image, we will generate a total of $wh(n + m - 1)$ anchor boxes.
The above method of generating anchor boxes is implemented in the following multibox_prior function. We specify the input image, a list of scales, and a list of aspect ratios, and this function will return all the anchor boxes.
#@save
def multibox_prior(data, sizes, ratios):
    """Generate anchor boxes with different shapes centered on each pixel."""
    in_height, in_width = data.shape[-2:]
    device, num_sizes, num_ratios = data.device, len(sizes), len(ratios)
    boxes_per_pixel = (num_sizes + num_ratios - 1)
    size_tensor = torch.tensor(sizes, device=device)
    ratio_tensor = torch.tensor(ratios, device=device)
    # Offsets are required to move the anchor to the center of a pixel. Since
    # a pixel has height=1 and width=1, we choose to offset our centers by 0.5
    offset_h, offset_w = 0.5, 0.5
    steps_h = 1.0 / in_height  # Scaled steps in y axis
    steps_w = 1.0 / in_width  # Scaled steps in x axis

    # Generate all center points for the anchor boxes
    center_h = (torch.arange(in_height, device=device) + offset_h) * steps_h
    center_w = (torch.arange(in_width, device=device) + offset_w) * steps_w
    shift_y, shift_x = torch.meshgrid(center_h, center_w, indexing='ij')
    shift_y, shift_x = shift_y.reshape(-1), shift_x.reshape(-1)

    # Generate `boxes_per_pixel` number of heights and widths that are later
    # used to create anchor box corner coordinates (xmin, xmax, ymin, ymax)
    w = torch.cat((size_tensor * torch.sqrt(ratio_tensor[0]),
                   sizes[0] * torch.sqrt(ratio_tensor[1:])))\
                   * in_height / in_width  # Handle rectangular inputs
    h = torch.cat((size_tensor / torch.sqrt(ratio_tensor[0]),
                   sizes[0] / torch.sqrt(ratio_tensor[1:])))
    # Divide by 2 to get half height and half width
    anchor_manipulations = torch.stack((-w, -h, w, h)).T.repeat(
        in_height * in_width, 1) / 2

    # Each center point will have `boxes_per_pixel` number of anchor boxes, so
    # generate a grid of all anchor box centers with `boxes_per_pixel` repeats
    out_grid = torch.stack([shift_x, shift_y, shift_x, shift_y],
                           dim=1).repeat_interleave(boxes_per_pixel, dim=0)
    output = out_grid + anchor_manipulations
    return output.unsqueeze(0)
#@save
def multibox_prior(data, sizes, ratios):
    """Generate anchor boxes with different shapes centered on each pixel."""
    in_height, in_width = data.shape[-2:]
    device, num_sizes, num_ratios = data.ctx, len(sizes), len(ratios)
    boxes_per_pixel = (num_sizes + num_ratios - 1)
    size_tensor = np.array(sizes, ctx=device)
    ratio_tensor = np.array(ratios, ctx=device)
    # Offsets are required to move the anchor to the center of a pixel. Since
    # a pixel has height=1 and width=1, we choose to offset our centers by 0.5
    offset_h, offset_w = 0.5, 0.5
    steps_h = 1.0 / in_height  # Scaled steps in y-axis
    steps_w = 1.0 / in_width  # Scaled steps in x-axis

    # Generate all center points for the anchor boxes
    center_h = (np.arange(in_height, ctx=device) + offset_h) * steps_h
    center_w = (np.arange(in_width, ctx=device) + offset_w) * steps_w
    shift_x, shift_y = np.meshgrid(center_w, center_h)
    shift_x, shift_y = shift_x.reshape(-1), shift_y.reshape(-1)

    # Generate `boxes_per_pixel` number of heights and widths that are later
    # used to create anchor box corner coordinates (xmin, xmax, ymin, ymax)
    w = np.concatenate((size_tensor * np.sqrt(ratio_tensor[0]),
                        sizes[0] * np.sqrt(ratio_tensor[1:])))\
                        * in_height / in_width  # Handle rectangular inputs
    h = np.concatenate((size_tensor / np.sqrt(ratio_tensor[0]),
                        sizes[0] / np.sqrt(ratio_tensor[1:])))
    # Divide by 2 to get half height and half width
    anchor_manipulations = np.tile(np.stack((-w, -h, w, h)).T,
                                   (in_height * in_width, 1)) / 2

    # Each center point will have `boxes_per_pixel` number of anchor boxes, so
    # generate a grid of all anchor box centers with `boxes_per_pixel` repeats
    out_grid = np.stack([shift_x, shift_y, shift_x, shift_y],
                        axis=1).repeat(boxes_per_pixel, axis=0)
    output = out_grid + anchor_manipulations
    return np.expand_dims(output, axis=0)
We can see that the shape of the returned anchor box variable Y is (batch size, number of anchor boxes, 4).
img = d2l.plt.imread('../img/catdog.jpg')
h, w = img.shape[:2]

print(h, w)
X = torch.rand(size=(1, 3, h, w))  # Construct input data
Y = multibox_prior(X, sizes=[0.75, 0.5, 0.25], ratios=[1, 2, 0.5])
Y.shape
561 728
torch.Size([1, 2042040, 4])
img = image.imread('../img/catdog.jpg').asnumpy()
h, w = img.shape[:2]

print(h, w)
X = np.random.uniform(size=(1, 3, h, w))  # Construct input data
Y = multibox_prior(X, sizes=[0.75, 0.5, 0.25], ratios=[1, 2, 0.5])
Y.shape
561 728
(1, 2042040, 4)
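As a quick sanity check on the formula $wh(n + m - 1)$, the following minimal sketch (plain Python, using the numbers from the example above) confirms the printed shape: with $n = 3$ scales and $m = 3$ ratios there are $3 + 3 - 1 = 5$ anchor boxes per pixel.

h, w, n, m = 561, 728, 3, 3
assert h * w * (n + m - 1) == 2042040  # matches the second dimension of Y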
After changing the shape of the anchor box variable Y to (image height, image width, number of anchor boxes centered on the same pixel, 4), we can obtain all the anchor boxes centered on a specified pixel position. In the following, we access the first anchor box centered on (250, 250). It has four elements: the $(x, y)$-axis coordinates at the upper-left corner and the $(x, y)$-axis coordinates at the lower-right corner of the anchor box. The coordinate values of the two axes are divided by the width and height of the image, respectively.
boxes = Y.reshape(h, w, 5, 4)
boxes[250, 250, 0, :]
tensor([0.06, 0.07, 0.63, 0.82])
boxes = Y.reshape(h, w, 5, 4)
boxes[250, 250, 0, :]
array([0.06, 0.07, 0.63, 0.82])
In order to show all the anchor boxes centered on one pixel in the image, we define the following show_bboxes function to draw multiple bounding boxes on the image.
#@save
def show_bboxes(axes, bboxes, labels=None, colors=None):
    """Show bounding boxes."""

    def make_list(obj, default_values=None):
        if obj is None:
            obj = default_values
        elif not isinstance(obj, (list, tuple)):
            obj = [obj]
        return obj

    labels = make_list(labels)
    colors = make_list(colors, ['b', 'g', 'r', 'm', 'c'])
    for i, bbox in enumerate(bboxes):
        color = colors[i % len(colors)]
        rect = d2l.bbox_to_rect(bbox.detach().numpy(), color)
        axes.add_patch(rect)
        if labels and len(labels) > i:
            text_color = 'k' if color == 'w' else 'w'
            axes.text(rect.xy[0], rect.xy[1], labels[i], va='center',
                      ha='center', fontsize=9, color=text_color,
                      bbox=dict(facecolor=color, lw=0))
#@save
def show_bboxes(axes, bboxes, labels=None, colors=None):
    """Show bounding boxes."""

    def make_list(obj, default_values=None):
        if obj is None:
            obj = default_values
        elif not isinstance(obj, (list, tuple)):
            obj = [obj]
        return obj

    labels = make_list(labels)
    colors = make_list(colors, ['b', 'g', 'r', 'm', 'c'])
    for i, bbox in enumerate(bboxes):
        color = colors[i % len(colors)]
        rect = d2l.bbox_to_rect(bbox.asnumpy(), color)
        axes.add_patch(rect)
        if labels and len(labels) > i:
            text_color = 'k' if color == 'w' else 'w'
            axes.text(rect.xy[0], rect.xy[1], labels[i], va='center',
                      ha='center', fontsize=9, color=text_color,
                      bbox=dict(facecolor=color, lw=0))
As we just saw, the coordinate values of the $x$ and $y$ axes in the variable boxes have been divided by the width and height of the image, respectively. When drawing anchor boxes, we need to restore their original coordinate values; thus, we define the variable bbox_scale below. Now we can draw all the anchor boxes centered on (250, 250) in the image. As you can see, the blue anchor box with a scale of 0.75 and an aspect ratio of 1 surrounds the dog in the image well.
d2l.set_figsize()
bbox_scale = torch.tensor((w, h, w, h))
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, boxes[250, 250, :, :] * bbox_scale,
            ['s=0.75, r=1', 's=0.5, r=1', 's=0.25, r=1', 's=0.75, r=2',
             's=0.75, r=0.5'])
d2l.set_figsize()
bbox_scale = np.array((w, h, w, h))
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, boxes[250, 250, :, :] * bbox_scale,
            ['s=0.75, r=1', 's=0.5, r=1', 's=0.25, r=1', 's=0.75, r=2',
             's=0.75, r=0.5'])
14.4.2. Intersection over Union (IoU)
We just mentioned that an anchor box surrounds the dog in the image "well". If the ground-truth bounding box of the object is known, how can "well" here be quantified? Intuitively, we can measure the similarity between the anchor box and the ground-truth bounding box. We know that the Jaccard index can measure the similarity between two sets. Given sets $\mathcal{A}$ and $\mathcal{B}$, their Jaccard index is the size of their intersection divided by the size of their union:
$$J(\mathcal{A}, \mathcal{B}) = \frac{\left|\mathcal{A} \cap \mathcal{B}\right|}{\left|\mathcal{A} \cup \mathcal{B}\right|}. \tag{14.4.2}$$
In fact, we can consider the pixel area of any bounding box as a set of pixels. In this way, we can measure the similarity of two bounding boxes by the Jaccard index of their pixel sets. For two bounding boxes, we usually refer to their Jaccard index as intersection over union (IoU), which is the ratio of their intersection area to their union area, as shown in Fig. 14.4.1. The range of an IoU is between 0 and 1: 0 means that two bounding boxes do not overlap at all, while 1 indicates that the two bounding boxes are equal.
Fig. 14.4.1 IoU is the ratio of the intersection area to the union area of two bounding boxes.
For the remainder of this section, we will use IoU to measure the similarity between anchor boxes and ground-truth bounding boxes, and between different anchor boxes. Given two lists of anchor or bounding boxes, the following box_iou computes their pairwise IoU across these two lists.
#@save
def box_iou(boxes1, boxes2):
    """Compute pairwise IoU across two lists of anchor or bounding boxes."""
    box_area = lambda boxes: ((boxes[:, 2] - boxes[:, 0]) *
                              (boxes[:, 3] - boxes[:, 1]))
    # Shape of `boxes1`, `boxes2`, `areas1`, `areas2`: (no. of boxes1, 4),
    # (no. of boxes2, 4), (no. of boxes1,), (no. of boxes2,)
    areas1 = box_area(boxes1)
    areas2 = box_area(boxes2)
    # Shape of `inter_upperlefts`, `inter_lowerrights`, `inters`: (no. of
    # boxes1, no. of boxes2, 2)
    inter_upperlefts = torch.max(boxes1[:, None, :2], boxes2[:, :2])
    inter_lowerrights = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])
    inters = (inter_lowerrights - inter_upperlefts).clamp(min=0)
    # Shape of `inter_areas` and `union_areas`: (no. of boxes1, no. of boxes2)
    inter_areas = inters[:, :, 0] * inters[:, :, 1]
    union_areas = areas1[:, None] + areas2 - inter_areas
    return inter_areas / union_areas
#@save
def box_iou(boxes1, boxes2):
    """Compute pairwise IoU across two lists of anchor or bounding boxes."""
    box_area = lambda boxes: ((boxes[:, 2] - boxes[:, 0]) *
                              (boxes[:, 3] - boxes[:, 1]))
    # Shape of `boxes1`, `boxes2`, `areas1`, `areas2`: (no. of boxes1, 4),
    # (no. of boxes2, 4), (no. of boxes1,), (no. of boxes2,)
    areas1 = box_area(boxes1)
    areas2 = box_area(boxes2)
    # Shape of `inter_upperlefts`, `inter_lowerrights`, `inters`: (no. of
    # boxes1, no. of boxes2, 2)
    inter_upperlefts = np.maximum(boxes1[:, None, :2], boxes2[:, :2])
    inter_lowerrights = np.minimum(boxes1[:, None, 2:], boxes2[:, 2:])
    inters = (inter_lowerrights - inter_upperlefts).clip(min=0)
    # Shape of `inter_areas` and `union_areas`: (no. of boxes1, no. of boxes2)
    inter_areas = inters[:, :, 0] * inters[:, :, 1]
    union_areas = areas1[:, None] + areas2 - inter_areas
    return inter_areas / union_areas
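As a quick usage check (a minimal sketch using the PyTorch version of box_iou above, with made-up boxes), consider two 2-by-2 boxes that overlap in a 1-by-1 square: the intersection is 1 and the union is $4 + 4 - 1 = 7$, so the IoU is $1/7 \approx 0.14$.

b1 = torch.tensor([[0.0, 0.0, 2.0, 2.0]])  # area 4
b2 = torch.tensor([[1.0, 1.0, 3.0, 3.0]])  # area 4, overlaps b1 in a 1x1 square
box_iou(b1, b2)  # tensor([[0.14]])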
14.4.3. Labeling Anchor Boxes in Training Data
In a training dataset, we consider each anchor box as a training example. In order to train an object detection model, we need class and offset labels for each anchor box, where the former is the class of the object relevant to the anchor box and the latter is the offset of the ground-truth bounding box relative to the anchor box. During prediction, for each image we generate multiple anchor boxes, predict classes and offsets for all the anchor boxes, adjust their positions according to the predicted offsets to obtain the predicted bounding boxes, and finally only output those predicted bounding boxes that satisfy certain criteria.
As we know, an object detection training set comes with labels for the locations of the ground-truth bounding boxes and the classes of their surrounded objects. To label any generated anchor box, we refer to the labeled location and class of its assigned ground-truth bounding box, namely the one that is closest to the anchor box. In the following, we describe an algorithm for assigning the closest ground-truth bounding boxes to anchor boxes.
14.4.3.1. Assigning Ground-Truth Bounding Boxes to Anchor Boxes
Given an image, suppose that the anchor boxes are $A_1, A_2, \ldots, A_{n_a}$ and the ground-truth bounding boxes are $B_1, B_2, \ldots, B_{n_b}$, where $n_a \geq n_b$. Let's define a matrix $\mathbf{X} \in \mathbb{R}^{n_a \times n_b}$, whose element $x_{ij}$ in the $i^\text{th}$ row and $j^\text{th}$ column is the IoU of the anchor box $A_i$ and the ground-truth bounding box $B_j$. The algorithm consists of the following steps:
1. Find the largest element in matrix $\mathbf{X}$ and denote its row and column indices as $i_1$ and $j_1$, respectively. Then the ground-truth bounding box $B_{j_1}$ is assigned to the anchor box $A_{i_1}$. This is quite intuitive because $A_{i_1}$ and $B_{j_1}$ are the closest among all the pairs of anchor boxes and ground-truth bounding boxes. After the first assignment, discard all the elements in the $i_1^\text{th}$ row and the $j_1^\text{th}$ column in matrix $\mathbf{X}$.
2. Find the largest of the remaining elements in matrix $\mathbf{X}$ and denote its row and column indices as $i_2$ and $j_2$, respectively. We assign the ground-truth bounding box $B_{j_2}$ to the anchor box $A_{i_2}$ and discard all the elements in the $i_2^\text{th}$ row and the $j_2^\text{th}$ column in matrix $\mathbf{X}$.
3. At this point, elements in two rows and two columns of matrix $\mathbf{X}$ have been discarded. We proceed until all elements in $n_b$ columns of matrix $\mathbf{X}$ are discarded. At this time, we have assigned a ground-truth bounding box to each of $n_b$ anchor boxes.
4. Only traverse through the remaining $n_a - n_b$ anchor boxes. For example, given any anchor box $A_i$, find the ground-truth bounding box $B_j$ with the largest IoU with $A_i$ throughout the $i^\text{th}$ row of matrix $\mathbf{X}$, and assign $B_j$ to $A_i$ only if this IoU is greater than a predefined threshold.
Let's illustrate the above algorithm using a concrete example. As shown in Fig. 14.4.2 (left), assuming that the maximum value in matrix $\mathbf{X}$ is $x_{23}$, we assign the ground-truth bounding box $B_3$ to the anchor box $A_2$. Then, we discard all the elements in row 2 and column 3 of the matrix, find the largest $x_{71}$ in the remaining elements (shaded area), and assign the ground-truth bounding box $B_1$ to the anchor box $A_7$. Next, as shown in Fig. 14.4.2 (middle), discard all the elements in row 7 and column 1 of the matrix, find the largest $x_{54}$ in the remaining elements (shaded area), and assign the ground-truth bounding box $B_4$ to the anchor box $A_5$. Finally, as shown in Fig. 14.4.2 (right), discard all the elements in row 5 and column 4 of the matrix, find the largest $x_{92}$ in the remaining elements (shaded area), and assign the ground-truth bounding box $B_2$ to the anchor box $A_9$. After that, we only need to traverse the remaining anchor boxes $A_1, A_3, A_4, A_6, A_8$ and determine whether to assign them ground-truth bounding boxes according to the threshold.
Fig. 14.4.2 Assigning ground-truth bounding boxes to anchor boxes.
This algorithm is implemented in the following assign_anchor_to_bbox function.
#@save
def assign_anchor_to_bbox(ground_truth, anchors, device, iou_threshold=0.5):
    """Assign closest ground-truth bounding boxes to anchor boxes."""
    num_anchors, num_gt_boxes = anchors.shape[0], ground_truth.shape[0]
    # Element x_ij in the i-th row and j-th column is the IoU of the anchor
    # box i and the ground-truth bounding box j
    jaccard = box_iou(anchors, ground_truth)
    # Initialize the tensor to hold the assigned ground-truth bounding box for
    # each anchor
    anchors_bbox_map = torch.full((num_anchors,), -1, dtype=torch.long,
                                  device=device)
    # Assign ground-truth bounding boxes according to the threshold
    max_ious, indices = torch.max(jaccard, dim=1)
    anc_i = torch.nonzero(max_ious >= iou_threshold).reshape(-1)
    box_j = indices[max_ious >= iou_threshold]
    anchors_bbox_map[anc_i] = box_j
    col_discard = torch.full((num_anchors,), -1)
    row_discard = torch.full((num_gt_boxes,), -1)
    for _ in range(num_gt_boxes):
        max_idx = torch.argmax(jaccard)  # Find the largest IoU
        box_idx = (max_idx % num_gt_boxes).long()
        anc_idx = (max_idx / num_gt_boxes).long()
        anchors_bbox_map[anc_idx] = box_idx
        jaccard[:, box_idx] = col_discard
        jaccard[anc_idx, :] = row_discard
    return anchors_bbox_map
#@save
def assign_anchor_to_bbox(ground_truth, anchors, device, iou_threshold=0.5):
    """Assign closest ground-truth bounding boxes to anchor boxes."""
    num_anchors, num_gt_boxes = anchors.shape[0], ground_truth.shape[0]
    # Element x_ij in the i-th row and j-th column is the IoU of the anchor
    # box i and the ground-truth bounding box j
    jaccard = box_iou(anchors, ground_truth)
    # Initialize the tensor to hold the assigned ground-truth bounding box for
    # each anchor
    anchors_bbox_map = np.full((num_anchors,), -1, dtype=np.int32, ctx=device)
    # Assign ground-truth bounding boxes according to the threshold
    max_ious, indices = np.max(jaccard, axis=1), np.argmax(jaccard, axis=1)
    anc_i = np.nonzero(max_ious >= iou_threshold)[0]
    box_j = indices[max_ious >= iou_threshold]
    anchors_bbox_map[anc_i] = box_j
    col_discard = np.full((num_anchors,), -1)
    row_discard = np.full((num_gt_boxes,), -1)
    for _ in range(num_gt_boxes):
        max_idx = np.argmax(jaccard)  # Find the largest IoU
        box_idx = (max_idx % num_gt_boxes).astype('int32')
        anc_idx = (max_idx / num_gt_boxes).astype('int32')
        anchors_bbox_map[anc_idx] = box_idx
        jaccard[:, box_idx] = col_discard
        jaccard[anc_idx, :] = row_discard
    return anchors_bbox_map
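To make the assignment behavior concrete, here is a minimal sketch (using the PyTorch version above; the boxes are made up for illustration). Anchor 0 overlaps ground-truth box 0 with an IoU of about 0.69, anchor 1 coincides with ground-truth box 1, and anchor 2 straddles both boxes with IoUs below the 0.5 threshold, so it is left unassigned (-1):

gt = torch.tensor([[0.0, 0.0, 0.5, 0.5], [0.4, 0.4, 1.0, 1.0]])
anc = torch.tensor([[0.0, 0.0, 0.6, 0.6], [0.4, 0.4, 1.0, 1.0],
                    [0.2, 0.2, 0.8, 0.8]])
assign_anchor_to_bbox(gt, anc, device='cpu')  # tensor([0, 1, -1])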
14.4.3.2. Labeling Classes and Offsets
Now we can label the class and offset for each anchor box. Suppose that an anchor box $A$ is assigned a ground-truth bounding box $B$. On the one hand, the class of the anchor box $A$ will be labeled as that of $B$. On the other hand, the offset of the anchor box $A$ will be labeled according to the relative position between the central coordinates of $B$ and $A$, together with the relative size between these two boxes. Given varying positions and sizes of different boxes in the dataset, we can apply transformations to those relative positions and sizes that may lead to more uniformly distributed offsets that are easier to fit. Here we describe a common transformation. Given the central coordinates of $A$ and $B$ as $(x_a, y_a)$ and $(x_b, y_b)$, their widths as $w_a$ and $w_b$, and their heights as $h_a$ and $h_b$, respectively, we can label the offset of $A$ as
$$\left( \frac{ \frac{x_b - x_a}{w_a} - \mu_x }{\sigma_x},\; \frac{ \frac{y_b - y_a}{h_a} - \mu_y }{\sigma_y},\; \frac{ \log \frac{w_b}{w_a} - \mu_w }{\sigma_w},\; \frac{ \log \frac{h_b}{h_a} - \mu_h }{\sigma_h} \right), \tag{14.4.3}$$
where the default values of the constants are $\mu_x = \mu_y = \mu_w = \mu_h = 0$, $\sigma_x = \sigma_y = 0.1$, and $\sigma_w = \sigma_h = 0.2$. This transformation is implemented below in the offset_boxes function.
#@save
def offset_boxes(anchors, assigned_bb, eps=1e-6):
    """Transform for anchor box offsets."""
    c_anc = d2l.box_corner_to_center(anchors)
    c_assigned_bb = d2l.box_corner_to_center(assigned_bb)
    offset_xy = 10 * (c_assigned_bb[:, :2] - c_anc[:, :2]) / c_anc[:, 2:]
    offset_wh = 5 * torch.log(eps + c_assigned_bb[:, 2:] / c_anc[:, 2:])
    offset = torch.cat([offset_xy, offset_wh], axis=1)
    return offset
#@save
def offset_boxes(anchors, assigned_bb, eps=1e-6):
    """Transform for anchor box offsets."""
    c_anc = d2l.box_corner_to_center(anchors)
    c_assigned_bb = d2l.box_corner_to_center(assigned_bb)
    offset_xy = 10 * (c_assigned_bb[:, :2] - c_anc[:, :2]) / c_anc[:, 2:]
    offset_wh = 5 * np.log(eps + c_assigned_bb[:, 2:] / c_anc[:, 2:])
    offset = np.concatenate([offset_xy, offset_wh], axis=1)
    return offset
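As a quick numeric check of the transformation (a minimal sketch with made-up boxes, using the PyTorch version above): shifting a box's center by a quarter of its width and height while keeping its size fixed gives $x$ and $y$ offsets of $10 \times 0.25 = 2.5$ and log-size offsets of zero.

anc = torch.tensor([[0.1, 0.1, 0.5, 0.5]])  # center (0.3, 0.3), 0.4 x 0.4
bb = torch.tensor([[0.2, 0.2, 0.6, 0.6]])   # center (0.4, 0.4), same size
offset_boxes(anc, bb)  # tensor([[2.50, 2.50, 0.00, 0.00]]), up to the tiny eps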
If an anchor box is not assigned a ground-truth bounding box, we just label the class of the anchor box as "background". Anchor boxes whose classes are background are often referred to as negative anchor boxes, and the rest are called positive anchor boxes. We implement the following multibox_target function to label classes and offsets for anchor boxes (the anchors argument) using ground-truth bounding boxes (the labels argument). This function sets the background class to zero and increments the integer index of a new class by one.
#@save
def multibox_target(anchors, labels):
    """Label anchor boxes using ground-truth bounding boxes."""
    batch_size, anchors = labels.shape[0], anchors.squeeze(0)
    batch_offset, batch_mask, batch_class_labels = [], [], []
    device, num_anchors = anchors.device, anchors.shape[0]
    for i in range(batch_size):
        label = labels[i, :, :]
        anchors_bbox_map = assign_anchor_to_bbox(
            label[:, 1:], anchors, device)
        bbox_mask = ((anchors_bbox_map >= 0).float().unsqueeze(-1)).repeat(
            1, 4)
        # Initialize class labels and assigned bounding box coordinates with
        # zeros
        class_labels = torch.zeros(num_anchors, dtype=torch.long,
                                   device=device)
        assigned_bb = torch.zeros((num_anchors, 4), dtype=torch.float32,
                                  device=device)
        # Label classes of anchor boxes using their assigned ground-truth
        # bounding boxes. If an anchor box is not assigned any, we label its
        # class as background (the value remains zero)
        indices_true = torch.nonzero(anchors_bbox_map >= 0)
        bb_idx = anchors_bbox_map[indices_true]
        class_labels[indices_true] = label[bb_idx, 0].long() + 1
        assigned_bb[indices_true] = label[bb_idx, 1:]
        # Offset transformation
        offset = offset_boxes(anchors, assigned_bb) * bbox_mask
        batch_offset.append(offset.reshape(-1))
        batch_mask.append(bbox_mask.reshape(-1))
        batch_class_labels.append(class_labels)
    bbox_offset = torch.stack(batch_offset)
    bbox_mask = torch.stack(batch_mask)
    class_labels = torch.stack(batch_class_labels)
    return (bbox_offset, bbox_mask, class_labels)
#@save
def multibox_target(anchors, labels):
    """Label anchor boxes using ground-truth bounding boxes."""
    batch_size, anchors = labels.shape[0], anchors.squeeze(0)
    batch_offset, batch_mask, batch_class_labels = [], [], []
    device, num_anchors = anchors.ctx, anchors.shape[0]
    for i in range(batch_size):
        label = labels[i, :, :]
        anchors_bbox_map = assign_anchor_to_bbox(
            label[:, 1:], anchors, device)
        bbox_mask = np.tile((np.expand_dims((anchors_bbox_map >= 0),
                                            axis=-1)), (1, 4)).astype('int32')
        # Initialize class labels and assigned bounding box coordinates with
        # zeros
        class_labels = np.zeros(num_anchors, dtype=np.int32, ctx=device)
        assigned_bb = np.zeros((num_anchors, 4), dtype=np.float32,
                               ctx=device)
        # Label classes of anchor boxes using their assigned ground-truth
        # bounding boxes. If an anchor box is not assigned any, we label its
        # class as background (the value remains zero)
        indices_true = np.nonzero(anchors_bbox_map >= 0)[0]
        bb_idx = anchors_bbox_map[indices_true]
        class_labels[indices_true] = label[bb_idx, 0].astype('int32') + 1
        assigned_bb[indices_true] = label[bb_idx, 1:]
        # Offset transformation
        offset = offset_boxes(anchors, assigned_bb) * bbox_mask
        batch_offset.append(offset.reshape(-1))
        batch_mask.append(bbox_mask.reshape(-1))
        batch_class_labels.append(class_labels)
    bbox_offset = np.stack(batch_offset)
    bbox_mask = np.stack(batch_mask)
    class_labels = np.stack(batch_class_labels)
    return (bbox_offset, bbox_mask, class_labels)
14.4.3.3. An Example
Let's illustrate anchor box labeling via a concrete example. We define ground-truth bounding boxes for the dog and cat in the loaded image, where the first element is the class (0 for dog and 1 for cat) and the remaining four elements are the $(x, y)$-axis coordinates at the upper-left corner and the lower-right corner (range is between 0 and 1). We also construct five anchor boxes to be labeled using the coordinates of the upper-left corner and the lower-right corner: $A_0, \ldots, A_4$ (the index starts from 0). Then we plot these ground-truth bounding boxes and anchor boxes in the image.
ground_truth = torch.tensor([[0, 0.1, 0.08, 0.52, 0.92],
                             [1, 0.55, 0.2, 0.9, 0.88]])
anchors = torch.tensor([[0, 0.1, 0.2, 0.3], [0.15, 0.2, 0.4, 0.4],
                        [0.63, 0.05, 0.88, 0.98], [0.66, 0.45, 0.8, 0.8],
                        [0.57, 0.3, 0.92, 0.9]])

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, ground_truth[:, 1:] * bbox_scale, ['dog', 'cat'], 'k')
show_bboxes(fig.axes, anchors * bbox_scale, ['0', '1', '2', '3', '4']);
ground_truth = np.array([[0, 0.1, 0.08, 0.52, 0.92],
                         [1, 0.55, 0.2, 0.9, 0.88]])
anchors = np.array([[0, 0.1, 0.2, 0.3], [0.15, 0.2, 0.4, 0.4],
                    [0.63, 0.05, 0.88, 0.98], [0.66, 0.45, 0.8, 0.8],
                    [0.57, 0.3, 0.92, 0.9]])

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, ground_truth[:, 1:] * bbox_scale, ['dog', 'cat'], 'k')
show_bboxes(fig.axes, anchors * bbox_scale, ['0', '1', '2', '3', '4']);
Using the multibox_target function defined above, we can label classes and offsets of these anchor boxes based on the ground-truth bounding boxes of the dog and cat. In this example, indices of the background, dog, and cat classes are 0, 1, and 2, respectively. Below we add a dimension for the examples of anchor boxes and ground-truth bounding boxes.
labels = multibox_target(anchors.unsqueeze(dim=0),
                         ground_truth.unsqueeze(dim=0))
labels = multibox_target(np.expand_dims(anchors, axis=0),
                         np.expand_dims(ground_truth, axis=0))
There are three items in the returned result, all of which are in the tensor format. The third item contains the labeled classes of the input anchor boxes.
Let's analyze the returned class labels below based on the anchor box and ground-truth bounding box positions in the image. First, among all the pairs of anchor boxes and ground-truth bounding boxes, the IoU of the anchor box $A_4$ and the ground-truth bounding box of the cat is the largest. Thus, the class of $A_4$ is labeled as the cat. Taking out pairs containing $A_4$ or the ground-truth bounding box of the cat, among the rest the pair of the anchor box $A_1$ and the ground-truth bounding box of the dog has the largest IoU. So the class of $A_1$ is labeled as the dog. Next, we need to traverse through the remaining three unlabeled anchor boxes: $A_0$, $A_2$, and $A_3$. For $A_0$, the class of the ground-truth bounding box with the largest IoU is the dog, but the IoU is below the predefined threshold (0.5), so the class is labeled as background; for $A_2$, the class of the ground-truth bounding box with the largest IoU is the cat and the IoU exceeds the threshold, so the class is labeled as the cat; for $A_3$, the class of the ground-truth bounding box with the largest IoU is the cat, but the value is below the threshold, so the class is labeled as background.
labels[2]
tensor([[0, 1, 2, 0, 2]])
labels[2]
array([[0, 1, 2, 0, 2]], dtype=int32)
The second returned item is a mask variable of the shape (batch size, four times the number of anchor boxes). Every four elements in the mask variable correspond to the four offset values of each anchor box. Since we do not care about background detection, offsets of this negative class should not affect the objective function. Through elementwise multiplications, zeros in the mask variable will filter out negative class offsets before calculating the objective function.
labels[1]
tensor([[0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 1., 1., 1., 1.]])
labels[1]
array([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]], dtype=int32)
The first returned item contains the four offset values labeled for each anchor box. Note that the offsets of negative-class anchor boxes are labeled as zeros.
labels[0]
tensor([[-0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, 1.40e+00, 1.00e+01, 2.59e+00, 7.18e+00, -1.20e+00, 2.69e-01, 1.68e+00, -1.57e+00, -0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, -5.71e-01, -1.00e+00, 4.17e-06, 6.26e-01]])
labels[0]
array([[-0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, 1.40e+00, 1.00e+01, 2.59e+00, 7.18e+00, -1.20e+00, 2.69e-01, 1.68e+00, -1.57e+00, -0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, -5.71e-01, -1.00e+00, 4.17e-06, 6.26e-01]])
14.4.4. Predicting Bounding Boxes with Non-Maximum Suppression
During prediction, we generate multiple anchor boxes for the image and predict classes and offsets for each of them. A predicted bounding box is thus obtained according to an anchor box with its predicted offset. Below we implement the offset_inverse function that takes anchors and offset predictions as inputs and applies inverse offset transformations to return the predicted bounding box coordinates.
#@save
def offset_inverse(anchors, offset_preds):
    """Predict bounding boxes based on anchor boxes with predicted offsets."""
    anc = d2l.box_corner_to_center(anchors)
    pred_bbox_xy = (offset_preds[:, :2] * anc[:, 2:] / 10) + anc[:, :2]
    pred_bbox_wh = torch.exp(offset_preds[:, 2:] / 5) * anc[:, 2:]
    pred_bbox = torch.cat((pred_bbox_xy, pred_bbox_wh), axis=1)
    predicted_bbox = d2l.box_center_to_corner(pred_bbox)
    return predicted_bbox
#@save
def offset_inverse(anchors, offset_preds):
    """Predict bounding boxes based on anchor boxes with predicted offsets."""
    anc = d2l.box_corner_to_center(anchors)
    pred_bbox_xy = (offset_preds[:, :2] * anc[:, 2:] / 10) + anc[:, :2]
    pred_bbox_wh = np.exp(offset_preds[:, 2:] / 5) * anc[:, 2:]
    pred_bbox = np.concatenate((pred_bbox_xy, pred_bbox_wh), axis=1)
    predicted_bbox = d2l.box_center_to_corner(pred_bbox)
    return predicted_bbox
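Since offset_inverse undoes the offset_boxes transformation from Section 14.4.3.2, a round trip should recover the assigned box up to the small eps. Here is a minimal sketch (PyTorch, reusing the made-up boxes from the earlier offset check):

anc = torch.tensor([[0.1, 0.1, 0.5, 0.5]])
bb = torch.tensor([[0.2, 0.2, 0.6, 0.6]])
offset_inverse(anc, offset_boxes(anc, bb))  # tensor([[0.20, 0.20, 0.60, 0.60]])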
When there are many anchor boxes, many similar (with significant overlap) predicted bounding boxes can potentially be output for surrounding the same object. To simplify the output, we can merge similar predicted bounding boxes that belong to the same object by using non-maximum suppression (NMS).
Here is how non-maximum suppression works. For a predicted bounding box $B$, the object detection model calculates the predicted likelihood for each class. Denoting by $p$ the largest predicted likelihood, the class corresponding to this probability is the predicted class for $B$. Specifically, we refer to $p$ as the confidence (score) of the predicted bounding box $B$. On the same image, all the predicted non-background bounding boxes are sorted by confidence in descending order to generate a list $L$. Then we manipulate the sorted list $L$ in the following steps:
1. Select the predicted bounding box $B_1$ with the highest confidence from $L$ as a basis and remove all non-basis predicted bounding boxes whose IoU with $B_1$ exceeds a predefined threshold $\epsilon$ from $L$. At this point, $L$ keeps the predicted bounding box with the highest confidence but drops others that are too similar to it. In a nutshell, those with non-maximum confidence scores are suppressed.
2. Select the predicted bounding box $B_2$ with the second highest confidence from $L$ as another basis and remove all non-basis predicted bounding boxes whose IoU with $B_2$ exceeds $\epsilon$ from $L$.
3. Repeat the above process until all the predicted bounding boxes in $L$ have been used as a basis. At this time, the IoU of any pair of predicted bounding boxes in $L$ is below the threshold $\epsilon$; thus, no pair is too similar to each other.
4. Output all the predicted bounding boxes in the list $L$.
The following nms function sorts confidence scores in descending order and returns their indices.
#@save
def nms(boxes, scores, iou_threshold):
    """Sort confidence scores of predicted bounding boxes."""
    B = torch.argsort(scores, dim=-1, descending=True)
    keep = []  # Indices of predicted bounding boxes that will be kept
    while B.numel() > 0:
        i = B[0]
        keep.append(i)
        if B.numel() == 1: break
        iou = box_iou(boxes[i, :].reshape(-1, 4),
                      boxes[B[1:], :].reshape(-1, 4)).reshape(-1)
        inds = torch.nonzero(iou <= iou_threshold).reshape(-1)
        B = B[inds + 1]
    return torch.tensor(keep, device=boxes.device)
#@save
def nms(boxes, scores, iou_threshold):
    """Sort confidence scores of predicted bounding boxes."""
    B = scores.argsort()[::-1]
    keep = []  # Indices of predicted bounding boxes that will be kept
    while B.size > 0:
        i = B[0]
        keep.append(i)
        if B.size == 1: break
        iou = box_iou(boxes[i, :].reshape(-1, 4),
                      boxes[B[1:], :].reshape(-1, 4)).reshape(-1)
        inds = np.nonzero(iou <= iou_threshold)[0]
        B = B[inds + 1]
    return np.array(keep, dtype=np.int32, ctx=boxes.ctx)
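For intuition, here is a minimal sketch of nms in isolation (PyTorch version above; the boxes come from the example later in this section, while the distinct scores are made up). Boxes 1 and 2 overlap box 0 with IoUs of about 0.74 and 0.55, so with a threshold of 0.5 both are suppressed, while box 3 barely overlaps box 0 and survives:

boxes = torch.tensor([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
                      [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
scores = torch.tensor([0.9, 0.8, 0.7, 0.6])
nms(boxes, scores, iou_threshold=0.5)  # tensor([0, 3])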
We define the following multibox_detection to apply non-maximum suppression to predicted bounding boxes. Do not worry if you find the implementation a bit complicated: we will show how it works with a concrete example right after the implementation.
#@save
def multibox_detection(cls_probs, offset_preds, anchors, nms_threshold=0.5,
                       pos_threshold=0.009999999):
    """Predict bounding boxes using non-maximum suppression."""
    device, batch_size = cls_probs.device, cls_probs.shape[0]
    anchors = anchors.squeeze(0)
    num_classes, num_anchors = cls_probs.shape[1], cls_probs.shape[2]
    out = []
    for i in range(batch_size):
        cls_prob, offset_pred = cls_probs[i], offset_preds[i].reshape(-1, 4)
        conf, class_id = torch.max(cls_prob[1:], 0)
        predicted_bb = offset_inverse(anchors, offset_pred)
        keep = nms(predicted_bb, conf, nms_threshold)
        # Find all non-`keep` indices and set the class to background
        all_idx = torch.arange(num_anchors, dtype=torch.long, device=device)
        combined = torch.cat((keep, all_idx))
        uniques, counts = combined.unique(return_counts=True)
        non_keep = uniques[counts == 1]
        all_id_sorted = torch.cat((keep, non_keep))
        class_id[non_keep] = -1
        class_id = class_id[all_id_sorted]
        conf, predicted_bb = conf[all_id_sorted], predicted_bb[all_id_sorted]
        # Here `pos_threshold` is a threshold for positive (non-background)
        # predictions
        below_min_idx = (conf < pos_threshold)
        class_id[below_min_idx] = -1
        conf[below_min_idx] = 1 - conf[below_min_idx]
        pred_info = torch.cat((class_id.unsqueeze(1),
                               conf.unsqueeze(1),
                               predicted_bb), dim=1)
        out.append(pred_info)
    return torch.stack(out)
#@save
def multibox_detection(cls_probs, offset_preds, anchors, nms_threshold=0.5,
                       pos_threshold=0.009999999):
    """Predict bounding boxes using non-maximum suppression."""
    device, batch_size = cls_probs.ctx, cls_probs.shape[0]
    anchors = np.squeeze(anchors, axis=0)
    num_classes, num_anchors = cls_probs.shape[1], cls_probs.shape[2]
    out = []
    for i in range(batch_size):
        cls_prob, offset_pred = cls_probs[i], offset_preds[i].reshape(-1, 4)
        conf, class_id = np.max(cls_prob[1:], 0), np.argmax(cls_prob[1:], 0)
        predicted_bb = offset_inverse(anchors, offset_pred)
        keep = nms(predicted_bb, conf, nms_threshold)
        # Find all non-`keep` indices and set the class to background
        all_idx = np.arange(num_anchors, dtype=np.int32, ctx=device)
        combined = np.concatenate((keep, all_idx))
        unique, counts = np.unique(combined, return_counts=True)
        non_keep = unique[counts == 1]
        all_id_sorted = np.concatenate((keep, non_keep))
        class_id[non_keep] = -1
        class_id = class_id[all_id_sorted].astype('float32')
        conf, predicted_bb = conf[all_id_sorted], predicted_bb[all_id_sorted]
        # Here `pos_threshold` is a threshold for positive (non-background)
        # predictions
        below_min_idx = (conf < pos_threshold)
        class_id[below_min_idx] = -1
        conf[below_min_idx] = 1 - conf[below_min_idx]
        pred_info = np.concatenate((np.expand_dims(class_id, axis=1),
                                    np.expand_dims(conf, axis=1),
                                    predicted_bb), axis=1)
        out.append(pred_info)
    return np.stack(out)
Now let's apply the above implementations to a concrete example with four anchor boxes. For simplicity, we assume that the predicted offsets are all zeros. This means that the predicted bounding boxes are the anchor boxes. For each class among the background, dog, and cat, we also define its predicted likelihood.
anchors = torch.tensor([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
                        [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
offset_preds = torch.tensor([0] * anchors.numel())
cls_probs = torch.tensor([[0] * 4,  # Predicted background likelihood
                          [0.9, 0.8, 0.7, 0.1],  # Predicted dog likelihood
                          [0.1, 0.2, 0.3, 0.9]])  # Predicted cat likelihood
anchors = np.array([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
                    [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
offset_preds = np.array([0] * d2l.size(anchors))
cls_probs = np.array([[0] * 4,  # Predicted background likelihood
                      [0.9, 0.8, 0.7, 0.1],  # Predicted dog likelihood
                      [0.1, 0.2, 0.3, 0.9]])  # Predicted cat likelihood
We can plot these predicted bounding boxes with their confidence on the image.
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, anchors * bbox_scale,
            ['dog=0.9', 'dog=0.8', 'dog=0.7', 'cat=0.9'])
Now we can invoke the multibox_detection function to perform non-maximum suppression, where the threshold is set to 0.5. Note that we add a dimension for examples in the tensor input.
We can see that the shape of the returned result is (batch size, number of anchor boxes, 6). The six elements in the innermost dimension give the output information for the same predicted bounding box. The first element is the predicted class index, which starts from 0 (0 is dog and 1 is cat). The value -1 indicates background or removal in non-maximum suppression. The second element is the confidence of the predicted bounding box. The remaining four elements are the $(x, y)$-axis coordinates of the upper-left corner and the lower-right corner of the predicted bounding box, respectively (range is between 0 and 1).
output = multibox_detection(cls_probs.unsqueeze(dim=0),
                            offset_preds.unsqueeze(dim=0),
                            anchors.unsqueeze(dim=0), nms_threshold=0.5)
output
tensor([[[ 0.00,  0.90,  0.10,  0.08,  0.52,  0.92],
         [ 1.00,  0.90,  0.55,  0.20,  0.90,  0.88],
         [-1.00,  0.80,  0.08,  0.20,  0.56,  0.95],
         [-1.00,  0.70,  0.15,  0.30,  0.62,  0.91]]])
output = multibox_detection(np.expand_dims(cls_probs, axis=0),
                            np.expand_dims(offset_preds, axis=0),
                            np.expand_dims(anchors, axis=0),
                            nms_threshold=0.5)
output
array([[[ 1.  ,  0.9 ,  0.55,  0.2 ,  0.9 ,  0.88],
        [ 0.  ,  0.9 ,  0.1 ,  0.08,  0.52,  0.92],
        [-1.  ,  0.8 ,  0.08,  0.2 ,  0.56,  0.95],
        [-1.  ,  0.7 ,  0.15,  0.3 ,  0.62,  0.91]]])
After removing those predicted bounding boxes of class -1, we can output the final predicted bounding boxes kept by non-maximum suppression.
fig = d2l.plt.imshow(img)
for i in output[0].detach().numpy():
    if i[0] == -1:
        continue
    label = ('dog=', 'cat=')[int(i[0])] + str(i[1])
    show_bboxes(fig.axes, [torch.tensor(i[2:]) * bbox_scale], label)
fig = d2l.plt.imshow(img)
for i in output[0].asnumpy():
    if i[0] == -1:
        continue
    label = ('dog=', 'cat=')[int(i[0])] + str(i[1])
    show_bboxes(fig.axes, [np.array(i[2:]) * bbox_scale], label)
In practice, we can remove predicted bounding boxes with lower confidence even before performing non-maximum suppression, thereby reducing computation in this algorithm. We may also postprocess the output of non-maximum suppression, for example, by only keeping results with higher confidence in the final output. A sketch of the first idea follows.
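Here is a hedged sketch of such pre-filtering (PyTorch; the 0.3 cutoff and the scores are assumptions for illustration, not values from this section): predictions below a confidence cutoff are dropped before nms is even called, so NMS runs on fewer boxes.

boxes = torch.tensor([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
                      [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
scores = torch.tensor([0.9, 0.8, 0.02, 0.6])
conf_threshold = 0.3            # assumed cutoff; tune per application
mask = scores >= conf_threshold
keep = nms(boxes[mask], scores[mask], iou_threshold=0.5)
kept_boxes = boxes[mask][keep]  # note: `keep` indexes the filtered subset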
14.4.5. Summary
- We generate anchor boxes with different shapes centered on each pixel of the image.
- Intersection over union (IoU), also known as the Jaccard index, measures the similarity of two bounding boxes. It is the ratio of their intersection area to their union area.
- In a training set, we need two types of labels for each anchor box. One is the class of the object relevant to the anchor box and the other is the offset of the ground-truth bounding box relative to the anchor box.
- During prediction, we can use non-maximum suppression (NMS) to remove similar predicted bounding boxes, thereby simplifying the output.
14.4.6. Exercises
1. Change the values of sizes and ratios in the multibox_prior function. What are the changes to the generated anchor boxes?
2. Construct and visualize two bounding boxes with an IoU of 0.5. How do they overlap with each other?
3. Modify the variable anchors in Section 14.4.3 and Section 14.4.4. How do the results change?
4. Non-maximum suppression is a greedy algorithm that suppresses predicted bounding boxes by removing them. Is it possible that some of these removed ones are actually useful? How can this algorithm be modified to suppress softly? You may refer to Soft-NMS (Bodla et al., 2017).
5. Rather than being hand-crafted, can non-maximum suppression be learned?