Model/Dataset/Scheduler description
Hi, I'm trying to add an attention layer to my own detector, but I'm running into a couple of problems.
The first is that the training time per epoch keeps increasing. It may be caused by a memory leak, but I can't find the reason.
The second is that I can't tell whether the weights are actually being optimized.
My source code is below.
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.runner import force_fp32, BaseModule
from torch.nn import functional as F
from ..builder import DETECTORS
from .mvx_two_stage import MVXTwoStageDetector
from mmdet3d.core import bbox3d2result, merge_aug_bboxes_3d
from mmdet3d.models.backbones.voxel_fusion_layer import build_voxel_fusion_layer
from mmdet3d.models.backbones.middle_fusion_layer import build_middle_fusion_layer
from mmdet3d.models.builder import (build_backbone, build_middle_encoder,
                                    build_neck, build_voxel_encoder)
from mmcv.ops import Voxelization
import torch.nn as nn
import torch.nn.init as init
import torch.profiler as profiler
import time
from mmcv.utils import Registry

MLP_LAYER = Registry('mlp_layer')
VELOCITY_OFFSET = Registry('velocity_offset')
@MLP_LAYER.register_module()
class MLP(nn.Module):

    def __init__(self):
        super(MLP, self).__init__()
        self.linear = nn.Sequential(
            nn.Linear(10, 32),
            nn.LayerNorm(32),
            nn.ReLU(),
            nn.Linear(32, 64),
            nn.ReLU(),
            nn.Linear(64, 1)
        )
        self.init_weights()
        print("MLP Initialization")

    def init_weights(self):
        for layer in self.linear:
            if isinstance(layer, nn.Linear):
                init.kaiming_normal_(layer.weight, mode='fan_in', nonlinearity='relu')
                if layer.bias is not None:
                    init.constant_(layer.bias, 0)

    def forward(self, x):
        out = self.linear(x)
        # Note: out is not a leaf tensor, so out.grad stays None unless out.retain_grad() is called.
        print(f'out.grad : {out.grad}')
        return out
@VELOCITY_OFFSET.register_module()
class VelocityAttention(nn.Module):

    def __init__(self, max_pairs=50):
        super(VelocityAttention, self).__init__()
        print("VelocityAttention Initialization")
        self.max_pairs = max_pairs
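To investigate the second problem, here is a minimal, self-contained sketch (not part of the detector; the optimizer, learning rate, and dummy batch are placeholders) of how one could check whether the MLP parameters receive gradients and actually change after an optimizer step. Inspecting the parameter gradients directly should be more informative than the out.grad print in forward().

import torch

mlp = MLP().cuda()
optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-3)  # placeholder optimizer/lr

# Snapshot the parameters before the update.
before = {name: p.detach().clone() for name, p in mlp.named_parameters()}

x = torch.randn(4, 10, device='cuda')  # dummy batch; 10 matches the first nn.Linear
loss = mlp(x).sum()
loss.backward()

# Gradients live on the parameters, not on the output tensor.
for name, p in mlp.named_parameters():
    grad_norm = None if p.grad is None else p.grad.norm().item()
    print(name, 'grad norm:', grad_norm)

optimizer.step()

# If the optimizer is updating the MLP, the parameters should differ from the snapshot.
for name, p in mlp.named_parameters():
    print(name, 'changed after step:', not torch.equal(before[name], p.detach()))

If the "changed" flags stay False, or all gradient norms are None or zero, the MLP output is probably not contributing to the loss that is back-propagated in the detector.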
In the main detector model, I instantiate my attention modules like this:
self.MLPNet = MLP().cuda()
self.VelocityAttention = VelocityAttention(max_pairs=max_pairs).to('cuda')
and in forward(), I use this layer like this:
velo_det_attention, velo_gt = self.VelocityAttention(gt_bboxes_3d, rad_points, bbox_list, self.MLPNet)
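For the first problem (per-epoch training time growing), one common cause is keeping references to tensors that are still attached to the autograd graph, e.g. logging losses or attention outputs without .detach()/.item(), so every stored value keeps its whole graph alive and memory grows each iteration. Below is a minimal sketch of that pattern and of how the growth could be watched with torch.cuda.memory_allocated() and torch.profiler (train_step and loss_history are placeholders for illustration, not my actual training loop):

import torch
from torch.profiler import ProfilerActivity, profile, schedule

loss_history = []

def log_loss_leaky(loss):
    # Keeps the whole autograd graph of `loss` alive -> memory grows every step.
    loss_history.append(loss)

def log_loss_ok(loss):
    # Stores a plain Python float; the graph can be freed after backward().
    loss_history.append(loss.detach().item())

def report_cuda_memory(step):
    if torch.cuda.is_available():
        mib = torch.cuda.memory_allocated() / 1024 ** 2
        print(f'step {step}: {mib:.1f} MiB allocated')

def profile_some_steps(train_step, num_steps=5):
    # `train_step` is a placeholder for one training iteration of the detector.
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
                 schedule=schedule(wait=1, warmup=1, active=3),
                 profile_memory=True) as prof:
        for step in range(num_steps):
            train_step()
            report_cuda_memory(step)
            prof.step()
    print(prof.key_averages().table(sort_by='self_cuda_memory_usage', row_limit=15))

If the reported memory keeps climbing across steps even with the detached logging, the leak is probably somewhere else, e.g. tensors cached on self inside the attention modules or growing lists built in forward().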
Open source status
Provide useful links for the implementation
No response