backscatter coefficient; polarization

convNet

test Epoch 21 train_loss:0.14493, test_loss 0.19278, best_test_loss 0.19278, accuracy 90.34483 0.3309

test Epoch 34, lr: 0.00000 best_test_loss 0.19360, test_accuracy 86.20690, train_loss:0.18985, test_loss 0.19360 0.2705

test Epoch 43, lr: 0.00000 best_test_loss 0.16528, test_accuracy 93.10345, train_loss:0.25169, test_loss 0.16528 0.1945 rotation augmentation is awesome?

test Epoch 37, lr: 0.00000 best_test_loss 0.22386, test_accuracy 91.03448, train_loss:0.27736, test_loss 0.22386 0.2854

test Epoch 29, lr: 0.00000 best_test_loss 0.19650, test_accuracy 88.96552, train_loss:0.31724, test_loss 0.19650 0.2399

test Epoch 99, lr: 0.00156250 best_test_loss 0.15662, test_accuracy 86.89655, train_loss:0.17001, test_loss 0.32298 0.1978

test Epoch 99, lr: 0.00004883 best_test_loss 0.15658, test_accuracy 87.58621, train_loss:0.17496, test_loss 0.33669

test Epoch 99, lr: 0.00039063 best_test_loss 0.15901, test_accuracy 80.68966, train_loss:0.40502, test_loss 0.48465

test Epoch 99, lr: 0.00004883 best_test_loss 0.15368, test_accuracy 85.51724, train_loss:0.20466, test_loss 0.54482

Overfitting??

test Epoch 99, lr: 0.00019531 best_test_loss 0.13829, test_accuracy 91.72414, train_loss:0.14624, test_loss 0.21511 0.2354

test Epoch 99, lr: 0.00039063 best_test_loss 0.15874, test_accuracy 86.89655, train_loss:0.10851, test_loss 0.51559

test Epoch 99, lr: 0.00002441 best_test_loss 0.13738, test_accuracy 86.89655, train_loss:0.13139, test_loss 0.61632

test Epoch 99, lr: 0.00039063 best_test_loss 0.23778, test_accuracy 83.56164, train_loss:0.18233, test_loss 0.28511

test Epoch 99, lr: 0.00625000 best_test_loss 0.22963, test_accuracy 86.30137, train_loss:0.23748, test_loss 0.34087

res18 pretrained

average test loss:0.204781

average test loss:0.223527

average test loss:0.216351

average test loss:0.272495

average test loss:0.235534

average test loss:0.273817

average test loss:0.235803

resModel

average test loss:0.214413

average test loss:0.237254

average test loss:0.171246

average test loss:0.205263

average test loss:0.205097, average train loss:.0.181294 LB 0.2127

average test loss:0.233942

average test loss:0.208232, average train loss:.0.153128

average test loss:0.192417, average train loss:.0.206058

average test loss:0.185343, average train loss:.0.203189

average test loss:0.236775, average train loss:.0.201195 the results are too random

average test loss:0.223221, average train loss:.0.202647

average test loss:0.233810, average train loss:.0.200464

average test loss:0.233780, average train loss:.0.076921

average test loss:0.209401, average train loss:.0.068871 stack

vgg16

ZCA whitening
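
A minimal ZCA sketch in NumPy, assuming flattened images and a feature dimension small enough for an explicit covariance matrix:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten a batch of flattened images X of shape (N, D)."""
    X = X - X.mean(axis=0)                 # per-feature centering
    cov = np.cov(X, rowvar=False)          # (D, D) covariance
    U, S, _ = np.linalg.svd(cov)           # eigenvectors / eigenvalues
    # rotate, rescale by 1/sqrt(eigenvalue), rotate back (stays in pixel space)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return X @ W
```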

fold 0, Epoch 22, lr: 0.01000000 best_test_loss 0.67068, train_loss:0.76683, test_loss 0.85430

Preprocess the input by applying Batch Normalization directly to it.
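
A sketch of that idea, assuming 2-channel inputs (the HH/HV bands); the layer sizes are placeholders:

```python
import torch.nn as nn

class ConvNet(nn.Module):
    """A BatchNorm2d before the first conv acts as a learned
    per-channel standardization of the raw input."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.input_bn = nn.BatchNorm2d(in_ch)   # normalizes the raw input
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(self.input_bn(x))
```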

average test loss:0.196119, average train loss:.0.211353 LB 0.1960

Add noise to the final output layer so the sigmoid outputs separate, instead of clustering around 0.5.

Add noise at the sigmoid to push the output toward binary values; see the autoencoder chapter of Deep Learning.

average test loss:0.199828, average train loss:.0.085958 LB 0.1608 what happened??
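
A sketch of the trick, with a hypothetical noise_std; noise is injected into the logit only during training, so the network must push logits far from 0 (outputs away from 0.5) to stay confident despite the noise:

```python
import torch
import torch.nn as nn

class NoisySigmoidHead(nn.Module):
    def __init__(self, in_features, noise_std=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, 1)
        self.noise_std = noise_std

    def forward(self, x):
        logit = self.fc(x)
        if self.training:
            # Gaussian noise on the pre-sigmoid activation, train-time only
            logit = logit + torch.randn_like(logit) * self.noise_std
        return torch.sigmoid(logit)
```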

Remove the dropout layers from ConvNet.

Is it as the paper says, that once a BN layer is present the bias really is theoretically redundant? And what about dropout?
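
A sketch of the conv/BN block that drops the redundant bias; BN's mean subtraction cancels any conv bias, and BN's own beta takes over the shifting role:

```python
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1, bias=False),  # bias=False: BN makes it redundant
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
```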

Write an ensemble method for predict.
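
A minimal sketch of such an ensemble; `models` is assumed to hold the per-fold networks:

```python
import torch

@torch.no_grad()
def predict_ensemble(models, x):
    """Average the sigmoid outputs of the per-fold models."""
    probs = []
    for model in models:
        model.eval()
        probs.append(model(x))      # each model outputs P(iceberg)
    return torch.stack(probs).mean(dim=0)
```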

Look at ice and ship images side by side to spot the differences.

Can't tell them apart; but the bright spot's position varies a lot, and so does its size.

Try a deformable CNN.
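
For illustration only, a deformable block using torchvision's DeformConv2d (a later addition to torchvision, not necessarily what was used here): a small conv predicts the sampling offsets, and the deformable conv consumes them.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # offsets: 2 values (dy, dx) per kernel position
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))
```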

average test loss:0.233655, average train loss:.0.050323

average test loss:0.228653, average train loss:.0.051385

average test loss:0.221123, average train loss:.0.060002

average test loss:0.222726, average train loss:.0.149231

average test loss:0.232851, average train loss:.0.064902

average test loss:0.240076, average train loss:.0.007134

average test loss:0.238063, average train loss:.0.021351

average test loss:0.194955, average train loss:.0.110455 data augmentation helps?

average test loss:0.183261, average train loss:.0.097679 LB 0.1932 0.1997

spatial transformer network
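
A compact STN sketch; channel counts are assumed, and the localization net is initialized to the identity transform so training starts undistorted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """A small localization net predicts an affine matrix; affine_grid
    and grid_sample then resample the input accordingly."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.loc_features = nn.Sequential(
            nn.Conv2d(in_ch, 8, 7), nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.loc_fc = nn.Linear(10, 6)
        # start at the identity transform
        self.loc_fc.weight.data.zero_()
        self.loc_fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc_fc(self.loc_features(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```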

average test loss:0.204144, average train loss:.0.181775 LB 0.1729

average test loss:0.214501, average train loss:.0.100599 stack

average test loss:0.183260, average train loss:.0.092487

average test loss:0.324347, average train loss:.0.210049 some runs good, some bad

average test loss:0.341616, average train loss:.0.258920 genuinely bad

Select the pixels whose brightness exceeds mean + 2× variance, then box them in. Doesn't work at all.
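
The attempted selection, sketched in NumPy (std is used where the note says variance, since that is the conventional reading):

```python
import numpy as np

def bright_spot_bbox(img, k=2.0):
    """Mask pixels brighter than mean + k*std and return the
    bounding box of the mask, or None if nothing qualifies."""
    mask = img > img.mean() + k * img.std()
    ys, xs = np.where(mask)
    if len(ys) == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```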

average test loss:0.219151, average train loss:.0.043743 LB 0.1828 no effect?

average test loss:0.262968, average train loss:.0.121245 LB 0.1999 no effect either

average test loss:0.205066, average train loss:.0.044473

average test loss:0.240381, average train loss:.0.039588

average test loss:0.219999, average train loss:.0.054592

average test loss:0.180112, average train loss:.0.095212 not cropping is better

average test loss:0.237561, average train loss:.0.012233 severe overfitting; time to look for ways to reduce it instead of trying yet another network

average test loss:0.246844, average train loss:.0.010427

average test loss:0.221798, average train loss:.0.109564

average test loss:0.220774, average train loss:.0.076189

average test loss:0.297814, average train loss:.0.061193 overfitting is severe

Feed inc_angle into the fully connected layer.
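
A sketch of the concatenation; the feature dimension 512 is hypothetical:

```python
import torch
import torch.nn as nn

class AngleHead(nn.Module):
    """Concatenate the scalar inc_angle onto the flattened conv
    features before the classifier."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(True),
            nn.Linear(256, 1),
        )

    def forward(self, feats, inc_angle):
        # inc_angle: shape (N, 1), ideally normalized first (see notes below)
        return self.fc(torch.cat([feats, inc_angle], dim=1))
```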

average test loss:0.159343, average train loss:.0.024036 the loss oscillates badly

average test loss:0.207845, average train loss:.0.090437 after normalizing it, same result as not adding it

average test loss:0.164736, average train loss:.0.070337 oscillation too severe

average test loss:0.212593, average train loss:.0.074800 even worse

Could it be that training simply never converges?

average test loss:0.150993, average train loss:.0.053462 really good; the oscillation is smaller than with mean filling

average test loss:0.225410, average train loss:.0.036134

Diverged.

average test loss:0.373767, average train loss:.0.251022 bad

Add a lateral structure.

Use PReLU in the first convolution layer.

average test loss:0.232106, average train loss:.0.092780

Change the parameter initialization to reduce the oscillation and non-convergence.

Check what PyTorch's default initialization is: xavier-uniform.
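
A sketch of forcing Xavier-uniform everywhere, regardless of the default:

```python
import torch.nn as nn

def init_weights(m):
    """Re-initialize conv/linear layers with Xavier-uniform."""
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# usage: model.apply(init_weights)
```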

average test loss:0.206330, average train loss:.0.221600

average test loss:0.182640, average train loss:.0.088378 slightly better

average test loss:0.181852, average train loss:.0.115198

average test loss:0.194272, average train loss:.0.064499

average test loss:0.201961, average train loss:.0.081981

average test loss:0.148888, average train loss:.0.074843 increasing weight decay reduces overfitting

average test loss:0.179505, average train loss:.0.074203 results still unstable; one fold hits 0.13; LB 0.1861

average test loss:0.208659, average train loss:.0.089634 one fold fails

Use $2\times 2$ convolution kernels to preserve fine detail.

Explain fractional pooling.
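
Fractional max pooling (Graham, 2014) downsamples by a non-integer factor, e.g. ~1.4x instead of 2x, using pseudo-random pooling regions, which acts as a mild regularizer. PyTorch exposes it as nn.FractionalMaxPool2d; a sketch:

```python
import torch.nn as nn

pool = nn.FractionalMaxPool2d(kernel_size=2, output_ratio=0.7)
# a 2x75x75 input comes out roughly 2x52x52 (75 * 0.7, floored)
```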

Replace dropout with strided conv2d.
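
A sketch of the swap (channel sizes are placeholders): instead of conv, pool, then dropout, downsample directly with a strided convolution, All-CNN style, and drop the dropout layer entirely.

```python
import torch.nn as nn

before = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True),
    nn.MaxPool2d(2), nn.Dropout2d(0.2),
)
after = nn.Sequential(
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(True),
)
```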

average test loss:0.173561, average train loss:.0.077523 LB 0.2195

resModel

average test loss:0.228612, average train loss:.0.202333

average test loss:0.247735, average train loss:.0.149113

average test loss:0.275267, average train loss:.0.185022

average test loss:0.233755, average train loss:.0.178425

average test loss:0.254026, average train loss:.0.149570

average test loss:0.355445, average train loss:.0.066586 cropping leads to overfitting

average test loss:0.266209, average train loss:.0.181895

Different losses give different output distributions?

average test loss:0.242483, average train loss:.0.180673

average test loss:0.252653, average train loss:.0.172899?

average test loss:0.242949, average train loss:.0.169888

average test loss:0.238308, average train loss:.0.178249

average test loss:0.232003, average train loss:.0.077536

lateral model

average test loss:0.224093, average train loss:.0.185493 LB 0.222

average test loss:0.238502, average train loss:.0.183955

average test loss:0.201659, average train loss:.0.100682

average test loss:0.214560, average train loss:.0.105047

average test loss:0.426968, average train loss:.0.335662

average test loss:0.418621, average train loss:.0.132600

average test loss:0.409492, average train loss:.0.137095

average test loss:0.368851, average train loss:.0.151549

average test loss:0.393786, average train loss:.0.111288

average test loss:0.371570, average train loss:.0.126198

average test loss:0.353302, average train loss:.0.113255

average test loss:0.372462, average train loss:.0.110490

average test loss:0.369433, average train loss:.0.123947

average test loss:0.354283, average train loss:.0.146388

average test loss:0.377611, average train loss:.0.099520

average test loss:0.388231, average train loss:.0.141772

average test loss:0.412088, average train loss:.0.292780

average test loss:0.337630, average train loss:.0.125791 acceptable

average test loss:0.369177, average train loss:.0.022962

average test loss:0.363353, average train loss:.0.200878

average test loss:0.340855, average train loss:.0.187562

average test loss:0.398085, average train loss:.0.345799

Doesn't converge.

average test loss:0.317586, average train loss:.0.135877 impressive

average test loss:0.366345, average train loss:.0.126594 results are very random

average test loss:0.348928, average train loss:.0.136806

average test loss:0.390565, average train loss:.0.132121

average test loss:0.358188, average train loss:.0.133969

average test loss:0.348081, average train loss:.0.134984

average test loss:0.337618, average train loss:.0.145867

average test loss:0.307901, average train loss:.0.173047

average test loss:0.317608, average train loss:.0.159305

average test loss:0.390299, average train loss:.0.300357

average test loss:0.365219, average train loss:.0.245349

average test loss:0.379116, average train loss:.0.318695

average test loss:0.342046, average train loss:.0.189783

average test loss:0.353239, average train loss:.0.299657

average test loss:0.376087, average train loss:.0.318330

average test loss:0.388909, average train loss:.0.298582

Unreliable.

average test loss:0.408406, average train loss:.0.330217

average test loss:0.324028, average train loss:.0.270278

average test loss:0.307347, average train loss:.0.240791

average test loss:0.343076, average train loss:.0.243124

average test loss:0.307148, average train loss:.0.192440

average test loss:0.305122, average train loss:.0.190761

average test loss:0.326102, average train loss:.0.228217

average test loss:0.080338, average train loss:.0.015431

average test loss:0.044228, average train loss:.0.007685

average test loss:0.078167, average train loss:.0.024920

average test loss:0.073738, average train loss:.0.016546

folder:0 best test loss:0.03260 best train loss:0.02173

folder:1 best test loss:0.05762 best train loss:0.01672

folder:2 best test loss:0.00517 best train loss:0.01049

folder:3 best test loss:0.02128 best train loss:0.02577

folder:4 best test loss:0.11475 best train loss:0.01656

average test loss:0.046285, average train loss:.0.018255 LB 0.208

LB 0.1535

LB 0.1446 convNet(stn)+getNet

LB 0.1341 resModel+convNet(stn)+getNet

LB 0.1347 +laternel 0.1626

LB 0.1423 +outsModel