@gkioxari Thanks for your great work on Mesh R-CNN!
I made a custom dataset following the Pix3D format. The training result (1100 iterations) is as follows:
[03/08 11:21:47 d2.utils.events]: eta: 0:00:00 iter: 1099 total_loss: 1.177 loss_cls: 0.02223 loss_box_reg: 0.0114 loss_z_reg: 0.01112 loss_mask: 0.05503 loss_voxel: 0.6004 loss_chamfer: 0.09796 loss_normals: 0.2424 loss_edge: 0.02533 loss_rpn_cls: 0.00436 loss_rpn_loc: 0.0009238 time: 0.4468 data_time: 0.0106 lr: 0.0001 max_mem: 2062M
[03/08 11:21:48 d2.engine.hooks]: Overall training speed: 1098 iterations in 0:08:10 (0.4468 s / it)
[03/08 11:21:48 d2.engine.hooks]: Total training time: 0:08:24 (0:00:14 on hooks)
However, the mesh AP at test time is always 0:
Distribution of testing instances among all 2 categories:
| category | #instances | boxAP | maskAP | meshAP |
|:--------:|:----------:|:-----:|:------:|:------:|
| A        | 55         | 1     | 1      | 0      |
| B        | 53         | 1     | 1      | 0      |
| total    | 108        | 1     | 1      | 0      |

```
[03/08 11:49:33 meshrcnn.evaluation.pix3d_evaluation]: Box AP 1.00000
[03/08 11:49:33 meshrcnn.evaluation.pix3d_evaluation]: Mask AP 1.00000
[03/08 11:49:33 meshrcnn.evaluation.pix3d_evaluation]: Mesh AP 0.00000
```
What could be the reason? Could the model, or the voxel data in my dataset, be wrong? The training losses look okay.
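One thing I could try is a minimal sanity check on the dataset's voxel grids before training. This is just a sketch, assuming the voxels load as binary occupancy arrays (the function name `check_voxel_grid` and the thresholds are my own, not part of Mesh R-CNN):

```python
import numpy as np

def check_voxel_grid(vox, min_frac=1e-4):
    """Basic sanity checks for a binary voxel occupancy grid.

    Returns the occupied fraction; raises if the grid is empty or
    not binary, either of which would make the 3D targets meaningless.
    """
    vox = np.asarray(vox)
    values = np.unique(vox)
    if not np.all(np.isin(values, [0, 1])):
        raise ValueError(f"voxel grid is not binary: values {values[:5]}")
    frac = float(vox.mean())
    if frac < min_frac:
        raise ValueError(f"voxel grid is nearly empty: occupancy {frac:.2e}")
    return frac

# Example on a synthetic 32^3 grid with a filled cube in the middle.
grid = np.zeros((32, 32, 32), dtype=np.uint8)
grid[8:24, 8:24, 8:24] = 1
print(f"occupancy = {check_voxel_grid(grid):.4f}")  # 16^3 / 32^3 = 0.1250
```

Running something like this over every annotation would at least rule out empty or corrupted voxel targets as the cause.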
I would greatly appreciate it if you have time to reply!
Thanks!