Expected results
https://github.com/facebookresearch/Detectron/blob/master/detectron/ops/collect_and_distribute_fpn_rpn_proposals.py#L73
Since the FPN RPN collects RoIs across all images (batches) within a GPU, it would be more reasonable for the collect size `post_nms_topN` to scale with the number of images per GPU, `cfg.TRAIN.IMS_PER_BATCH`. Something like:

post_nms_topN = cfg.TRAIN.IMS_PER_BATCH * cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N
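The collect step in question can be sketched as follows. This is a minimal, hypothetical illustration of the proposed scaling, not Detectron's actual code; `collect_top_proposals` and its argument names are made up for this sketch:

```python
import numpy as np

def collect_top_proposals(rois_per_level, scores_per_level,
                          ims_per_batch, fpn_rpn_post_nms_top_n):
    """Collect RoIs from all FPN levels of all images on one GPU and keep
    the top-scoring ones (hypothetical sketch of the proposed change)."""
    # Proposed: scale the collect size by the number of images per GPU,
    # instead of reading a fixed cfg[cfg_key].RPN_POST_NMS_TOP_N.
    post_nms_topN = ims_per_batch * fpn_rpn_post_nms_top_n
    rois = np.concatenate(rois_per_level)
    scores = np.concatenate(scores_per_level)
    # Keep the globally top-scoring proposals across all levels and images.
    keep = np.argsort(-scores)[:post_nms_topN]
    return rois[keep], scores[keep]
```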
Note:
There is no `cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N` (for `cfg_key` in `{'TRAIN', 'TEST'}`) defined in `config.py` yet.
To define them, I simply follow the convention of `cfg[cfg_key].RPN_POST_NMS_TOP_N`: the maximum number of post-NMS RoIs per batch.
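The declarations could mirror the existing keys in `detectron/core/config.py`. This is only a sketch: `__C` is the module-level AttrDict that `config.py` already builds `cfg` from, and the values shown are the per-image defaults that keep the original totals:

```python
# Hypothetical additions to detectron/core/config.py, placed next to the
# existing RPN_POST_NMS_TOP_N entries. With TRAIN.IMS_PER_BATCH = 2 these
# values keep the original total collect sizes
# (2 * 1000 = 2000 at train time, 1 * 1000 = 1000 at test time).
__C.TRAIN.FPN_RPN_POST_NMS_TOP_N = 1000  # per image
__C.TEST.FPN_RPN_POST_NMS_TOP_N = 1000   # per image
```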
Take `e2e_mask_rcnn_R-50-FPN_1x.yaml` as an example; the config file should be changed as below to keep the original behavior.
(omit above)
```yaml
TRAIN:
  SCALES: (800,)
  MAX_SIZE: 1333
  BATCH_SIZE_PER_IM: 512
  RPN_PRE_NMS_TOP_N: 2000  # Per FPN level
  FPN_RPN_POST_NMS_TOP_N: 1000  # Per image
TEST:
  SCALE: 800
  MAX_SIZE: 1333
  NMS: 0.5
  RPN_PRE_NMS_TOP_N: 1000  # Per FPN level
  RPN_POST_NMS_TOP_N: 1000
  FPN_RPN_POST_NMS_TOP_N: 1000  # Per image
```
Note: `TRAIN.IMS_PER_BATCH` is 2, and at test time images-per-GPU is always 1.
Actual results
Currently, when training with a `TRAIN.IMS_PER_BATCH` other than 2, the post-NMS collect size of the FPN RPN is unchanged. I think this may not be the desired behavior.
| | IMS_PER_BATCH | FPN_RPN post nms collect size |
| --- | --- | --- |
| Default | 2 | 2000 |
| Changed | 1 | 2000 |
| Changed | 4 | 2000 |
Again, "FPN_RPN post nms collect size" is the total number of RoIs across all images on one GPU.
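The arithmetic behind the table can be checked with a tiny script. The fixed value 2000 and the per-image value 1000 come from the `e2e_mask_rcnn_R-50-FPN_1x.yaml` example above; the function names are made up for illustration:

```python
# Current behavior: the collect size is a fixed config value and never
# consults how many images are batched per GPU.
def current_collect_size(ims_per_batch, rpn_post_nms_top_n=2000):
    return rpn_post_nms_top_n  # ims_per_batch is ignored

# Proposed behavior: scale a per-image value by IMS_PER_BATCH.
def proposed_collect_size(ims_per_batch, fpn_rpn_post_nms_top_n=1000):
    return ims_per_batch * fpn_rpn_post_nms_top_n

for n in (1, 2, 4):
    print(n, current_collect_size(n), proposed_collect_size(n))
# Only ims_per_batch == 2 yields the same total under both schemes.
```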
Detailed steps to reproduce
None.
System information
Irrelevant.