This repository was archived by the owner on Nov 21, 2023. It is now read-only.

Question about post_nms_topN size in collect_and_distribute_fpn_rpn_proposals.py #459

@roytseng-tw

Description


Expected results

https://github.com/facebookresearch/Detectron/blob/master/detectron/ops/collect_and_distribute_fpn_rpn_proposals.py#L73

Since FPN_RPN collects RoIs across all images (batches) on a GPU, it would be more reasonable for the collect size post_nms_topN to scale with the number of images per GPU, cfg.TRAIN.IMS_PER_BATCH. For example:

post_nms_topN = cfg.TRAIN.IMS_PER_BATCH * cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N

Note:
There is no cfg[cfg_key].FPN_RPN_POST_NMS_TOP_N (for cfg_key in {'TRAIN', 'TEST'}) defined in config.py yet.
To define it, I simply follow the convention of cfg[cfg_key].RPN_POST_NMS_TOP_N: the maximum number of post-NMS RoIs per batch.
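For illustration, a minimal sketch of how the proposed key would be consumed (plain dicts stand in for Detectron's AttrDict-based config; the per-image budget values here are hypothetical, chosen only to mirror the defaults discussed below):

```python
# Minimal stand-in for Detectron's cfg (hypothetical values; plain dicts
# replace the real AttrDict-based config object).
cfg = {
    'TRAIN': {'IMS_PER_BATCH': 2, 'FPN_RPN_POST_NMS_TOP_N': 1000},
    'TEST':  {'IMS_PER_BATCH': 1, 'FPN_RPN_POST_NMS_TOP_N': 1000},
}

def fpn_collect_size(cfg, cfg_key):
    # Proposed behavior: scale the per-image post-NMS budget by the
    # number of images per GPU, instead of using a fixed total.
    return cfg[cfg_key]['IMS_PER_BATCH'] * cfg[cfg_key]['FPN_RPN_POST_NMS_TOP_N']
```

With these hypothetical values, training (2 images per GPU) collects 2000 RoIs and testing (1 image) collects 1000, matching the current defaults.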

Taking e2e_mask_rcnn_R-50-FPN_1x.yaml as an example, the config file should be changed as below to keep the original behavior.

(omit above)
TRAIN:
  SCALES: (800,)
  MAX_SIZE: 1333
  BATCH_SIZE_PER_IM: 512
  RPN_PRE_NMS_TOP_N: 2000  # Per FPN level
  FPN_RPN_POST_NMS_TOP_N: 1000  # Per image
TEST:
  SCALE: 800
  MAX_SIZE: 1333
  NMS: 0.5
  RPN_PRE_NMS_TOP_N: 1000  # Per FPN level
  RPN_POST_NMS_TOP_N: 1000
  FPN_RPN_POST_NMS_TOP_N: 1000  # Per image

Note: TRAIN.IMS_PER_BATCH is 2, and at test time images-per-GPU is always 1.

Actual results

For now, when training with a TRAIN.IMS_PER_BATCH other than 2, the post-NMS collect size of FPN_RPN is unchanged. I think that may not be the desired behavior.

|         | IMS_PER_BATCH | FPN_RPN post-NMS collect size |
|---------|---------------|-------------------------------|
| Default | 2             | 2000                          |
| Changed | 1             | 2000                          |
| Changed | 4             | 2000                          |

Again, "FPN_RPN post nms collect size" is a total rois size for all the batches in one gpu.

Detailed steps to reproduce

None.

System information

Irrelevant.

