Reproducibility issue on MoCA-Mask: Training details and checkpoint selection #4

@hahaha78123

Hi authors,
Thanks for your great work and open-source code!
I am trying to reproduce your baseline results on the MoCA-Mask dataset. I am training on an A800 server and followed your hyperparameters exactly (batch size = 4, 2.5k iterations, etc.). However, my reproduced results are noticeably lower than the reported ones.
Here is the comparison (reported SAM-PM vs. my reproduction):

| Metric  | Reported | My reproduction |
|---------|----------|-----------------|
| S_alpha | 0.728    | 0.700           |
| F_beta  | 0.567    | 0.503           |
| E_phi   | 0.813    | 0.756           |
| MAE (M) | 0.009    | 0.010           |
| mDic    | 0.594    | 0.527           |
| mIoU    | 0.502    | 0.443           |
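For reference, here is a minimal sketch of how I compute per-frame Dice and IoU and average them over frames, assuming binary masks. The function names are mine, and the official MoCA-Mask evaluation toolkit may differ in details (thresholding, empty-mask handling), so please correct me if my evaluation does not match yours:

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice and IoU for a single binary mask pair."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

def mean_scores(preds, gts):
    """Average Dice/IoU over all frames (mDice / mIoU)."""
    scores = [dice_iou(p, g) for p, g in zip(preds, gts)]
    dices, ious = zip(*scores)
    return float(np.mean(dices)), float(np.mean(ious))
```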
Could you please clarify the following two questions to help me align with your results?
(1) Training Details: Are there any extra training details, tricks, or specific settings not mentioned in the README that I should pay attention to?
(2) Checkpoint Selection: For the final test result (0.502 mIoU), did you use the checkpoint from the very last iteration/epoch, or did you select an intermediate epoch's checkpoint? If it's not the last epoch, how was it selected?
Thanks for your time!
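In case it helps clarify question (2): if the reported number came from an intermediate checkpoint, I would guess a selection like the sketch below, i.e. evaluating every saved checkpoint on a validation split and keeping the one with the highest mIoU. The `evaluate_miou` callback and the `.pth` checkpoint layout are my assumptions, not your actual procedure:

```python
from pathlib import Path

def select_best_checkpoint(ckpt_dir: str, evaluate_miou):
    """Return (path, mIoU) of the checkpoint scoring best on validation.

    `evaluate_miou(path)` is a user-supplied function that runs the
    validation loop for one checkpoint and returns its mIoU.
    """
    best_path, best_miou = None, float("-inf")
    for ckpt in sorted(Path(ckpt_dir).glob("*.pth")):
        miou = evaluate_miou(str(ckpt))  # validation pass for this checkpoint
        if miou > best_miou:
            best_path, best_miou = str(ckpt), miou
    return best_path, best_miou
```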
