Hi Authors,
Thank you for your great work! It inspired me a lot. I'm really looking forward to the code release for Cond P-Diff. May I ask when you expect it to be available?
Besides, I have a question about Cond P-Diff. I see that the CV task in this paper is style image generation, and that Cond P-Diff generates parameters according to the condition, namely the style image. When you test Cond P-Diff, do you give it a style image it was trained on, or a totally new, unseen style? For example, training Cond P-Diff on 10 style-parameter pairs and testing on another 5 styles.
I noticed that in the Appendix you mention the style-continuous dataset and the generalizability of Cond P-Diff to generating parameters for styles in a range not covered by the training set. But I would like to discuss this with you: do you think it can generate parameters for a totally unseen style? Or do you have any insight about this?
Really appreciate your response and great work. Thank you!
Best,
Lijun