The paper says y = upsample(m) + Δm, but it seems that y is predicted by the RenderNet, which takes m as one of its inputs. I also cannot find L_mask. Is the code different from the paper?
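For reference, a minimal sketch of the formula as written in the paper, y = upsample(m) + Δm: the coarse mask is upsampled and a residual is added. The module, argument names, and shapes here are illustrative assumptions, not the repository's actual RenderNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskRefiner(nn.Module):
    """Hypothetical head illustrating y = upsample(m) + delta_m."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # Predicts the residual delta_m from high-resolution render features.
        self.delta_head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, coarse_mask: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # coarse_mask: (B, num_classes, h, w) low-resolution segmentation logits m
        # feat:        (B, feat_dim, H, W)    high-resolution render features
        up = F.interpolate(coarse_mask, size=feat.shape[-2:],
                           mode="bilinear", align_corners=False)  # upsample(m)
        delta = self.delta_head(feat)                              # delta_m
        return up + delta                                          # y = upsample(m) + delta_m

# Usage with dummy tensors
refiner = MaskRefiner(num_classes=13, feat_dim=64)
m = torch.randn(1, 13, 16, 16)
feat = torch.randn(1, 64, 64, 64)
y = refiner(m, feat)  # (1, 13, 64, 64)
```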
Hi,
Do you use detach_texture in the LocalGenerator? I see it is False by default; why not True? I think the image should only affect the texture feature, not the depth feature.
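For clarity, here is a minimal sketch of the gradient-blocking pattern this question refers to. The flag name detach_texture and the two-branch layout are assumptions about the LocalGenerator made for illustration; only the .detach() pattern itself is the point.

```python
import torch
import torch.nn as nn

class TwoBranchLocalGenerator(nn.Module):
    """Toy local generator with a shared trunk, a depth head, and a texture head."""

    def __init__(self, in_dim: int = 3, feat_dim: int = 64, detach_texture: bool = False):
        super().__init__()
        self.detach_texture = detach_texture
        self.shared = nn.Conv2d(in_dim, feat_dim, kernel_size=3, padding=1)
        self.depth_head = nn.Conv2d(feat_dim, 1, kernel_size=1)    # pseudo-depth branch
        self.texture_head = nn.Conv2d(feat_dim, 3, kernel_size=1)  # texture branch

    def forward(self, x: torch.Tensor):
        feat = self.shared(x)
        depth = self.depth_head(feat)
        # With detach_texture=True, gradients from the image/texture loss stop at
        # .detach() and cannot reach the shared trunk or the depth branch, which is
        # the behaviour the comment argues for.
        tex_in = feat.detach() if self.detach_texture else feat
        texture = self.texture_head(tex_in)
        return depth, texture
```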
------------------ Original Message ------------------
From: "seasonSH/SemanticStyleGAN" ***@***.***>
Sent: Monday, July 25, 2022, 7:16 AM
Subject: Re: [seasonSH/SemanticStyleGAN] About Render Net (Issue #10)
Hi zhouwy19,
The mask delta is implemented at models/semantic_stylegan.py:116-117.
The mask loss is implemented at train.py:252.
You may consider emailing me directly for more implementation questions.
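For readers following along, one plausible form of the mask loss the reply points to (train.py:252) is a consistency term between the upsampled coarse masks and the refined masks. This is an assumption for illustration, not the repository's exact loss; the function name and the use of L1 over softmax probabilities are hypothetical.

```python
import torch
import torch.nn.functional as F

def mask_consistency_loss(coarse_mask: torch.Tensor, refined_mask: torch.Tensor) -> torch.Tensor:
    # coarse_mask:  (B, C, h, w) low-resolution segmentation logits m
    # refined_mask: (B, C, H, W) high-resolution segmentation logits y
    up = F.interpolate(coarse_mask, size=refined_mask.shape[-2:],
                       mode="bilinear", align_corners=False)
    # Penalize disagreement between the refined prediction and the coarse one.
    return F.l1_loss(torch.softmax(refined_mask, dim=1),
                     torch.softmax(up, dim=1))
```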