An early work by Bao et al. takes a very similar approach to ours.
Content/style disentanglement has been used in many image-translation papers, e.g. DRIT and MUNIT, each with its own tweaks to the objectives and model architecture. In our case, we additionally use prior knowledge of the human face, both as inputs and in the loss functions.
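For what it's worth, the swap mechanics of this disentanglement scheme can be sketched in a few lines. The linear maps below are illustrative stand-ins for the trained encoder/decoder networks; none of the names, shapes, or weights come from the paper:

```python
import numpy as np

# Toy sketch of content/style disentanglement for face swapping with
# ONE model covering arbitrary identities. Linear maps stand in for
# the neural encoders/decoder; everything here is illustrative.
rng = np.random.default_rng(0)

D = 64  # flattened "image" dimension (toy size)
C = 8   # code dimension for each factor

W_id = rng.standard_normal((C, D)) / np.sqrt(D)           # identity ("content") encoder
W_attr = rng.standard_normal((C, D)) / np.sqrt(D)         # attribute ("style") encoder
W_dec = rng.standard_normal((D, 2 * C)) / np.sqrt(2 * C)  # decoder

def encode(x):
    """Split an image into an identity code and an attribute code."""
    return W_id @ x, W_attr @ x

def decode(z_id, z_attr):
    """Reassemble an image from the two codes."""
    return W_dec @ np.concatenate([z_id, z_attr])

source = rng.standard_normal(D)  # provides the identity
target = rng.standard_normal(D)  # provides pose/expression/lighting

z_id_src, _ = encode(source)
_, z_attr_tgt = encode(target)

# Identity enters only through z_id, so the same trained model can swap
# ANY source identity onto ANY target, including identities never seen
# during training -- no per-identity model is needed.
swapped = decode(z_id_src, z_attr_tgt)
print(swapped.shape)  # (64,)
```

The key design point is that generalization to unseen faces comes from the identity encoder producing a code for any input face, rather than the identity being baked into the decoder weights.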
Thanks for your answer. I reimplemented the paper "Towards Open-Set Identity Preserving Face Synthesis" by Bao et al., but the results are poor for faces not in the training set, and their loss is hard to get to converge. So I want to know: how can swapping of arbitrary faces be implemented with only one model?
I have been studying face swapping recently. Most papers' models can only swap one specific person's face. I want to know whether it is the same here?