
Does the Wan model perform safety alignment to avoid outputting pornographic, violent, or other harmful content? #190

Open
hanbaoergogo opened this issue Mar 6, 2025 · 1 comment

Comments

@hanbaoergogo

No description provided.

@GeradeHouse

Yes, it does appear to have some safety alignment. In the example image, if the action veers into “romantic” territory, the model deploys a discreet pink blur. Of course, with enough coaxing, it might still reveal more than intended, but it’s clear there’s some safety alignment baked in.

[Image attached]
