Flagging AI-Generated content with or without human verification #2328

Open
brittnylapierre opened this issue Jan 7, 2025 · 0 comments

As more content is automatically generated and added to IIIF objects, how can we let IIIF content creators flag that content has been generated with AI, and whether or not it has been human-verified?

This will be a useful topic to consider as regulations on AI use emerge, and also for accessibility- and research-related concerns.

Human verification is an important topic in accessibility discussions; see this article, which notes: "Though there have been stories of intelligent conversations with chatbots, the ability of AI to accurately recognize content isn’t on par with a human’s. An AI’s ability to generate content is dependent on what it has been trained on, and even the most effective tools can’t deliver perfect accuracy when it comes to things like captions and translations. AI-generated captions often include misunderstood words so, although they are a good starting point, using them without double-checking their accuracy would not be suitable for accessibility."

Through user research, I have also heard historians mention that human verification of the data they search is an important factor in the reliability of their research.

As such, it may be time to plan how to allow content creators to flag content as AI-generated and, separately, as human-verified.
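
One possible stopgap, sketched below in Python for concreteness, would be to record this provenance with the existing Presentation 3.0 `metadata` property on the Annotation that carries the generated content. All identifiers, labels, and values in the sketch are hypothetical; nothing here is defined by the current specification.

```python
import json

# Minimal sketch: an Annotation carrying an AI-generated transcription, with
# provenance recorded in the existing "metadata" property (label/value pairs
# with language maps). The label and value strings are illustrative only.
annotation = {
    "id": "https://example.org/anno/1",        # hypothetical identifiers
    "type": "Annotation",
    "motivation": "supplementing",
    "body": {
        "type": "TextualBody",
        "value": "Transcription produced by an OCR/HTR model",
        "format": "text/plain",
        "language": "en",
    },
    "target": "https://example.org/canvas/1",
    "metadata": [
        {
            "label": {"en": ["Generation method"]},
            "value": {"en": ["AI-generated (model name and version)"]},
        },
        {
            "label": {"en": ["Human verified"]},
            "value": {"en": ["false"]},
        },
    ],
}

print(json.dumps(annotation, indent=2))
```

Because `metadata` is defined as human-readable display text, a flag that software could act on reliably would probably need a dedicated property or an external vocabulary, which is the kind of decision this issue could help drive.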
