As more content is automatically generated and added to IIIF objects, how can we let IIIF content creators flag that content has been generated with AI, and whether or not it has been human verified?
This will be a useful topic to consider as AI regulations emerge, and also for accessibility- and research-related concerns.
Human verification is an important topic in accessibility discussions; see this article, which notes: "Though there have been stories of intelligent conversations with chatbots, the ability of AI to accurately recognize content isn’t on par with a human’s. An AI’s ability to generate content is dependent on what it has been trained on, and even the most effective tools can’t deliver perfect accuracy when it comes to things like captions and translations. AI-generated captions often include misunderstood words so, although they are a good starting point, using them without double-checking their accuracy would not be suitable for accessibility."
Through user research, I have also heard historians mention that human verification of the data they search is an important factor they consider when judging the reliability of their research.
As such, it may be time to plan how content creators could flag content as AI generated, and also as human verified.
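To make the idea concrete, here is a minimal sketch of what such flags might look like on a IIIF annotation body. This is purely hypothetical: IIIF currently defines no such properties, and the `aiGenerated` and `humanVerified` keys below are placeholder names for discussion, not part of any specification.

```python
import json

# Hypothetical sketch of a IIIF Presentation 3 annotation whose body
# carries provenance flags. The "aiGenerated" and "humanVerified" keys
# are NOT defined by any IIIF specification; they are placeholders.
annotation = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/anno/1",
    "type": "Annotation",
    "motivation": "supplementing",
    "body": {
        "type": "TextualBody",
        "value": "An automatically generated caption.",
        "format": "text/plain",
        # Hypothetical provenance flags:
        "aiGenerated": True,      # content was produced by an AI tool
        "humanVerified": False,   # no human has checked it yet
    },
    "target": "https://example.org/canvas/1",
}

print(json.dumps(annotation, indent=2))
```

A viewer or search client could then surface unverified AI content differently (e.g. with a warning badge), and a reviewer workflow could flip `humanVerified` to `true` once a person has checked the caption.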