- Explain the benefit, not the technology
- Use simple, direct language to describe each explicit feedback option and its consequences
- Optimize for understanding
- Note the special cases of giving no explanation or a comprehensive explanation
- Include explanation via interaction
- Use example-based explanations
- Explain what’s important
- Tie explanations to user actions
- In general, avoid technical or statistical jargon
- Avoid being too specific or too general
- Show contextually relevant information
- Model confidence displays
- Decide how best to show model confidence (see the first sketch after this list)
  - Categorical
  - N-best alternatives
  - Numeric
- Determine if you should show confidence levels
- When you know that confidence values correspond to result quality, you generally want to avoid showing results when confidence is low.
- Consider changing how you present results based on different confidence thresholds
- In general, translate confidence values into concepts that people already understand.
- Know what your confidence values mean before you decide how to present them
- In scenarios where people expect statistical or numerical information, display confidence values that help them interpret the results.
- Confirm success
- Onboard in stages
- Help users calibrate their trust
- Introduce and set expectations for AI
- Set expectations for AI improvements
- Account for timing in the user journey
- Keep track of user needs
- Identify existing mental models
- Clearly communicate AI limits and capabilities
- Set expectations for adaptation
- Describe the system or explain the output
- Account for user expectations of human-like interaction
- Consider using attributions to help people distinguish among results
- Keep attributions factual and based on objective analysis
- Help people establish realistic expectations
- Explain how limitations can cause unsatisfactory results
- Consider telling people when limitations are resolved
- Demonstrate how to get the best results
- Make clear what the system can do
- Make clear how well the system can do what it can do
- Make clear why the system did what it did
- Convey the consequences of user actions
- Notify users about changes
- Scope services when in doubt
- Time services based on context
- Avoid asking people to participate in calibration more than once
- Make calibration quick and easy
- Make sure people know how to perform calibration successfully
- Let people cancel calibration at any time
- Give people a way to update or remove information they provided during calibration
- Always secure people's calibration information
- Give users options based on categorical or N-best alternatives (see the second sketch after this list)
- Consider formatting
- Use multiple shortcuts to optimize key flows
- Whenever possible, help people make decisions by conveying confidence in terms of actionable suggestions
- List the most likely option first
- In situations where attributions aren't helpful, consider ranking or ordering the results in a way that implies confidence levels
- Consider offering multiple options when requesting explicit feedback
- In general, avoid providing too many options
- Prefer diverse options
- Make options easy to distinguish and choose
- Add iconography to an option description if it helps people understand it.
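
To make the confidence-display guidance above more concrete, here is a minimal sketch in TypeScript of one way an app might suppress low-confidence results, translate scores into categorical labels that people already understand, and fall back to numeric values only when the audience expects them. All names here (`ConfidenceDisplay`, `describeConfidence`, `SHOW_THRESHOLD`) are invented for illustration, not part of any particular framework, and the thresholds are assumptions a real product would validate.

```typescript
// Hypothetical helper for choosing how to present a model's confidence score.
// Names and thresholds are illustrative assumptions, not a prescribed API.

type ConfidenceDisplay =
  | { kind: "hidden" }                          // too uncertain to show at all
  | { kind: "categorical"; label: string }      // e.g. "Likely match"
  | { kind: "numeric"; percent: string };       // e.g. "72% match"

// Assumed product decision: below this confidence, suppress the result
// rather than show a low-quality suggestion.
const SHOW_THRESHOLD = 0.4;

function describeConfidence(
  score: number,                  // model confidence in [0, 1]
  audienceExpectsNumbers: boolean // e.g. analysts vs. general consumers
): ConfidenceDisplay {
  if (score < SHOW_THRESHOLD) {
    return { kind: "hidden" };
  }
  if (audienceExpectsNumbers) {
    // Scenario where people expect statistical or numerical information.
    return { kind: "numeric", percent: `${Math.round(score * 100)}%` };
  }
  // Translate the raw value into a concept people already understand.
  const label = score >= 0.85 ? "Very likely" : score >= 0.6 ? "Likely" : "Possible";
  return { kind: "categorical", label };
}

// Example: the same 0.72 score reads as "Likely" for a general audience
// and as "72%" for an audience that expects numbers.
console.log(describeConfidence(0.72, false)); // { kind: "categorical", label: "Likely" }
console.log(describeConfidence(0.72, true));  // { kind: "numeric", percent: "72%" }
```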
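A second, equally hypothetical sketch shows how N-best alternatives might be turned into a short list of actionable options: the most likely option is listed first, the list is capped so people aren't offered too many choices, and the ordering alone implies relative confidence. `Candidate`, `toOptions`, and `MAX_OPTIONS` are assumed names for illustration.

```typescript
// Hypothetical presentation of N-best alternatives as a small set of options.

interface Candidate {
  label: string;      // user-facing text, e.g. a suggested reply
  confidence: number; // model confidence in [0, 1]
}

// Assumed product decision: never show more than a handful of options.
const MAX_OPTIONS = 3;

function toOptions(candidates: Candidate[]): string[] {
  return candidates
    .slice()                                     // don't mutate the caller's array
    .sort((a, b) => b.confidence - a.confidence) // most likely option first
    .slice(0, MAX_OPTIONS)                       // avoid providing too many options
    .map((c) => c.label);                        // order implies confidence; no raw scores shown
}

// Example: the strongest suggestion leads and the weakest candidate is dropped.
console.log(
  toOptions([
    { label: "Sounds good!", confidence: 0.34 },
    { label: "See you at 3pm.", confidence: 0.81 },
    { label: "Can we reschedule?", confidence: 0.55 },
    { label: "Thanks!", confidence: 0.12 },
  ])
); // ["See you at 3pm.", "Can we reschedule?", "Sounds good!"]
```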