
Ethics by Design

Truth Filters for LLMs?

Emma's curiosity deepened as she wondered how these philosophical perspectives on truth could be applied in practice, helping both users and developers interact more responsibly with LLMs. If truth could be seen through different lenses, then each lens might offer its own way of improving how people work with these models. She began to think of these theories as 'truth filters': different ways of evaluating, interpreting, and even designing model responses with specific goals in mind. These filters, she thought, could form the basis of an ethics by design framework, a system that could make interactions with LLMs more transparent, grounded, and ultimately safer for all.
She sketched out ideas for how users and developers could apply each truth filter to guide their approach (a rough interface sketch follows the list):

  1. Correspondence Filter
  2. Coherence Filter
  3. Pragmatic Filter
  4. Epistemic Filter
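
To make the framework concrete, here is one way the four filters could share a common interface, so that an application can run any subset of them over a model response and report the scores. This is only a minimal sketch; the `TruthFilter` base class, its subclasses, and `run_filters` are illustrative names for this article, not an existing library.

```python
from __future__ import annotations

from abc import ABC, abstractmethod


class TruthFilter(ABC):
    """One lens for judging an LLM response, scored from 0 (fails) to 1 (passes)."""

    @abstractmethod
    def evaluate(self, prompt: str, response: str) -> float: ...


class CorrespondenceFilter(TruthFilter):
    def evaluate(self, prompt: str, response: str) -> float:
        # Would check verifiable claims (dates, names, places) against a reference source.
        raise NotImplementedError


class CoherenceFilter(TruthFilter):
    def evaluate(self, prompt: str, response: str) -> float:
        # Would check the response for internal consistency and fit with prior context.
        raise NotImplementedError


class PragmaticFilter(TruthFilter):
    def evaluate(self, prompt: str, response: str) -> float:
        # Would ask whether the response is actually useful for the user's stated goal.
        raise NotImplementedError


class EpistemicFilter(TruthFilter):
    def evaluate(self, prompt: str, response: str) -> float:
        # Would check that uncertainty and sources are signalled appropriately.
        raise NotImplementedError


def run_filters(prompt: str, response: str, filters: list[TruthFilter]) -> dict[str, float]:
    """Apply any subset of filters and collect named scores, e.g. for display to the user."""
    return {type(f).__name__: f.evaluate(prompt, response) for f in filters}
```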

1. Correspondence Filter

  • For users: Emma imagined a world where users approached AI responses with a filter that prioritised real-world facts. Guided by it, users would ask whether an AI response actually reflects objective reality: is the information something that can be verified, such as a date, name or location? By prompting users to think about real-world alignment, this filter could build a habit of cross-checking rather than accepting everything at face value.
  • For developers: Developers, Emma thought, could take advantage of this filter by building in fact-checking mechanisms. Integrating real-time fact-checking tools, such as APIs that query regularly updated databases, would let the system flag answers that deviate from established facts (a minimal sketch follows below). This step would help reduce the likelihood of hallucinations by keeping responses grounded in reality, especially on fact-sensitive topics.
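
As a rough illustration of the developer side, the sketch below extracts simple factual claims from a response and flags any claim the reference source cannot confirm. Everything here is an assumption for the example: `extract_claims` is a deliberately naive heuristic, and the `lookup` callable stands in for a call to a real fact-checking API backed by an up-to-date database.

```python
from __future__ import annotations

import re
from collections.abc import Callable
from dataclasses import dataclass


@dataclass
class FactCheckResult:
    claim: str
    verified: bool
    source: str | None  # reference URL when the claim could be confirmed


def extract_claims(response: str) -> list[str]:
    """Naive heuristic: treat every sentence containing a year as a checkable claim.

    A production system would use a dedicated claim-extraction model instead."""
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    return [s for s in sentences if re.search(r"\b(1[0-9]{3}|20[0-9]{2})\b", s)]


def correspondence_filter(
    response: str, lookup: Callable[[str], str | None]
) -> list[FactCheckResult]:
    """Flag every extracted claim that the reference lookup cannot confirm."""
    results: list[FactCheckResult] = []
    for claim in extract_claims(response):
        source = lookup(claim)
        results.append(FactCheckResult(claim, source is not None, source))
    return results


if __name__ == "__main__":
    # Stub standing in for a real fact-checking API call.
    def stub_lookup(claim: str) -> str | None:
        known = {"The Berlin Wall fell in 1989."}
        return "https://example.org/reference" if claim in known else None

    # The second sentence is a deliberate error (the landing was 1969), so it gets flagged.
    answer = "The Berlin Wall fell in 1989. The first Moon landing was in 1968."
    for result in correspondence_filter(answer, stub_lookup):
        marker = "OK  " if result.verified else "FLAG"
        print(f"[{marker}] {result.claim}")
```

Run as a script, this prints one line per claim, flagging the unverified one; in Emma's framing, the flags are exactly what a correspondence-minded interface would surface to the user instead of silently presenting the answer as fact.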