Ethics by Design

Truth Filters for LLMs?

Emma's curiosity deepened as she wondered how these philosophical perspectives on truth could be applied in practice to help both users and developers interact more responsibly with LLMs. If truth could be seen through different lenses, then each lens might offer its own way of improving that interaction. She began to think of these theories as 'truth filters': different ways of evaluating, interpreting, and even designing model responses with specific goals in mind. Together, she thought, these filters could form the basis of an ethics-by-design framework, a system that would make interactions with LLMs more transparent, grounded, and ultimately safer for everyone.
She sketched out ideas for how users and developers could apply each truth filter to guide their approach:

  1. Correspondence Filter
  2. Coherence Filter
  3. Pragmatic Filter
  4. Epistemic Filter

4. Epistemic Filter

  • For users: Emma could see the epistemic filter acting as a guide for critically evaluating responses, especially where evidence and justification are essential. This filter would encourage users to look for answers supported by clear reasoning or sources, helping them develop a healthy scepticism towards answers that lack justification.
  • For developers: Developers could improve the LLM's output by implementing confidence scoring and, where possible, citation generation. By having the model signal when it is more or less confident in an answer, developers would give users insight into how "sure" the model is. In addition, training the model to draw on reputable data sources or to cite references would let users assess the basis for an answer, leading to more informed judgements (see the sketch after this list).
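
As a minimal sketch of the developer side, consider the following Python snippet. It assumes a model API that exposes per-token log-probabilities (as some completion APIs do via a logprobs option) and that a retrieval layer supplies the sources; the names ScoredAnswer, confidence_from_logprobs, and present are purely illustrative, and the geometric mean of token probabilities is just one possible heuristic for a confidence signal, not a calibrated probability.

    import math
    from dataclasses import dataclass

    @dataclass
    class ScoredAnswer:
        text: str
        confidence: float   # 0.0-1.0, derived here from token log-probabilities
        sources: list[str]  # citations attached by a retrieval layer; may be empty

    def confidence_from_logprobs(token_logprobs: list[float]) -> float:
        # Geometric mean of token probabilities (exp of the mean log-probability):
        # a common heuristic for how "sure" the model was while decoding.
        if not token_logprobs:
            return 0.0
        return math.exp(sum(token_logprobs) / len(token_logprobs))

    def present(answer: ScoredAnswer) -> str:
        # Render the answer together with the epistemic signals the filter asks for:
        # a confidence band, the raw score, and the sources (or their absence).
        band = ("high" if answer.confidence >= 0.9
                else "medium" if answer.confidence >= 0.7
                else "low")
        cited = "; ".join(answer.sources) if answer.sources else "no sources available"
        return (f"{answer.text}\n"
                f"[confidence: {band} ({answer.confidence:.2f}) | sources: {cited}]")

    # Example with log-probabilities as a model API might return them:
    answer = ScoredAnswer(
        text="The Peace of Westphalia was signed in 1648.",
        confidence=confidence_from_logprobs([-0.05, -0.02, -0.11, -0.04]),
        sources=["Encyclopaedia Britannica, 'Peace of Westphalia'"],
    )
    print(present(answer))

Citation generation itself would sit in the retrieval layer that fills the sources list; the model would then be prompted or fine-tuned to ground its answer in those passages, so that the displayed citations actually reflect the basis for the response.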