Ethics by Design

2. Coherence Filter

  • For users: Emma knew that sometimes the truth lies in consistency rather than in raw facts. She imagined a coherence filter that would help users assess whether an answer fits logically with a previous conversation or with their own knowledge. If a model began to contradict itself in a conversation, users would be prompted to take a closer look, encouraging critical engagement rather than passive acceptance.
  • For developers: Developers could implement this filter as an internal consistency check, letting the LLM 'remember' its earlier responses within an ongoing session, and could train models to avoid contradictions, especially in longer dialogues. This would make LLM output more reliable and engaging and reduce the model's tendency to contradict itself over time.
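The internal consistency check described above can be sketched in code. The following is a minimal illustrative example, not a production implementation: it tracks claims the model has made during a session as simple (subject, predicate, value) triples and flags a new claim that assigns a different value to a fact already asserted. The class name and triple representation are assumptions for the sketch; a real coherence filter would use a natural-language-inference model to detect contradictions in free text.

```python
from dataclasses import dataclass, field


@dataclass
class SessionConsistencyChecker:
    """Naive sketch of an in-session consistency check.

    Claims are stored as (subject, predicate) -> value. A new claim
    contradicts the session when the same (subject, predicate) pair
    was previously asserted with a different value.
    """
    claims: dict = field(default_factory=dict)

    def check(self, subject: str, predicate: str, value: str) -> bool:
        """Record the claim; return False if it contradicts an earlier one."""
        key = (subject.lower(), predicate.lower())
        if key in self.claims and self.claims[key] != value.lower():
            return False  # same fact asserted with a different value
        self.claims[key] = value.lower()
        return True


checker = SessionConsistencyChecker()
checker.check("Paris", "capital_of", "France")   # consistent: first assertion
checker.check("Paris", "capital_of", "France")   # consistent: repeats the claim
checker.check("Paris", "capital_of", "Germany")  # flagged: contradicts the session
```

In a deployed system, a flag from such a check could trigger the user-facing prompt described above, inviting a closer look rather than silently accepting the contradictory answer.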