Option A: Unverified Outputs

Are LLMs 'Bullshitters'?

When LLMs operate with a high temperature setting, they select words from a flattened probability distribution: the model's raw logits are divided by the temperature before the softmax, so unusual or less predictable words are sampled more often (a minimal sketch of this appears after the list below). The result is responses that are often creative but potentially unmoored from facts. Here's how high-temperature LLMs may resemble Frankfurtian bullshitters:

  1. Disregard for Truth: High-temperature settings encourage the model to select language that sounds engaging and plausible without grounding it in verified information. This approach is indifferent to truth—rather than confirming a statement’s factual basis, the LLM prioritises producing a response that seems relevant or interesting. This is precisely Frankfurt’s point: the response is produced without caring whether it’s true or false.
  2. Fluent Plausibility without Commitment: LLMs trained to respond fluently appear confident, regardless of accuracy. Like a bullshitter who creates a convincing narrative without regard for its truth, the high-temperature LLM can generate plausible-sounding responses with no intrinsic connection to reality. This can be especially problematic because, like bullshit, it’s not immediately clear when the output is fabricated or inaccurate; it may appear legitimate simply because it sounds well-formed.
  3. Risk of Eroding User Trust in Truth: The bullshitter's approach, Frankfurt argues, has a corrosive effect on discourse because it weakens the role of truth in conversation. High-temperature LLMs can do the same. By producing outputs that prioritise linguistic appeal over truth, the LLM might subtly shift users' expectations, accustoming them to "truth-like" statements that sound plausible but haven't been fact-checked, and so making them more accepting of language that doesn't accurately represent reality.
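
To make the mechanism concrete, here is a minimal Python sketch of temperature-scaled sampling, assuming the standard recipe of dividing logits by the temperature before applying a softmax. The vocabulary, logits, and sample_token helper are hypothetical, invented purely for illustration rather than drawn from any particular model:

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Sample one token index from raw logits after temperature scaling."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()            # softmax: normalise to a distribution
    return rng.choice(len(probs), p=probs)

# Toy next-token choices for "The capital of France is ...".
# These values are invented for illustration only.
vocab = ["Paris", "Lyon", "Atlantis", "purple"]
logits = [4.0, 2.0, 0.5, 0.1]

for t in (0.2, 1.0, 2.0):
    rng = np.random.default_rng(seed=0)
    counts = {w: 0 for w in vocab}
    for _ in range(10_000):
        counts[vocab[sample_token(logits, t, rng)]] += 1
    print(f"temperature={t}: {counts}")
```

Running this sketch, the low-temperature run picks "Paris" almost every time, while at temperature 2.0 low-probability completions like "Atlantis" appear in roughly one sample in ten. Notice that nothing in the sampling step consults whether a completion is true; temperature only redistributes probability mass across tokens, which is exactly the indifference to truth described in the list above.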