
Option A: Unverified Outputs

Website: Hamburg Open Online University
Course: Ethics by Design
Book: Option A: Unverified Outputs

1. Frankfurt on 'Bullshit'

Choosing Option A, where the AI is set to a high temperature to allow for more creative, unverified responses, raises a significant philosophical concern rooted in Harry Frankfurt’s concept of “bullshit”. In his essay On Bullshit, Frankfurt defines “bullshit” as a type of discourse characterised by a lack of regard for the truth. Unlike the liar, who deliberately intends to deceive by saying something they know to be false, the bullshitter is unconcerned with truth altogether. Bullshitters don’t intend to lie, nor do they aim to speak truthfully; instead, they aim to produce language that appears credible or useful in the moment, without any commitment to reality.

In Frankfurt’s analysis, a bullshitter’s primary goal is to produce a narrative that fits the context or satisfies the audience, regardless of whether it aligns with the facts. This disregard for truth is not necessarily malicious; it is often simply practical, serving the bullshitter’s aim of seeming convincing or of navigating a particular social or conversational situation. But by sidestepping truth, bullshit risks diluting the meaning of truth in discourse and can lead to confusion and misinformation.

2. Are LLMs 'Bullshitters'?

When an LLM operates with a high temperature setting, its next-word probability distribution is flattened, so less likely or more unusual words become more probable choices. The result is output that is often creative but potentially unmoored from facts (a short sketch of how temperature reshapes these probabilities follows the list below). Here’s how high-temperature LLMs may resemble Frankfurtian bullshitters:

  1. Disregard for Truth: High-temperature settings encourage the model to select language that sounds engaging and plausible without grounding it in verified information. This approach is indifferent to truth—rather than confirming a statement’s factual basis, the LLM prioritises producing a response that seems relevant or interesting. This is precisely Frankfurt’s point: the response is produced without caring whether it’s true or false.
  2. Fluent Plausibility without Commitment: LLMs trained to respond fluently appear confident, regardless of accuracy. Like a bullshitter who creates a convincing narrative without regard for its truth, the high-temperature LLM can generate plausible-sounding responses with no intrinsic connection to reality. This can be especially problematic because, like bullshit, it’s not immediately clear when the output is fabricated or inaccurate; it may appear legitimate simply because it sounds well-formed.
  3. Risk of Eroding User Trust in Truth: The bullshitter’s approach, Frankfurt argues, has a corrosive effect on discourse because it weakens the role of truth in conversation. High-temperature LLMs can do the same. By producing outputs that prioritise linguistic appeal over truth, the LLM might subtly shift users’ expectations, making them more accustomed to “truth-like” statements that sound plausible but aren’t fact-checked. This can lead users to become more accepting of language that doesn’t necessarily represent reality accurately.
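
To make the temperature setting concrete, below is a minimal Python sketch of how temperature rescales a model’s word scores into sampling probabilities. The toy vocabulary, scores and temperature values are purely illustrative assumptions, not taken from any particular model; real LLMs apply the same rescaling to scores over tens of thousands of tokens.

  import math
  import random

  def sample_with_temperature(logits, temperature):
      """Sample one word index from raw scores rescaled by the given temperature."""
      # Dividing the raw scores by the temperature flattens the distribution
      # when temperature > 1 and sharpens it when temperature < 1.
      scaled = [score / temperature for score in logits]
      peak = max(scaled)  # subtract the maximum for numerical stability
      weights = [math.exp(s - peak) for s in scaled]
      total = sum(weights)
      probs = [w / total for w in weights]
      choice = random.choices(range(len(logits)), weights=probs, k=1)[0]
      return choice, probs

  # Toy example: the raw scores strongly favour the factually correct word.
  vocab = ["Paris", "Lyon", "Berlin", "Atlantis"]
  logits = [4.0, 2.0, 1.0, 0.5]

  for t in (0.5, 1.0, 1.5):
      _, probs = sample_with_temperature(logits, t)
      print(f"T={t}: " + ", ".join(f"{w}={p:.2f}" for w, p in zip(vocab, probs)))

With these illustrative numbers, a temperature of 0.5 concentrates almost all of the probability on the most likely word, while 1.5 gives implausible (and here false) alternatives such as “Atlantis” a noticeably larger share; that widened tolerance for improbable words is the trade-off Option A embraces.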

3. Is 'Bullshitting' infectious?

This lack of regard for truth can have a broader impact by encouraging users to become bullshitters themselves. When users rely on these high-temperature outputs, they may begin to adopt the same indifference to truth in their own communication, for several reasons:

  1. Reliance on Plausible Responses: If users start using the AI’s outputs for information or inspiration without checking the facts, they might unwittingly spread misinformation. By passing along unverified, “truth-like” responses, users effectively become intermediaries in the spread of bullshit—statements presented as credible without real verification.
  2. Changing Standards for Truth: Regular exposure to bullshit outputs may lead users to internalise a lower standard for truth. Just as the bullshitter’s goal is to sound reasonable or fitting, users may start to accept “good enough” information for convenience, especially in informal or fast-paced settings. This shift risks undermining careful, evidence-based communication, replacing it with language aimed more at filling gaps than at conveying truth.
  3. Encouraging a Practical but Untruthful Approach: The Pragmatic Theory of Truth, as Emma explored, values usefulness over strict factual accuracy. While this can be beneficial in brainstorming, it can easily slide into bullshit if users adopt it universally, embracing ideas that “work” for the moment without concern for deeper factual accuracy. If this mindset spreads, it risks turning human communication into a more superficial exchange, dominated by statements meant to suffice rather than statements that are actually true.

4. Philosophical Implications: The Slippery Slope of Indifference to Truth

The philosophical danger of this choice, then, is that it may encourage both the AI and its human users to value language for its utility or plausibility rather than for its commitment to accuracy. Over time, this could subtly erode the importance of truth in communication. Just as Frankfurt warns that bullshit can undermine society’s commitment to truth, high-temperature LLM outputs could contribute to a similar effect, normalising statements that “sound right” but lack factual grounding.

If truth becomes secondary in human-AI communication, the long-term result may be a culture that gradually prioritises convenient, plausible discourse over rigorous, truth-oriented thinking. This shift can have significant impacts not just on individual understanding but on collective trust in information and discourse.

In essence, Option A allows LLMs to function as potential bullshitters, producing text that fits the moment without concern for reality. And if users adopt this model of language, they too might increasingly embody the role of the bullshitter, valuing the “feel” of truth without the accountability of actually verifying it.