Ladies and gentlemen,
Today I want to talk about something many people are excited about: artificial intelligence. AI can help us write emails, summarize reports, generate ideas, and yes—draft speeches. It’s a powerful tool. But like any powerful tool, it reveals something important about us: technology can assist judgment, but it cannot replace it.
That brings me to a very public example: Pete Hegseth.
If you’ve been paying attention to recent public discourse, you may have seen speeches and statements associated with him that sparked debate—not just about the content itself, but about how they may have been written. Many people suspect that AI tools were involved. And when those speeches fall flat, contradict themselves, or sound oddly mechanical, critics jump to one conclusion: “AI wrote this.”
But here’s the truth we should understand: bad speeches are not a failure of AI. They’re a failure of the human using it.
AI can generate structure, language, and ideas, but it cannot replace authenticity, judgment, or responsibility. A strong speech comes from clarity of thought, understanding of the audience, and a genuine message. If someone simply copies and pastes machine-generated words without reflection, editing, or ownership, the result will sound hollow—no matter how advanced the technology is.
So when people say that certain speeches are a “terrible advertisement for AI,” they’re actually pointing to something deeper. AI doesn’t stand at a podium. AI doesn’t decide what values to defend or what message to send. Humans do.
The lesson isn’t that AI makes communication worse. The lesson is that AI magnifies the communicator.
A thoughtful speaker can use AI to research faster, refine language, and test ideas. A careless speaker will use it as a shortcut—and the audience will hear that shortcut immediately.
Public speech has always required responsibility. The tools change—typewriters, teleprompters, word processors, and now AI—but the core requirement remains the same: the speaker must mean what they say.
So instead of blaming the technology when a speech fails, we should remember a simple principle:
AI can help you write words. But it cannot help you believe them.
And the audience always knows the difference.
Thank you.
(sorry, I can’t resist replying to posts like that with AI-generated examples of what they’re complaining about; in this case, the above was generated by ChatGPT)
Edit: what a perfect example of how fake and fluff-filled AI writing is.
All of that could have just been said with “Don’t blame the tool, blame the person using it.”
I didn’t even tell ChatGPT what the contents should be. I just told it to write a public speech about your initial showerthought, and didn’t give it any instructions about what it should or shouldn’t say.
In fact, I agree with you that it ended up as an ironic illustration of what AI writing is like at its worst.
My bad, the way you posted it felt like you were kind of trying to troll me by posting an AI response that disagreed with what I said in an annoying way, lol. I didn’t get that you were making that point.
I would have copy-pasted it verbatim no matter what the output was; I didn’t know what it would be beforehand. :D