Red Flags: Your Built-In Bullshit Detector
Recognizing Manipulation in Politics, Sales, and Everyday Communication (and Pardon My French)
We’re surrounded by attempts to cheat us, so we all need a bullshit detector (to put it in plain English). Not one for the agricultural kind, but for the rhetorical variety—carefully crafted language designed to manipulate, obscure, or deceive. Politicians dodge accountability with passive voice. Salespeople create artificial urgency. Gurus peddle empty promises wrapped in pseudo-wisdom. And we, if we’re not paying attention, nod along.
The good news? Your brain already has the hardware for a bullshit detector. You just need to learn how to turn the damn thing on.
This guide isn’t about cynicism or paranoia. It’s about pattern recognition—learning to spot the telltale signs that someone is trying to sell you something (whether it’s a product, an idea, or themselves) using tactics that bypass your critical thinking. Once you know what to look for, these red flags become impossible to miss.
Political Red Flags: The Language of Evasion
Political communication is a masterclass in saying something while meaning nothing, or worse, obscuring what you really mean. Here are the patterns that should immediately activate your skepticism.
🚩 “Mistakes Were Made” – The Passive Voice Dodge
What it sounds like: “Mistakes were made.” “Errors occurred.” “The situation was mishandled.” “It has been brought to my attention that…”
What it really means: “I screwed up but I’m not going to explicitly admit it.”
Passive voice is the politician’s best friend because it obscures agency. Who made the mistakes? Who mishandled the situation? The sentence structure makes the actor disappear. It’s the grammatical equivalent of a magic trick—now you see responsibility, now you don’t.
Active voice equivalent: “I made mistakes.” “My team made errors.” “I mishandled the situation.”
Why it matters: When someone can’t or won’t identify who is responsible for an action, they’re usually the ones responsible and trying to avoid accountability.
🚩 “Let Me Be Clear…” – The Obfuscation Introduction
What it sounds like: “Let me be perfectly clear…” “To be clear…” “I want to be very clear about this…”
What usually follows: Something decidedly unclear, evasive, or directly contradicting what they said moments earlier.
This phrase is almost never followed by clarity. It’s a verbal tic that politicians use when they’re about to muddy the waters. It’s the rhetorical equivalent of a magician saying “watch closely” right before the misdirection.
Why it’s a red flag: If you were actually being clear, you wouldn’t need to announce it. The announcement itself signals that what follows requires special pleading or careful parsing.
🚩 “Some People Say…” – The Phantom Source
Variations:
- “Many people are saying…”
- “Experts believe…”
- “Studies show…”
- “It’s been suggested that…”
- “There are those who think…”
The problem: Who are these people? Which experts? What studies? The vagueness is the point—it allows the speaker to float a claim without having to defend it. If pressed, they can always say “I’m just reporting what others think.”
What to ask:
- Which people specifically?
- What are their qualifications?
- Where can I read these studies?
- How many people and what percentage of relevant experts?
Why it works: It creates the illusion of consensus without requiring any actual evidence. It’s social proof theater.
🚩 “I Don’t Recall” – Strategic Amnesia
Context: Usually deployed under oath or during investigations when someone needs plausible deniability.
The tell: Watch what they DO remember versus what they conveniently forget. People tend to have crystal-clear memory for things that make them look good and sudden amnesia about things that might incriminate them.
The spectrum:
- “I don’t recall” (might be true)
- “I have no recollection of that” (more formal, more suspicious)
- “To the best of my recollection” (giving yourself wiggle room)
- “I would have to check my records” (the professional dodge)
🚩 Whataboutism – The Deflection Classic
Structure: “What about [unrelated or tangentially related thing]?”
Example:
- Accusation: “Your policy hurt small businesses.”
- Whataboutism: “What about the previous administration’s policies?”
Why it’s effective: It shifts the burden of proof, changes the subject, and puts the accuser on the defensive. It doesn’t address the original claim—it just muddies the waters enough that the original point gets lost.
How to counter: “We can discuss that, but first, can you address the original point?”
🚩 The Moving Goalpost
Pattern: When evidence contradicts their claim, change what the claim means.
Example sequence:
- “This policy will create a million jobs.”
- [Policy creates 200,000 jobs]
- “We never said exactly when those jobs would materialize.”
- “Actually, we meant it would create the conditions for eventual job growth.”
- “What we really meant was…”
Why it’s insidious: The original claim gets retroactively redefined so it can never be proven wrong. The goalposts keep moving further back.
🚩 False Equivalency
Structure: “Both sides do it.” “They’re all the same.” “It’s six of one, half a dozen of the other.”
The problem: Not all things are equivalent. Shoplifting and armed robbery are both theft, but one is clearly worse. False equivalency treats vastly different things as if they’re the same to avoid making distinctions that might be uncomfortable.
Example: “Sure, X lied about Y, but didn’t the other side also exaggerate about Z?” (Where Y is a major policy issue and Z is a minor rhetorical flourish.)
Why it’s a problem: It’s intellectual laziness dressed up as fairness. It allows people to avoid moral judgments by pretending everything is equally bad (or good).
Sales & Marketing: Manufacturing Consent
Sales tactics are more obvious than political rhetoric because salespeople have less to lose by being caught. The goal is simple: get you to buy, preferably right now, preferably without thinking too hard.
🚩 “Act Now! Limited Time Only!” – Artificial Urgency
Variations:
- “Only 3 left in stock!”
- “This offer expires at midnight!”
- “Today only!”
- “While supplies last!”
The psychology: Scarcity triggers loss aversion. We’re more motivated to avoid losing an opportunity than to gain something equivalent. By making the opportunity seem scarce or fleeting, salespeople hijack this bias.
The reality check:
- Is this actually scarce, or is it always “almost sold out”?
- Will this deal really never come back?
- What am I losing if I wait 24 hours to think about it?
Pro tip: If you feel pressured to decide immediately, that pressure itself is a red flag. Good deals don’t require you to abandon your critical thinking.
🚩 “Doctors/Experts Recommend…” – Vague Authority
What they say: “9 out of 10 doctors recommend…” “Experts agree…” “Dentists recommend…”
What to ask:
- Which doctors? (Maybe the 9 who were paid to endorse it)
- Out of how many total doctors surveyed? (Maybe they only asked 10)
- What were they actually asked? (“Would you recommend X over literally nothing?” gets high agreement)
- Who funded the study?
The pattern: Appeals to authority work, but only when the authority is legitimate, specific, and relevant. Vague authority is designed to give you the warm feeling of expert endorsement without any actual substance.
🚩 “All Natural” – Meaningless Language
Similar terms: “Chemical-free” (everything is chemicals), “Pure” (pure what?), “Eco-friendly” (by what standard?), “Scientifically proven” (what was proven exactly?)
The problem: These words sound good but mean nothing specific. Arsenic is all natural. Poison ivy is all natural. The term “natural” has no regulatory definition in most contexts and tells you nothing about safety, efficacy, or quality.
What to demand: Specific, measurable claims. Not “all natural” but “organic certified by [specific organization].” Not “eco-friendly” but “reduces carbon emissions by X% compared to Y.”
🚩 “Up to X% Off!” – Weasel Words
Watch for: “Up to,” “as much as,” “can be,” “may,” “possibly,” “some”
Translation:
- “Save up to 70%!” = Most items are 10-20% off, one clearance item nobody wants is 70% off
- “Results may vary” = It probably won’t work for you
- “Lose up to 10 pounds!” = Most people lose 1-2 pounds, if any
Why it’s effective: Your brain focuses on the impressive number (70%!) and glosses over the qualifier (up to). The weasel word lets them technically tell the truth while practically lying.
🚩 Social Proof Manipulation
Techniques:
- “9 out of 10 customers recommend us!” (Did you survey only your happiest customers?)
- “Best-seller!” (On what list? For how long? In what category?)
- “Trusted by millions!” (How are you defining “trust”?)
- “As seen on TV!” (Being seen is not an endorsement)
Real social proof vs. fake:
- Real: Verified reviews from confirmed purchasers, specific testimonials with full names and context
- Fake: Anonymous testimonials, cherry-picked statistics, vague claims about popularity
🚩 The Monthly Payment Trap
Sales technique: “What monthly payment can you afford?”
Why it’s dangerous: It shifts focus from total cost to monthly affordability, making expensive purchases seem manageable. A car salesman asking about monthly payments instead of total price is trying to maximize what you’ll pay.
Example:
- Car costs $30,000
- Financed over 6 years at 7% APR
- Total paid: about $36,800 (roughly $6,800 of it interest)
- But it’s “only $511/month!”
Always ask: What’s the total cost, including all fees and interest? What’s the interest rate? What are the terms?
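Want to see where those numbers come from? Here’s a minimal sketch in Python using the standard fixed-payment amortization formula. The monthly_payment helper and the exact figures are illustrative, and they assume no fees, taxes, or trade-in:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-payment loan formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(30_000, 0.07, 6)
print(f"Monthly payment: ${payment:,.2f}")        # ~$511.47
print(f"Total paid:      ${payment * 72:,.2f}")   # ~$36,826, so ~$6,826 is interest
```

Notice the trick in reverse: stretching the term shrinks the “affordable” monthly number while quietly raising the total you hand over.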
Everyday Manipulation: The Subtle Stuff
Not all manipulation happens in politics or sales. Some of the most effective manipulation happens in everyday communication, often between people who care about each other.
🚩 Gaslighting – Reality Distortion
Pattern: Making you doubt your own perceptions, memory, or sanity.
Phrases to watch for:
- “That never happened.”
- “You’re remembering it wrong.”
- “You’re being too sensitive.”
- “You’re crazy/paranoid.”
- “I never said that.” (When you know they did)
Why it works: Repeated reality denial can make you question your own judgment, making you more dependent on the gaslighter’s version of events.
Defense: Keep records. Trust your gut. Get external validation from people you trust.
🚩 The “Just Asking Questions” Technique
Pattern: Framing statements as questions to avoid accountability.
Examples:
- “I’m just asking questions” (while implying answers)
- “Don’t you think it’s suspicious that…” (suggesting conspiracy without claiming it)
- “Why hasn’t anyone looked into…” (implying cover-up)
Why it’s effective: Questions can’t be “wrong,” so the framing provides cover for spreading misinformation or conspiracy theories. “I’m just asking!” is the shield against criticism.
Real questions vs. JAQing off: Real questions seek information. Rhetorical “questions” seek to plant ideas without defending them.
🚩 Loaded Language & Emotional Triggers
What it is: Using emotionally charged words to bypass rational thinking.
Examples:
- Describing the same person as a “freedom fighter” or a “terrorist,” depending on your sympathies
- “Job creators” vs. “the wealthy”
- “Pro-life” vs. “anti-choice” / “Pro-choice” vs. “pro-abortion”
- “Illegal aliens” vs. “undocumented immigrants”
The problem: The choice of words frames the debate before the debate even starts. It’s hard to think clearly when the language itself triggers strong emotions.
Defense: Notice the language. Translate emotionally loaded terms into neutral ones. Ask yourself: would I feel differently if they used different words for the same thing?
Your Bullshit Detection Toolkit
Now that you know what to look for, here are the tools to sharpen your detector.
🔧 Tool #1: The Passive Voice Alarm
Activates when: Someone uses passive constructions that obscure responsibility.
Questions to ask:
- Who did this?
- Who is responsible?
- Why isn’t the actor mentioned?
Practice: When you hear “mistakes were made,” automatically translate to “who made the mistakes?”
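If you want to automate that reflex, here’s a toy sketch in Python: a crude regular-expression heuristic that flags “to be” plus a past-participle-looking word. The pattern list is an illustrative assumption, not a real grammar checker—it will miss some passives and misfire on innocent sentences—but it’s enough to practice the translation step:

```python
import re

# Crude heuristic: a form of "to be" followed by something that looks like a
# past participle (-ed/-en endings plus a few common irregulars). Real passive
# detection needs a proper parser; this just prompts the question "who did it?"
PASSIVE = re.compile(
    r"\b(?:am|is|are|was|were|be|been|being)\s+"
    r"(?:\w+ed|\w+en|made|done|said|seen|kept|held|sent|lost|put)\b",
    re.IGNORECASE,
)

def flag_passives(text: str) -> list[str]:
    """Return the passive-looking phrases found in the text."""
    return [match.group(0) for match in PASSIVE.finditer(text)]

statement = "Mistakes were made and the situation was mishandled."
for phrase in flag_passives(statement):
    print(f"Passive alarm: '{phrase}' -> who did this?")
```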
🔧 Tool #2: The Vagueness Meter
Measures: Where a claim falls on the spectrum from specific to vague.
Red zone:
- “Some people say…”
- “Studies show…”
- “Everyone knows…”
- “It’s common sense…”
Green zone:
- “Dr. Jane Smith from Harvard found in her 2024 study…”
- “According to the Census Bureau data from…”
- “The peer-reviewed research published in Nature shows…”
Rule of thumb: The more vague the claim, the higher your skepticism should be.
🔧 Tool #3: The Source Check
Questions to always ask:
- Who says so?
- What are their qualifications?
- What’s their potential bias or conflict of interest?
- Can I verify this independently?
- What do other experts say?
Quick hierarchy of source reliability, from strongest to weakest:
- Peer-reviewed research in reputable journals
- Expert consensus from relevant field
- Investigative journalism from established outlets
- Individual expert opinion
- Anecdotal evidence
- Random internet claims
- “My cousin’s friend said…”
🔧 Tool #4: The Too-Good-To-Be-True Test
If something sounds too good to be true, it probably is.
Activates for:
- “Lose 30 pounds in 30 days without diet or exercise!”
- “Make $10,000 a month working from home!”
- “This one weird trick that doctors don’t want you to know!”
- “Guaranteed returns with no risk!”
Why we fall for it: We want it to be true. Hope is a powerful bias.
Defense: Ask what the catch is. There’s always a catch. If you can’t find it, you probably haven’t looked hard enough.
🔧 Tool #5: The Emotional Manipulation Radar
Detects: When someone is trying to make you feel something instead of think something.
Warning signs:
- Fear: “If you don’t act now, you’ll lose everything!”
- Greed: “This opportunity won’t last!”
- Guilt: “How can you not care about…”
- Outrage: “You should be angry about…”
- Tribal identity: “People like us believe…”
Defense: Notice when you’re feeling strong emotions. That’s when you’re most vulnerable to manipulation. Strong feeling should trigger stronger thinking, not replace it.
🔧 Tool #6: The Logical Fallacy Scanner
Common fallacies to watch for:
Ad Hominem: Attacking the person instead of the argument. “You can’t trust his opinion on economics, he’s divorced!”
Strawman: Misrepresenting someone’s argument to make it easier to attack. “So you think we should just let criminals run free?”
Appeal to Nature: Natural = good. “It’s natural, so it must be healthy!” (Earthquakes are natural too.)
False Dilemma: Presenting only two options when more exist. “You’re either with us or against us.”
Slippery Slope: “If we allow X, next thing you know we’ll have Y, then Z!” (Without explaining why this progression is inevitable.)
Bandwagon: “Everyone’s doing it, so it must be right/good/true.”
Why this matters: Logical fallacies are shortcuts around proper reasoning. Recognizing them helps you spot bad arguments dressed up as good ones.
Putting It Into Practice
Exercise #1: Political Speech Analysis
Watch a political speech or debate. Count the red flags:
- How many passive voice constructions?
- How many vague appeals to authority?
- How many instances of loaded language?
- How many questions answered vs. dodged?
Don’t just analyze politicians you disagree with. The real test is catching these patterns in people you like and agree with.
Exercise #2: Advertisement Deconstruction
Take any advertisement and ask:
- What specific claim is being made?
- What emotion are they trying to trigger?
- What information is conspicuously absent?
- What are the weasel words? (up to, may, can, possibly)
- Is there artificial urgency?
Exercise #3: The Steel Man Challenge
The opposite of strawman: state your opponent’s argument in its strongest, most charitable form before critiquing it. If you can’t do this, you don’t understand their position well enough to criticize it.
Why this is hard: It requires intellectual honesty and empathy. It’s much easier to attack a weak version of an argument.
Why it’s valuable: It forces you to engage with ideas rather than caricatures. And sometimes you’ll find that the steel-manned version is actually pretty good, which means you learned something.
Exercise #4: Source Tracing
When you see a claim that interests you:
- Find the original source
- Read what it actually says (not just the headline)
- Check who funded the research
- Look for expert response/replication
- See if there are competing studies or interpretations
This is work. Critical thinking is work. But it’s less work than being confidently wrong.
When Your Detector Might Fail You
Your bullshit detector isn’t perfect. Here’s when it’s most likely to malfunction:
⚠️ Confirmation Bias Override
Problem: Your detector is less sensitive to bullshit that confirms what you already believe.
Example: You’ll spot logical fallacies instantly when someone argues against your position, but miss identical fallacies when someone argues for it.
Defense: Actively look for flaws in arguments you agree with. Be your own devil’s advocate.
⚠️ Authority Bias
Problem: We’re less critical of people with credentials, titles, or status.
Reality: Smart people believe dumb things. Experts can be wrong, especially outside their expertise. A Nobel Prize in Physics doesn’t make someone an authority on politics or nutrition.
Defense: Credentials are evidence, not proof. Even experts need to show their work.
⚠️ Complexity Fatigue
Problem: When things get complicated, we default to simple explanations, even wrong ones.
Example: Conspiracy theories are often more psychologically satisfying than complex realities. “It’s all a conspiracy by X” is simpler than “it’s a complex interaction of multiple factors with no single villain.”
Defense: Embrace uncertainty. “I don’t know” is a valid answer. Simple answers to complex questions are usually wrong.
⚠️ Tribal Override
Problem: We’re more critical of “them” than “us.”
Example: The exact same behavior looks different depending on who does it. “When we do it, it’s justice. When they do it, it’s tyranny.”
Defense: Apply the same standards regardless of tribe. If you wouldn’t excuse it from your opponent, don’t excuse it from your ally.
⚠️ The Backfire Effect
Problem: Sometimes presenting evidence against a strongly held belief makes people believe it more strongly.
Why: Beliefs become part of identity. Attacking the belief feels like attacking the person.
Defense: Know when to walk away. You can’t logic someone out of a position they didn’t logic themselves into. Sometimes the best use of your detector is knowing when to disengage.
The Meta-Point: Weaponized Skepticism
Here’s the tricky part: bad actors can weaponize these tools too.
“Show me the evidence!” sounds like rational skepticism, but it can also be a tactic to exhaust you. No amount of evidence is ever enough because accepting the evidence would mean admitting they’re wrong.
“I’m just asking questions!” sounds like intellectual curiosity, but we already covered how that’s often bullshit.
“You’re appealing to authority!” can be a legitimate criticism or a way to dismiss actual expertise.
The balance: Be skeptical, but not cynical. Question everything, but accept that some things are true. Demand evidence, but recognize good evidence when you see it.
The principle: Your bullshit detector should make you harder to fool, not impossible to convince. Skepticism applied equally in all directions approaches wisdom. Skepticism applied only to ideas you don’t like is just confirmation bias with extra steps.
Conclusion: The Examined Life
Socrates said the unexamined life isn’t worth living. We might add: the unexamined claim isn’t worth believing.
Your bullshit detector isn’t about being smarter than everyone else. It’s about being honest with yourself about when you’re being manipulated, when you’re manipulating yourself, and when you genuinely don’t know.
The red flags we’ve covered aren’t absolute rules. Context matters. Sometimes passive voice is just passive voice, not evasion. Sometimes “some people say” is just shorthand, not vague attribution. The point isn’t to be paranoid about every sentence. The point is to notice patterns.
When you spot multiple red flags together, that’s when your detector should be screaming.
And here’s the really uncomfortable part: You’ll find red flags in your own thinking. You’ll catch yourself using passive voice to avoid blame. You’ll notice yourself appealing to vague authority. You’ll realize you’re guilty of whataboutism.
Good. That means the detector is working.
The goal isn’t perfection. The goal is calibration. You’re aiming to be slightly less wrong over time, slightly better at distinguishing signal from noise, slightly harder to manipulate.
In a world of infinite bullshit, that’s about as close to wisdom as most of us are going to get.
And that, paradoxically, is something you can trust.
Frequently Asked Questions
What are the most common red flags in political communication?
Common political red flags include passive voice to avoid responsibility (“mistakes were made”), vague authority appeals (“some people say”), strategic memory loss (“I don’t recall”), false equivalencies, moving goalposts, and loaded language designed to trigger emotional responses rather than rational thinking.
How can I develop better bullshit detection skills?
Develop your detector by: asking “who says so?” for every claim, demanding specific evidence over vague assertions, recognizing emotional manipulation tactics, checking for logical fallacies, verifying sources, and practicing skepticism while remaining open-minded. Most importantly, apply the same standards to claims you want to believe.
What is the passive voice alarm in critical thinking?
The passive voice alarm alerts you when someone uses passive constructions like “mistakes were made” instead of “I made mistakes.” Passive voice often obscures who is responsible for actions, making it a red flag for evasion and accountability-dodging.
How do I balance skepticism with not being cynical?
Healthy skepticism means questioning claims while remaining open to good evidence. Cynicism means rejecting everything regardless of evidence. The balance: be hard to fool but not impossible to convince. Apply skepticism equally to all claims, not just ones you disagree with.
Can these critical thinking tools backfire?
Yes. Bad actors can weaponize skepticism (“show me the evidence!” when no evidence would ever be enough) or hide behind “just asking questions.” Your detector should make you harder to fool, not impossible to convince. The goal is calibration, not paranoia.
Think this guide helped you spot some bullshit? Share it. Know someone who needs their detector calibrated? Send it along. Have examples of red flags we missed? Let us know in the comments.
And remember: The first person you need to use your bullshit detector on is yourself.