Not all AIs are created equal. There’s a technique called RLHF, short for “Reinforcement Learning from Human Feedback,” that’s proving to be a significant part of developing the next iterations of large language models. The grey area is how AI slop infiltrates the systems you and I use on a daily basis. From creating a social media schedule and a sales-tracking playbook to diagnosing a rash on your skin, not all AI tools are reliable and trustworthy.
Luckily, because we’re human and most of us reading this Substack know what it’s like to experience dial-up internet, we can decipher fake and phony content.
Here are 3 tips for identifying AI Slop:
Verify quotes (always source-check). For legal professionals, refer to top credible sources like LexisNexis. For stats or data, ask for references. Always be specific when writing prompts; ambiguity leads to mess. Instead of writing “Give me the top YouTube creators,” re-word it to say: “Provide the YouTube creators in the 1m+ subscriber range in 2025.”
Ask for receipts (links, data, provenance). Fact-checking is not just for journalists; it’s for humans too! When using tools like Claude, DeepSeek, or even Grok on X, make sure your prompt responses can be backed by links and other data. Especially when you’re doing math! I can’t tell you how many times my AI tools mess up simple math.
Rephrase it: if AI can’t explain its own logic, it’s guessing. Ask the AI to write or explain its reasoning a different way; if the story changes each time, don’t trust the answer.
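For the math tip above, you don’t even need a chatbot to check the chatbot. A few lines of Python can verify a model’s claimed arithmetic before you reuse it. This is just a minimal sketch; the expressions and “claimed” answers below are made-up examples, not real model output.

```python
# Minimal sketch: double-check a model's arithmetic yourself.
# Each pair is (expression the model computed, the answer it claimed).
# These figures are illustrative assumptions, not real AI responses.
claims = [
    ("1200 * 0.15", 180.0),  # claimed: 15% of 1,200 is 180
    ("48 / 5", 9.5),         # claimed: 48 / 5 = 9.5 (actually 9.6)
    ("2 ** 10", 1024.0),     # claimed: 2^10 = 1024
]

for expr, claimed in claims:
    actual = eval(expr)  # fine here because we wrote the expressions ourselves
    if abs(actual - claimed) < 1e-9:
        print(f"{expr} = {claimed} -> OK")
    else:
        print(f"{expr} = {claimed} -> WRONG (actual: {actual})")
```

Running it flags the bad middle claim immediately, which is exactly the kind of “simple math” slip these tools make.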