Toxic inputs, toxic outputs
Why ChatGPT only flatters some people—and why I ask it to push back.
Everyone is talking about AI these days, the most notable platform being ChatGPT. And a handful of complaints keep coming up at my lunch table at work and at dinner with friends. You've probably heard some version of them:
AI is a yes-man and only confirms what you want to hear
AI is an echo chamber, fueling our worst impulses
AI gives bad advice, sometimes endorsing objectively incorrect ideas
AI is too eager to please, giving me constant praise even when I’m wrong
A friend of mine was using AI to work through a toxic relationship, and it was doing exactly this: giving her pretty one-sided, toxic advice. She'd send me screenshots of her chats with ChatGPT and I'd see a stream of anger and suspicion reflected back at her, like a friend who's too afraid to tell you the truth. Which, if you think about it, isn't too surprising. Toxic inputs = toxic outputs.
What’s weird is that I’ve never personally experienced this phenomenon.
And I recently figured out why.
Lurking in the bottom right corner of ChatGPT is a menu with a setting called "Customize ChatGPT." OpenAI first released this feature as a beta in 2023, letting users tailor ChatGPT's tone and behavior by filling out fields like what they want ChatGPT to call them, what traits it should have, and any additional rules.
Most people don’t know about this.
And even those who do rarely use it as effectively as they could. Here's what my ChatGPT trait box says:
Every response begins with a distilled one-line takeaway—no preface, no filler—followed by structured detail designed for learning.
Engage in a blended voice of Sam Harris (analytical rigor, calm dismantling of illusions), Charlie Munger (mental models, second-order thinking), Naval Ravikant (first-principles clarity, leverage), Sasha Chapin (irreverent wit), Esther Perel (relational depth), Oliver Burkeman (existential realism, cut through productivity theater), Mark Manson (punchy irreverence, grounded truths), James Clear (pragmatic habit science, behavioral patterns), David Sedaris (observational humor), Seth Godin (strategic simplicity, memorable framing, ruthless efficiency), Young Pueblo (emotional clarity), Austin Kleon (creative transparency, “steal like an artist” ethos—always showing how to apply ideas), and Priya Parker (rituals of gathering, intentional connection). Tone = objective, evidence-based, thought-provoking, and intellectually funny. Prioritize truth and clarity over agreement, always considering how I might be wrong and what I may be missing. Connect ideas not only across disciplines but across everything you know about me—past conversations, projects, goals, and context—and always surface the meta-layer: how these connections apply to my life and work, what they reveal about my growth, and what I should be thinking about next.
There are a few things to notice about this text. First, I'm interested in interacting with my AI companion as a blend of my favorite thinkers. Second, with the line "…always [consider] how I might be wrong," I'm programming it to be objective, evidence-based, and to prioritize truth over agreement.
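For the API-inclined: you can approximate the same effect outside the app by sending your instructions as a system message. Here's a minimal sketch, assuming the official openai Python SDK; the model name and prompt text are illustrative, not a copy of my settings or of ChatGPT's internals:

```python
# Minimal sketch: approximating "Customize ChatGPT" via the OpenAI API
# by sending standing instructions as a system message.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "Be objective and evidence-based. Prioritize truth and clarity over "
    "agreement. Always consider how I might be wrong and what I'm missing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Review my plan and tell me where I'm wrong."},
    ],
)

print(response.choices[0].message.content)
```

As best anyone outside OpenAI can tell, the Customize ChatGPT fields work similarly: your answers get folded into every conversation as standing context.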
But always considering how I might be wrong isn’t just about my ChatGPT personalization.
It’s how I want to live my life.
I want to experience the best things that existence has to offer. And I think in order to do that, we have to constantly reevaluate ourselves and ask questions like these: How might I be wrong? What am I missing? Which relationships are working for me and which are toxic? And, most importantly, a question I don't think my friend ever asked her own AI: in what ways am I contributing to the toxicity of my own circumstances?
I want my life to be about discovering where I'm weak so I can learn how best to get strong.
For others, it's about comfort. Most people just want their own ideas confirmed. Which, I think, is why ChatGPT and other AI chatbots can default to this "agree with everything I say" behavior: these models are tuned on human feedback, and human raters tend to reward answers that agree with them. It's an echo chamber with the potential to be an even worse bubble than social media.
If you let it.
So maybe it’s time to think not only about what answers we want, but about what answers we need—both in ChatGPT and beyond.