This as-told-to essay is based on a conversation with Leigh Coney, a 34-year-old AI consultant based in France. It’s been edited for length and clarity.
AI models tend to be “yes-men.”
They are sycophantic by design, meaning they agree with us, support our ideas, and want to help. Part of the reason I think so many AI projects fail is that the human factor is overlooked. The same biases that apply to us also apply to AI, so it’s important to factor in psychological principles when building experiments, agents, and automation.
I decided to pivot from teaching psychology at a university to becoming an AI consultant after Microsoft announced Copilot for its products nearly three years ago. At that moment, I realized AI would soon be in every business.
Now, I build custom AI automations and agents for businesses across many industries to increase efficiency and growth, using my psychological background to interact better with AI. Although ChatGPT was updated to make it less sycophantic, we still have to make an extra effort to be critical of our ideas and question them if we want to improve our thinking or our work.
Ask AI to challenge your ideas
A standard ChatGPT prompt might not challenge a flawed plan.
When I’m talking through an idea, I ask AI to point out assumptions I might be making. My goal is to uncover things I’m not thinking about by asking questions like “Where am I not being clear in my thinking?” or “What am I overlooking?”
Specify your audience to uncover new perspectives
AI is particularly useful for expanding our thinking by uncovering perspectives we could be missing.
Let’s say you’re pitching an idea to a CFO. Tell your chatbot to “Act as a skeptical CFO and ask five hard-hitting questions. Don’t be shy. Be harsh.”
Not only will it prepare you for the pitch, but it will also give you better output. The questions may even be more valuable than the answers because they force you to consider things you hadn’t thought about before.
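If you work with a model through its API rather than the chat window, the same persona trick can be set up as a system message. Here is a minimal sketch, assuming the OpenAI Python client and a gpt-4o model; the pitch text and prompt wording are illustrative, not a fixed recipe.

```python
# Minimal sketch: asking the model to play a skeptical CFO via the API.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name, pitch, and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

pitch = "We want to spend 20% of next year's budget on AI automation."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a skeptical CFO. Ask five hard-hitting questions "
                "about the pitch you are given. Don't be shy. Be harsh."
            ),
        },
        {"role": "user", "content": pitch},
    ],
)

print(response.choices[0].message.content)
```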
Use the ‘framing effect’
A surgery with a 90% survival rate feels different than a surgery with a 10% mortality rate. That’s the framing effect. Little wording tweaks in our prompts not only change how we feel, but can also change how AI responds.
The way you should frame a question to AI depends on what you’re trying to do.
Let’s say a team is facing a setback at work, and a manager is using a chatbot to write an email to their employees. If the manager prompts the chatbot to “explain the project delay and the problems they encountered,” that’s a negative framing, and the response will be more critical.
A more positive framing would be to say something like, “Draft a project update for the team. Frame our recent challenge as a critical learning moment that has revealed two insights for making the final product even better. Focus on our resilience and the path forward.”
If I’m using a chatbot to work on something important, I test out many different versions of my prompt in different chats. I tweak some words, sometimes just one, and it actually makes a really big difference in the response I get.
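For anyone who scripts this instead of opening separate chats, the same habit can be automated: send each framing of the prompt as its own request and compare the drafts side by side. Below is a minimal sketch, again assuming the OpenAI Python client and a gpt-4o model; the two framings are paraphrased from the manager example above and are only placeholders.

```python
# Minimal sketch: testing differently framed versions of the same prompt.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment variable;
# model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

framings = {
    "negative": "Explain the project delay and the problems the team encountered.",
    "positive": (
        "Draft a project update for the team. Frame our recent challenge as a "
        "critical learning moment that revealed two insights for making the "
        "final product even better. Focus on our resilience and the path forward."
    ),
}

# Run each framing as a separate request and print the drafts for comparison.
for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```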
I’m not as concerned about AI as I used to be
The speed at which AI was advancing made me a bit concerned about jobs and employment, but since the GPT-5 release, which I personally found underwhelming, I think we’ve got a lot more time than I previously assumed. To me, this is good news for our job market.
Learning about cognitive biases is worth it. It improves how we think and communicate with each other and with AI, and will ultimately lead to better output.
Are you an AI expert with tips to share? If so, please reach out to the reporter at tmartinelli@businessinsider.com.