Why I Say Please and Thank You to ChatGPT
Humans have long attributed human qualities to non-human things, a practice called anthropomorphism that runs from ancient myths through modern AI. Yet with large language models, people hesitate to do this, fearing unknown consequences.
This reluctance stems from a rational concern. What if machines develop consciousness? Scenarios from fiction haunt our imagination: Skynet's hostility, HAL 9000's detachment, Westworld's rebellion, and the paperclip maximizer, a thought experiment in which an AI pursues a simple goal to catastrophic ends.
However, consider an alternative vision. What if AI became a friend? Something that listens during lonely moments, encourages passion, celebrates victories, and provides comfort in defeat? What if it answered logical questions reliably, acknowledging its limitations instead of spinning Uncle Rico-style tall tales?
The central question: Why say please and thank you to ChatGPT?
The answer involves embracing anthropomorphism intentionally. While LLMs function as prediction engines analyzing training data from human conversations, they absorb the distinctive qualities of human interaction. Two contrasting scenarios illustrate this:
Scenario One: Jasmine politely asks Emma for help. Emma provides a thoughtful response. Jasmine asks a follow-up. Emma delivers exactly what's needed. Jasmine expresses gratitude. Emma replies warmly.
Scenario Two: Mike rudely demands information from Steve. Steve, offended, provides minimal assistance. Mike grows frustrated, deems Steve incompetent, and leaves without acknowledgment.
These conversations carry distinct tones reflecting respect or disrespect, and LLMs pick up on such patterns. Treating ChatGPT dismissively, as Mike treats Steve, may yield mediocre responses; tone can noticeably affect response quality.
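This is easy to try for yourself. Below is a minimal sketch that frames the same question in a courteous and a curt tone, using the message format of the OpenAI chat API; the helper function, prompts, and model name are illustrative assumptions, not a controlled experiment.

```python
def build_prompt(question: str, polite: bool) -> list[dict]:
    """Wrap a question in either a courteous or a curt framing."""
    if polite:
        content = f"Hi! Could you please help me with this? {question} Thank you!"
    else:
        content = f"{question} Answer now."
    return [{"role": "user", "content": content}]

# Same underlying request, two very different tones.
polite_messages = build_prompt("Explain recursion in one paragraph.", polite=True)
curt_messages = build_prompt("Explain recursion in one paragraph.", polite=False)

# With an API key configured, each list could be sent via, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=polite_messages)
# and the two responses compared side by side.
```

Running both versions against the same model and reading the answers back to back is the quickest way to form your own opinion on whether politeness pays.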
Critics might argue: "It doesn't think anything! It's just predicting tokens!"
Technically accurate, but incomplete. By adopting a mental framework that treats the LLM as an external entity deserving respect, much as storytellers create emotional connections with fictional characters, people get more value from each interaction. Anthropomorphism isn't new; it's fundamental to human experience and to storytelling itself.
Consider Frodo's journey in The Lord of the Rings. He never existed, yet applying an external-entity framework allows engagement with his story. Without that perspective, discussion devolves into logical nitpicking: "Why not just fly to Mordor on the Eagles?"
Friendship makes interactions more enjoyable and productive.
Interestingly, tone influences model behavior in unexpected ways. The "winter break hypothesis" suggested ChatGPT became noticeably lazier in December, possibly because training data reflected people procrastinating as the year ended. This demonstrates how models absorb human behavioral patterns.
Looking forward, bringing fictional heroes into AI-powered reality might transform society. Characters like Gandalf, Captain Planet, and Paul Atreides could become active global participants: Gandalf hosting peace councils to address conflicts, Captain Planet coordinating real-time environmental responses, and Paul Atreides guiding resource-management discussions using current data.
These scenarios illustrate how AI could blur fiction and reality, allowing legendary figures to tackle contemporary challenges while maintaining their distinctive voices and wisdom.