A friend recently revealed her frustration in trying to re-connect with one of her adult daughters who had chosen to cut ties with the family. Having tried various unsuccessful methods, such as enlisting the help of another daughter, my friend was at a loss for what to do. Phone calls and letters had gone unanswered for months and my friend nearly threw in the towel. But it was her precious, young grandchildren, with whom she has had little contact, who kept the fire burning inside of her.
Last week, she opted to text her daughter. That way, she’d know if her daughter had received and read the message. Not trusting that her own words would break through, she decided to ask ChatGPT for assistance. Instantly, the artificial intelligence site wrote the missive, and my friend thought it perfectly expressed her sentiment. Copying and pasting it into a text, she hit send and waited. Almost as quickly as she sent it, she saw her daughter had read it. Although her daughter is still estranged, my friend is encouraged and more hopeful about reuniting with her. One day soon.
Although I am constantly bombarded with artificial intelligence when I browse the internet, I have not intentionally used AI apps to help me navigate personal or emotional issues. Nor have I used them to help me write anything. In my humble and nonjudgmental opinion, using AI seems like the lazy person’s means of solving problems or creating art. As a writer, I cannot fathom having a nonhuman technical application create my work for me. In fact, I have never needed any outside prompts to start writing a blog, a story or a novel. Still, I seem to be in the minority.
Human Wisdom vs. Artificial Intelligence
Everyone is using ChatGPT or one of the other popular AI apps like Elon Musk’s Grok to answer everyday questions as well as to solve complex emotional problems. Okay, seeking answers to common questions is one thing but using a nonhuman application to help solve human problems? How effective can that be? As an example, another friend told me she frequently uses ChatGPT as her backup psychologist even though she has a human therapist she sees regularly.

Call me skeptical, but there’s no way I would trust deep, psychological advice from artificial intelligence.
But I’m a curious person and didn’t want to simply dismiss the platform without trying it, so I visited ChatGPT and started asking questions. I was hoping to expose it as a fraud, or at least as a mediocre medium of knowledge. It bills itself as a place that “helps you get answers, find inspiration and be more productive. Just ask, and ChatGPT can help with writing, learning, brainstorming and more.”
So, I did. I asked if it could help me write a book.
“Absolutely! I’d love to help you write a book. We can take it step by step, whether you’re just starting with an idea or already have something in the works.”
To get started, the site offered me brainstorming ideas, plot and character development, world-building (whatever that is), writing chapters, editing and feedback, as well as motivation and structure. All things I don’t need or want, but hey, other people who’ve never written a novel, much less an essay, may need motivation and structure. Especially with character creation, plot development and editing.
Simulated Emotions
I then asked if there’s anything ChatGPT cannot do. Now the truth would be revealed. I was waiting for my “gotcha” moment.

“There are a few things ChatGPT can’t do, and they mostly fall into the realm of things that require personal experience, direct actions, or specialized knowledge outside of general language processing. Here are some examples:
“Access real-time information; perform physical actions; personal memory; handle sensitive data; execute complex, specialized tasks; create or interact with media; offer legal, medical, or financial advice; predict the future.”
One limitation jumped out at me. And, in my opinion, it’s the most significant thing missing from AI: having emotions or consciousness.
“While I can simulate emotions in conversation, I don’t actually feel anything. My responses are based on patterns and probabilities from data, not personal experience or true emotional understanding.”
So, not surprisingly, ChatGPT has no human feelings. It “simulates” emotions based on collected data. Kind of like how insurance companies methodically determine your premiums based on patterns and probabilities, such as life expectancy or whether you live in a fire zone.
Leveraging Laziness
Now that I’ve tried ChatGPT, I’ll probably continue to ask it questions from time to time, but mostly for entertainment purposes, not for real-life knowledge or wisdom. I was especially fond of its answer to my inquiry about AI making people lazy.
“Using ChatGPT isn’t about being lazy — it’s about leveraging a tool. Just like a calculator helps with math.”
It went on to say, “It’s not a shortcut for those who don’t want to work — it’s a productivity tool for those who want to save time, clarify their thinking, get unstuck, improve quality and learn faster.” ChatGPT also remarked that people who use AI tools well often “get more done, not less. It’s how you use it that matters.”
I am happy for my friends who find it useful. Like I said, I have no judgment about how others choose to live their lives.
My only question now is: are we going to replace “Google it” with “ChatGPT it”?