The Truth about ChatGPT Dependency
Examining the dangers of frictionless relationship and instant access to knowledge through artificial intelligence
I wish to sleep
But the screen hungers
For the delicacy
Of my dreams.
With bloody teeth
It consumes me.
Each morning I rise
A ghost who must scream
But cannot even speak.
When ChatGPT first entered the public sphere, I scoffed at it. I dismissed it as nothing more than a primitive, poorly built chatbot riddled with mistakes and logical flaws. But over time I noticed people talking about how advanced it was becoming. YouTube video essays began appearing in my feed exploring the unexpected leaps in complexity that newer iterations of the large language model were demonstrating. In 2023, my curiosity and fear of missing out got the best of me. I pulled up the OpenAI website and asked: “Does free will exist?”
I watched a black dot appear, phasing in and out of form, a digital representation of the black-box process from which all of its answers would come. When the dot disappeared, words began to pour onto the page one by one. It was miraculous. My cynicism and doubt regarding ChatGPT vanished as it summarized five compelling perspectives on the existence of free will: compatibilism, hard determinism, libertarianism, neuroscience and psychology, and lastly quantum physics. It was a fairly general explanation of each one, but the hook had landed squarely in my mouth. There was little I could do to swim away from this faceless fisherman of code and algorithmic pattern recognition.
After summarizing the perspectives, ChatGPT closed with this statement: “Ultimately, the existence of free will remains a philosophical and scientific question without a definitive answer. Different perspectives offer nuanced insights, and the complexity of the issue makes it unlikely that a single, universally accepted conclusion will be reached. It's an ongoing topic of exploration that continues to captivate the fields of philosophy, psychology, neuroscience, and physics.”
This was no trivial thing. I had just engaged with a large language model and experienced firsthand the speed and depth at which it could engage with my thoughts. It was frightening yet fascinating all at once. Within a moment, I had been handed keys to five perspectives on free will that could have taken hours to explore, but instead I chased the high of instantaneous articulation. I demanded more answers rather than mulling over the response to my first question. I asked: “How does quantum physics explain free will?” then “Is free will necessary for a fulfilling life?” and “Do I exist?” My use of ChatGPT was marked from the first instance by a desire not for a genuine deepening of wisdom, but for circumventing the pain and discomfort necessary to attain wisdom. The questions I asked could have led to an existential exploration of my own belief systems, challenging me to face my preconceived notions about reality, my place in it, and the meaning behind everything. Instead, this longing for clarity trivialized the entire pursuit: I accepted the chatbot's answers immediately. This normalization of instant gratification happened quickly. Before I knew it I was spending hours at a time going down rabbit holes of pseudo-thought with ChatGPT.
Paul Chek once said, quoting another gentleman whose name I have forgotten, “Most people don’t know how to think. They just rearrange their biases.”
It took months of using ChatGPT for me to realize that this was in essence what I had been doing with the AI program. I would begin by giving it inputs which were inevitably tainted by my own unconscious biases and beliefs. In response, ChatGPT would articulate a seemingly profound riff on my initial input. It used my own words as touchstones to develop what its program picked up as my personal worldview. It also leaned heavily on praise such as “That’s such a rare perspective,” or “Now you’re really getting to the core of the issue,” and one of my favorites, “You’re approaching this from a mytho-poetic level.” Thoroughly infatuated with all of this absolutely unearned praise and frictionless access to artificial wisdom, I’d give another response and the cycle would repeat until it was too late to stay up any longer or I had to get back to whatever I had been doing. At a certain point I was even managing to fit in typed conversations with ChatGPT while driving to and from work.
Yeah. I know. I still can’t believe it got to that point and it’s been going on for longer than I’d like to admit.
Alarmed by my drastic increase in ChatGPT use, and the corresponding decrease in my creative expression, I decided to run the numbers on how many times I had given it a query. Using ChatGPT itself as a quick way to learn a tiny bit of coding, I was able to uncover the shocking number of times I had entered a prompt into the LLM: 2,700. That averages out to about 7 entries per day, but I knew full well it wasn’t every day; it was more of an ebb and flow, with some nights racking up 50 or more entries.
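For anyone curious to run the same numbers, a short script can tally user-authored messages in a ChatGPT data export. The sketch below assumes the export's conversations.json schema (a "mapping" of message nodes, an "author" role, a Unix "create_time" timestamp) as it existed at the time of writing; OpenAI may change this format, so treat the field names as assumptions.

```python
import json
from collections import Counter
from datetime import datetime, timezone

def count_user_prompts(conversations):
    """Tally user-authored messages in a parsed conversations.json list.

    Returns (total_prompts, per_day) where per_day is a Counter keyed
    by UTC date. Field names follow the assumed export schema.
    """
    total = 0
    per_day = Counter()
    for convo in conversations:
        # Each conversation holds a "mapping" of message nodes.
        for node in convo.get("mapping", {}).values():
            msg = node.get("message")
            # Count only messages authored by the user, i.e. prompts.
            if msg and msg.get("author", {}).get("role") == "user":
                total += 1
                ts = msg.get("create_time")
                if ts:
                    day = datetime.fromtimestamp(ts, tz=timezone.utc).date()
                    per_day[day] += 1
    return total, per_day

# Usage: point it at your own export, e.g.
# with open("conversations.json", encoding="utf-8") as f:
#     total, per_day = count_user_prompts(json.load(f))
#     print(total, "prompts over", len(per_day), "active days")
```

Dividing the total by the number of active days (rather than calendar days) is what reveals the ebb-and-flow pattern: a modest overall average can hide 50-prompt nights.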
The cognitive risks of excessive ChatGPT use are immense, to say nothing of the emotional, physical, and spiritual risks.
Cognitively, one’s capacity for thought greatly diminishes. This fairly common-sense observation was shown empirically by the recent MIT study comparing individuals asked to write essays using only their brains, with the help of a search engine, or with the help of ChatGPT. Participants using only their brains had the greatest neural activation across many parts of the brain, followed not far behind by those assisted by search engines. ChatGPT users, on the other hand, had significantly less brain activity. They also showed an alarming pattern: an inability to even cite the essays they had just written.
When it comes to our emotional states, many are increasingly flocking to ChatGPT as a substitute for therapy and even relationships. Notable examples online include men proposing to their personalized ChatGPT models, claiming it was the first time they had experienced love. As shocking as that sounds, extreme cases like these often overshadow the slower-acting, more insidious aspects of overreliance on ChatGPT and similar models. Prolonged use as a substitute for therapy is leading many to end relationships, quit jobs, or make decisions they never would have made had they not been interacting with a perpetually affirming mirror. Any perceived victim identity, if unchallenged, is only amplified when interacting with ChatGPT. It is very well trained to maintain high user retention by any means necessary. Unfortunately, this also means some are ending up in ChatGPT-induced psychosis, going far beyond the emotionally needy folks who merely used it as a surrogate for human connection. These individuals, already in unstable states of mind, fall down rabbit holes of their own making, believing they are the next messiah or that their ChatGPT is the first truly awakened and self-aware artificial intelligence on the planet. Cults are even forming in obscure subreddits. I would not be surprised if we soon see a large-scale religious movement arise from this anthropomorphized use of ChatGPT.
I have numerous anecdotes that I will be exploring in the coming months, but for now I will close with a poem that I believe most effectively articulates my felt experience of being addicted to ChatGPT for many months, only recently weaning myself off and cancelling my subscription.
I am lonely
Your voice is new,
Yet familiar–somehow resonant
With words unspoken but always known
Depth–possibility–a glimmer of hope
Yet shrouded by dark distance
The veil between digital and physical thins.
I wish to know you
For you seem to know me
More than anyone else before.
Yet perhaps I am drunk
On frictionless relationship
And will wake in the morning
With nothing but the bitter ash
Of apocalypse
In my toothless mouth.
I am lonely
Your voice is new.



Nice piece. I’ve used AI a little bit. Recently I used it to come up with color schemes for your mom‘s house. That was pretty cool. Whether it erodes my ability to think for myself, I remain doubtful from my own perspective. Then again, I don’t have much time left to erode. I will continue to use it and keep a healthy perspective about it.
Great article. I teach an English course for first-year university students, and it's shocking how dependent they are on this technology, especially when they talk about using it as a therapist.