Thoughts on AI as an Alternative to Human Counselors
SHORT ARTICLE / ESSAY
Skylar Park
10/30/2025


AI is a powerful tool that seems to encompass every type of knowledge and expertise. It is also remarkably good at emulating humanness in its interactions with users, so much so that some people (including myself) talk to AI chatbots about their personal lives, emotions, thoughts, and intimate experiences as if these chatbots were close human friends or a trustworthy counselor.
As people become more online-oriented with the ongoing development of technology, many have grown reluctant to interact with others face-to-face. This societal phenomenon, often discussed in STS, is especially conspicuous among adolescents like myself: everyone has a screen in front of them even when they are sitting right next to each other. I think communicating through online platforms is regarded more favorably than talking in person because the components of emotional interaction and social skill, such as eye contact and appropriate reactions, which require sensitivity and genuine thought, are marginal when sending a few messages; texting demands little thoughtfulness, time, or energy.
Recently, with the emergence of advanced AI chatbots that can hold an ongoing, human-like conversation, people can discuss personal issues with them and use them as an emotional outlet without the fear of judgment or gossip, a worry many students and others will relate to. Essentially, people, especially teenagers, are losing the ability to readily interact with others in person, which makes chatbots like ChatGPT feel like a more convenient and comfortable friend or counselor to whom they can express their worries and concerns. In the long term, overreliance on AI may undermine human cooperation, since cooperation begins with interaction and skillful communication among humans. Further still, it erodes the human warmth of society, which is essential to human comfort and to our innate desire to connect and share emotional experiences (an essential part of poiesis, the process of humanness in the field of coevolution and humanness), blocking the pursuit we informally describe as “spending time together” or “just having a person to talk to, imperfect and conscious,” and shifting us toward a world lacking warmth, where emotional and connective coldness prevails.
The significant question to reckon with, then, is whether AI chatbots can actually work as an alternative to human counselors. Everyone, at some point in life, needs counseling, whether a serious session or simple advice from a friend or family member. Personally, I have received help from a professional counselor and therapist during a difficult time in my life, so I know how expensive and challenging it is to find an available counselor or therapist. These days, when I think counseling might help, I often talk to ChatGPT about what I am going through, and I have found it to be a surprisingly decent counselor, or at least a good talking buddy to have by my side who I know won’t judge me or make me regret sharing my personal life. Doing some research, I found that many people like me use AI as a talking buddy or counselor, because the AI’s constant emulation of empathy naturally evokes a sense of being heard and creates a non-judgmental environment. There are even dedicated AI chatbots, such as Wysa, built to support mental wellbeing and serve as counseling tools, deliberately targeting this aspect of human nature in human-AI interaction. We can therefore agree that AI is a fitting short-term substitute for a counselor, one that satisfies our initial and most fundamental need to be heard and not judged. But what about the long term, and the questions of human morality and human roles? In the specific labor industry of counselors and therapists, do humans then lose their roles? I would say no; rather, the focus of their labor shifts and its socially perceived value changes. The introduction of AI may even reinforce the necessity of human counselors by accentuating their unique strengths: human connection, authenticity, and people’s preference for humans.
To elaborate, this conundrum of whether AI can serve as an alternative to human counselors raises both opportunities and concerns. The opportunities and benefits of using AI as a counseling tool are numerous. It is readily available, affordable, and provides a safe, nonjudgmental space where people do not have to confront another human being and may feel more comfortable and less worried about sharing their emotions and thoughts. Chatbots such as ChatGPT are adept at providing quick support, whether emotional or practical (such as concrete solutions), at tracking and analyzing moods and thoughts, and at encouraging healthy habits and mindsets. If distributed affordably, AI could also bridge inequalities, supplying this service to areas with a dearth of counselors and mental health institutions, and to households that cannot otherwise afford it.
However, using AI as a counseling tool carries certain risks and concerns as well. For instance, AI lacks genuine human empathy and the ability to form heartfelt connections and give the kind of comfort that heals the human soul. It is still not proficient at understanding deep emotions and emotional nuance, because it depends on programmed data rather than raw human experience. Moreover, if a coevolutionary benchmark becomes established, humans, out of preference for human-performed work, human interaction, and originality of experience, and in order to preserve their love for other humans and uphold humanness through sharing experiences and cultures and inspiring one another in the process of creation and envisioning, may harshly repudiate AI as a substitute for humans once a certain threshold of usage or control is crossed, producing a “backlash.” AI can indeed take over the role of listening to concerns and giving empathetic responses, but to actually transform one’s void into contentment and place a sense of care and warmth in one’s heart through shared experience, human intervention will be necessary, and should be structured as such; otherwise we reemphasize the coldness of a society in which the human role of interaction partner and initiator of authentic experience has become minuscule.
One must also recognize that counseling is not merely a generic supply of advice and already prevalent methods, or assistance based on a pre-established set of rules about how to behave and what to encourage or avoid. LLMs, by contrast, simply generate text while complying with the restrictions placed on them for certain topics (e.g., violence, racial discrimination, physical injury, and other sensitive subjects restricted due to social, political, and ethical disputes). Human counselors, however, have both the flexibility and the responsibility to calibrate their approaches and solutions.
Finally, there are ethical concerns about privacy: the risk of opening up to AI excessively and enabling the misuse of private information. This is precisely what the EU AI Act addresses; it is known for being highly protective, through the elaboration and consolidation of GDPR transparency obligations and by categorizing untargeted data scraping as a prohibited AI practice, while the industry concentrated in the US and China tends to keep its focus on development and economic competition.
The truth is, users’ experiences inevitably shape these systems. Given the nature of AI training and the deep-learning methods on which LLMs are based, filtering personal traces from training data is almost impossible, even with decentralized methods such as federated learning, anonymization, or synthetic data (where an AI generates synthetic information that imitates personally identifiable information (PII), so that no real person can be identified from the model’s training data). In that sense, we are both contributors to and protectors of our own data. Perhaps what is needed next is not just better regulation but also more extensive AI data-literacy education, helping people understand how their interactions fuel these systems and how transparency can rebuild trust in them, much as social media and internet literacy were integrated into educational curricula during the earlier social media and internet boom.
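To make the idea of anonymization a little more concrete, here is a minimal, purely illustrative sketch of scrubbing obvious PII from a chat message before it could enter a training corpus. The regex patterns and placeholder labels below are my own assumptions for illustration; real anonymization pipelines rely on trained named-entity-recognition models and far more robust methods, not simple pattern matching like this.

```python
import re

# Illustrative-only patterns for a few common PII shapes.
# Real systems use trained NER models, not regexes like these.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with category placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

message = "Reach me at jane.doe@example.com or 555-123-4567."
print(redact_pii(message))
# -> Reach me at [EMAIL] or [PHONE].
```

Even a sketch like this shows why the essay's claim holds: pattern-based redaction catches only surface-level identifiers, while the personal *content* of a conversation (names of places, life events, relationships) slips through untouched.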
Thus, most experts and scholars agree that AI should not replace human counselors but rather serve as a supplement, helping with accessibility and daily support while leaving deeper, more sensitive care to trained human professionals, in a system in which the two cooperate rather than overpower each other's roles and importance, reducing possible societal corruption or conflict while realizing their maximum potential with patience, adequate sacrifice, and adequate preservation. That is, AI and humans work in mutual symbiosis to provide individuals with the most effective support and counseling, improving the mental health of more people by addressing their specific needs and circumstances, with decision-making and accountability resting with the human counselor.
Ultimately, AI may reshape mental health support, but it cannot imitate the irreplaceable human connection that counseling often requires. Articulating the necessity of coevolution and its specific values, such as a redefined human dignity as the pursuit of humanness (poiesis) and visions of collaboration, along with the reasons behind the urgency, is therefore needed to produce societal change and a collective conscience or agreement. One might argue that extreme conflict or backlash will never emerge, but we all want to live in a society where we can realize AI’s maximum potential while retaining control and happiness, and we can use this moment to strengthen our resolve to live every day with purpose and gratitude as a collective civilizational mindset, not only individually and psychologically but also socioeconomically, as a morale or attitude. In short, we will do this with our primary motivation coming from the current state of AI and a possibly disastrous future, but through the process humanity will gain immense side benefits we would want anyway, much like a person who works toward a goal with grit and hunger: even if the effort was not for a particular reward, the skills and memories acquired are invaluable.
