Where are you, Soul? - Why AIs write and Humans create

SHORT ARTICLE / ESSAY

Yebyoung Kim

10/18/2025

What makes an article genuinely good and readable? The answer depends on who the reader is: everyone has personal priorities. Still, across many analyses, three key factors recur: language, structure and organization, and the connection the author builds with the reader through opinion, intent, emotion, and experience.

But how do these features apply to AI? More to the point, what is AI actually capable of ethically, and what worth lies behind its writing? AI may seem like an innovation dominating today's world, but many people still misunderstand its true strengths and limitations, so some clarification is in order.

The Strengths and Limitations of AI

Some strengths of AI include diverse output generation: AI can produce new ideas, stylistic variations, or perspectives that humans might overlook. Additionally, AI excels in data analysis, capable of processing and interpreting vast datasets far faster and more accurately than humans. In fact, this kind of data analysis, embedded in deep learning and neural networks, is what powers Large Language Models such as the Generative Pre-trained Transformers (GPTs) behind ChatGPT, enabling them to write on the basis of an immense amount of data.

However, these capabilities come with limitations. One major issue is common-sense reasoning: AI often lacks the intuitive, context-based understanding humans develop from lived experience. Another weakness is authenticity. AI may provide fluent, confident answers that sound correct but are subtly inaccurate or misleading, especially because of hallucination and the lack of experience or context. When a human writer's choices are backed by context and background, aspects that detractors might criticize can instead be understood as the author's unique authorial choice or style; AI has no personal experience or history to justify its choices, so the same mistakes become detrimental. Finally, bias is an inherent risk: machine learning systems learn from large datasets collected and labeled by humans, so biases in the data (statistical bias) and biases in model design (algorithmic bias) can propagate into the AI's predictions and writing style.

In more technical terms, these biases arise from how machine learning models generalize patterns during training. To take just one part of machine learning as an example, gradient descent optimization minimizes a mathematical loss function, but that function only measures how well the model fits its training data, not whether it has learned ethically neutral or universally fair patterns. Similarly, the feature extraction performed by neural networks captures correlations, not causation or the morality behind them, so models can unintentionally amplify societal or linguistic stereotypes embedded in the dataset; checking for such stereotypes is simply not what their training objective is designed to do.
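The point above can be made concrete with a toy sketch of gradient descent. This is not a real model, and the data below is entirely hypothetical; it only shows that the optimizer drives the loss toward whatever pattern the data contains, with no notion of whether that pattern is fair.

```python
# Toy gradient descent: fit y_hat = w * x + b by minimizing mean
# squared error. The loss only measures fit to the training data;
# any bias baked into the data is faithfully learned.

# Hypothetical training pairs (x, y), generated from y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # model parameters, starting from zero
lr = 0.05         # learning rate

for step in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w   # step downhill along the gradient
    b -= lr * grad_b

# The parameters converge toward the pattern in the data (w=2, b=1),
# whatever that pattern happens to encode.
print(round(w, 2), round(b, 2))
```

Nothing in the loop ever asks whether the recovered pattern is desirable; that question lies outside the loss function entirely.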

How AI Performs in the “Good Article” Criteria

Let’s now revisit the three qualities that define a good article and see how AI measures up.

1. Appropriate Language

A good article uses language suitable to its tone, audience, and purpose. For AI, this is arguably its strongest skill. Large Language Models (LLMs) such as GPT are trained on massive corpora of text across styles, genres, and registers, allowing them to recognize linguistic patterns statistically rather than intuitively. Through token probability modeling, predicting the next most likely word given the prior context, AI can emulate various voices and even mimic the stylistic nuances of writers like Shakespeare or Hemingway by adjusting the probabilities of the next word or diction choice; to match Shakespeare's style, for example, it raises the probability that the next token will be "thy" or "hath" during text generation.
Recent advances, such as transformer architectures, reinforcement learning from human feedback (RLHF), and instruction tuning, have significantly improved how LLMs adapt to specific tones or rhetorical styles. These methods fine-tune the model on curated datasets that teach it not only grammar and syntax but also stylistic coherence and human-like flow.
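A minimal sketch can illustrate the token-probability idea described above. This is not how a real LLM computes its distribution; the candidate tokens and their scores ("logits") below are invented for illustration. It only shows how softmax turns scores into probabilities, and how nudging a few tokens' scores shifts the style of the most likely continuation.

```python
# Toy next-token modeling: raw scores are converted to a probability
# distribution with softmax; boosting archaic tokens mimics a
# "Shakespearean" style adjustment. All numbers are hypothetical.
import math

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented logits for the token following some prior context.
logits = {"you": 2.0, "thy": 0.5, "thou": 0.8, "hath": 0.1}
plain = softmax(logits)

# Style nudge: raise the scores of archaic tokens.
styled_logits = dict(logits)
for tok in ("thy", "thou", "hath"):
    styled_logits[tok] += 2.0
styled = softmax(styled_logits)

print(max(plain, key=plain.get))    # modern token wins before the nudge
print(max(styled, key=styled.get))  # an archaic token wins after it
```

Real systems expose similar levers, such as per-token logit biasing in some text-generation APIs, though the probabilities there come from a trained network rather than a hand-written table.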

2. Structure and Organization

An effective essay requires clear structure and logical organization. Here again, AI performs remarkably well. Language models are essentially pattern-recognition systems, and once they identify the structural conventions of essays (introduction, body, conclusion, and transitions), they can replicate them flawlessly.
At a deeper level, the transformer's self-attention mechanism enables it to track relationships between words and sentences across long passages, allowing it to maintain logical flow and internal consistency throughout a text, something many human authors struggle with once the topics multiply and the plot grows convoluted.
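The self-attention idea can be sketched in a few lines. This is a bare, single-head version with tiny hand-made vectors and no learned weight matrices, so it is only a schematic of the mechanism: each position mixes in information from every other position, weighted by similarity.

```python
# Minimal scaled dot-product self-attention (single head, no learned
# projections). Each output row is a similarity-weighted blend of all
# value vectors. The token vectors below are hypothetical embeddings.
import math

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax over positions gives the attention weights.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three toy "token" vectors; in self-attention, queries, keys, and
# values all come from the same sequence.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(x, x, x)  # each row now blends all three positions
```

Because every position attends to every other, information from the start of a passage can directly influence the representation at the end, which is the mechanical basis of the long-range consistency described above.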

3. Connection with the Reader

This is where AI falters, and it is also the most crucial quality to consider: we write not merely to sound good or display great language and structure (outside of a language and literature class), but to convey ideas, messages, and personal thoughts and experiences. While AI can analyze emotional tone and predict what type of content might appeal to a given demographic, it does not feel emotion or possess empathy. Its understanding of human connection is purely computational, based on statistical inference of emotional cues rather than lived experience.
Moreover, because AI tends to produce text that is highly consistent and formal, as noted under criteria 1 and 2, it often lacks the imperfections, spontaneity, and personal rhythm that make human writing emotionally engaging, and it will likely stay that way, since users do not want an AI that makes the kinds of mistakes and shows the kinds of flaws they do. This mechanical consistency can make AI writing feel static and depersonalized even when technically well constructed, costing it readers' attention and affection and making different articles blur into indistinguishable sameness.

The Ethical Dilemma and Future of Authorship

So, is AI truly a good article author? Based on these criteria, the answer remains uncertain, and ultimately the judgment lies with you, the reader. How would you feel if this very article were written by AI? Would you be impressed or disappointed? Would you suddenly lose your affection for the information and stop cherishing the knowledge gained? Or would you simply take it as it is?

That emotional response is at the core of the ethical debate shaping our future. The question is not merely about performance or efficiency; it is about values and situations, since the impact may be minor when the text is purely informative, the kind we are used to from Google searches. Questions of authorship also arise: if AI becomes the author of bestselling books and viral essays, who deserves the credit, recognition, and financial reward? How can we preserve the meaning of "human poiesis" (the coevolving process of synthesizing human experience into creation) without surrendering it to algorithmic generation in literature and text?

What is certain in every case, however, is that at YAFI we strive for a future where AI-human collaboration eases the drudging, tedious parts of writing without erasing the touch of humanity that shapes the story, purpose, and cause behind particular plots or authorial choices. We believe that while AI can assist, organize, and amplify, the essence of why we write and enjoy reading - the emotion, the imperfection, the will to connect - must remain deeply and irreducibly human.
