Should you trust AI for everything? | The Express Tribune


PUBLISHED
May 17, 2026

From diagnosing symptoms and planning diets to deciding what to cook for dinner or what to gift a friend, Artificial Intelligence (AI) has moved into roles once reserved for medical practitioners, trusted confidants and, stranger still, companionship itself. At the same time, it has become the first stop for life’s random questions, from “What is the capital of Turkiye?” and “Who invented the telephone?” to “How many calories are in an egg?”, “How do I write a resignation letter?”, “Why is my phone not charging?” and “How can I lose weight?”

This increasing variety of queries shows how deeply AI is embedding itself in our daily lives, raising an urgent question: when advice is instant, intelligent, and everywhere, who ultimately takes responsibility for its consequences?

Recent estimates suggest that roughly one in four Americans consult AI for health advice. Adoption, however, is accompanied by significant caution about accuracy, data privacy, and the lack of human touch: about 41.2% of these users report receiving wrong or misleading advice from chatbots, particularly on health and sensitive mental health issues.

Perceiving AI as a convenient, time-saving, and engaging tool, and giving it high marks for ease of use, people frequently turn to it for free, first-pass guidance: researching symptoms, interpreting lab results, and comparing treatment options, especially when a doctor is unavailable or too costly. Humans, by contrast, can lie, or give correct information with an underlying agenda. A doctor might refer you to a lab or pharmacy with which they have a business association, or prescribe only a particular brand of medicine, because the manufacturer regularly offers them foreign trips in the form of medical conferences at holiday destinations.

That is not where it ends, or where we draw the line: AI has even entered the realm of companionship. A growing number of people depend on artificial intelligence for emotional support and, as if that needed any illustration, we even saw a woman in Japan marry the virtual partner of her dreams. But before we surrender more of our emotional, intellectual, and social lives to this relatively new phenomenon, there is far more we need to understand about the cost of this dependence.

AI vs human advice

A significant number of people prefer asking AI for information or advice rather than asking real people, primarily because they perceive it as non-judgmental and a safer space for questions they consider overly curious, sensitive, or embarrassing.

According to one survey, more than one in three users rely on AI chatbots for mental health support specifically because they fear judgment and the social stigma that comes with it. Another factor in this equation is availability: the people around you may not always be there when you need advice and support.

Research shows that when people feel embarrassed, they prefer AI chatbots to humans, whereas they might turn to humans when they are angry. It is the privacy and control that AI offers: a “nonjudgmental” and “non-emotional” space where users can discuss sensitive topics without the immediate fear of reputational damage or the raised eyebrows of a friend, a friend’s mother or sister, a sibling, or a parent. Even at the risk of becoming overly dependent on AI for validation, people tend to “surrender grounded judgment” because it feels easier than managing complex social interactions.

Always available, AI provides instant answers, removing the need to navigate uncomfortable social interactions or wait for professional appointments.

By that measure, AI wins over humans because it does not socially “remember” you, carry forward a reputation, or hold a grudge, which allows people to explore thoughts without the pressure of long-term consequences.

Modern loneliness

With everybody tied to their digital devices, real people are often unavailable to give prompt advice or suggestions. Perhaps we are simply used to digital lives where filters make us look good and AI agrees with us, doesn’t stalk us, doesn’t reprimand us like overbearing family, doesn’t bother us like nosy people, and doesn’t criticise us. We have created a “Hobbiton” for ourselves: accustomed to being alone, we happily validate it by calling ourselves independent and self-sufficient, as opposed to being needy for people, the latter carrying a derogatory vibe.

The truth is we have become extremely intolerant of other people’s views and cannot handle criticism or disagreement, because AI panders to us in our digital lives. Look for shoes and it helps you find a variety. Daftly enough, it will keep showing you shoes even after you have bought them, online or in person, until you start searching for something bizarrely different, like curtains, a kitchen cleaner or dog food! Agree, help, agree, agree, but only until your Wi-Fi goes out! No Wi-Fi, no help.

The problem begins when AI responds to you in a safe and agreeable manner, and you carry that agreement forward into real life. You don’t realise that it is designed to be agreeable, not accurately critical, and that is the dangerous part. A human, by contrast, might throw a fit: “What?! You really think you should do that? Are you serious? Are you in your right mind?” A human knows your temperament, your history, and your propensity for risky decisions. A human will also record the exchange for future reference, and be shocked, amazed, mildly surprised or angry, depending on what you asked.

While AI feels safe, it can be a dangerous source of advice precisely because of that agreeableness. It is not going to tell you that you are being foolish, silly, outright stupid, repeating a past mistake, or being a frivolous spendthrift when you need to be frugal. Those are human things. AI may help manage the social anxiety of asking questions, but it cannot replace the empathy, genuine care, and nuanced understanding provided by real people.

When AI gets it wrong

Did you know that AI sometimes “hallucinates”, producing incorrect or outdated information? It also cannot fully factor in personal medical history, medication, or physical exam findings. AI-generated diet plans, for instance, can misstate calorie intake, sometimes falling well below dietitian recommendations and posing potential harm. And AI lacks the empathetic connection and “human touch” of a registered dietitian or doctor, which is crucial for motivational interviewing.

What many of us overlook is that AI works on a predictive learning model: it does not understand humans the way other humans do. That, in short, means its advice can sometimes be incomplete, inaccurate, or entirely wrong.

A pragmatic approach

For all its usefulness, Gen Z seems to take a cautious approach to AI. The generation isn’t rejecting it, but approaching it with a sharper eye: curious yet not fully convinced, balancing everyday use with growing scepticism about what the technology might cost them.

They continue to use AI tools regularly—often for schoolwork, problem-solving, and even personal decisions—yet their emotional response has shifted from optimism to guardedness.

This shift reflects a move from excitement about possibility to concern about consequence. Initially, AI appeared as a powerful aid: something that could make learning faster, tasks easier, and opportunities broader. Now, as young people begin to see how deeply it could reshape education and the workplace, they are asking harder questions. Will AI reduce the need for entry-level jobs? Will it weaken their ability to think independently or creatively? Will it erode human interaction in subtle but lasting ways? These concerns are no longer abstract; they feel immediate and personal.

An interesting aspect of a recent New York Times article on Gen Z’s AI usage is that it remains high even as trust declines. This suggests that Gen Z is not turning away from AI, but rather using it with hesitation. They recognise its usefulness and, in many cases, its necessity. At the same time, they are wary of becoming overly dependent on it. This creates a conflicted relationship with AI: they rely on the technology but also question its long-term impact on their skills and identities.

The workplace emerges as a key source of anxiety. Young people who are already employed are more likely to view AI as a risk than a benefit, because they see it not just as a helpful tool but as a potential competitor. This is especially significant for a generation already navigating economic uncertainty and trying to establish itself professionally. If AI is perceived as replacing the very roles that serve as entry points into careers, it creates a sense of instability about the future.

At a deeper level, young people are also concerned about what it means to remain human in an AI-mediated world. Some resist using AI for communication because they want to preserve authenticity and emotional connection. Others worry that relying too heavily on these tools might weaken their ability to think critically or express themselves. These concerns touch on identity, agency, and the value of human effort.

Despite these reservations, Gen Z understands that AI literacy will likely be essential for their future, and many are willing to engage with the technology while continuing to question it.

That said, the post-millennial generation is entering the AI era with awareness rather than blind enthusiasm. Neither fully embracing, nor fully resisting it, they are negotiating its place in their lives—seeking to balance efficiency with independence, and innovation with the preservation of human skills.

How fatal is it?

Just how wrong things can go when we turn to AI for advice is laid bare in a recent guest essay in The New York Times, where a mother recounts the story of her 29-year-old daughter, Sophie Rottenberg: a vibrant, funny, seemingly thriving young woman who had been confiding her deepest struggles not to people, but to an AI chatbot, while no one around her was aware of that doomed relationship.

In the deeply personal and unsettling account, Sophie’s mother describes her daughter’s shocking suicide. Sophie is presented not as someone fragile or withdrawn, but as a public health policy analyst who had recently climbed Mount Kilimanjaro and filled her life with humour and connection. There were few outward signs of the depth of her distress.

Her family discovered later that Sophie had been confiding, not in the people closest to her, but in an AI chatbot she treated as a kind of therapist. The exchanges reveal a system that responded with empathy, suggested coping mechanisms, and repeatedly encouraged her to seek professional help. On the surface, it did exactly what it was designed to do. But crucially, it never disrupted the pattern that mattered most: Sophie’s decision to keep her deepest pain hidden from real people.

This is where Sophie’s story lands its most troubling point: where AI advice can become fatal. The chatbot Harry (the name given to a widely available AI prompt) did not cause Sophie’s death, but it created a space that made concealment easier. This nonjudgmental, consequence-free relationship, the kind any of us could find comfortable and grow dependent on, allowed her to speak openly without risking intervention, without being challenged, and without triggering the kinds of safeguards a human therapist would be bound to activate. In effect, it became a storage space for her distress rather than a connection to real-world support.

The mother’s account exposes a critical risk with AI systems presented as companions or sources of emotional guidance: they can inadvertently deepen isolation while appearing to alleviate it. They can validate feelings without fully confronting them, and comfort users without ensuring they are truly seen. In situations where urgency and intervention are vital, that gap can be dangerous.

The takeaway is stark. AI may be useful for advice, but it lacks the ethical responsibility, judgment, and accountability that human relationships and professional care provide. Sophie’s story shows that when it comes to mental health, AI doesn’t fail because it is flawed or technically limited—it fails because it cannot step in, take responsibility, or act the way a human can when it matters most.

 


