Thinking Inside the Bot

ChatGPT is everywhere. But is it a blessing or a curse?

What do Arjun Mehta, caught in a relationship dilemma, and Priya Sharma, a student at Delhi University, have in common? Both have the “bot”. They turned to the same source for help—Chat Generative Pre-trained Transformer, or ChatGPT. For Arjun, who admits he was “spiralling like a Shakespearean hero” over a girl he was dating, ChatGPT didn’t offer a cheesy pickup line or a cliché like “just move on.” Instead, it served up a disarming question: “What’s your goal—connection, or reassurance?” That pause gave him perspective. On the other hand, Priya went down the slippery slope. During the mid-semester exams, she found herself overwhelmed. Desperate and short on time, she asked ChatGPT to write a 1,500-word essay for her assignment. A week later, her professor called her in. “This doesn’t sound like you,” he said. After further questioning and a review by the academic committee, she was suspended for the semester, her scholarship was withdrawn, and her academic record flagged.

There was a time when advice lived in very human places. You cried on a friend’s shoulder, argued with a parent, scribbled down your worries for a therapist, or nervously dialed your doctor. These days, people are skipping all that and typing their most personal questions into a ChatGPT chat box. A bot becomes confidant, coach, sounding board, and sometimes even relationship referee.

A new paper from OpenAI, Harvard, and Duke reveals that while ChatGPT now has 700 million weekly users sending 2.5 billion messages daily, its usage has shifted from primarily work-related tasks to more personal and informational queries. Also, by mid-2025, ChatGPT users shifted from being predominantly male to majority female, with nearly half of messages coming from users aged 18-25. Usage is growing four times faster in low-income countries than in high-income ones.

The bot has taken over life. It has stealthily invaded people’s most private spaces, reaching into their deepest fears, insecurities and desires. It advises a desperate lover on how to gain the attention of the object of his desire. It offers easy suggestions to people struggling with depression, shyness, or despair. It helps writers and designers come up with quick plots and synopses for scripts to be submitted to publishers or OTT producers. It even enters the legal world to draft petitions and briefs. It does homework, writes PhD theses, and helps executives handle stress, a nasty boss, or a difficult colleague. It drafts job applications and recipes. So what is left? Total dependence on a bot that you’ve allowed to control your life. A bot that could take you down the better path or the garden path. It could end in promotions, marriage, or despair.

Take interior designer Shuchi Jain, for instance. On a quiet evening in Indore, she leans over her desk. She is not sketching floor plans, but chatting—with a bot. She types in half-formed ideas for a client brief, and receives a cascade of neatly structured suggestions: mood board descriptions, material combinations, even references to design principles she hasn’t studied in years. For Jain, ChatGPT is an indispensable creative partner. “I put all the to-dos, tasks, events, and the site stage with process, which helps structure timelines, task lists, and project workflows, ensuring I don’t miss important steps,” she says.

But are we starting to outsource too much? Ravi Iyer, a senior software engineer working in a tech firm in Bengaluru, learnt it the hard way. One evening, after returning from a short trip to Coorg, Ravi noticed a strange, circular rash on his thigh. Concerned, he asked ChatGPT to diagnose it based on the symptoms. The chatbot suggested it could be a mild fungal infection and recommended common antifungal creams. Over the next week, the rash spread rapidly. When Ravi finally consulted a dermatologist, the diagnosis stunned him: Lyme disease, likely contracted from a tick bite during his trip. The delay in treatment had allowed the infection to spread, requiring a longer and more aggressive course of antibiotics, and a two-week medical leave from work.

“As a psychiatrist, I see one of the deepest concerns around ChatGPT being its subtle impact on how children and young adults learn to think.” - Dr Sandeep Vohra, Founder, No Worry No Tension Healthcare
“Today’s generation, especially those in their 20s and 30s, expects answers on demand. ChatGPT feels personal, almost like having a guide available anytime you need.” - Rasshi Gurnani, astrologer and psychologist

In India—a nation where nearly half the population struggles with functional literacy, and where access to good teachers and healthcare remains scarce—the arrival of a free, conversational assistant has been nothing short of revolutionary. Take 17-year-old Renu from Baran, Rajasthan. “My English teacher is good, but she has 60 students in class. With ChatGPT, I can practise English whenever I want. It even explains grammar in Hindi if I get stuck,” she says. For 42-year-old shopkeeper Pramod Yadav in Gorakhpur, it is about dignity. “Earlier I had to call my nephew in Delhi every time I had to write an email for GST or order stock. Now I just ask ChatGPT. It writes everything perfectly,” he says.

In low-literacy contexts, many hesitate to approach teachers, bank officers, or government staff, worried about sounding foolish. ChatGPT offers anonymity. People can ask it anything and receive answers in seconds. But the same trust that empowers can also mislead. Experts worry that without a culture of cross-checking, misinformation can spread quickly. “People often believe what they see written, especially if it is in English,” says Dr Anjali Mehta, a sociologist studying technology use in rural India. There’s also the question of dependency. A college student in Lucknow admits that he uses ChatGPT for all his assignments. “Why waste time writing when it can do it for me?”

The Digital Confidant

Across India, where the stigma around relationship counselling still lingers in many social circles, an increasing number of urban millennials and Gen Zs are turning to ChatGPT for guidance in matters of the heart. From decoding mixed signals on dating apps to navigating arranged marriage setups, it is fast becoming a secret therapist, coach, and diary rolled into one. There’s no fear of judgment, no emotional burdening of another person, and no risk of gossip in tight-knit social groups where privacy is hard to come by.

Yet, this trend also opens up questions. How reliable is a chatbot as a relationship counsellor? Is it ethical for people to lean on something that lacks emotional intelligence, even if it mimics it so well? Dr Neha Kapoor, a relationship psychologist based in Delhi, acknowledges the tool’s value but cautions against over-reliance. “ChatGPT can be great for introspection, but it cannot replace the empathy and accountability of human therapy.” Still, for many, it serves as a bridge. In a country straddling centuries of romantic conservatism and a digital-forward youth, these chatbots quietly slip into the role of modern confidants. They listen endlessly, ask just enough, and never interrupt.

“When employees see machines handling tasks they once owned, it can trigger anxiety about relevance and long-term employability.” - Murali Santhanam, CHRO, AscentHR Technologies

Likewise, a growing number of retail investors are beginning to supplement their research with AI-generated insights. They feed it prompts about Indian companies they have shortlisted: names like Tata Elxsi, Hindustan Aeronautics, and Deepak Nitrite. Then they ask the model to analyse balance sheets, interpret quarterly earnings, and evaluate sectoral trends and the future outlook. Skeptics, of course, question the wisdom of relying on a language model that is neither trained on real-time stock market data nor capable of understanding irrational market sentiment. But many first-time investors today are using ChatGPT as a sounding board, not a crystal ball. They cross-reference the model’s insights with data from SEBI filings, screener.in, and market commentary. After all, a chatbot has no brokerage affiliations and doesn’t benefit from volatility.

The legal world, long defined by tradition, is undergoing a subtle but profound shift, too. For lawyers managing high caseloads and navigating mountains of paperwork—FIRs, judgments, affidavits, witness statements, and procedural records—the emergence of tools like ChatGPT is reshaping the way work gets done. What once required hours of reading, summarising, and structuring can now be completed in minutes with the assistance of intelligent language models. Lawyers who were initially skeptical now find themselves returning to chatbots not for novelty, but for necessity. The Indian legal ecosystem is still catching up with this new reality. The Bar Council has yet to issue specific guidelines on the ethical and professional implications of such practices. Questions abound: Should AI-generated content be disclosed in filings? Who bears responsibility for inaccuracies? What about hallucinated precedents? These are valid concerns, but they also reflect a growing awareness that ignoring these tools is no longer viable.

Across India’s creative industries, from the bustling studios of Mumbai to co-working cafes in Bengaluru, what began as playful experimentation—asking a chatbot for character sketches or punchier dialogue—has grown into something deeper: collaboration. Writers, designers, comedians, and creators of every kind are finding that generative AI doesn’t replace the creative spark, but often amplifies it. For many, the tool has become an invisible sounding board—a place to test narrative arcs, explore emotional tones, or provoke new directions.

“The responsibility lies in ensuring accuracy. Pharma already struggles with public distrust, and if errors or biases creep in, it could deepen that suspicion rather than ease it.” - Dr Prashant Bagali, Head of Scientific Affairs, MedGenome
“ChatGPT has made information more accessible in ways that earlier systems, like Google, never could. It saves physicians time on basic explanations.” - Dr Santosh Shetty, Executive Director, Kokilaben Dhirubhai Ambani Hospital

But ethical concerns persist. Is it a form of cheating? Does it dilute originality, or simply reflect a new kind of authorship? In an industry long shaped by collaboration—between writers, directors, lyricists, editors, and more—does adding one more voice, even if synthetic, fundamentally change the process? Yet, the allure remains.

The Doctor is In

Not long ago, if you had sudden chest pain, you turned to Dr Google, who offered either a life-threatening diagnosis or an ad for digestive pills. But now, doctors—and their patients—have a new digital sidekick. Over at Kokilaben Dhirubhai Ambani Hospital in Mumbai, Dr Santosh Shetty, CEO and Executive Director, has welcomed ChatGPT into the hospital like an eager junior doctor. “It has made information more accessible,” he says, adding that ChatGPT is great at breaking down complex medical gobbledygook into something patients can actually understand. “It saves physicians time on basic explanations,” he adds.

Meanwhile, at Artemis Hospitals, Dr Shashidhar TB, Head of Surgery (ENT), is using ChatGPT like a personal assistant. It cranks out patient summaries, referral letters, and discharge notes like a pro. “Conditions like diabetes, hypertension, or cancer don’t need a PhD to explain,” he notes. But let’s not get carried away, warns Dr Shetty. In rural areas, the chatbot is being used to dish out first-aid tips and health advice. “It should serve as an assistant, not a replacement for professional judgment,” he insists.

The pharmaceutical industry has jumped on the bandwagon too. John Dawber, Corporate VP & MD, Novo Nordisk GBS, says they’re using LLMs (like ChatGPT) to help scientists with research. They’ve even got an internal bot called FounData. “It surfaces emerging trends and biomarkers faster than traditional methods,” he says. Over at MedGenome, Dr Prashant Bagali, Head of Scientific Affairs, is also riding the bot wave. “Tools like ChatGPT are easing the burden by scanning data, summarising findings, and offering quick syntheses.” He’s also a fan of using the bot for regulatory paperwork and consent forms, which are famously dense.

“ChatGPT also risks masking originality and compromising authenticity. Instead of developing their own voice, individuals may become dependent on external prompts.” - Upasana Raina, HR Director, GI Group Holding

One of ChatGPT’s fastest-growing uses is in mental health. Young people in particular report turning to it late at night, when no counsellor or friend is available. “It doesn’t judge me,” says Latika Johri in Moradabad, Uttar Pradesh. “I live in a place where I can’t go out for therapy, so I find solace in ChatGPT.” But mental health professionals worry about this quiet shift. A bot can simulate empathy, but it cannot recognise red flags like suicidal tendencies. “As a psychiatrist, I see one of the deepest concerns around ChatGPT being its subtle impact on how children and young adults learn to think,” says Dr Sandeep Vohra, founder of No Worry No Tension Healthcare, a digital mental health startup. The risks are real: a 2023 case in Belgium linked prolonged chatbot use to a suicide, and just weeks ago, a California couple sued OpenAI, alleging its chatbot encouraged their teenage son to take his life.

Learning, Unlearning

When Harvard Law School launched its AI-augmented legal writing course earlier this year, it signalled a change already underway. Dr Ashok Kumar Mittal, Founder Chancellor of Lovely Professional University (LPU), says, “Our students use ChatGPT as an initial draft collaborator on petitions, contracts, and opinions, then refine their craft through critical editing.” At universities worldwide, the conversation has shifted from banning generative AI to reimagining pedagogy around it. “ChatGPT helps non-native English speakers polish grammar and style, and supports students with dyslexia or ADHD by breaking large assignments into smaller steps,” says Dr Tanya Singh, Dean, Academics at Noida International University. At The Himalayan School, Noida, educators are using an integrative approach. “ChatGPT is changing how students learn, think and engage,” says Sharmin Habib, Head of Business. “Rather than banning these tools, our aim is to incorporate them thoughtfully into the curriculum.”

Yet, not all view the change uncritically. Saswata Bhattacharya, Associate Professor of English at Delhi University’s Deshbandhu College, warns that ChatGPT risks replacing critical thinking with algorithmic suggestions. “The question is stark: do we uphold academic ethics and rigour to labour for genuine knowledge, or do we consume it cheap—allowing others to control our opinions, and, eventually, our very capacity to think?” he asks.

Dr Prashant Pareek, Associate Professor at Shanti Business School in Ahmedabad, says, “Our curriculum covers fundamentals, applied use, and ethics, training students to prompt effectively, test for bias, and uphold academic integrity. They are taught to challenge AI outputs, fact-check rigorously, and evaluate credibility with a critical eye.”

Up Close and Personal

ChatGPT has also crept into the minutiae of everyday life. “It has quietly become a behind-the-scenes life coach,” says Sidhharrth S Kumaar, relationship coach at NumroVani. “It helps people draft messages, navigate conflict, and even rehearse difficult conversations. It offers judgment-free, bias-free guidance—an active listener that builds confidence and lends emotional clarity.”

Consider the case of Amrita Dutta, a 22-year-old computer science student in Lucknow. She often turns to ChatGPT to translate her tangled emotions into lucid messages. “Feelings are extremely complex and difficult to convey,” she says. “Now, if I have to articulate myself strongly, I draft on ChatGPT and refine the tone and vocabulary. The other day, I used it to frame a message to my friend about a personal issue. Without ChatGPT, it would have been a confused, half-baked text, but it polished the words and tone. It even helps simplify tough terms. It’s definitely easier now.”

But overreliance carries risks. When ChatGPT’s polished responses blur authentic vulnerability, intimacy risks hardening into artifice—a ‘ticking time bomb’ of mismatch between real selves and their AI-shaped versions. True balance, Kumaar argues, lies in letting AI serve as prompt, not crutch: a bridge, not a shield.

That craving for clarity extends beyond relationships. “Today’s generation, especially those in their 20s and 30s, expects answers on demand,” says Mumbai-based astrologer and psychologist Rasshi Gurnani. “Earlier, people would wait their turn with an astrologer or numerologist. Now, AI fulfils that urge instantly. It feels personal, almost like having a guide available anytime you need. Unlike traditional consultations, which can feel formal or overwhelming, it feels simple, private, and conversational—and that comfort makes people more curious and open to asking questions they might otherwise withhold.”

That sense of freedom resonates with many. “People often hesitate to ask certain questions to an astrologer or numerologist because they fear being judged or misunderstood,” says Delhi-based numerologist and relationship coach Varinderr Manchanda. “With AI, there’s no such barrier. Users can ask anything freely and get straightforward guidance. This feeling of safety and accessibility is what draws people to it. It’s a helpful starting point, but for real depth and accuracy, you must still consult an experienced astrologer or numerologist.”

Miracle Worker and a Menace

Once upon a time in the land of boardrooms and burnout, employees had two types of tasks: the brainy ones that made you feel smart and important, and the time-sucking ones that made you question your career choices. Enter ChatGPT—the shiny new co-worker who never takes coffee breaks, doesn’t gossip by the watercooler, and somehow whips up meeting summaries faster than you can say, “Was this call even necessary?” Anil Agarwal, CEO and co-founder of InCruiter, says, “ChatGPT is brilliant at taking away the second. Drafting routine mails or summarising a two-hour call in minutes doesn’t kill creativity; it gives it room to breathe.” But he warns, “If we lean on it too much, we risk losing our own sharpness. Just as calculators don’t replace the need to understand maths, AI shouldn’t replace our ability to think and write.” In short: use it as a pen, not a crutch. “It gives a real voice to employees who are often held back by language barriers or communication challenges,” he says. So if English grammar has been your personal nemesis, ChatGPT might be the ally you didn’t know you needed.

Upasana Raina, HR Director at GI Group Holding, believes that as reliance on ChatGPT grows, it is important to see it, if used at all, as a supportive aid only, and only for select tasks. “Over time, it erodes essential problem-solving skills, creativity, and professional competence—limiting growth for employees and leaving organisations vulnerable to losing their competitive edge,” she says. And what about all that “AI helps bad communicators shine” talk? Raina isn’t sold. “It also risks masking originality and compromising authenticity. Instead of developing their own voice, individuals may become dependent on external prompts.”

Murali Santhanam, CHRO at AscentHR Technologies, sees an emotional minefield ahead. “When employees see machines handling tasks they once owned, it can trigger anxiety about relevance and long-term employability.” But instead of panic, he calls for purpose. Another aspect that is often overlooked is how well one is using the bot. “While nearly three-fourths of employees report using tools like ChatGPT at work, only a third receive formal guidance on responsible use,” says Balasubramanian A, SVP at TeamLease Services. Smart companies are catching on—running bootcamps, deploying local-language bots, embedding AI in learning platforms. Raghu A, Partner at Deloitte India, zooms out to the big picture: “Done right, it liberates employees from repetitive, low-value tasks and opens space for strategic thinking, sharper KPIs, and cross-functional collaboration.”

The truth remains that ChatGPT can keep you company in the lonely hours. But life still happens in the warmth of voices, in the chaos of misunderstandings, in the clumsy but beautiful ways only humans can love and heal each other.
