Early signs of speechless interaction

In India, nearly 63 million people rely on sign language, but Indian Sign Language remains under-resourced and pushed to the margins of everyday life. Researchers are now attempting to bridge this gap.
Differently abled students interacting in sign language during the work experience category at the state school science fair at Alappuzha. TP Sooraj

The boy stands by the side of a busy road, traffic rushing past in waves.

The roar of the engines, blaring horns, voices wafting through the air — none reach him. In his hand is a small, worn notebook. He scribbles quickly, hesitates for a second, then holds it up to a stranger: “Bus stop?”

The man glances at the page, points vaguely in a direction, his lips moving too fast to read. The boy watches, trying to catch meaning in fragments — a direction, a hint — then nods anyway. He always nods. It is easier than asking again.

In India, according to the WHO, nearly 63 million people move through a world that speaks constantly but rarely speaks to them. They rely on sign language — a language shaped by movement, expression and nuance — yet Indian Sign Language remains under-resourced and pushed to the margins of everyday life.

In response, researchers are attempting to bridge this gap by building a two-way AI system — one that can read signs from video and convert them into text or speech and another that can translate spoken or written language into sign.

“Our focus is on building datasets and models that can both understand sign language from video and generate it from speech, since systems developed for American or British Sign Language do not work well for ISL,” said C V Jawahar, Professor at IIIT Hyderabad.

The system uses computer vision to track hand movements, facial expressions and body posture from video, converting them into skeletal keypoints that transformer-based models translate into language. But sign language is not uniform — facial expressions act as grammar, and multiple signals occur simultaneously, making it far more complex than text or speech.
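The keypoint step described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code: it assumes keypoints arrive as (x, y) pixel coordinates per video frame from some pose-estimation tool, and shows one standard preprocessing trick, normalising each frame's skeleton relative to a reference joint, so the model sees the same features regardless of where the signer stands or how far away the camera is.

```python
# Minimal sketch of skeletal-keypoint preprocessing for sign recognition.
# Assumes keypoints arrive as (x, y) pixel coordinates per frame; the
# joint layout and values below are illustrative only.

def normalize_keypoints(frame_keypoints, reference_index=0):
    """Translate keypoints so the reference joint (e.g. the wrist) sits
    at the origin, then scale by the largest distance from it, making the
    features invariant to the signer's position and distance from camera."""
    ref_x, ref_y = frame_keypoints[reference_index]
    centred = [(x - ref_x, y - ref_y) for x, y in frame_keypoints]
    scale = max((x * x + y * y) ** 0.5 for x, y in centred) or 1.0
    return [(x / scale, y / scale) for x, y in centred]

# Illustrative frame: a wrist plus two finger joints, in pixel coordinates.
frame = [(320.0, 240.0), (340.0, 200.0), (360.0, 240.0)]
normalized = normalize_keypoints(frame)
print(normalized[0])  # reference joint now sits at the origin: (0.0, 0.0)
```

Sequences of such normalised frames, rather than raw video, are what a transformer-based recogniser would typically consume.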

“We are developing systems that can generate captions and translate speech into Indian Sign Language videos using human video synthesis and avatars, aiming for smooth, natural sentences that can be used in classrooms, hospitals and public spaces,” Jawahar added.

The process also works in reverse. “We extract keypoints from video and feed them into transformer models trained on video-text data to map signs to language, though challenges remain due to limited data, regional variation and the complexity of sign language,” said Ashutosh Modi, associate professor at IIT Kanpur.
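To make the sign-to-language mapping concrete, here is a deliberately tiny stand-in for that final step. A real system would feed keypoint sequences into a trained transformer; this toy simply looks up a recognised sequence of sign glosses and returns a sentence, to illustrate the interface between recognition and language output. All glosses and sentences below are invented for illustration.

```python
# Toy stand-in for the "map signs to language" step: a real system uses a
# transformer trained on video-text pairs, not a lookup table.
# Glosses and sentences are invented examples, not real ISL data.

GLOSS_TO_TEXT = {
    ("BUS-STOP", "WHERE"): "Where is the bus stop?",
    ("I", "SCHOOL", "GO"): "I am going to school.",
}

def glosses_to_sentence(glosses):
    """Return the sentence for a known gloss sequence; otherwise fall back
    to joining the glosses so the output is never empty."""
    return GLOSS_TO_TEXT.get(tuple(glosses), " ".join(glosses).capitalize())

print(glosses_to_sentence(["BUS-STOP", "WHERE"]))  # Where is the bus stop?
```

The gap between this lookup and a usable system is exactly what the quoted researchers describe: limited data, regional variation, and grammar carried by face and posture rather than hands alone.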

If successful, such systems could enable real-time communication — allowing people to sign into a camera and be understood instantly, or access spoken information in sign language.

But language itself complicates the problem — sign languages are not universal, and within India, they vary further. The signs used in Delhi may not match those in Mumbai, while gestures in Kolkata can differ from those in the south. These are not just accents but distinct dialects. What is often referred to as Indian Sign Language is actually a group of regional varieties, reflected even in ISO codes such as ins for Indian Sign Language (India), wbs for West Bengal or Kolkata Sign Language, and nsp for Nepalese Sign Language. While these share a common base, they differ in vocabulary, gestures and features like one- or two-handed alphabets, shaped by local culture.

These challenges extend beyond technology, rooted in everyday realities of language, access and infrastructure. “The biggest challenge in India is that Indian Sign Language is not uniform — it varies across regions, so what is used in one place may not be understood in another,” said a Hindi sign language interpreter in a government school, noting that communication relies not just on hand movements but also on facial expressions, body posture and context.

She adds that a severe shortage of trained interpreters, limited learning resources and the lack of access in schools, hospitals and public services continue to exclude many deaf individuals, stressing that any solution must recognise the complexity and cultural depth of sign language.

Elsewhere, machines are learning to see. Advances in AI are using computer vision and deep learning to recognise and generate Indian Sign Language in real time. Models like YOLOv10-ST capture fine movements with over 97% accuracy, while datasets like iSign enable systems to move from isolated signs to full sentences. Sensor-based tools such as smart gloves are also being developed, pushing towards real-time sign-to-text and text-to-sign systems.

Policy is catching up, though slowly. The Indian Sign Language Research and Training Centre (ISLRTC), set up in 2015, aims to address gaps in resources, training and the severe shortage of interpreters. The National Education Policy 2020 further pushes for standardising ISL and integrating it into classrooms, with NCERT and ISLRTC developing sign-based content through platforms like DIKSHA.

And yet, here at the bus stop, the difference between promise and reality remains. The technology is advancing and the systems are learning, but inclusion is not just about innovation; it is about reach. Until these tools move beyond labs and into everyday life, for those still waiting to be understood, the silence continues.

The New Indian Express
www.newindianexpress.com