E-volution: Era of AI and supercomputers 

AIRAWAT, India’s 21st-century supercomputer, sits among the world’s top 100 even as PC sales hit 286 million worldwide last year. This ubiquitous machine is the dominant artefact of civilisation now, and it’s going new places.

To think it all began with a room-sized machine. The Difference Engine, designed by Charles Babbage in the 1820s, is arguably the world’s first computer. It used a fixed mechanical framework to carry out only one kind of mathematical function, much like an industrial-era abacus. Babbage is called the “father of the computer”; his Engine prepared numerical tables using the ‘method of differences’, a mathematical technique used in navigation and astronomy.

It was a miracle of technology at the time. Now computers drive life and lifestyles. They run your car, airplanes and your microwave, a city’s power grid and traffic lights, and are used in surgeries and diagnoses; there’s very little that is non-computerised left in our lives. They are used at work, for social media, gaming, entertainment, and robotics. Portable laptops are everywhere, their young owners in cafes and cabs lost in nerdy concentration with Java and JavaScript.

The 20th century was the Age of Data and the PC was its whisperer. Last year, India’s supercomputer AIRAWAT made it to the list of the world’s 100 most powerful supercomputers. Vishesh Mahajan, a cybersecurity analyst who runs his firm out of Singapore, says: “Projects like AIRAWAT will ensure that we revolutionise agriculture, education, and healthcare, using AI-driven pattern recognition and prediction. Moreover, given the population of India, we have more than enough data to train our models and develop strategically unique solutions to problems unique to us.”

Back in the 19th century, Babbage didn’t know a byte about PCs or software. He couldn’t have predicted that one day a young boy named Elon Musk, introduced to computer games when his father took him to a hotel arcade, would teach himself to code, write a video game on his first personal computer (PC), and sell it for $500 on his way to becoming a billionaire. Now PCs cost less and are lighter, sleeker and faster. Nor could Babbage have guessed that a geeky owl-spectacled boy called Bill Gates and his partner Paul Allen would see the cover of Popular Electronics and start Microsoft after selling a BASIC language interpreter for a PC called the Altair. Gates and Allen didn’t even have an Altair; they used a simulator.

The tale of modern computers begins with the likes of the Electronic Numerical Integrator and Computer, or ENIAC, and Konrad Zuse’s Z3, two of the world’s first digital computers. These hulking monsters ran on vacuum tubes and relays, devoured enough electricity to power a house for a week, and churned out solutions to some of the most complicated arithmetic problems far faster than any human. The biggest deal, however, was that these computers could be reprogrammed to carry out diverse kinds of calculations.

The world’s first digital computers were a marvel of their time and yet were soon eclipsed by the invention of the personal computer, and then the laptop. Weren’t we all chuffed to dial up to the Internet on those 386s and 486s? By the time those boxes gave way to folding personal computers that could be carried in a bag, the Internet had become faster too. Now there’s Wi-Fi and supersleek laptops with touch screens. These pioneers paved the way for a steady march towards smaller, faster machines, each generation marking a leap in power and accessibility.

Today, machines from giants like Dell, Lenovo and Apple can fit into our backpacks, while smartphones whose computing prowess would put the ENIACs and Z3s to shame nestle comfortably in our pockets. It’s a tale of constant miniaturisation and ever-expanding capabilities, now beginning to rapidly telescope into a whole new world of extremely personal computers that harness the latest innovations and technologies.

Wear We’re Headed
Wearable computing is the new big thing. Smartwatches such as the Apple Watch, Samsung’s Galaxy Watch, and several other offerings from the likes of Garmin have shown the way. These wrist-strapped devices carry out functions that only the most sophisticated laptops could manage just a decade ago. Put another way, the smartwatch of today packs more computing power than NASA had when it designed, built and launched its Apollo missions to the moon.

This is where it starts to get fantastic. We are now moving into an age where all the computing power you could need comes from a matchbox-sized device that you wear on your lapel. We are, of course, talking about the Humane AI Pin, a ‘smartphone’ that projects a display on your hand, or a wall, and lets you interact with your ‘phone’ using just your voice.

How about a pair of sunglasses with a full-fledged computer inside, along with a camera, speakers, a touchpad and a battery to power all of this for hours? Well, Meta and Ray-Ban have made a pair of smart glasses that do just that.

Back to Biology
Computing hardware is reaching its silicon limits in the relentless march of miniaturisation. But what if the answer to our processing power woes isn’t smaller chips, but living ones? Think biocomputing.
Imagine neurons, the building blocks of the human nervous system and nature’s logic gates, firing away inside a dish, calculating the mass of a dying star or figuring out the next move in a murderous game of chess. That’s what biocomputing is. Biomechanical scientists and engineers are harnessing the inherent computational power of living cells, especially nerve or brain cells, to perform tasks beyond the reach of even the most advanced silicon chips.

Research centres like FinalSpark in Switzerland, Cortical Labs in Australia and Koniku in the US are all vying to make the next super-powerful processing chip, something like NVIDIA’s AI-processing GPUs, using brain cells and other animal cells.

Dr Fred Jordan, the CEO and co-founder of FinalSpark, is focused on developing the first AI processing chip using biological cells. “We’re at the beginning of a revolution. The way the brain processes information is incredibly intricate, and today’s digital computers simply aren’t up to the task. So, we thought, since hardware alone isn’t sufficient, let’s revolutionize it with living neurons or ‘wetware’.”

The advantages of using animal cells are tantalising. Biological systems are inherently fault-tolerant, self-repairing, and operate with significantly lower power than traditional silicon-based electronics. Biocomputers can also tackle problems like protein folding or drug discovery with efficiency unheard of in traditional machines.

Mind-controlled devices
For decades, the interface between humans and computers has been dominated by physical tools like keyboards, mice and, of late, touchscreens. But what if we could ditch the hardware altogether and control our machines with the power of thought? Mind-controlled input devices, or neural interfaces, are the answer to that.

These devices work by capturing brain activity, typically through electroencephalography (EEG) or magnetoencephalography (MEG). EEG headsets measure the electrical signals produced by the brain, while MEG sensors pick up tiny magnetic fields generated by neuronal activity. This brain data is then translated into computer commands, allowing users to control cursors, select objects, type text, and even operate complex software applications—all through the power of thought.
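To make that translation concrete, here is a minimal, hypothetical sketch, not any vendor’s actual pipeline, of how one second of raw EEG samples might be turned into a simple command: it estimates alpha-band (8–12 Hz) power and maps a drop in that power, which typically accompanies focused attention, to a ‘select’ action. The sampling rate, threshold and signals are all made-up assumptions for illustration.

```python
import numpy as np

FS = 256          # assumed sampling rate (Hz) of a hypothetical EEG headset
WINDOW_SEC = 1.0  # analyse one-second windows of signal
ALPHA_BAND = (8.0, 12.0)
THRESHOLD = 0.5   # made-up relative-power threshold for "focused attention"

def band_power(samples: np.ndarray, fs: int, band: tuple) -> float:
    """Fraction of total spectral power that falls inside `band`."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

def decode_command(window: np.ndarray) -> str:
    """Map one EEG window to a toy command: suppressed alpha -> 'select'."""
    return "select" if band_power(window, FS, ALPHA_BAND) < THRESHOLD else "idle"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(int(FS * WINDOW_SEC)) / FS
    relaxed = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)  # strong alpha rhythm
    focused = 0.2 * rng.standard_normal(t.size)                               # alpha suppressed
    print(decode_command(relaxed))  # -> idle
    print(decode_command(focused))  # -> select
```

Real headsets use far more elaborate decoding, often with machine learning trained per user, but the basic loop of capture, feature extraction and mapping to a command is the same.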

Mind control has the potential to revolutionise fields like gaming, entertainment, and creative expression. Imagine artists painting with their thoughts, musicians composing symphonies in their minds, or gamers controlling characters and wielding weapons with the power of their imagination.
The technology is still in its early stages, but it is advancing rapidly. Companies like NeuroSky, Emotiv, and NextMind are developing increasingly sophisticated EEG headsets that offer greater accuracy and far better control.

The potential applications of mind-controlled input devices are vast and far-reaching. Several startups are working on BCIs, or Brain-Computer Interfaces, that can empower people with disabilities to interact with the digital world in ways that were previously impossible. Imagine quadriplegics using their thoughts to type emails, control robotic limbs, or even navigate virtual reality environments.
Based on the same principles that power mind-controlled input devices, several engineering and biomedical research departments and startups, like Elon Musk’s Neuralink, are trying to make BCIs that can help people with disabilities regain control and lead better lives.

Says Dr Sudipto Chatterjee, a Kolkata-based neurosurgeon: “Although much of the focus is on improving the quality of life in patients who have lost their mobility, or have become specially abled after an accident, there are engineers who are working on developing a new kind of neural interface that will allow us to control computers and perhaps other heavy machinery. They are still at a nascent stage, so a mass-manufactured, market-ready solution is still years away, but prototypes like the ones we have seen from OpenBCI look very promising.”

In fact, a team of researchers from the University of Houston engineered a BCI headset, which is currently undergoing clinical trials in collaboration with TIRR Memorial Hermann Hospital in Texas, US. The headset has already shown great promise in a patient rehabilitating from paralysis caused by a stroke.

Brain implants that are computers
In the time of the ever-evolving PC, the lines between human and machine have begun to blur: thoughts become commands, data is processed both by your brain and by a silicon-based one implanted inside your skull, and information flows seamlessly between you and the digital world.
This may seem far-fetched, and it is, for now. But it’s the potential future promised by brain-computer implants, full-fledged computers embedded within our very minds.

Dr Mukesh Dwivedi, a Gurgaon-based neurospecialist, explains: “The kind of BCIs that we are talking about here will be completely different from how we perceive computers to be today. There is a New York-based BCI company called Synchron. They have already moved forward with human studies and are testing a matchstick-sized neural implant that doesn’t require open brain surgery. In their initial tests, they have successfully helped ALS patients to do online tasks such as banking, shopping, and emailing. However, it hasn’t cured them of their ALS. Many in the medical community believe that scientists and engineers working on BCIs should first focus on curing such ailments.”

Based in Salt Lake City, Blackrock Neurotech has already developed a BCI to address physical disabilities, blindness, deafness, and depression. Its NeuroPort Array chip empowers users to command robotic limbs, operate wheelchairs, engage in video games, and perceive emotions. The technology involves a tiny chip with almost 100 micro-needles that is placed onto a specific portion of the brain. The needles then pick up the electrical signals generated by the individual’s brain, which are decoded into commands. To date, over 50 individuals have undergone this groundbreaking procedure as part of the company’s trials.

Quantum computing comes of age
Quantum mechanics is the study of the world at a sub-atomic scale, where particles can exist in multiple states at once. Quantum computing uses this knowledge to build hardware and processors that can deal with highly complex situations, like the behaviour of trillions of molecules in a sample or even patterns of financial fraud. Where a classical bit is either 0 or 1, a quantum bit, or qubit, can be in a superposition of both, which is what lets these machines explore enormous numbers of possibilities at once. The supercomputers of today are binary systems based on 20th-century transistor technology. Quantum computing opens up a whole new dimension where extremely complicated and hitherto unpredictable situations can be understood, explained and predicted.

Once limited to scientists and serious researchers, quantum computing is becoming more accessible, hinting at a future where the mind-bending power of quantum mechanics isn’t just for the privileged few. This shift promises to reshape entire industries and spark unprecedented innovations.
Quantum computers have typically conjured up a specific image, again and again: a room-sized giant machine guarded by people in white coats with clipboards, engrossed in serious calculations, contemplating some mystery of some faraway galaxy. That image has, however, started to fade.

Cloud platforms like IBM Quantum and Microsoft Azure Quantum now offer access to these marvels via the Internet, allowing anyone with a web browser to dabble in this technology. This democratisation removes geographical and financial barriers, opening the door for startups, students, and even hobbyists to experiment with the possibilities.
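As an illustration of how low that barrier now is, here is a minimal sketch using the open-source Qiskit toolkit (an assumption for this example; the article’s cloud platforms accept circuits written in several such toolkits). It builds a two-qubit “Bell state” circuit, the hello-world of quantum programming, and checks its ideal output probabilities on a local simulator before any cloud backend is involved.

```python
# A two-qubit Bell-state circuit: the canonical first program in quantum computing.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into a superposition of 0 and 1
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

# Simulate the ideal state locally; the same circuit could be submitted
# to a cloud backend through a provider's SDK.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}: the qubits always agree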

The nascent field of quantum software development is rapidly evolving, with new tools and libraries emerging to make coding for these superpowered machines less daunting. Initiatives like Google’s Quantum AI and Microsoft’s Quantum Katas are paving the learning path, offering interactive tutorials and challenges to demystify the complexities.

While full-fledged quantum computers remain futuristic, smaller, specialized machines are proving their worth in specific applications. Pharmaceutical companies are harnessing their power to accelerate drug discovery, materials scientists are using them to design novel superconductors, and financial institutions are exploring their potential for ultra-secure cryptography.

The Eyes Have It
The humble display, once king of the digital experience, faces a rebellion, and from quarters that you might not even know about. Displays as bulky boxes and flat rectangles look positively medieval at this point. Curved and foldable displays are becoming commonplace. In fact, Motorola has already showcased a smartphone that can wrap around your wrist like a band.

What about displays with no physical panels at all? Well, between holographic interfaces and smart contact lenses, the war on traditional displays has been joined. Imagine waking up to your schedule projected onto your cornea, glancing at the temperature with a flick of your eye, or receiving text messages in your peripheral vision.

Smart contact lenses equipped with micro-LED displays are becoming a reality. Mojo Vision’s Mojo Lens is a self-contained display-enabled contact lens set to become the next evolution of Augmented Reality (AR). Steve Sinclair, senior vice president at Mojo Vision, underlines just that when he talks about the viability of the Mojo Vision Contact Lens as a medically approved assistive device. “People with low vision could use a supplementary high-resolution camera integrated into glasses or suspended near their ears. In this scenario, users would capture a detailed image by looking at something, and the image would instantly be displayed in their field of vision, using Mojo Vision. Similarly, doctors who are about to perform a surgery can get high-resolution, live multidimensional scans, during the surgery. In both of these scenarios, users will have the capability to pan, zoom, and explore the details of the captured image.”

Futuristic wearables like these could revolutionize our relationship with information, seamlessly integrating it into our field of view without obstructing our natural sight. Moreover, traditional AR/VR headsets, including Apple’s yet-to-be-released Vision Pro, use eye-tracking to determine where the user is looking. Smart contact lenses don’t need to track the movement of the eye — they move with the eye.

From touch wood to click wood
For decades, our computers have come at a massive cost to the planet. From resource-hungry mining to energy-intensive production and mountains of e-waste, the footprint of our PCs is hefty. A quiet revolution has been brewing of late that is reimagining electronics, not just in their processing power, but also in how and with what they are made. This is where sustainable computing comes in.

More and more smartphones and smartwatches are using recycled metals and glass. The Apple Watch Series 9, for example, is a testament to just how far we have come in using recycled materials. Users can pick up a Watch Series 9 and strap combination from Apple that makes extensive use of recycled components and ships in ecological packaging.

More importantly, though, the plastics normally used in computers and computer accessories are being replaced with wood, bamboo, and other eco-friendly alternatives. The reasons for this shift are compelling. Traditional computer components rely heavily on mining, a process linked to environmental damage and social injustice. Sustainable materials like wood and bamboo, on the other hand, are eminently renewable resources.

Switching to these lighter materials also cuts down on the carbon footprint of production and transport. Finally, unlike electronics that linger in landfills for eternity, these alternatives are biodegradable, which reduces e-waste and promotes closed-loop systems where materials are then reused or composted.

But this isn’t just a feel-good fantasy. Companies like Woodoo Computers and Mythic Computers are crafting sleek and stylish laptops and desktops using wood as their primary material. CNC machining and special treatments ensure durability and prevent warping while maintaining a modern aesthetic.
Meanwhile, bamboo’s rapid growth and strength make it another exciting contender. Several companies have created keyboards and mice from this versatile material, offering a natural and tactile experience.

Engineers are working on making processors out of timber as well. A team of researchers at Linköping University in Sweden has successfully created a transistor using balsa wood. And although it won’t be powering any computers or even a smart appliance any time soon, it is interesting to see scientists working with unconventional materials for the next generation of processors.

Mixed Reality Goes Mainstream

One of the biggest shifts in modern-day computing is just around the corner. Although MR, or Mixed Reality (a combination of Augmented Reality and Virtual Reality), has been around for years now, Apple is set to launch its Vision Pro headset, which promises to catapult AR and VR to a whole new level of popularity.

What are VR and AR? Virtual Reality immerses users in a computer-generated environment, typically through a headset, creating a fully simulated experience. Augmented Reality, on the other hand, overlays digital information onto the real world, enhancing a user’s perception by combining computer-generated elements with the physical environment, often viewed through devices like smartphones, AR glasses or headsets.

AR and VR have already proven their mettle in some niche categories. The US porn industry, for example, has a burgeoning category dedicated to VR porn. Historically, new visual formats and technologies that went on to take the world by storm were adopted early by that industry; cases in point are Blu-ray, 4K resolution, 3D filming techniques and now AR and VR content.

Many tech companies have tried their hands at AR and VR. There are several headsets already on the market from the likes of Samsung, Sony and Oculus. However, Meta has been the king of a rather small hill ever since it acquired Oculus. With Apple entering the MR space with its headset and its ecosystem, it is all but certain that MR technologies, especially AR, are going to get a massive boost.

