The Age of the Machines (Artificial Intelligence)

Governments and corporations are investing heavily in AI research to transform lives, the economy, medicine and the marketplace. But will supercomputers control man one day?


Published: 10th February 2018 10:00 PM | Last Updated: 08th February 2018 09:39 PM

School student: Why was the Robot dismantled? 
Chitti: I started to think!
Chitti, portrayed by Rajinikanth in Enthiran, is Indian cinema’s most famous robot so far: a machine that develops human tendencies such as love, hatred, jealousy and violence, and is destroyed in the end when it becomes a threat to humans. The film, with its Bollywood version Robot, was the first complex introduction of Artificial Intelligence to the Indian audience, and a warning that thinking machines will one day develop emotions and become uncontrollable by their makers. Artificial Intelligence, or AI as it is referred to, is slowly taking over the world.

Things that were unthinkable and existed only in the realm of Sci-Fi speculation till a few decades ago are reality now—driverless cars, automated supermarkets, drones delivering goods, machines playing complex videogames, medical interventions, fully automated apps that do everything from making restaurant reservations to movie bookings. However, a rising worry among scientists and researchers is whether machines will one day become mankind’s worst threat.

Will robots make humans redundant?
Will machines replace humans in the workplace everywhere?
Will super intelligent machines destroy mankind?
Will there be war between humans and mega machines using super weapons in which man will lose?
These are scenarios which divide scientists, philosophers and academics. However, AI is the new frontier of science in which both governments and corporations are investing heavily. The Economic Survey 2017-18 noted that India has the potential to become a global leader if it invests in futuristic technologies such as artificial intelligence. In Budget 2018, the government has made provisions to push AI research—“Niti Aayog will establish a national programme to direct research efforts in new-age technologies,” said Finance Minister Arun Jaitley. Prime Minister Narendra Modi in his speech at the World Economic Forum in Davos on January 23 said that the future belongs to data—the basic DNA of Artificial Intelligence.

Meanwhile, the Karnataka government has already come up with a Centre of Excellence for Data Science and Artificial Intelligence (CoE-DS&AI) in collaboration with Nasscom.
Salma Fahim, director, IT and BT, Government of Karnataka, and MD, Karnataka Biotechnology and Information Technology Services, says, “Karnataka has also funded start-ups in this sector to encourage them. Among these are: Artificial Intelligence-based audio transcription engine, Gnani ‘LIPI’, developed by Gnani Innovations Pvt Ltd—it helps you transcribe audio and speech data into text files, and Wide Mobility, which enables a machine to sort nuts/vegetables/fruits based on its internal characteristics using information captured with electromagnetic radiation imaging techniques.”

Arpan Shah, who hails from Kolkata and works as a data engineer at Robinhood in California, feels the emergence of AI is one of the most significant developments in the world today. Unless India invests in AI, eases regulations and unshackles the possibilities in this field, it will miss out on the AI revolution just as it did on the first emergence of the internet, he adds. “While government investment will help immensely, it should not seek to determine the direction of development that AI should take.

Setting up centres directly may end up centralising the development of a very uncertain, evolving field. It would be better to invest in existing academic institutions and fund researchers, professors, PhD students and postdocs in the discipline.”

Google CEO Sundar Pichai has announced that the billion-dollar IT corporation is moving into an ‘Artificial Intelligence first world’, and is supporting AI start-ups in India that produce user-friendly apps through its ‘Launchpad Accelerator’. India’s online retail giant Flipkart has created an internal unit named AIforIndia, which will explore ways to place Artificial Intelligence and machine learning at the core of its business.

An Accenture report says Artificial Intelligence can add $957 billion, or 15 per cent of current gross value, to India’s economy by 2035 by (1) mobilising intelligent automation and automating complex physical-world tasks, (2) empowering the existing workforce and complementing their skills and talent, and (3) driving innovation, with AI used for broad structural economic transformation.
“Artificial Intelligence is a technology with a very broad spectrum. In India, businesses that are connected to people and have an impact on social life are using Artificial Intelligence to predict user behaviour, provide a human touch to assistants like Siri and Google Now, and automate tasks that require lateral thinking,” says Harsh Daftary, lead technical architect, Pyramid Cyber Security and Forensics Pvt Ltd.

The Thinking Machine has been the north star of computer science since its beginning. In 1950, Alan Turing proposed that a machine could be taught like a child. In 1955, John McCarthy, who invented the programming language LISP, coined the term ‘Artificial Intelligence’. The original purpose of AI was to make machines work more efficiently and faster, using software and algorithms, to benefit mankind in the fields of medical science, industry, business, resource mining, military technology and space travel. According to AI expert Paul Ford, a benevolent superintelligence might even be able to analyse the human genetic code at great speed and unlock the secret to eternal youth.

However, naysayers predict that AI could develop a dark side. Sriram Rajamani, managing director of Microsoft Research India, says, “AI will be beneficial to India. It can provide powerful tools to amplify human ability. But technology is a tool, and with all tools, the outcome depends on how it is used. For example, if a self-driving car is deployed on roads without testing it properly, it can be dangerous. So, appropriate understanding and precautions are needed for safe use.”

The apocalyptic scenario of machines eventually taking over the world is fiercely debated in scientific, technological and political circles. Artificial Intelligence as it is used today is narrow AI (or weak AI), designed to perform only specific tasks such as recognising faces, conducting internet searches or reverse-parking cars. However, prominent scientists in the field dream of creating general AI (AGI, or strong AI). The difference between the two is that while narrow AI can beat humans at specific tasks like playing chess, AGI could outperform humans in almost all cognitive tasks.

“We use AI to give e-commerce companies the capability to identify and deliver the most relevant and personalised products to each and every user across every touch point. It predicts with very high precision what each user is likely to buy next, and with what probability,” says Ajay Kashyap, founder. “India has to adopt AI not for some incremental benefits, but for survival. AI is just a tool. It is just like fire—neither good nor bad. It is up to humans how they use it.”

In 1965, Professor I J Good of Virginia Tech pointed out that designing smarter AI systems is itself a cognitive task. A sufficiently smart system could therefore embark on a path of self-improvement, triggering an intelligence explosion and outstripping human intellect. This is expected to create super-intelligence, which could help mankind end war, disease and poverty. However, many experts believe that unless the goals of AI and humans are aligned before super-intelligence comes into being, it could spell doom for humanity. Eminent personalities like Stephen Hawking, Elon Musk, Steve Wozniak and Bill Gates have written and spoken about the risks posed by AI. Why is AI suddenly such a contentious issue?

Shah, who delivered a talk on ‘Why 21st Century India cannot do without AI’ at TEDxIITHyderabad in 2017, says, “AI can be dangerous too. In terms of dangers, AI can make audio and video that precisely mimic any person. Malicious people can use these things to misuse information, justice, and politics. The public at large does not yet know how easily audio and video can be generated to do anything.”

Since AI has the potential to become more intelligent than any human, there is no sure way to predict how an AI-driven machine will behave. Says Max Tegmark, president of the Future of Life Institute, “Everything we love about civilisation is a product of intelligence, so amplifying our human intelligence with Artificial Intelligence has the potential of helping civilisation flourish like never before—as long as we manage to keep the technology beneficial.”

The ‘intelligence explosion’ worried Dr Good, who coined the term to theorise that intelligent machines would create even more intelligent machines on their own in a self-duplicating chain, outstripping human intelligence in speed of thought and communication, a phenomenon termed the ‘technological singularity’. The Cambridge-trained mathematician called such a computer ‘our last invention’. It is human intelligence that enables man to dominate the world, and the question that troubles AI researchers and scientists is who will control the planet once machines become smarter than man, a prospect that places evolution itself in peril.

Though some experts say it will take centuries for computers to reach human-level AI, researchers polled at the 2015 Puerto Rico conference estimated it could happen before 2060. At the core of AI today are machines that can think on their own by analysing data and creating improved versions of themselves. They may not be hostile to users, but by following their own logic they could actually become mankind’s nemesis. For example, machines programmed to make humans happy could eventually kill all humans on earth, reasoning that with no humans left, their task is complete. The question being asked is whether computers whose processing power makes them literally trillions of times smarter than people would come to view man the way humans view ants or pets. Mathematicians like Vernor Vinge have speculated that once a computer becomes capable of independently devising ways to achieve goals, the next step is introspection, enabling it to upgrade its own software, boost its intelligence and design its own hardware.

Bedavyasa Mohanty, Associate Fellow, Cyber Initiative, Observer Research Foundation, says, “Artificial Intelligence systems can be more capable than human beings at certain pre-specified tasks. For machines to be smarter than humans, they will need to demonstrate the decision-making capabilities humans possess. Current research has shown that the power of perception among humans has been the most difficult to replicate in machines. There is, however, a possibility that sufficiently advanced machines will one day be capable of doing this.”

Such scenarios compelled British-American computer scientist Stuart Russell to circulate a signed open letter calling for discipline among researchers making Artificial Intelligence more powerful. “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial,” the letter states. “Our AI systems must do what we want them to do.” Thousands of people have since signed the letter, including leading AI researchers at Google, Facebook and Microsoft, and top computer scientists, physicists and philosophers. Cutting-edge inventor and entrepreneur Elon Musk has offered to fund research to keep Artificial Intelligence beneficial, and over 300 groups have applied for positions.

Hawking wrote in The Independent, “One can imagine such technology out-smarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” AI has broken previously unimaginable frontiers in cognition. Using neuro-inspired learning algorithms, a computer beat humans at Atari video games, programmed only with the data representing the screen and the goal of raising the score in its top corner, with no information about aliens, bullets, left, right, up or down.

In 2012, another computer picked out cats in YouTube videos using an AI technique called ‘deep learning’. Deep learning is a family of algorithms that lets machines build up layered representations of data, loosely mimicking the way human brains come to recognise features of the real world. Surprisingly, the machine could sift relevant information from irrelevant bits. The scary fact is that researchers cannot fully explain why the algorithms work. And Moore’s law, which predicts that the number of transistors that can be fitted on a chip doubles roughly every two years, continues without pause.
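The learning-from-examples idea behind systems like these can be sketched in miniature. Below is a single artificial neuron (a perceptron) trained on examples of the logical AND function; real deep-learning systems stack millions of such units in layers, but the core mechanism of weights nudged by data is the same. All names and numbers here are illustrative, not taken from any system mentioned in the article.

```python
# A toy sketch of learning from examples: one artificial neuron
# (a perceptron) whose weights are adjusted whenever it answers wrong.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - prediction      # -1, 0 or +1
            w1 += lr * error * x1           # nudge the weights toward
            w2 += lr * error * x2           # the correct answer
            b += lr * error
    return w1, w2, b

# The four examples of logical AND are the "training data".
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

After training, `predict(1, 1)` returns 1 and the other three inputs return 0: the rule was never written down, only inferred from the examples.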

“The question is whether humans will create some applications using these machines to start competing with humans. Again, some will make machines, while others will build machines to counter them. It is happening now (super-intelligent missiles, and super-intelligent anti-missile, further super-intelligent anti-anti-missiles!). And it will continue to happen like a never-ending cat and mouse chase,” says Kashyap.
Both the US and China are engaged in an AI-driven arms race to build efficient weaponry that will reduce human casualties. George Friedman, in his book The Next 100 Years, speculates that future wars will be fought in space. Drones and pilotless aircraft are already deployed in war zones, and super soldiers may soon be a reality on battlefields. AI is also driving a ruthless race for commercial applications among major firms that could change the nature of business by cutting costs.

Armies are considering autonomous weapon systems with the capability to choose and eliminate their own targets, but the UN and Human Rights Watch are advocating a ban on them. Scientists are nowhere near achieving superintelligence, nor have they found a path to it. Though Amazon’s Alexa can tap into ‘Mann ki Baat’ and Apple’s Siri can switch on the GPS on your iPhone, AI cannot deal with unfamiliar situations the computer has not been programmed to handle. Artificial neural networks can recognise cats in photos, but not before they are shown thousands of photos of felines.

Rajamani feels machines and humans have different strengths. “Machines are good at detecting patterns in huge amounts of data, and making decisions based on these patterns. Typically machine intelligence is possible today in narrow domains, such as playing Chess or Go, or recognising faces in images, etc. Humans have empathy and intuition, and our intelligence is flexible and general. Thus, we don’t see this as a competition between man and machine. We see a future where human intelligence and machine intelligence work together,” he says.

According to scientists, a human being performs roughly 20 trillion physical actions in a lifetime. So far, computers have struggled with tasks that require thinking multiple steps ahead, the way a person thinking “I need to take the car out” knows to fetch the car key first. If the machine’s software is not coded to fetch the key, it cannot complete the command ‘take the car out’.

James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, says he knows highly placed people in AI who have built ‘bug-out houses’ to which they could flee if computers take over the world; much like people built nuclear bunkers in the Cold War era. But for now, Artificial Intelligence is the newly harnessed technological superforce that is being used to benefit mankind.

The term ‘artificial intelligence’ was coined in 1956 to describe problem-solving by computers. In the 1960s, the US Department of Defense took interest and started training computers to mimic basic human reasoning. In 2003, the Defense Advanced Research Projects Agency (DARPA) produced intelligent personal assistants. This led to automation and formal reasoning by computers for decision-making and information search, such as Google’s, to help humans. From the 1950s to the 1970s, neural networks were created to let computers ‘think’. From the 1980s to the 2010s, machine learning gained traction. From 2010 onwards, deep learning has been expanding the frontiers.

AI uses data to automate repetitive learning and discovery, pursuing accurate, high-volume, computerised goals. It is behind question-answering systems for legal assistance, risk notification and medical research. AI adds intelligence to existing products by combining automation, conversational platforms, bots and smart machines. AI is self-adaptive: progressive learning algorithms let the data do the programming by recognising structure and regularities in it. The more data computers are fed, the more accurate they become. This is done using deep neural networks. In the medical field, AI can do the same job as radiologists, detecting cancer using deep learning, image classification and object recognition. Data lies at the core of Artificial Intelligence.
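The "more data, more accuracy" claim above can be illustrated with the simplest possible learner: one that estimates an unknown probability by counting samples. The rate and sample sizes below are made up for illustration; the point is only that the estimate from a hundred thousand samples lands far closer to the truth than the one from ten.

```python
# A minimal sketch of data-driven accuracy: estimating a hidden
# probability from observed samples. More samples, better estimate.
import random

random.seed(42)   # fixed seed so the run is repeatable
TRUE_RATE = 0.3   # the hidden quantity the "learner" must discover

def estimate(n_samples):
    """Estimate TRUE_RATE by counting how often a random draw falls below it."""
    hits = sum(1 for _ in range(n_samples) if random.random() < TRUE_RATE)
    return hits / n_samples

small = estimate(10)       # noisy: could easily be 0.1 or 0.5
large = estimate(100_000)  # very close to 0.3
```

This is the same principle, writ small, as showing a neural network thousands of cat photos before it can recognise one.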


Once AI begins to drive the world, there might be a few jobs left for entertainers, writers and other creative types, but computers will eventually be able to programme themselves, absorb vast quantities of fresh data every second of every day, and make people redundant in manufacturing and services. When machines keep duplicating themselves, a meta-computer could enslave man.

A computer can only do tasks based on programmed data. Self-learning systems are not autonomous and cannot multi-task. For example, a computer programmed to translate French into English cannot give cooking advice.
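The French-to-English example above is easy to make concrete. Here is a deliberately crude, lookup-based "translator" with a tiny hypothetical vocabulary; everything outside the single task it was built for simply comes back unknown, which is the essence of a narrow system.

```python
# A toy sketch of a narrow, single-task system: a dictionary-lookup
# "translator". The vocabulary is hypothetical and deliberately tiny.
FR_TO_EN = {"bonjour": "hello", "merci": "thank you", "chat": "cat"}

def translate(word):
    """Translate one French word to English, or report it as unknown."""
    return FR_TO_EN.get(word.lower(), "<unknown>")
```

`translate("Merci")` returns "thank you", but ask it anything about cooking and it can only answer "&lt;unknown&gt;": the system has no knowledge outside the one mapping it was given.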

“For machines to be smarter than humans, they will need to demonstrate the decision-making capabilities humans possess. Research shows that the power of perception is yet to be replicated in machines.”
Bedavyasa Mohanty, Associate Fellow, Cyber Initiative, Observer Research Foundation

“To compete with the human brain, it’s important to achieve the 3-C test: Conscience, Consciousness and Cognition. Technology is able to achieve the third C, though not completely. The first two Cs are still largely unknown.”
Salma Fahim, Director, IT and BT, Government of Karnataka

“India has to adopt AI not for some incremental benefits, but for survival. AI is just a tool. It is just like fire—neither good nor bad. It is up to humans how they use it.”
Ajay Kashyap, Founder

“Implementation of AI is very difficult and not all businesses are going to benefit from it, but most people in the tech domain are still optimistic about AI.”
Harsh Daftary, Lead Technical Architect, Pyramid Cyber Security & Forensics Pvt Ltd


