The advent of the industrial age in the West saw machines take center stage in human society, and human beings used the higher productivity of these machines to transform lives for the better. Standards of living, life expectancy and quality of life improved dramatically.

At some point in this journey, though, mankind crossed a tipping point beyond which the adverse impacts of the industrial revolution began to outweigh its benefits. Rampant pollution and global warming were a direct result of the excesses of this industrial age. The widespread use of such technology also led to greater inequality in society and to further exploitation of the poor by the rich, even as it brought them the basic necessities of life.

The “Computer Age” was another inflexion point in mankind’s history. It brought with it the promise of freedom and of Maslow’s self-actualization. With the entire body of human knowledge potentially in every person’s hands through the smartphone, it brought forth one kind of equality. But the real question is: what does the future hold for humankind?

As the specter of artificial intelligence (AI) rises above the machinery of the industrial and post-industrial age, we are confused, unsure whether we are watching the rise of a new Frankenstein or the dawn of something more revolutionary. There is ambivalence in our approach to intelligent machines. Till now, machines were tools in our hands that did what we wanted at our bidding. Even computers did what we programmed them to do, perhaps faster and better, but they were always in our control. Now we have a new class of machinery which can “think”, “reason” and “learn”. It is potentially autonomous.

It is time to lay the cards on the table and demystify AI and machine learning (ML). It is time to forecast which direction we are headed and identify the threats along the way.

According to the New Oxford American Dictionary, AI is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. John McCarthy, the US computer scientist who coined the phrase “artificial intelligence” in 1956, described it as the science and engineering of making intelligent machines, especially intelligent computer programs. Artificial intelligence, in short, is a way of making a computer, a computer-controlled robot or a piece of software think intelligently, the way a human would.

The ultimate goal of AI research is to make a computer mimic human intelligence. The areas of research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects. Popularly, AI is identified with chess- and Go-playing computers, robots and autonomous vehicles. Of course, there is much more to AI than these examples. Increasingly, the fruits of AI are reaching the common man. We are no longer surprised by the targeted ads that show up on our screens; we have got used to this uninvited invasion but fail to see the fingers of AI at work. Amazon's TV advertisements for Alexa are amusing, but we do not recognize the assistant as the thin end of the AI wedge in our everyday life.

The goal of AI is to build a machine which can think, perceive and reason like a human. The processes of the human brain are thought to be computable. Some experts, however, feel machines can never do all that a human mind can do. Others predict that a computer matching the level of a human brain may be a reality by 2030. The technology in such a machine has been given a name: artificial general intelligence, or AGI. With the simultaneous development of ML, the ability of a computer to improve its own abilities without human intervention, some fear that once we achieve AGI, there will be no stopping the computer from using all the data resources in the world, and its capacity to learn, to become more and more intelligent until it reaches super-intelligence. The coming of artificial super-intelligence, or ASI, is the stuff of nightmares for some experts. They fear the machine will take over humans and the world.

The question is why a machine should do that at all. Let us go back to the days when the first fears of marauding robots were expressed. To stop robots from taking over the world, Isaac Asimov first wrote out the Three Laws of Robotics in his robot series of SF stories. The laws were built into the design of his fictional robots to prevent them from harming or hurting humans in any way. His stories, of course, revolved around how these laws could be interpreted by “smart” robots to harm and even kill humans.

Using the story of I, Robot as a springboard, we also need to consider the feasibility of the robot utilitarian, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. Is there a real fear that the AI machines we are developing will not have the ability to “see” what can be harmful for humanity? If so, what can be done to build morals and safety into the machines to make them act ethically? This is what the new field of AI ethics is trying to study and understand.

Ethics means the moral principles that govern a person's behavior or conduct in an activity. It is the branch of moral philosophy that involves systematizing, defending and recommending concepts of right and wrong conduct.

Ethics has been studied throughout history by our greatest philosophers and has been at the centre of the debate on “human” behavior. All religions devote a large part of their teachings and laws to ethical behavior. In other words, ethics is a set of generalized rules for being a good person. Applied to an organization, an ethical business is one that conducts all its activities according to good principles such as honesty and integrity.

Ethical AI refers to AI/ML models whose predictions are trustworthy and explainable (and hence transparent), unbiased and fair (towards all classes of users or entities) and safe (not hurting humans or businesses). Unethical AI would mean models that are biased towards a specific class of users (and thus unfair towards others) or intended to harm a specific class of users or entities (and thus unsafe).
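Fairness, at least, can be given a first rough measurement. The sketch below, with invented group labels and outcomes, shows a simple demographic-parity check, one common (and contested) way of asking whether a model treats all classes of users alike:

```python
# Illustrative sketch only: a minimal demographic-parity check.
# The group labels and approval outcomes are invented for the example.

approvals = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

def approval_rate(group):
    # Fraction of favourable outcomes the model gives this group.
    outcomes = [y for g, y in approvals if g == group]
    return sum(outcomes) / len(outcomes)

gap = approval_rate("group_a") - approval_rate("group_b")
print(f"demographic parity gap: {gap:.2f}")  # 0.33, a red flag worth auditing
```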

An explainable AI model is one in which the machine's decisions can be explained to the user. In other words, the decisions are transparent and based on specific, stated rules. The more explainable a system is, the more trustworthy it becomes.
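To make this concrete, here is a minimal sketch of a hypothetical hand-coded loan-approval scorer (the weights and applicant data are invented) whose every decision can be itemized feature by feature:

```python
# Illustrative sketch only: a linear "loan approval" scorer whose decisions
# can be decomposed into per-feature contributions. Weights are hypothetical.

WEIGHTS = {"income": 0.6, "repayment_history": 1.2, "existing_debt": -0.9}
THRESHOLD = 1.0

def explain(applicant):
    # Each feature's signed contribution is simply weight * value,
    # so the total score, and hence the decision, is fully auditable.
    contributions = {f: round(WEIGHTS[f] * applicant[f], 2) for f in WEIGHTS}
    decision = "approve" if sum(contributions.values()) >= THRESHOLD else "decline"
    return decision, contributions

applicant = {"income": 1.5, "repayment_history": 0.8, "existing_debt": 0.7}
print(explain(applicant))
# ('approve', {'income': 0.9, 'repayment_history': 0.96, 'existing_debt': -0.63})
```

A black-box model with millions of opaque parameters offers no such itemized account, which is why explainability often has to be engineered in deliberately.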

If one or more relevant features are left out of an AI model, intentionally or unintentionally, it becomes biased. An AI system learns from the data sets made available to it, which govern its decision-making; if a group of relevant data is left out while the model is being developed, its decisions will be skewed. Therefore, AI systems have to be “taught” with the right mix of data sets and randomly generated scenarios to make them unbiased.

To give an example, if an AI system for helping customers select products is developed with a complete database of products, specifications and prices available to it, the system will guide buyers to the products of their choice, and sales will be completed with the consumer confident that the right products have been procured. However, if the database made available to the AI system leaves out one class of products, or the products of one set of manufacturers, its recommendations will be biased and not to the satisfaction of the buyers. It is important, therefore, to ensure that the AI system has complete data and built-in logic to ensure unbiased selection.
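A toy illustration, with an invented product catalog, shows how quietly this kind of bias creeps in:

```python
# Illustrative sketch only: a toy recommender showing how an incomplete
# catalog biases results. All product data is invented for the example.

FULL_CATALOG = [
    {"name": "Phone A", "maker": "Acme", "price": 299},
    {"name": "Phone B", "maker": "Bolt", "price": 279},
    {"name": "Phone C", "maker": "Acme", "price": 349},
]

def recommend(catalog, budget):
    # Pick the cheapest product within budget; the quality of the answer
    # depends entirely on the completeness of the catalog.
    affordable = [p for p in catalog if p["price"] <= budget]
    return min(affordable, key=lambda p: p["price"]) if affordable else None

print(recommend(FULL_CATALOG, 300)["name"])  # Phone B

# If one manufacturer's products are left out of the data, the "best"
# recommendation silently changes: the system is now biased.
biased_catalog = [p for p in FULL_CATALOG if p["maker"] != "Bolt"]
print(recommend(biased_catalog, 300)["name"])  # Phone A
```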

Although autonomous vehicles are expected to reduce the number of accidents on our roadways by as much as 90%, according to a McKinsey report, accidents are still possible, and we need to consider how to program machines to make the right decisions. We also need to determine who is responsible for deciding the objectives and logic of the programs in these vehicles: consumers, politicians, the market, insurance companies or someone else. If an autonomous car encounters an obstacle on the road, it can respond in a variety of ways, from staying on course and risking a collision to swerving into another lane and hitting a car, possibly killing its passengers. Does the decision about which lane to swerve into change based on the number of possible casualties in the two vehicles? Maybe the person who gets killed is a parent or a notable scientist. What if there are children in the car? Perhaps the decision on how to avoid the obstacle should be made by a flip of a coin, choosing randomly from the options. These are all dilemmas we need to address as we build and design autonomous systems.

The decision-making algorithms also need to account for accidents that might cause loss of limbs, mental capacity or other disabilities. The situation gets further complicated, and the scenario changes completely, when there is heterogeneity on the roads: a mix of autonomous cars, human-driven cars and other transport vehicles.

Ensuring a safe and secure AI model is crucial to the success of this technology. The system should be trained so that it does not produce wrong decisions or wrong predictions. A false negative may be absolutely unacceptable in a given situation. For example, a medical AI system designed to suggest, based on investigation data, whether a patient needs immediate life-saving surgery cannot afford to wrongly predict a negative. In another situation, a false positive can be equally disastrous.
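In practice, designers tune such systems to trade one kind of error against the other. The sketch below, using invented risk scores for a hypothetical surgical-triage model, shows how lowering the decision threshold suppresses false negatives at the cost of more false positives:

```python
# Illustrative sketch only: choosing a decision threshold to suppress false
# negatives in a hypothetical surgical-triage model. Scores are invented.

# Each case: (predicted risk score, did the patient actually need surgery?)
cases = [(0.95, True), (0.70, True), (0.40, True), (0.30, False), (0.10, False)]

def evaluate(threshold):
    # A case is flagged for surgery when its score reaches the threshold.
    false_neg = sum(1 for s, needed in cases if needed and s < threshold)
    false_pos = sum(1 for s, needed in cases if not needed and s >= threshold)
    return false_neg, false_pos

# A "balanced" threshold misses a patient who needed surgery...
print(evaluate(0.5))   # (1, 0): one false negative, unacceptable here
# ...so we lower it, accepting a false positive to avoid any missed case.
print(evaluate(0.25))  # (0, 1): no false negatives, one false positive
```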

The fear is that humans will get used to AI systems in their daily lives without realizing that they are becoming dependent on a technology which may take over most human activities without the safety net of moral behavior. When the realization dawns, it may be too late. Therefore, there is an urgent need to recognize both the potential of and the threat from this technology, which the author James Barrat has called “our final invention”.

Chitro Majumdar

Majumdar is founder of RsRL and co-founder of a start-up on AI ethics.
