It is not data or information per se that will be used, but the intelligence extracted from data, to build solutions for the common citizen's use. We are moving from the information era to the artificial intelligence era. India's strengths are a vast IT talent pool, freedom from legacy assets, among the world's highest rates of data consumption and growth, and rapid digitisation across sectors.
We need a significant focus on exploiting this technology for disruptive growth, to move from the existing economy to the new digital economy. As one of the few global economies to have implemented and perfected automated AI processes across diverse sectors, India is clearly leading in AI usage trends. As per a study by Salesforce, India ranks third after Singapore and Hong Kong in the Asia Pacific region in terms of artificial intelligence readiness. Even so, we need to keep ethical practices in mind while developing solutions using AI.
Imagine this: it is scorching hot outside; you are returning home from a long day at work and want a quick siesta. As you come within a two-kilometre radius of home, you adjust the AC temperature and set your preferences for lights, curtains and more through a smartphone app – an interconnected world, enabled by technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI).
This anecdote only scratches the surface of the plethora of possibilities AI has to offer. Now imagine living in an era where machines not only run assigned errands, but also make decisions and execute actions by analysing the insights and knowledge fed into them. In essence, machines powered by AI can not only act, but also understand and analyse.
Evaluating these capabilities, some factions in the world of tech take a grim view of AI. This school of thought associates AI with a dystopian future in which machines take over or completely replace humans. Countering such theories, others believe AI to be a revolutionary technology with immense potential to transform human lifestyles, if used vigilantly and appropriately. In truth, AI is still at a very nascent stage, and the opportunities it could unravel remain unclear.
It is projected that AI will eventually bring about qualitative progress and innovation, enhancing individual and societal well-being and the common good. For instance, AI systems could significantly advance the UN Sustainable Development Goals: tackling climate change, rationalising our use of natural resources, enhancing health, mobility and production processes, and supporting how we monitor progress against sustainability and social-cohesion indicators. For this to happen, we need to create AI systems that are human-centric and designed to work towards human welfare.
Despite AI's promise of new opportunities, there are associated risks that need to be mitigated appropriately and effectively. In other words, the ecosystem and the socio-technical environment in which AI systems are embedded need to be trustworthy. Such an endeavour would not only maximise the benefits of AI systems but also minimise the associated risks.
The trustworthiness of an AI system rests on three key components throughout the system's lifecycle: it should be lawful, complying with the laws and regulations of the land; it should be ethical, adhering to accepted ethical principles and values; and it should be robust, from both a technical and a social perspective, because even with good intentions, AI systems can cause unintentional harm.
While these components are essential, they are not necessarily sufficient on their own. Ideally, all three work in harmony and overlap in their operation. In practice, however, there may be tensions between them (for example, the scope and content of existing law might at times be out of step with ethical norms). It is our individual and collective responsibility as a society to work towards ensuring that all three components help secure Trustworthy AI.
Why are these three components important? Take an example: a self-driving car hits a woman and she dies. Though a driver was present, the AI was in full control. Who is to be blamed? The person in the driver's seat? The designers of the AI system? The manufacturers of its on-board sensory equipment? AI allows machines to 'learn' from data and make decisions without being explicitly programmed, which makes accountability hard to pin down.
Considering hypothetical situations like this, it is important to build fairness into AI systems. Achieving that means understanding how bias can be introduced and how it impacts recommendations, attracting a diverse AI talent pool, developing analytical techniques to detect and eliminate bias, and facilitating human review and domain expertise. One such analytical technique is sketched below.
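As a concrete illustration, here is a minimal sketch of one common bias-detection technique: measuring the demographic parity difference, i.e. the gap in positive-decision rates between two groups. The loan-approval data, group labels and alert threshold below are all illustrative assumptions, not a definitive audit method.

```python
# Minimal sketch of one analytical technique for detecting bias:
# the demographic parity difference between two groups.
# All data below is illustrative; a real audit would use actual
# model outputs and a properly defined protected attribute.

def selection_rate(predictions):
    """Fraction of positive (1) decisions in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in selection rates between two groups.
    A value near 0 suggests parity; larger values flag potential bias."""
    return abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))

# Hypothetical loan-approval decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")  # 0.38 here
if gap > 0.1:  # the threshold is a policy choice, not a universal standard
    print("Gap exceeds threshold; route for human review.")
```

A metric like this does not eliminate bias by itself; it simply makes disparities visible so that human reviewers and domain experts can investigate and correct them.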
Following fairness, the next aspect is the reliability and safety of AI systems. This encompasses evaluating training data and tests, monitoring ongoing performance, designing for unexpected circumstances – including nefarious attacks – and keeping humans in the loop, as the sketch below illustrates.
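The following sketch combines two of these practices under stated assumptions: a rolling accuracy window for monitoring ongoing performance, and a confidence floor below which decisions are escalated to a human. The `classify()` stub and all threshold values are hypothetical placeholders for a real model and real policy.

```python
# Minimal sketch of two reliability practices: monitoring ongoing
# performance via a rolling accuracy window, and keeping a human in
# the loop when the model is not confident. The classify() stub and
# all thresholds are illustrative assumptions.

from collections import deque

ROLLING_WINDOW = 100        # size of the monitoring window
CONFIDENCE_FLOOR = 0.80     # below this, defer to a human reviewer
ACCURACY_ALARM = 0.90       # below this rolling accuracy, raise an alert

recent_correct = deque(maxlen=ROLLING_WINDOW)

def classify(sample):
    """Stand-in for a real model: returns (label, confidence)."""
    return ("approve", 0.75)  # hypothetical output

def handle(sample, ground_truth=None):
    label, confidence = classify(sample)
    if confidence < CONFIDENCE_FLOOR:
        return "escalated_to_human"       # human-in-the-loop fallback
    if ground_truth is not None:          # feedback usually arrives later
        recent_correct.append(label == ground_truth)
        if len(recent_correct) == ROLLING_WINDOW:
            accuracy = sum(recent_correct) / ROLLING_WINDOW
            if accuracy < ACCURACY_ALARM:
                print("ALERT: rolling accuracy dropped to", accuracy)
    return label

print(handle({"id": 1}))  # -> escalated_to_human (confidence 0.75 < 0.80)
```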
Similarly, to ensure privacy and security, AI systems must adhere to existing privacy laws, provide transparency about data collection and use, offer good controls so people can make choices about their data, be designed to protect against bad actors, and use de-identification techniques to promote both privacy and security (see the sketch below).
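One simple de-identification technique is pseudonymisation: replacing direct identifiers with a salted hash before a record is used for analytics. The field names and salt handling below are illustrative assumptions; a production system would manage the salt in a secret store and also treat quasi-identifiers (city, age, pin code) with care.

```python
# Minimal sketch of de-identification by pseudonymisation: direct
# identifiers are replaced with a salted hash before analytics use.
# Field names and salt handling are illustrative assumptions.

import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret and rotated in practice

def pseudonymise(value: str) -> str:
    """Replace an identifier with a stable, salted hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {
    "name": "A. Kumar",          # direct identifier
    "phone": "98XXXXXX01",       # direct identifier
    "city": "Pune",              # quasi-identifier: may need generalising
    "loan_amount": 250000,
}

safe_record = {
    "user_ref": pseudonymise(record["name"] + record["phone"]),
    "city": record["city"],
    "loan_amount": record["loan_amount"],
}
print(safe_record)
```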
Another aspect is inclusiveness, which entails inclusive design practices that address barriers which could unintentionally exclude people, enhance opportunities for those with disabilities, build trust through contextual interactions, and bring in EQ in addition to IQ. Such systems should also be transparent: people should understand how decisions were made, receive contextual explanations, and find it easier to raise awareness of potential bias, errors and unintended outcomes. A simple illustration of a contextual explanation follows.
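As a minimal sketch, assume a simple linear scoring model; then a contextual explanation can rank how each feature pushed a particular decision. The weights, feature names and threshold are hypothetical; complex models would need dedicated tooling such as surrogate models or SHAP-style attributions.

```python
# Minimal sketch of a contextual explanation for one decision,
# assuming a simple linear scoring model. Weights, features and the
# threshold are illustrative assumptions, not a real credit model.

WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
BIAS = 0.1
THRESHOLD = 0.0

def score_with_explanation(features):
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values()) + BIAS
    decision = "approved" if total > THRESHOLD else "declined"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

applicant = {"income": 1.2, "existing_debt": 0.9, "years_employed": 2.0}
decision, reasons = score_with_explanation(applicant)
print(f"Decision: {decision}")
for name, impact in reasons:
    direction = "raised" if impact > 0 else "lowered"
    print(f"  {name} {direction} the score by {abs(impact):.2f}")
```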
Therefore, the need of the hour is to bring together industry leaders, global academic researchers and policymakers to understand the relevance and implications of ethical considerations in AI implementations in India. To put things into perspective: in a research paper published in 'Nature', a group of 23 authors from various countries and institutes highlighted that the study of machine behaviour and AI systems calls for cross-disciplinary efforts involving social scientists, computer scientists, economists, psychologists and lawyers. Neutral, interdisciplinary studies of AI can draw on principles from other sciences to build guidelines and set standards for autonomous systems. Research institutes and government funding agencies can play a vital role in designing and developing large-scale interdisciplinary studies in AI. The NITI Aayog recently drafted a national strategy for AI in which it suggests creating a National AI Market (NAIM).
AI is one of the most disruptive technologies of today, and it is here to stay. Businesses can leverage data and machine learning to turn every interaction into an opportunity. AI can not only optimise margins but also deliver increased ROI by directing spend and talent into areas where outreach and engagement are crucial. But all of this should be done within the ambit of the highest ethical standards.
Views are personal.
The author is director general of COAI.