Artificial intelligence is hailed as a tool to promote fairness, but as it finds greater use in industry, social spheres and daily life, it faces scrutiny over biases that can creep into the algorithms that power it.
The tech industry believes that artificial intelligence (AI) offers enormous opportunities to benefit humanity, but the key is ethical deployment. There is a real possibility that human biases influence algorithms and result in discriminatory outcomes.
We need to ensure that AI algorithms do not reflect and propagate bias, causing unintended harm.
While technologists are optimistic about AI, results of machine learning (ML) models can be affected by data that amplifies biases found in the real world like race or gender.
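A first check for this kind of skew is to compare how a model treats different groups. The sketch below (hypothetical group labels and model outputs, not drawn from any real system) computes the selection-rate gap between two groups, a common starting point for spotting demographic disparity:

```python
# Hypothetical model outputs: (group, selected) pairs for illustration only.
# A large gap in selection rates between groups flags potential bias.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rate(preds, group):
    """Fraction of members of `group` the model selected."""
    outcomes = [sel for g, sel in preds if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(predictions, "group_a")  # 0.75
rate_b = selection_rate(predictions, "group_b")  # 0.25
gap = abs(rate_a - rate_b)
print(f"selection-rate gap: {gap:.2f}")
```

A gap this size would not prove discrimination on its own, but it is the kind of signal that should trigger a closer audit of the training data.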
These potential pitfalls should not blind us to the huge opportunities that the AI landscape provides. A recent NASSCOM report shows that India has a huge opportunity to position itself as a global hub for data and AI, adding $0.5 trillion to its GDP by 2025.
Enhancing AI usage, at a functional level alone, can unlock around 50 percent of that value, the report says. Nearly 45 percent of that value is likely to be delivered by three sectors—consumer goods & retail, agriculture, and banking & insurance.
To make this happen, three critical pieces have to be in place. “First and foremost, we need to ensure that we have a robust partnership-based model that focuses on local innovation around data and AI. No one company or organisation can make this happen. We need the academia, industry and the government working together to create a strong data and AI ecosystem in the country,” says Rohini Srivathsa, National Technology Officer, Microsoft India.
Second, the country needs to have AI-ready skills. AI skilling will be critical for India to innovate at scale and become the tech engine of the world. Microsoft has partnered with CBSE to introduce coding and data sciences in the school curriculum.
Third, the tech industry has to ensure that technology is used responsibly. “That is why we developed a core set of principles to guide our work, which is centred around the fact that AI must be designed to augment human capabilities,” says Srivathsa.
AI needs to be transparent, maximise efficiency while protecting human rights, provide intelligent privacy and accountability for unexpected scenarios and guard against biases.
Shalini Kapoor, IBM Fellow, IBM India Software Labs, believes AI offers infinite possibilities and that the coronavirus pandemic has pushed it into the mainstream.
We are witnessing an evolving demand from organisations, irrespective of their size or the industry they operate in.
Organisations are looking to automate manual processes and generate sales leads by connecting diverse data sets, while an NGO is using AI to assist it in hiring lay counsellors.
And, some organisations are also using AI to unlock the value of the existing data (structured and unstructured) to establish new lines of business. As organisations embark on transformational journeys, they are using AI to modernise their applications.
The trust factor
Today’s AI is narrow, and training new models demands a lot of data and time. So, researchers globally are working on AI that needs less data to train, one that combines different forms of knowledge, unpacks causal relationships and learns new things on its own.
Research efforts, especially at IBM, are focused on three areas that are critical for businesses looking to scale AI. They are Natural Language Processing (NLP), automation and trust.
Researchers are teaching AI to understand the language of business by improving its capability to generalise, reason and recognise the relationships between words in context, along with the unique nuances that are an important part of human communication, says IBM’s Kapoor.
In the context of business, this also means the ability to extract meaning from a variety of complex, multi-format documents like PDFs, embedded tables, diagrams, charts, etc.
Automation here means managing the AI lifecycle, from the curation of data through to the moment a model is deployed in the system.
“At IBM, we have developed AutoAI (ML variation) to automate most of the preparation and workflows that go into an AI deployment, including data provisioning, and modelling processes, reducing the time to train an initial model from weeks to days or even hours,” says Kapoor.
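AutoAI itself is proprietary, but the core idea behind this class of tools—automatically searching over candidate modelling choices instead of tuning them by hand—can be sketched in plain Python. Everything below (the data, the toy threshold classifier, the candidate sweep) is a hypothetical illustration, not IBM’s API:

```python
# Toy automated model search: try several hyperparameters, keep the best.
# Data: (feature, label) pairs — hypothetical, for illustration only.
data = [(0.1, 0), (0.4, 0), (0.35, 0), (0.8, 1), (0.9, 1), (0.6, 1)]

def accuracy(threshold, points):
    """Classify feature >= threshold as 1; return accuracy on points."""
    correct = sum((x >= threshold) == bool(y) for x, y in points)
    return correct / len(points)

# The search stage an AutoML tool automates: sweep candidate thresholds
# and select the one that scores best, with no manual tuning.
candidates = [i / 10 for i in range(1, 10)]
best = max(candidates, key=lambda t: accuracy(t, data))
print(f"best threshold: {best}, accuracy: {accuracy(best, data):.2f}")
```

Real AutoML systems extend this same loop across data cleaning, feature engineering and whole model families, which is where the claimed reduction from weeks to hours comes from.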
And since trust is essential for AI adoption, researchers are working on comprehensive data and AI governance solutions to help companies achieve greater trust, transparency and confidence in business predictions and outcomes.
Prepping for pole position
To further tap into AI’s possibilities and accelerate R&D, India needs to focus on three key elements: increasing the capacity of cloud computing infrastructure, augmenting AI-related talent and capabilities, with a focus on innovation, and strengthening global collaboration involving the government, industry and academia.
With revenues of $194 billion in 2020-21, India’s robust IT sector means the country is in a promising position to be a leading player in global AI development.
“Our research community has been the fourth-largest producer of AI-relevant scholarly papers since 2010 and our workforce also holds the highest average share of AI skills of any nation in the world.
“While skills are important, it’s also critical for the government to provide further impetus for AI development and consumption as part of its goal to make India a $1-trillion digital economy by 2025,” says Intel’s Srinivas Lingam, VP, Datacentre & AI Group.
Google is on a mission to make the benefits of AI available to everyone in three ways: making its apps and services, many of which are used by more than a billion people, more useful with AI; helping businesses, developers and other third parties innovate with AI; and providing researchers with tools to tackle challenges like healthcare, energy consumption and environmental conservation.
Google Research India, the tech giant’s AI lab in Bengaluru, has teams that focus, first, on advancing fundamental computer science and AI research by building a strong team and partnering with the research community across the country and, second, on applying this research to tackle problems in healthcare, agriculture, education and other fields.
AI is celebrated for its benefits but also scrutinised and, to some degree, feared for its pitfalls. Critical issues like facial recognition, large-scale language models and other sensitive AI applications affect our lives.
Looking at the next few years, some simple and common trendlines emerge, says Srivathsa of Microsoft.
In the last few years, AI has moved into mainstream products, thanks to a confluence of factors like the massive computing power of the cloud, the availability of enormous datasets that can be used to teach AI systems, breakthroughs in developing algorithms and improvements in methods such as deep learning.
The industry has realised that the development of AI presents many challenges and that technologies that use AI must be developed keeping Responsible AI as a design principle, and in a way that fosters trust and maintains privacy protections, Srivathsa says.
IBM’s Kapoor believes a company must protect client data and insights, and ensure responsible and transparent use of artificial intelligence and other transformative innovations.
Creating principles of trust and transparency to help clients understand where its values lie within the conversation around AI is important.
She says IBM has three core principles that dictate its approach to data and AI. One, the purpose of AI is to augment human intelligence, which means it doesn’t seek to replace human intelligence with AI but support it.
Two, data and insights belong to their creator—IBM clients can rest assured that they, and they alone, own their data, she says.
Three, AI systems must be transparent and explainable—IBM believes that technology companies need to be clear about who trains their AI systems, which data was used in that training and, most importantly, what went into their algorithms’ recommendations.
With today’s massive numbers of data assets and regulatory compliance requirements, organisations are finding it increasingly difficult to deliver timely, trusted, quality data for business consumption. Data is trusted only if its quality, content, and structure are well understood, and maintained over time, says Kapoor.
Data and the insights it offers are the lifeblood of any modern organisation. Companies have a wealth of insights trapped in the massive amounts of data residing across their businesses, which must be mined securely and trusted before it can be leveraged across various functions, she says.
Trust is a very important aspect, particularly for AI. If businesses cannot explain to the end-users how and why their AI system is making certain decisions, it may lead to unfounded fears of mysterious algorithms that do not evoke trust.
Stakeholders can only trust algorithms when they see how they were created and how they work, she says.
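One simple way to show end-users how a decision was made is to report each input’s contribution to the final score. For a linear model this decomposition is exact; the weights, feature names and applicant values below are hypothetical, chosen only to illustrate the technique:

```python
# Hypothetical linear scoring model: score = sum(weight * feature value).
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 2.0}

# Per-feature contributions explain exactly why the score is what it is.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

# Present the largest drivers of the decision first.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {c:+.2f}")
print(f"{'total score':>15}: {score:+.2f}")
```

For non-linear models the same idea survives in approximate form—attribution methods assign each feature a share of the prediction—but the linear case shows why such breakdowns build trust: the explanation adds up to the decision.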
There will be bumps along the road for AI adoption. It is a complex emerging technology, one that needs to be managed responsibly. “We have already seen instances where insufficient or low-quality data used to train AI algorithms has led to outcomes where biases have been reinforced,” says Srinivas Lingam of Intel.
Many nations have instituted data privacy regulations, including India’s Personal Data Protection Bill, to protect citizens, so understanding the sources and quality of data is critical. However, as more businesses are exposed to AI, the stage is set for wider adoption.
For the general public, through programmes such as AI for All, AI is being demystified and embedded into daily life. “Over the next few years, we can expect to see an explosion in AI-based use cases and their extensive adoption across different key aspects of life like healthcare, education, retail, entertainment, etc,” says Srinivas Lingam.
(This is the second article in a three-part series on AI and social good)