What is the current state of AI?
Artificial intelligence technologies are transforming business processes and society at large. What AI trends should enterprises be paying attention to in 2021?
Success stories tend to focus on the achievements and evolution of the algorithms. Google’s BERT transformer neural network is an example of a new type of algorithm that promises to revolutionize natural language processing.
Equally impressive — and worthy of enterprise attention — are the new tools being invented to automate machine learning pipelines and greatly accelerate the development process.
In addition, the field of AI is moving into various new domains such as conceptual design, smaller devices and multi-modal applications — innovations that will expand AI's repertoire in many industries. It's also important for companies to keep an eye on bleeding-edge AI technologies that show tremendous promise and are now available for experimentation via the cloud — quantum AI is one example.
What are AI and machine learning trends for 2021?
To take full advantage of the benefits of AI and machine learning trends, IT and business leaders will need to develop a strategy for aligning AI with employee interests and business goals. The following issues should be on the agenda:
- how to streamline and democratize access to AI;
- how to address rising concerns about ethical and responsible AI; and
- how to tie AI investments to business goals to ensure AI implementations actually deliver on the hype.
Here are nine top 2021 trends IT leaders should prepare for now.
1. Automated machine learning (AutoML)
Two promising aspects of automated machine learning will be improved tools for labelling data and the automatic tuning of neural net architectures, said Michael Mazur, CEO of AI Clearing, which is using AI to improve construction reporting.
- The need for labelled data has created a labelling industry of human annotators based in low-cost regions such as India, Central and Eastern Europe and South America, Mazur said. The risks associated with using offshore labor "pushed the market to look at different ways of avoiding or minimizing this part of the process." Improvements in semi- and self-supervised learning are helping companies keep the amount of manually labelled data to a minimum.
- By automating the work of selecting and tuning a neural network model, AI will become cheaper and new solutions will take less time to reach market.
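The automated tuning described above can be illustrated with a toy random search over hyperparameters — a deliberately simplified stand-in for what AutoML tools do at scale. The loss function and hyperparameter names here are made up for illustration:

```python
import random

random.seed(0)

def validation_loss(learning_rate, hidden_units):
    """Toy stand-in for training a model and measuring validation loss.
    Pretends the best settings are lr=0.1 and 64 hidden units."""
    return (learning_rate - 0.1) ** 2 + ((hidden_units - 64) / 64) ** 2

def random_search(trials=50):
    """Sample hyperparameter configurations at random; keep the best one."""
    best_loss, best_config = None, None
    for _ in range(trials):
        config = {
            "learning_rate": 10 ** random.uniform(-4, 0),  # log-uniform
            "hidden_units": random.choice([16, 32, 64, 128, 256]),
        }
        loss = validation_loss(**config)
        if best_loss is None or loss < best_loss:
            best_loss, best_config = loss, config
    return best_loss, best_config

loss, config = random_search()
print(config)
```

Real AutoML systems use far smarter strategies (Bayesian optimization, neural architecture search), but the loop — propose a configuration, evaluate it, keep the best — is the part being automated.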
Going forward, Gartner predicts a focus on improving the various processes required to operationalize these models: PlatformOps, MLOps and DataOps. Gartner collectively calls these new capabilities XOps.
2. AI-enabled conceptual design
Historically, AI was mostly applied to streamline processes related to data, image and linguistic analytics.
This is ideal for usage in financial, retail or healthcare industries and for clearly defined repetitive tasks. But recently OpenAI developed two new models called DALL·E and CLIP (Contrastive Language-Image Pre-training) that combine language and images to generate new visual designs from a text description.
Early work shows how the models can be trained to make novel designs. Examples include an avocado-shaped armchair, designed by giving the AI the caption "avocado armchair." Mazur believes the new models will facilitate production-scale implementation of AI in creative industries. "Soon we can expect something similar disrupting fashion, architecture and other creative industries," Mazur said.
3. Multi-modal learning
AI is getting better at supporting multiple modalities within a single ML model, such as text, vision, speech and IoT sensor data. Developers are starting to find innovative ways to combine modalities to improve common tasks like document understanding, said David Talby, founder and CTO of John Snow Labs, an NLP tools provider.
For example, patient data collected and processed by healthcare systems can include visual lab results, genetic sequencing reports, clinical trial forms and other scanned documents. The layout and presentation style of this information, if done right, can help doctors better understand what they're looking at. AI algorithms trained using multi-modal techniques such as machine vision and optical character recognition could optimize the presentation of results, improving medical diagnosis. Getting the most out of multi-modal techniques will require hiring or training data scientists with cross-domain skills spanning natural language processing and machine vision.
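One common way to combine modalities in a single model is late fusion: each modality is encoded into a feature vector by its own encoder, and the vectors are concatenated into one joint representation for a downstream model. A minimal sketch, with made-up feature values and weights:

```python
# Late-fusion sketch: combine text and image feature vectors into one
# joint input for a downstream model. All numbers are illustrative.

text_features = [0.2, 0.7, 0.1]   # e.g. output of an NLP encoder
image_features = [0.9, 0.4]       # e.g. output of a vision encoder

# Fuse by concatenation: a single joint representation.
fused = text_features + image_features

# One linear scoring head over the fused vector (arbitrary weights).
weights = [0.5, -0.2, 0.3, 0.1, 0.4]
score = sum(w * x for w, x in zip(weights, fused))
print(len(fused), round(score, 3))  # → 5 0.24
```

Production systems typically learn the fusion jointly with the encoders rather than using fixed weights, but the core idea — one model consuming features from several modalities at once — is the same.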
4. Tiny ML
Tiny ML is a rapidly growing approach for developing AI and ML models that run on hardware-constrained devices such as the microcontrollers used for powering cars, refrigerators and utility meters. Jason Shepherd, vice president of Ecosystem at Zededa, expects Tiny ML algorithms to be increasingly used for localized analysis of simple voice and gesture commands; common sounds such as a gunshot or baby crying; asset location and orientation; environmental conditions; and vital signs. Teams will need to adopt new approaches for the development, security and management of Tiny ML.
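A key enabler of Tiny ML is shrinking model weights to fit microcontroller memory, commonly via post-training 8-bit quantization. A minimal sketch of symmetric linear quantization, with illustrative weight values:

```python
# Post-training quantization sketch: map 32-bit float weights to signed
# 8-bit integers plus one scale factor. Weight values are illustrative.

weights = [-0.82, 0.13, 0.55, -0.07, 0.91]

# Symmetric linear quantization: largest magnitude maps to 127.
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]

# Dequantize to measure the precision lost — at most half a step.
restored = [q * scale for q in quantized]
max_error = max(abs(w - r) for w, r in zip(weights, restored))
print(quantized)
print(max_error <= scale / 2)  # → True
```

This cuts storage for each weight from 4 bytes to 1 and lets inference run in integer arithmetic, which matters on microcontrollers that lack floating-point hardware.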
5. AI-enabled employee experience
IT leaders are starting to confront concerns about the potential for AI to steal or dehumanize jobs. This is driving interest in using AI to enhance and augment the employee experience, said Howard Brown, founder and CEO of RingDNA, a call center tools provider. AI assistance could be especially useful in overburdened departments that are struggling to hire people, such as sales and customer success teams.
Combined with robotic process automation, AI could help automate mundane tasks to free up sales teams for more meaningful conversations with customers. It could also be used to improve employee coaching and training.
“Everyone talks about delivering great customer experience, but the best way to do that is to deliver a great employee experience first,” Brown said. IT leaders will need to think about how AI can be provisioned in a way that helps employees stay engaged, happy and successful at work.
6. Quantum ML
Quantum computing shows tremendous promise for creating more powerful AI and machine learning models. The technology is still beyond practical reach, but things are starting to change as Microsoft, Amazon and IBM make quantum computing resources and simulators easily accessible via cloud services.
“This could set us up for huge breakthroughs in late 2022 and 2023 as quantum computers become more powerful and intersect with the increased interest in and experimentation by the ML community,” said Scott Laliberte, managing director and leader, emerging technology consulting, at Protiviti, a digital transformation consultancy.
The intersection of quantum computing and ML could create tremendous benefits for companies, enabling them to potentially solve problems that are unsolvable today. Laliberte recommends that organizations start looking now at the potential impact of quantum computing on their industry and adapt their AI strategies to enable resources to explore quantum computing and ML when the platforms mature in the next two to three years.
7. Democratized AI
Improvements in AI tooling are lowering the level of expertise required to build AI models. This will make it easier to include subject matter experts in the AI development process. Democratized AI will not only speed up AI development, it will also improve accuracy by bringing subject matter experts' knowledge into the process, Talby said. Frontline experts can see where new models can provide the most value and where they can create problems or need to be worked around.
Doug Rank, senior data scientist at Saggezza, predicts the trend will mirror the trajectory of technologies like computers and networks, which evolved from being usable by only a few experts to wide adoption across the enterprise. The big challenge will be cleaning up the data and providing access with appropriate guardrails.
“With careful planning, IT leaders can ensure their data remains accurate and complete throughout cloud migrations, so they can realize the value of accessible AI,” Rank said.
8. Responsible AI
Early AI work operated in a greenfield when it came to regulations, ethics and explainability. The first substantive efforts at addressing this absence of oversight have focused on protecting data privacy and security through new legislation like GDPR and CCPA. The laws included some guidelines on AI transparency, particularly when personally identifiable information was used to make substantive decisions. Now regulators in Europe and the Biden Administration in the U.S. are turning the heat up on the AI algorithms themselves.
Trustworthy AI is growing in importance, not just to appease regulators and consumers but also to help business users understand where and how AI makes mistakes.
Thanneer Malai, senior technical program manager at Saggezza, predicts enterprises will have to invest in training programs for trustworthy AI. Improved training will help humans identify and rectify problems that automated tools may miss.
9. ROI guarantees for AI projects
Expect more IT executives to push for new results-driven contracts with AI consultancies, systems integrators and vendors. Arijit Sengupta, founder and CEO of Aible, whose previous company was acquired by Salesforce and became part of its Einstein platform, said, "People are going to be tired of paying huge amounts of money for a low probability of success." For example, one survey from MIT and Boston Consulting Group found only 10% of AI projects are delivering financial benefits.
The status quo is for enterprises to invest significantly in multi-year software licenses and consulting fees up front without knowing whether the project will be successful. Sengupta argues this same sort of behavior drove millions of dollars of investment in Hadoop and big data systems, which never delivered on the promised benefits. “There will be greater demand for satisfaction guarantees for AI projects,” Sengupta predicts. Otherwise, the industry risks falling into another AI winter.