
Where is AI heading?

To deliver on the exponential potential of artificial intelligence, enterprises must focus on the development of responsible AI

 


Since the days when computers ran on punch cards and vacuum tubes, humans have been preoccupied with questions of what those computers could do when they became even more advanced — and what that would mean for humanity. Would they be able to help us solve our biggest challenges, from climate change to world hunger? Or would they turn on us and become our greatest threat? As artificial intelligence (AI) has started to hit the mainstream, those questions have become much more tangible.


The path AI takes will depend greatly on what happens in the next few years. While it’s clear that AI offers massive potential, realizing it in a way that benefits humanity, rather than leading to the catastrophic outcomes depicted in movies like Minority Report, requires doing the hard work today to ensure AI is developed and used ethically and responsibly.

The state of AI today

AI has been making significant headlines since the 2022 launch of OpenAI’s ChatGPT, a chatbot that uses generative AI to produce new content based on patterns learned from vast amounts of existing text, giving businesses a simple way to speed up writing tasks, support brainstorming and improve customer service. Feliz Fuentes Montpellier, General Manager, Industry Software Partners at Microsoft, says it’s also democratizing access to AI’s capabilities.

“You don’t have to have specialized knowledge to leverage the power and insights from massive amounts of data anymore,” she says. “Now anyone can use basic language prompts to tap into those insights.”

While generative AI has been getting most of the attention lately, AI researchers like Sanaz Mostaghim, a professor of computer science at Otto von Guericke University Magdeburg in Germany, are quick to point out that generative AI is only one branch of a much wider field.

“It’s really great that so many people are talking about AI,” she says. “It gets people asking questions and thinking about what else it can do. And that gives me the opportunity to showcase other types of AI and the possibilities they offer for a better life for everyone.”

AI becomes invisible

Some of the ways organizations are already using various forms of AI include sentiment analysis to get a sense of how people feel about a company or product, chatbots that provide automated customer service in natural-sounding language, and recommendation engines that suggest additional products based on a customer’s purchase or search history. These applications have become so common that it’s easy to forget that most of them are powered by AI.

In more specialized cases, AI is being used to support research by “knowledge mining” vast reference sets, such as court filings or medical data. It’s improving operations with predictive maintenance based on correlations among data from multiple sensors and other sources. It’s enabling metaverse applications such as digital twins. And it’s even being used to manage and reduce the energy consumption of telecommunications network equipment — without sacrificing performance or reliability.

One thing is certain: companies in nearly every industry are always searching for ways to become more productive, and they are increasingly turning to AI to help them do that by automating or even eliminating some routine processes.

“Companies are hearing about how disruptive AI will be, and they’re very interested to find out more,” says Anne Lee, a Senior Technology Advisor in the Technology Leadership Office. “They want to know how AI will impact the work and work processes of their employees, as well as how AI will impact the kind of products they’ll be able to offer and what effects it will have on the competitive landscape.”

Types of AI

  • Artificial general intelligence is a hypothetical form of AI that possesses the ability to learn, apply knowledge, and solve tasks across a wide range of domains.
  • Computer vision uses and interprets visual inputs (video and images) to extract information.
  • Expert systems emulate human decision-making by applying rules-based logic to input data to arrive at a decision.
  • Generative AI models can create new multi-modal content based on the patterns of the data they are trained on.
  • Machine learning algorithms learn from historical data to predict future outcomes and solve problems (see the brief sketch after this list).
  • Natural language processing recognizes and uses natural speech patterns to respond to commands and carry out tasks.
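To make the machine learning entry above concrete, here is a minimal sketch in Python using the open-source scikit-learn library. The numbers and model choice are illustrative assumptions rather than anything drawn from this article: a model is fit on historical observations and then used to predict an outcome it has not seen.

```python
# Minimal machine learning sketch: learn from historical data, then predict a future outcome.
# The numbers below are made up for illustration; a real use case needs its own dataset.
from sklearn.linear_model import LinearRegression

# Historical observations: monthly ad spend (in $1,000s) and the sales that followed.
ad_spend = [[10], [15], [20], [25], [30]]   # one feature per row
sales = [110, 160, 205, 260, 300]           # outcomes the model learns from

model = LinearRegression()
model.fit(ad_spend, sales)                  # the "learning from historical data" step

# Prediction step: estimate the outcome for a spend level not seen in the historical data.
predicted = model.predict([[40]])
print(f"Predicted sales at $40k ad spend: {predicted[0]:.0f}")
```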
“You don’t have to have specialized knowledge to leverage the power and insights from massive amounts of data anymore.”
Feliz Fuentes Montpellier
General Manager, Industry Software Partners, Microsoft

Five AI use cases on the horizon

The field of AI is advancing so quickly that many experts are hesitant even to speculate about potential future use cases, noting that almost anything could be on the table. But some applications are already in preliminary or pilot phases of development and could start to see more widespread uptake in the coming years. These include:

Self-driving labs: Scientific research involves a lot of precise, repetitive tasks that robots could easily accomplish. The Argonne National Laboratory’s Rapid Prototyping Lab is working on integrating robotics into lab work, but that’s only the beginning. With further development, AI could conduct initial literature reviews to summarize the current state of research in a given area and propose new topics for study and methods to try. Looking even further ahead, the AI could be given leeway to examine existing research, decide – on its own – the next steps and carry out the research with minimal human oversight, which could significantly accelerate the pace of new discoveries.

Highly effective decision-support algorithms: Human existence is full of decisions, and many of them are highly complex. Mostaghim’s research is primarily devoted to developing decision-support AI that can examine a vast array of possible options and narrow them down to a more manageable shortlist based on specified criteria.

“These tools help balance the human need to look at enough options to feel confident they’ve made the best decision with the human inability to effectively choose between more than about seven options,” says Mostaghim.

They can be applied to purchasing decisions, healthcare treatment options or even industrial processes, with AI able to propose options that balance many competing criteria, such as cost-effectiveness vs. environmental sustainability.
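To make this idea concrete, here is a minimal Python sketch of how a decision-support tool might narrow a large option set; the options, criteria and weights are invented for illustration and are not taken from Mostaghim’s research. It keeps only the options that no other option beats on every criterion (a simple Pareto filter) and then ranks the survivors into a short, manageable list.

```python
# Minimal decision-support sketch: reduce many options to a short, balanced shortlist.
# Options and criteria are invented for illustration; lower values are better for both.
options = {
    "Option A": {"cost": 120, "co2_kg": 40},
    "Option B": {"cost": 95,  "co2_kg": 60},
    "Option C": {"cost": 150, "co2_kg": 25},
    "Option D": {"cost": 160, "co2_kg": 70},   # worse than Option A on both criteria
    "Option E": {"cost": 100, "co2_kg": 55},
}

def dominated(candidate, others):
    """True if some other option is at least as good on every criterion and better on one."""
    return any(
        all(o[k] <= candidate[k] for k in candidate) and any(o[k] < candidate[k] for k in candidate)
        for o in others
    )

# Step 1: Pareto filter - drop options that another option beats on all criteria.
pareto = {
    name: vals
    for name, vals in options.items()
    if not dominated(vals, [v for n, v in options.items() if n != name])
}

# Step 2: rank the survivors with (illustrative) weights to form a manageable shortlist.
weights = {"cost": 0.6, "co2_kg": 0.4}
shortlist = sorted(pareto, key=lambda n: sum(weights[k] * pareto[n][k] for k in weights))

print("Shortlist:", shortlist[:3])
```

In a real system the number of options and criteria would be far larger and the optimization far more sophisticated, but the goal is the one described above: a balanced shortlist a human can reasonably choose from.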

Pharmaceutical discovery: AI could be used to accelerate the process of discovering new drugs for specific medical conditions – whether for pandemic events or for rare conditions that affect only a small number of people and therefore attract minimal research funding and little interest from for-profit drug companies today. It could ultimately even help create truly individualized medications, leading to personalized medicine.

Personalized education: AI can already be used to supplement conventional education with individual tutoring. But the addition of machine learning could enable an AI tutor to adapt to a student’s learning style, providing more effective instruction tailored to each student it works with.

Autonomous creation and design: AI-generated art is just the beginning. Future AI could take a set of requirements and then create entirely new designs for existing products. Pushed to its limits, this could be one of the most innovative applications of AI. For example, with broad enough criteria, an AI might opt not to create a better car but to propose a completely new solution to transportation.

“It’s hard to overstate how disruptive AI could be,” says Sean Kennedy, who leads the AI Research Lab at Nokia Bell Labs. “We’re so used to making changes incrementally based on rigid standards, and AI has the potential for something else entirely.”

Given the speed at which the technology is developing, there will undoubtedly be many more use cases that have yet to be conceived.

“In 10 years, anything is possible,” says Lee. “We might see AI superintelligence that outperforms humans in everything it does.”

“In 10 years, we might see AI superintelligence that outperforms humans in everything it does.”
Anne Lee
Senior Technology Advisor, Technology Leadership Office

Responsible AI is critical to getting there

But there is work to do to achieve the promise of AI, and that work must be done the right way to ensure its advances bring more benefits to humanity than risks. That will require organizations across all industries to have a certain level of maturity in their tools, data and people.

First, they need to recognize the importance of having the right AI technologies, models and platforms for their needs and goals — and invest accordingly to drive value for their business.

Next, AI tools rely on massive amounts of data to be effective, so organizations need to understand the data they have at their disposal. They also need to implement systems to collect, control, store, organize and access that data effectively. Depending on the data set, this could also include controls to exclude objectionable material or other data an organization doesn’t want AI tools to use.
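As a simple illustration of the kind of data control described above, here is a minimal Python sketch; the records and blocked terms are hypothetical placeholders, and a real pipeline would use far more robust classification than keyword matching. It screens raw records against a blocklist so that excluded material never reaches the AI tools that consume the data.

```python
# Minimal data-control sketch: exclude records an organization does not want AI tools to use.
# The records and blocked terms below are hypothetical placeholders.
BLOCKED_TERMS = {"confidential", "internal only", "offensive-term"}

raw_records = [
    "Customer asked about delivery times for order 1042.",
    "CONFIDENTIAL: draft pricing strategy for next quarter.",
    "Support chat transcript about a password reset.",
]

def is_allowed(record: str) -> bool:
    """Reject any record containing a blocked term (case-insensitive substring match)."""
    text = record.lower()
    return not any(term in text for term in BLOCKED_TERMS)

# Only the cleared records would be stored or passed on to an AI tool.
cleared = [r for r in raw_records if is_allowed(r)]
print(f"Kept {len(cleared)} of {len(raw_records)} records")
```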

Finally, organizations need to recognize the value of their people. That means ensuring they have the proper training and skills to work effectively with AI.

“Having the right level of maturity in these areas is critical to understanding the scope of what’s coming next and getting ready for it,” says Kennedy.

But even with the right tools, data and people in place, some fundamental issues with how AI is developed still have yet to be overcome. If not appropriately trained, AI can provide inaccurate information, plagiarize other content or reinforce existing biases. 

Governments and enterprises have vital roles to play

To drive the positive outcomes we hope will come, governments and enterprises will need to spend the next few years focusing on building solid ethical and legal foundations for AI development. To that end, in May 2023, US President Joe Biden convened some of the biggest players in global AI development to ask for their commitment to managing AI appropriately. Some organizations, including Nokia and Microsoft, are already working in this area – and are strong advocates for appropriate AI safeguards and regulations.

For several years, Nokia Bell Labs has been researching how AI affects humans and how humans interact with AI. This work informed Nokia’s responsible AI framework (including its six pillars of responsible AI: fairness, reliability, privacy, transparency, sustainability and accountability). Nokia’s approach is to ensure these pillars are applied from the moment a new AI solution is conceived and enforced throughout the solution’s life.

Similarly, six principles inform the development and use of AI at Microsoft: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

“What happens to your data should be a top consideration whenever you’re thinking about using an AI tool,” says Microsoft’s Fuentes Montpellier. “Always ask about privacy and data use policies before you allow access to your data for any purpose.”

Frameworks like these are critical to determining when and how to use AI and to ensuring that those who use it can trust its output. That trust is necessary to move forward with many of AI’s most promising use cases.

“I strongly believe that AI will lead to unbelievable value and good for humanity,” says Kennedy. “But the only way to get there is through a deep, ingrained focus on responsibility.”