
What is Responsible AI?

Nokia has defined six principles, or pillars, that should guide all AI research and development in the future. We believe these principles should be applied the moment any new AI solution is conceived and then enforced throughout its development, implementation and operation stages.

Nokia Bell Labs and e& announce R&D collaboration to innovate for strategic industrial sectors

The goal is to develop responsible AI solutions for sustainable enterprise and industrial automation applications and to accelerate innovation concepts toward real-world deployments.


Read the press release


For AI to thrive, it must put humans first

The case for human-centric AI design and development.

Read the blog

Fairness

AI systems must be designed in ways that maximize fairness, non-discrimination and accessibility. All AI designs should promote inclusivity by correcting both unwanted data biases and unwanted algorithmic biases.
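
A fairness principle like this becomes actionable once bias can be measured. As a minimal sketch (not Nokia's methodology; the data, group labels, and function names below are hypothetical), the following Python computes per-group selection rates and the demographic parity gap, one common way to quantify unwanted bias in a model's decisions:

```python
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive decisions observed for each demographic group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups.

    A gap near zero is consistent with demographic parity; a large gap
    flags a potential fairness problem worth investigating.
    """
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log: (demographic group, model decision) pairs.
decisions = [("a", 1), ("a", 1), ("a", 0),
             ("b", 1), ("b", 0), ("b", 0)]

print(selection_rates(decisions))         # {'a': 0.666..., 'b': 0.333...}
print(demographic_parity_gap(decisions))  # 0.333...
```

A large gap flags a problem but does not locate its source; in practice a team would then check whether the skew originates in the training data (data bias) or in the model's learned decision boundary (algorithmic bias).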

Designing Value-Aligned Human-AI Interaction

Zana Buçinca examines over- and under-reliance on AI advice caused by design flaws, advocating for AI tools that align with human values to improve decision-making and human…

Responsible AI in Education

Rene Kizilcec explores AI in education, focusing on trust, cultural bias, and improving AI models as personalized tutors, with promising examples and future research…

Emotion AI in the Future of Work

Nazanin Andalibi critiques Emotion AI's rise in workplaces, discussing validity, bias, and surveillance concerns. She highlights its sociotechnical impact and persistent harms…

Investigating Algorithmic Biases in Child Welfare Systems Through Human-Centered Data Science

Shion Guha's study reveals biases in child-welfare algorithms, highlighting flawed data use and a focus on legal risk rather than on maximizing children's welfare outcomes.

The Ethics of Emotion in Artificial Intelligence Systems

Artificial Emotional Intelligence (AEI) raises ethical concerns. Luke questions how emotions are modeled and whether deploying AEI for public use is appropriate.

Human-Centered Approaches to Supporting Fairness in AI

Vivek Krishnamurthy of the University of Ottawa gave a talk titled “Human-Centered Approaches to Supporting Fairness in AI”.

The Future of AI for Social Good

Saiph Savage demonstrates how AI-based coaching on crowd-sourcing platforms can boost worker wages, efficiency, and skills while promoting fairness.

Ethics in AI: A Challenging Task

Ricardo Baeza-Yates of the Institute for Experiential AI at Northeastern University gave a talk titled “Ethics in AI: A Challenging Task”.

Human-Centered Approaches to Supporting Fairness in AI

Michael Madaio from Microsoft Research discusses human-centered approaches to fairness in AI, focusing on context-driven checklists across AI design phases.

Designing Artificial Intelligence to Navigate Societal Disagreement

Michael Bernstein of Stanford introduced 'jury learning,' a method addressing bias in AI by ensuring underrepresented groups influence classifier predictions.

Maintaining fairness under distribution shift

Jessica Schrouff from Google Research explores the problem of distribution shift in healthcare AI, focusing on dermatology and electronic health records.

Is Legal AI Ethical AI?

Harvard researchers discuss the ethical challenges of legal AI, exploring how moral principles are translated into algorithms, and the risks of unintended discrimination.

Who are we listening to? Building blocks for trustworthy AI

Hillary Juma of Mozilla Foundation introduces the Common Voice project, improving voice tech accessibility by collecting 13,000 hours of data in 76 languages.

Reliability, Safety and Security

Privacy

Transparency

Sustainability

Accountability