Delivering maximum social benefit from artificial intelligence (AI): Leveraging its capabilities responsibly

AI safety is not a future problem to solve; it's a real problem right now. In this blog, I'll explain the challenges we face as a society in developing and using AI, and propose some operating principles to help us leverage AI for social good.
Britta Srivas
Customer-facing Solution Engineer

AI safety is not a future problem to solve; it's a real problem right now. From public safety risks to deepfakes, AI hallucinations, bias in AI algorithms, and copyright infringement, we have plenty of real, practical problems to tackle when it comes to building AI responsibly today! As with all general-purpose technologies throughout history, AI represents the next leap in capability. As AI continues to evolve at lightning speed, it's up to us to ensure that it's developed responsibly, enhancing its positive impact while mitigating potential harm. But where to start?

That's where responsible AI comes in - an approach that prioritizes safety, fairness, and transparency to maximize social benefits. By following key principles such as societal benefits, fairness, privacy, and transparency, we can build trust between humans and machines while ensuring safe interactions with technology.

But it's not without its challenges - from AI's moral dilemma to privacy and governance concerns. To address these challenges, we need to collaborate and develop best practices for responsible AI, while continually iterating and testing for improvement. With the right technology and approach, we can safely adopt AI and maximize its potential for greater good.

Responsible AI operating principles

Responsible AI operating principles focus on using technology to deliver societal benefits, while also taking safety and equity considerations into account. I believe that the following operating principles are essential considerations when designing and implementing AI systems:

Societal benefits: AI technology should maximize the potential benefits for people and society, for example by advancing its development, distribution, and use to benefit climate and sustainability. On the other hand, AI practitioners need to be mindful of the potential harm that AI can cause, for example through unintended or malicious outputs. This can include physical injury (e.g., from autonomous vehicles), embarrassment (e.g., from facial recognition), or use cases that support harmful actions (e.g., warfare).

Fairness: Fairness in AI systems has several facets, from ensuring that the model has been trained on diverse, unbiased data to how the trained system is used. Equity and inclusion considerations include individual differences related to race, gender identity or expression, age, socioeconomic status, sexual orientation, disability status, etc. None of these groups should be unduly advantaged or disadvantaged.
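
To make this concrete, here is a minimal sketch (in Python, with hypothetical column names) of the kind of check a team might run before release: comparing a model's error rate across demographic groups in an evaluation set.

```python
import pandas as pd

def group_error_rates(df: pd.DataFrame, group_col: str,
                      label_col: str, pred_col: str) -> pd.Series:
    """Error rate of the model's predictions per subgroup.

    Large gaps between subgroups are a signal to revisit the training
    data and the model before anything goes into production.
    """
    errors = (df[label_col] != df[pred_col]).astype(float)
    return errors.groupby(df[group_col]).mean()

# Hypothetical evaluation set with true labels and model predictions:
eval_df = pd.DataFrame({
    "group": ["a", "a", "b", "b"],
    "label": [1, 0, 1, 1],
    "pred":  [1, 0, 0, 1],
})
print(group_error_rates(eval_df, "group", "label", "pred"))
```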

Privacy: Individuals' privacy should be preserved. There are several techniques, including privacy-enhancing technologies (PETs), that enable privacy-preserving training of AI models. Leveraging these technologies allows companies to innovate without exposing individuals' private information.
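
One such technique is differential privacy, which bounds how much any single individual's data can influence a trained model. The NumPy sketch below illustrates the core idea behind DP-SGD-style training (clip each person's gradient contribution, then add calibrated noise); it is a conceptual sketch, not a production implementation.

```python
import numpy as np

def private_gradient(per_example_grads: np.ndarray,
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1,
                     seed: int | None = None) -> np.ndarray:
    """Aggregate per-example gradients with clipping and Gaussian noise.

    Bounding each individual's contribution and noising the sum is the
    core recipe behind differentially private model training.
    """
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```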

Transparency: When using a pre-trained AI model, it’s important for the user to understand why a model predicts the way it does and which data it was trained on. Traceable oversight of model and data usage increases trust and lays the foundation for responsible AI use.
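
A lightweight way to start is to ship every trained model with a small, machine-readable provenance record. The sketch below uses hypothetical field names and values; richer formats such as model cards exist, but even this much makes oversight traceable.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelRecord:
    """Minimal provenance record stored next to the model artifact."""
    model_name: str
    version: str
    training_datasets: list[str]   # identifiers of the datasets used
    intended_use: str
    known_limitations: str

record = ModelRecord(
    model_name="churn-classifier",
    version="1.3.0",
    training_datasets=["customers-2022-q4", "support-tickets-2022"],
    intended_use="Ranking accounts for proactive customer outreach.",
    known_limitations="Not validated on markets outside the EU.",
)

with open("model_record.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```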

Challenges in implementing responsible AI

Implementing responsible AI is challenging for several reasons:

AI's moral dilemma: Society is made up of diverse individuals who are free to make decisions based on their individual morality. AI brings the ability to scale decision making, informed by general human values and norms. It's a challenge to trust an AI's consolidated view of human values, and to agree on what those consolidated values should be. I'm keen to see how we as a society will solve this problem, as many topics evoke widely differing views. Understanding the precise implications of AI-based decision making for individuals and society will be a first step and an ongoing exercise.

Privacy, security and governance: It's a key challenge to design AI systems responsibly and to ensure that these systems remain secure, reliable, and fair over time. Data used to train the AI system must be protected from unauthorized access or manipulation. Users' personal data must remain confidential to protect their right to privacy. Data must be collected, stored, and used in a way that is consistent with legal requirements and ethical considerations, such as transparency about what types of data are being collected, who has access to it, and so on. While there are still few practical standards on how to build private and secure AI models, I am personally glad to see that data privacy is firmly on the agenda in Europe.
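
One small, concrete building block is an audit-logged gate in front of training data, so that every access is both authorized and recorded. Here is a hypothetical Python sketch; the role names and dataset identifier are made up.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_access_audit")

ALLOWED_ROLES = {"ml_engineer", "data_steward"}  # hypothetical roles

def open_training_data(user: str, role: str, dataset_id: str):
    """Check authorization and record every access to a training dataset."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if role not in ALLOWED_ROLES:
        audit_log.warning("%s DENIED %s (%s) -> %s", timestamp, user, role, dataset_id)
        raise PermissionError(f"Role '{role}' may not read '{dataset_id}'")
    audit_log.info("%s GRANTED %s (%s) -> %s", timestamp, user, role, dataset_id)
    # ... load and return the dataset here ...

open_training_data("britta", "ml_engineer", "customers-2022-q4")
```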

Scaling and maintaining AI systems: Finally, there are practical considerations when developing a responsible AI solution. These include a model's scalability (ensuring that the system operates reliably even under high demand) and maintainability (ensuring that future updates do not disrupt existing functionality). Structured, systematic testing of a model's performance and reliability is not easy and isn't common practice today, and the same is true for maintaining AI systems so that they remain secure, private, and accurate over time. Systematic processes are needed to keep a model reliable under high demand, to test updates properly before release, and to keep the system continually up to date.
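
For the scalability part, even a simple replay of recorded requests against the serving endpoint, checked against an agreed latency budget, catches many problems early. A sketch follows; the 200 ms budget and the `predict` callable are placeholders for whatever a team's serving setup provides.

```python
import statistics
import time

def meets_latency_budget(predict, payloads, p95_budget_ms: float = 200.0) -> bool:
    """Replay requests against the model and check 95th-percentile latency.

    `predict` is any callable wrapping the deployed model; `payloads`
    is a list of representative requests recorded from production.
    """
    latencies_ms = []
    for payload in payloads:
        start = time.perf_counter()
        predict(payload)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    p95 = statistics.quantiles(latencies_ms, n=20)[-1]  # ~95th percentile
    return p95 <= p95_budget_ms
```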

Potential ways to tackle those challenges

While these challenges are not easy to overcome, I think there are good ways to address them:

Developing best practices for responsible AI: To ensure responsible AI, it is important that all stakeholders involved in the development process understand best practices for developing and using the technology safely and ethically. This includes being transparent about what data is collected and how it is used, as well as establishing clear rules about who is responsible if something goes wrong. In my view, accountability should always be a key consideration. These standards can be developed by different groups, such as technical communities, NGOs (by raising awareness and voicing public opinion), or governments (in the form of legislation).

Continual iteration, testing and improvement: A good AI system won't stay good over time without regular improvements. Developers need to keep up with new advances to ensure their systems remain safe, reliable and equitable over time. To ensure this, I personally think it's each organization's responsibility to create the right structures for their employees to be able to do this. Introducing QA practices (e.g., clear AI model release and regular testing processes) helps identify potential problems before a system is embedded into an organization's code base and deployed to production.
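
A concrete example of such a QA practice is a release gate that runs in CI before a new model version is promoted. The thresholds and the pytest-style test below are illustrative assumptions, not a prescribed standard.

```python
# test_release_gate.py -- executed in CI before a new model version is promoted.
MIN_ACCURACY = 0.90      # absolute floor agreed with stakeholders
MAX_REGRESSION = 0.01    # candidate may not be more than 1 point worse

def passes_release_gate(candidate_acc: float, current_acc: float) -> bool:
    """Return True if the candidate model may replace the current one."""
    return (candidate_acc >= MIN_ACCURACY
            and candidate_acc >= current_acc - MAX_REGRESSION)

def test_candidate_meets_release_gate():
    # In a real pipeline these numbers would come from evaluating both
    # models on the same held-out dataset; here they are placeholders.
    candidate_acc, current_acc = 0.92, 0.91
    assert passes_release_gate(candidate_acc, current_acc)
```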

Collaboration between stakeholders: Finally, collaboration between different stakeholders, including engineers, designers, ethicists, and legal experts, is essential to developing responsible solutions. By coming together to discuss the challenges of building an ethical model from multiple perspectives, practitioners can gain valuable insights into potential pitfalls while finding ways to mitigate risks when implementing an AI solution at scale.

Conclusion

In today's world, responsible AI is not just an option; it is an absolute necessity. It's imperative that we ensure AI is developed and used ethically and safely to realize its full potential benefits for society. Governments around the world are starting to implement regulations to ensure that AI models are used in a fair and responsible way (e.g., the EU AI Act). This means that companies will need to step up their game when it comes to AI security practices, and it's a great opportunity for new players to come in and support the AI ecosystem to innovate responsibly.

Leveraging existing technologies, such as federated learning, will be critical to achieving responsible AI. This technology enables AI models to be trained without sharing data, reducing potential risks associated with data sharing, compliance and privacy. Apheris is building a secure and federated infrastructure that allows data custodians to jointly leverage their data for AI training while providing only relevant data with limited access in a privacy-preserving manner. This opens up new possibilities for implementing best practices for responsible AI, such as diverse and representative training data, allowing organizations to address issues such as bias and fairness with a more comprehensive and thoughtful approach.
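
To illustrate the idea, here is a generic sketch of federated averaging (not Apheris' actual implementation): each data custodian trains locally, and only model parameters, never raw data, are combined centrally.

```python
import numpy as np

def federated_average(client_weights: list[np.ndarray],
                      client_sizes: list[int]) -> np.ndarray:
    """Combine locally trained model parameters, weighted by dataset size.

    Only parameters leave each client; the raw training data never does.
    """
    coefficients = np.array(client_sizes, dtype=float) / sum(client_sizes)
    return np.tensordot(coefficients, np.stack(client_weights), axes=1)

# One training round with two hypothetical data custodians:
w_site_a = np.array([0.2, 1.5, -0.3])   # parameters trained on site A's data
w_site_b = np.array([0.4, 1.1, -0.1])   # parameters trained on site B's data
global_weights = federated_average([w_site_a, w_site_b], [1200, 800])
print(global_weights)
```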

The stakes are high, and time is of the essence, but I feel encouraged to work towards a future where technology is used for social good in a thoughtful and ethical way.

Collaboration
Privacy
Machine learning & AI
Federated learning & analytics