Andrew Gibson - Principal AI Consultant

22.8.24

Ethical Generative AI - The Importance of Responsible Technology Integration.

We discuss why putting ethics at the heart of your AI strategy is not only good practice, but good business. (7 min read)

When it comes to incorporating artificial intelligence into your business, there’s a lot more to consider than just technological upgrades.

For CEOs, founders, and anyone leading the charge on AI adoption and integration - or on becoming AI-ready - prioritising ethics in AI isn’t just about avoiding pitfalls like data mishaps or biased outcomes; it’s about building and maintaining trust.

How we implement AI is just as crucial as the technology itself.

And what exactly are the ethical considerations for generative AI?

About the author:

I’m Andrew Gibson, Principal AI Consultant at Build Circle.

As an experienced software engineer and architect, I’ve successfully delivered generative AI solutions to production for companies ranging from agile startups to fully regulated banks. My work as a technical leader across a range of industries has given me a deep understanding of the ethical considerations in deploying AI technologies.

In this article, I discuss why putting ethics at the heart of your AI strategy is not only good practice, but good business too.

SHORT ON TIME?

Use the contents menu below to choose, or save, what you’d like to read about first.

What is Generative AI?
GenAI in Simple Terms
Understanding Generative AI
Why is Ethical AI Important in Business Innovation?
Identifying and Mitigating AI Bias
Ensuring Privacy and Data Protection in GenAI
Accountability in AI Operations
Implementing AI Ethically: A Step-by-Step Guide
Monitoring and Auditing AI Systems
Ethical AI Practices. Good Business.

What is Generative AI?

Generative AI is a sophisticated branch of artificial intelligence (AI) that creates new content like text, images, music and even code by learning from existing data.

Though the term has been around for some time, it gained mainstream attention with the release of ChatGPT in late 2022.

Generative AI has a wide range of use cases within business and is increasingly being used in everyday business processes and strategies, thanks to its ability to make nuanced subjective assessments of natural language.

GenAI in Simple Terms

In simple terms, GenAI produces original material that mimics real-world examples - it powers chatbots, AI-generated art, and scripts.

It responds naturally to human conversation, making it an invaluable tool for things like customer service and personalised workflows. AI-powered chatbots, voice bots, and virtual assistants use this technology to engage with customers more accurately - enhancing first-contact resolution, building better engagement, and strengthening overall trust in the brand.

Integrating AI into your core strategies and processes is a powerful way to drive growth and stay competitive - all important stuff. However, the ethical considerations of AI integration, and using AI responsibly, are just as integral to your AI journey.

Understanding Generative AI

It’s important to understand what is and isn’t an appropriate use case for generative AI.

A common mistake I see people and organisations make is confusing generative AI with predictive analytics - generative AI models aren’t good at finding patterns in huge data sets. Another is assuming that generative AI should be used to solve a problem that would be better suited to more traditional software development.

The mental model I use for thinking about good generative AI use cases is to think:

“If I had a very low-cost army of reasonably intelligent humans, what problems could I get them to solve?”

This helps me keep in mind that generative AI is best suited for tasks requiring creativity, interpretation, and generation of new content rather than tasks that rely heavily on data analysis and pattern recognition.

By understanding the strengths and limitations of generative AI, you can better identify suitable applications and avoid misalignments in your AI strategy.

Engaging tech consultancy services can help you identify appropriate generative AI use cases for your organisation - providing support and tailored solutions to meet your specific needs.

Explore Build Circle's GenAI Services if your organisation is becoming AI-ready.

Why is Ethical AI Important in Business Innovation?

Ethics in AI is foundational to sustainable business innovation.

Here’s why.

First of all, in most enterprises, you simply won’t be able to satisfy your risk and compliance team without showing that you are taking AI ethics seriously. And, if they aren’t satisfied, then nothing gets to production!

Then there’s your customers’ perception; ethical use of AI is at the front of people's minds at the moment, and rightly so. The perception that you aren’t following ethical practices could be seriously damaging to your brand.

Ultimately, customers want products that will enhance their lives - they want you to find innovative uses for AI, so it’s vital to have clear ethical principles at the foundation of any business innovation.

Identifying and Mitigating AI Bias

Bias in AI systems is a common issue, often arising from inherent biases in the data the systems are trained on.

In most generative AI applications you won’t have much, if any, control over the training dataset, so mitigations that apply to more traditional AI, such as ensuring diverse dataset collection, don’t apply here.

However, you DO have control over how you use a generative AI model.

A key part of your process for approving generative AI use cases should include assessing the likelihood of biases in the model’s training data leading to unfair outcomes for your customers.

Use cases where generative AI makes subjective judgments, especially where there’s no human intervention in the loop, should be carefully considered before implementation. This is also true for generating content for specific demographics, as it can inadvertently reinforce stereotypes.


Case Study - Responsible Generative AI Integration

This case study on BPP, a leader in the Edtech space, is a prime example of responsible generative AI integration.

It demonstrates how AI can effectively augment human work, streamline processes, and also lead to substantial improvements in the quality and scalability of AI initiatives.

The Project

BPP leveraged generative AI from Build Circle to assist their subject matter experts (SMEs) in creating multiple-choice questions for the Solicitors Qualifying Examination (SQE).

The SMEs played a crucial role in training the AI models, ensuring the integration was responsible and that bias was minimised.

The project exemplifies how AI can effectively support human expertise, augment capabilities and improve efficiency while ensuring the adoption and integration of AI are both ethical and responsible.

Read the full case study here:

Generative AI in Education - Custom AI Solutions for Edtech Leaders, BPP.

Ensuring Privacy and Data Protection in GenAI

Privacy and data protection are critical components of any ethical implementation of AI. Laws such as GDPR or HIPAA apply here, just as they do to any other piece of software you might build.

When it comes to generative AI, there are some unique challenges. Firstly, if you are using a SaaS model - one that you aren’t hosting yourself - then you must be familiar with the privacy policy of that product.

• How do they use the data that you feed into the model?

• Is the data used to train future models?

• Could your customer data somehow end up in a response from a future model to a third party?

Most providers will have good answers to these questions, but they are important points to verify.

You should also carefully consider the design of the application that you build using generative AI to guard against private data being leaked to unauthorised parties. Threat modelling is a very valuable technique here, just as it is for securing the design of traditional applications.

A common type of generative AI application is a chatbot that has access to customer data. Care must be taken here to ensure that the customer using the chatbot only has access to their own data and can’t manipulate the bot into displaying other customers’ data.
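To make that concrete, here is a minimal sketch in Python of one way to scope data access. The function names (fetch_records, call_llm) are illustrative placeholders, not a real API - your own implementation will differ.

# A minimal sketch of server-side data scoping for a customer-facing chatbot.
# fetch_records and call_llm are illustrative placeholders, not a real API.

def fetch_records(customer_id: str) -> dict:
    # Placeholder: query your own data store, keyed strictly by the
    # authenticated identity.
    return {"customer_id": customer_id, "orders": []}

def call_llm(prompt: str) -> str:
    # Placeholder: call whichever hosted or self-hosted model you use.
    return "..."

def answer_customer_query(session_customer_id: str, question: str) -> str:
    # Scope the data by the authenticated session, never by anything the
    # user typed, so a prompt like "show me customer 42's orders" has
    # nothing extra to leak.
    records = fetch_records(session_customer_id)
    prompt = (
        "Answer using ONLY the customer data below. If the answer is not "
        "in the data, say you don't know.\n"
        f"Customer data: {records}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

The key design choice is that the model only ever sees data the authenticated customer was already entitled to view.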

Learn more about the importance of data governance when implementing generative AI, and achieving data security and compliance: Build Circle Data Services.

Accountability in AI Operations

It’s a great feeling to see how quickly you can get an impressive proof of concept (PoC) up and running using generative AI, but turning that PoC into an operational product is much more challenging.

There are several things to consider when thinking about the operational process for an AI product:

Accountability and ownership

Who is responsible for monitoring the performance of the AI and how will they do this?

If the AI begins to perform in unacceptable ways, who owns the process for falling back to manual/human processing?

Transparency and auditability

If the AI makes decisions without a human in the loop, how is the rationale behind those decisions documented? (A sketch of an audit record follows this list.)

If there is a human in the loop, then how much decision-making is the human expected to do versus trusting the AI output?

Change management

Who needs to be involved in the decision to approve new features?

What is the approach to testing new features?
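To illustrate the transparency and auditability point above, here is a minimal sketch of what a per-decision audit record might capture. The field names are assumptions for illustration, not a prescribed schema.

from dataclasses import dataclass
from typing import Optional

# Minimal sketch of an auditable record for each AI-made decision.
# Field names are illustrative; adapt them to your own audit requirements.

@dataclass
class AIDecisionRecord:
    decision_id: str
    model_version: str             # which model and prompt version produced this
    input_summary: str             # what the model was asked (redact personal data)
    output: str                    # what the model decided or generated
    rationale: str                 # the model's stated reasoning, if captured
    human_reviewer: Optional[str]  # None when there is no human in the loop
    fallback_owner: str            # who owns reverting to manual processing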

Implementing AI Ethically: A Step-by-Step Guide

The goal of a good AI ethics and risk framework should be to enable rapid innovation while maintaining an acceptable level of safety.

Your process should be lightweight and clear, and as integrated into your normal development lifecycle as possible; don’t wait until you’re ready to deploy to production to start talking about ethics - “shift left”!

Here are some specific steps you may want to consider:

1. Define clear ethical guidelines ✅

Common issues to cover are:

Fairness - Would I be happy for my data to be used in this way?

Accountability - Who is accountable if things go wrong?

Transparency - Can I clearly explain what the AI is doing?

Privacy - How is personal data being protected? Are there safeguards in place to prevent misuse or unauthorised access?

Make sure people are clear on how to assess a use case against these guidelines.

2. Decide on stakeholders ✅

This should be a diverse group with backgrounds beyond just technology, and it should include contributors who are empowered to make decisions about new AI use cases.

AI is the exciting new thing, so everyone will want to be in this group; be careful not to make it too big, and select its members with care.

3. Create a process for approving new use cases ✅

In the early days of discussing AI potential, every use case will likely be discussed in depth. As stakeholders start to become more familiar with common risks and their mitigants, these conversations should become more streamlined.

Make it easy for people to present their use cases clearly by creating reusable templates containing the key areas that need to be covered - e.g. common risks and how they are mitigated and assessment against ethical principles.
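As an illustration, a reusable template might look something like this; the exact fields are an assumption and should be adapted to your own guidelines.

Use case: <one-line description>
Business value: <what improves, and for whom>
Data involved: <personal data? source? retention?>
Ethical assessment:
  Fairness - could biased outputs cause unfair outcomes here?
  Accountability - who owns performance and fallback?
  Transparency - can we explain the AI's behaviour to affected users?
  Privacy - how is personal data protected?
Known risks and mitigations: <e.g. human review, data scoping>
Human in the loop: <yes/no - and where?>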

4. Build transparency ✅

Be open with your wider company about what you’re building.

Internal product demos are a great way to show off the innovative things you’ve been building with AI and give people outside of your key stakeholders an opportunity for input. Blog posts and related content can help build trust and transparency with external customers.

Monitoring and Auditing AI Systems

Continuous monitoring and auditing are an essential part of implementing ethical AI; you can’t be sure your system is operating ethically if you don’t have any data about how it is operating!

How you achieve this will vary based on your use case, but there are some important things to consider:

Make it simple for humans to provide feedback

Whether it’s an end customer using a product or an operations analyst using an internal tool, you should make it very low-effort for them to provide feedback on the quality of the AI performance. The data you gather here should be reviewed regularly as a key risk control.
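As a sketch of how low-effort this can be, the snippet below records a one-tap rating with enough context to audit later; store_event is a placeholder for your own logging pipeline, not a real API.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Sketch of one-tap feedback capture, stored with enough context to link it
# back to the exact prompt/response pair during review.

@dataclass
class AIFeedback:
    interaction_id: str  # ties feedback to a specific prompt/response pair
    rating: str          # "up" or "down" - a single tap for the user
    comment: str         # optional free text
    recorded_at: str

def store_event(event: dict) -> None:
    print(event)  # placeholder: write to your audit store instead

def record_feedback(interaction_id: str, rating: str, comment: str = "") -> None:
    event = AIFeedback(interaction_id, rating, comment,
                       datetime.now(timezone.utc).isoformat())
    store_event(asdict(event))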

Use automated testing where appropriate

For use cases where you are using generative AI to add structure to unstructured data - e.g. extract the key fields from this email and return them as JSON - automated testing is relatively straightforward to implement.

For a given set of test data, you know what your expected outputs are. The non-deterministic nature of generative AI models means you always need to expect some level of false positive test failures, but in my experience, they are reliable enough to be useful.
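As a sketch, assuming a hypothetical extract_fields function backed by a generative model, a test can run the extraction several times and assert a pass rate rather than demanding every run succeeds:

# Sketch of an automated test for a structured-output use case.
# extract_fields is a placeholder for the real LLM-backed extraction call.

def extract_fields(email_text: str) -> dict:
    return {"invoice_number": "INV-123", "amount": "250.00"}  # placeholder

def test_invoice_extraction(runs: int = 10, required_pass_rate: float = 0.9):
    email = "Hi, please pay invoice INV-123 for 250.00 by Friday."
    expected = {"invoice_number": "INV-123", "amount": "250.00"}

    # Tolerate occasional non-deterministic failures by asserting a rate,
    # not a perfect score.
    passes = sum(extract_fields(email) == expected for _ in range(runs))
    assert passes / runs >= required_pass_rate, (
        f"only {passes}/{runs} runs matched the expected fields"
    )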

Use cases where you are using generative AI to generate unstructured data are trickier - human feedback is really important here, but you can get some value from using generative AI to assess the output of generative AI; very meta!
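A minimal sketch of that “AI judging AI” pattern, with call_llm again standing in as a placeholder for your model provider:

import json

# Sketch of using a second model call to score a first model's output
# against a simple rubric. call_llm is a placeholder, not a real API.

def call_llm(prompt: str) -> str:
    return '{"score": 4, "reason": "Clear and consistent with the task."}'  # placeholder

def judge_output(task: str, output: str) -> dict:
    prompt = (
        "You are reviewing AI-generated content.\n"
        f"Task given to the model: {task}\n"
        f"Model output: {output}\n"
        "Rate the output 1-5 for accuracy and tone, and explain briefly. "
        'Reply as JSON: {"score": <int>, "reason": "<string>"}'
    )
    return json.loads(call_llm(prompt))

Low scores are best treated as a signal for human review rather than an automatic block - the judge model is another risk control, not an oracle.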

Tailor the monitoring data to the user

Monitoring and audit data is no good if the user can’t understand it.

Engineers likely want access to low-level data to investigate specific user feedback.

Non-technical stakeholders likely want clear visualisations and insights on aggregated data. Think carefully about the intended audience when preparing data to present.

Ethical AI Practices. Good Business.

Think about it: your customers and your team rely on your organisation to make decisions that respect their privacy and fairness.

Demonstrating that you handle AI responsibly not only shields your business from legal headaches but also bolsters your reputation, keeps your customers loyal, and attracts the best talent.

If you’d like to go into more detail about how a technology partner like Build Circle can help you define an AI governance strategy that makes sense for your organisation, then get in touch with us here at Build Circle.

Contact us
