Table of contents

Communicating AI results to clients in a clear and understandable manner
Why insight matters
Let’s start with an example
Waiting in line
The stakeholders
Make it explainable
Opening up the black box
Back to the roots
Explanation by example
Advanced methods
Let’s apply what we learned
Final destination
In conclusion

Communicating AI results to clients in a clear and understandable manner

02 Nov 2020

To communicate AI results effectively, focus on simplifying explanations, using examples, and ensuring transparency. Explain model decisions clearly so clients can understand them.

People who interact with AI systems or services still often see them as impenetrable black boxes. What many people don't know, however, is that this no longer needs to be the case. Thanks to advances in AI technology and carefully chosen algorithms, we can let users ask for an explanation and give managers deeper insight.

Why insight matters

There are many reasons why you may want an explanation behind the results an AI system gives you and the decisions you base on them. You may be legally required to explain which data factored into a result and why it was processed in a certain way, for example under the GDPR. Transparency can also inspire trust, for instance when communicating results to important stakeholders. Finally, systems need to be explainable in order to be fair, or at the very least to have their fairness checked.

Let’s start with an example

Waiting in line

Imagine you've built an AI system that predicts queuing times at the airport. Its predictions are used to inform passengers how long they will have to wait at the security checkpoint and the border control station. The same system is used by border control agents and security staff to optimize their personnel allocation. For example, when they see that the queuing time at a certain security checkpoint is predicted to rise to 30 minutes, they may transfer a staff member there to try to prevent it.

The stakeholders

As the developer, you've been getting questions from the border control agents. They've noticed that the predictions for long queues sometimes seem unreasonable at first sight. On a calm day with very few flights leaving, the predicted waiting times can be inexplicably long. Conversely, on days that are usually busy, the system sometimes predicts a temporary drop in queuing times. How would you explain the cause of these strange predictions?

Make it explainable

Opening up the black box

Machine learning models typically take in data from many different sources to make their predictions. Many modern models also have complex mathematical structures with a large number of parameters, which makes it difficult to reason about them. In this sense, models can be seen as black boxes: you can see that they work, but it's hard to determine how, and even harder to explain why, they do what they do.

There are, however, some models that are easier to understand and techniques that can make opaque models more transparent.

Back to the roots

Let's start with an example of a model that's easier to understand: a decision tree. You can see it as a series of branching logic statements. When this model makes a prediction, it's easy to trace back the decisions it took and present that trace as an explanation, as the sketch below shows. You may have noticed that this can raise further questions: “Why that specific set of rules and not others?”, “What if one of these variables were a little bit different?”, …
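To make this concrete, here is a minimal sketch using scikit-learn. The queuing-time data and feature names are made up for illustration; the point is that the fitted tree is just a readable set of rules, and that the path a single prediction follows through those rules can be printed as an explanation.

```python
# A minimal sketch: a decision tree whose prediction can be traced back
# to the exact rules it applied. Data and feature names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical training data: [scheduled flights, staffed lanes, hour of day]
X_train = np.array([[10, 2, 8], [40, 2, 17], [25, 3, 12],
                    [35, 4, 17], [5, 1, 6], [45, 3, 18]])
y_train = np.array([12, 35, 15, 18, 5, 40])  # observed queuing time in minutes

tree = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)

# The whole model is a set of readable if/else rules:
print(export_text(tree, feature_names=["flights", "lanes", "hour"]))

# Trace the rules applied to one new situation:
x_new = np.array([[42, 2, 17]])
path = tree.decision_path(x_new)
print("nodes visited:", path.indices)
print("predicted queuing time:", tree.predict(x_new)[0], "minutes")
```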

Explanation by example

That brings us to another way of explaining predictions. The rules in our decision tree model are based on a number of training examples. It might be interesting for the users of a model to see what kind of examples contributed to the result they saw, as a kind of “explanation by example”. This would, however, require us to keep at least part of the training data and have a way of associating a new prediction with training examples, which doesn’t always work.
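A minimal sketch of this idea, assuming we keep (part of) the hypothetical training data around: for a new input, we look up the most similar stored situations and show them, together with the queuing times that were observed back then.

```python
# "Explanation by example": retrieve the stored training situations that most
# resemble a new input. Same hypothetical features as in the previous sketch.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X_train = np.array([[10, 2, 8], [40, 2, 17], [25, 3, 12],
                    [35, 4, 17], [5, 1, 6], [45, 3, 18]])
y_train = np.array([12, 35, 15, 18, 5, 40])

index = NearestNeighbors(n_neighbors=2).fit(X_train)

x_new = np.array([[42, 2, 17]])
_, neighbour_ids = index.kneighbors(x_new)

for i in neighbour_ids[0]:
    print(f"similar past situation: features={X_train[i]}, "
          f"queuing time was {y_train[i]} min")
```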

Advanced methods

For cases where explanation by example doesn't work, there are more advanced methods we can use. In neural network-based models for computer vision, for example, we may be able to show which parts of an image contributed most to the final result. There are also methods that quantify how much each input variable contributed to a prediction relative to the others, such as SHAP values.
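For the computer-vision case, one common approach is a gradient-based saliency map. The sketch below uses PyTorch with an untrained toy network standing in for a real model; it only illustrates the mechanics of obtaining per-pixel contributions, not a production setup.

```python
# A minimal sketch of a gradient-based saliency map for an image model.
# The toy, untrained CNN is a stand-in for a real classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # hypothetical input image
score = model(image)[0].max()  # score of the predicted class
score.backward()               # gradients of that score w.r.t. the input pixels

# A large absolute gradient marks a pixel that strongly influenced the prediction.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 64, 64)
print(saliency.shape)
```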

Let’s apply what we learned

Let’s get back to the airport queuing time case and try to piece together a way of explaining to the border control agents why these strange but very relevant spikes sometimes appear in the predictions.

Final destination

The model we use looks at many different variables, such as the countries our passengers are flying to. If most passengers are flying to an EU country, the agents will have little to do, but if a large number of passengers are flying to a country that requires extra checks, the agents will take more time per passenger, resulting in longer queues. The model was able to learn this from the examples it was given, and we can use an analysis technique to highlight that the “destination country” variable contributed the most to these extreme situations.
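A minimal sketch of that analysis, assuming a tree-based queuing-time model and the shap package. The features, the synthetic data, and the "share of non-EU destinations" variable are hypothetical stand-ins for the real inputs; the idea is that for one surprising prediction we can rank how much each input pushed the predicted queuing time up or down.

```python
# Rank the per-feature contributions (SHAP values) for one "strange" prediction.
# Model, data, and feature names are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

feature_names = ["flights", "lanes", "share_non_eu_destinations", "hour"]
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 4))
# Made-up relationship: more flights and more non-EU destinations -> longer queues
y_train = 30 * X_train[:, 0] + 25 * X_train[:, 2] - 5 * X_train[:, 1]

model = GradientBoostingRegressor().fit(X_train, y_train)
explainer = shap.TreeExplainer(model)

x_spike = np.array([[0.3, 0.5, 0.95, 0.6]])  # a calm day, but mostly non-EU flights
contributions = explainer.shap_values(x_spike)[0]

# Print features from most to least influential for this single prediction
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.1f} minutes")
```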

In conclusion

We hope we've been able to illustrate that explainability is an important topic for AI and machine learning projects, and that machine learning models don't have to be incomprehensible black boxes. In the end, everything depends on the type of explanation the client needs and the tools we can use to give it to them.

Author:

Joren Verspeurt
