Nicolas Mancini
Five keys to AI project success: frame your business case holistically, engage with end-users early and often, deliver a skateboard first, continuously assess your risks, and assemble a multidisciplinary team.
If you are looking to maximise the value of Artificial Intelligence in your organisation while minimising your risks, then please read on!
We founded Superlinear to assist people and organisations in creating value through Artificial Intelligence. We believe in the power of AI to automate tasks that require human-like intelligence and solve real-world problems.
But AI by itself is just a toolkit, a means to an end. It takes a lot more than throwing humongous amounts of data at algorithms and hoping they will solve your problems. Through frequent interactions with our clients, we have seen what makes AI projects succeed and what makes them stall. Now, we want to share our experience in setting up AI projects with our readers.
To maximise the impact of AI, you need a user-centric, agile, risk-aware and multidisciplinary approach. In this blog, we will walk you through the five keys to set you up for success with your AI projects:
Frame your business case holistically
Engage with your end-users early and often
Deliver a skateboard first and fast
Continuously assess and monitor your risks
Assemble a multidisciplinary team
1. Frame your business case holistically
A project without a clear business case is a project that is doomed to fail. All decision-makers and managers know this, and the same truth applies to AI projects. A business case weighs the total costs of implementation against the total estimated benefits of an action, decision or solution. We want to emphasise the total benefits of your solution, not just the AI component.
Identify your strategic value drivers
We believe there are six universal value drivers that every organisation continuously tries to improve to beat the market. Start by identifying those that are most strategic for your organisation:
Increase sales revenue (growth)
Increase customer satisfaction (growth)
Decrease costs (efficiency)
Decrease effort (efficiency)
Decrease throughput time (efficiency)
Decrease cost of capital (financial)
Analyze the moving parts
Once you have identified your most important value drivers, define which key performance indicators (KPIs) they consist of. Imagine that you are a healthcare and pharmaceutical company interested in decreasing throughput time in R&D, for example by reducing long vaccine development cycles. Break down what drives long vaccine development cycles: a high clinical trial failure rate, the cost and speed of performing a trial experiment, the long time to FDA approval, etc.
You can then identify concrete use cases for AI that can help improve the KPIs over which you can exert the most control. Start by automating the tasks in your internal R&D process workflows and value chain that are repetitive and mundane, i.e. the low-hanging fruit. Think automated summarization of reports, automated counting of bacterial cells in Petri dishes, etc.
These are typically “high reward, low risk” types of use cases.
Frame your AI value proposition
To ensure that all stakeholders are aligned on the problem statement and the value creation, translate this into an AI value proposition statement that could take the following form:
“The R&D department wants to reduce throughput time in vaccine development cycles by better identifying strong candidates in drug discovery experiments through AI generative models.”
Measure the impact holistically
In your business case, the estimated value gains will come not only from cost reduction but also from empowering your teams to do more creative and intellectually complex work.
By defining how you measure the value, you can optimise both AI and non-AI tasks throughout the project to maximise the overall value delivered to the client and the end-users. We have experienced projects where, ultimately, 70% of the value creation was a direct contribution of the AI and the remaining 30% came from components not delivered by AI. Hence, we suggest approaching the business case holistically and looking at both AI and non-AI components of the envisioned solution.
Up to 6x reduction in the workload of R&D technicians
We have worked with GSK on developing an AI assistant for CFU counting to accelerate the vaccine development process. Beyond building a predictive model powered by computer vision, we also implemented two additional non-AI components that contributed significantly to the lab technicians' productivity. We created a lightweight web application for them to quickly drag and drop the input images that need to be processed by the model. On top of that, we gave them access to a tool that automatically generates a report of the counts.
In this case, the primary KPI was to reduce the time spent by GSK technicians on uploading pictures, counting, annotating and reporting CFUs. With our solution, GSK technicians spend up to 6x less time on counting CFUs and can dedicate their time to the more fulfilling parts of their jobs.
2. Engage with your end-users early and often
At Superlinear, our vision is that AI should be human-centric at its core. AI should be our co-pilot, working with us and for us. For that, we engage with our end-users early in the project lifecycle to understand their deepest wants and needs. We also capture how and when they intend to consume or act upon the predictions provided by the AI system. Armed with these insights, we can identify and communicate effectively what’s in it for them and how the AI will help them be more productive and more successful in their jobs. What drives value creation of AI is how you turn predictions into actions and decisions, and not just the performance of the model.
Involving your end-users from day one is crucial, but so is collecting feedback often throughout the project. At Superlinear, we typically deliver increments of a solution in two-week sprints that we put in the hands of our end-users. We observe their behaviour and capture their valuable feedback to define the priorities for what to improve or build next. Ultimately, end-user feedback is the only real feedback that matters.
The “Disney Effect”
We have experienced this first-hand in our recent work at Brussels Airport Company. We developed a machine learning model that predicts when passenger luggage will be available for retrieval. Passengers attach a tag to their bags, the bTag, and receive real-time notifications about when and on which baggage belt their bags will be available for pick-up.
Through user surveys, we discovered something a little counterintuitive at first: passengers who hop off their plane are less interested in arriving in the baggage claim area before the first bag appears on the belt. Instead, they prefer to come and pick up their bags only after the majority of the bags are on the belts. We called this the “Disney Effect”, or, using an analogy that is closer to home, the “Tomorrowland Effect”. Passengers arrive, and their luggage is ready for pick-up, removing the anxiety that most of us experience when we don’t know where our luggage is.
Much like a magical and seamless experience in a Disney theme park.
We accounted for this feedback in the next iteration of our model and also communicated the predictions to passengers differently, which ultimately resulted in higher passenger satisfaction and even higher sales for Brussels Airport Company. Indeed, passengers reported a change of behaviour based on the arrival time of their luggage: they were more likely to do last-minute shopping or grab a coffee.
Davio Larnout, CEO of Superlinear: “AI is always a means to an end, never a goal on its own. In our approach, we always start with the business and the users.”
3. Deliver a skateboard first and fast
In today’s VUCA (volatile, uncertain, complex and ambiguous) world, customer demands and expectations are changing and rising rapidly. This creates an unprecedented degree of uncertainty and complexity for our clients; they often do not know what the ideal solution looks like for their end-users. The traditional sequential process of the Waterfall model, so prevalent in many organisations for decades, is no longer fit for purpose. The Waterfall model is at the heart of why so many new projects, products and services fail.
In response to this, we believe in the Skateboard Approach to create solutions that maximise business value and that our end-users love, trust and understand.
We start by delivering a V0 of the envisioned solution, i.e. the smallest thing possible that adds value, and we ship it as fast as possible to gather feedback from real users. With this approach, we aim to validate the problem-solution fit with as little code and effort as possible.
Imagine that you want a system that automatically summarises a web article while preserving its core message. The V0 for your abstractive summarization problem could be taking the first two sentences of the article. After multiple iterations on an actual, more sophisticated NLP model, you may find out that those first two sentences were not that far off. Or imagine that you want to estimate the price of a house: the V0 of your regression problem would be to take the average price of a house in the municipality. The message: get early results, optimise iteratively and avoid premature complexity.
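To make the idea concrete, here is a minimal sketch of what such V0 baselines might look like in Python. The function names, the example article and the price figures are illustrative assumptions, not code from an actual project.

```python
# Minimal V0 baselines, for illustration only; the names and numbers below
# are assumptions, not part of any production system.

def summarize_v0(article_text: str) -> str:
    """V0 'summary': simply return the first two sentences of the article."""
    sentences = article_text.split(". ")
    return ". ".join(sentences[:2]).rstrip(".") + "."

def estimate_price_v0(municipality: str, avg_price_by_municipality: dict) -> float:
    """V0 price estimate: the average price of a house in the same municipality."""
    return avg_price_by_municipality[municipality]

if __name__ == "__main__":
    article = ("Superlinear helps organisations create value with AI. "
               "We believe in a user-centric, iterative approach. "
               "This post lists five keys to success.")
    print(summarize_v0(article))  # first two sentences serve as the 'summary'
    # Illustrative averages; real figures would come from your own data.
    print(estimate_price_v0("Leuven", {"Leuven": 385_000, "Gent": 360_000}))
```

A few lines like these are enough to put something in front of end-users, measure how far off the baseline is, and decide whether a more sophisticated model is worth the investment.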
Coming back to the skateboard analogy: the client often does not explicitly ask for a car. The client simply asks for a way to get from A to B as quickly, safely and comfortably as possible. Starting with a skateboard and then iterating incrementally allows you to uncover what end-users genuinely want, not what they say they want. Their feedback can be explicit, in sprint reviews, or implicit, through user behaviour (e.g. analysing user clicks and actions).
In summary, the faster and the more often we can learn, the higher the value is likely to be for both end-users and the client.
Think big. Start small…and boring.
One of our clients has been scaling artificial intelligence in their services and internal processes for multiple years now. For one particular use case, we explicitly opted to deploy a simple “non-AI” approach first that would simulate how the AI solution would work. The initial white-box model consisted of a simulation of the process of interest, based on manually crafted rules derived directly from the business rules used by the project’s stakeholders. This step was crucial in understanding the process of interest in depth, as well as in listing all the factors that come into play and the possible edge cases.
From a business perspective, the white-box phase gave the client stakeholders a better feeling of how and when the predictions would be consumed. From a technical perspective, we generated valuable insights exceptionally early in the development lifecycle that were later incorporated into the ML solution. We learned a lot faster how the AI model would be part of the total solution and interact with the rest of the codebase. Moreover, the initial white-box model made it very clear that there was a need to go beyond a manual solution, as manual rules could not fully tackle the complexity of the problem.
4. Continuously assess and monitor your risks
Risks are associated with any and every business endeavour you undertake, and AI-based initiatives are no exception. AI introduces new elements of risk (e.g. bias and interpretability of results) and may even amplify existing ones (e.g. privacy and the ethics of the use case). While AI has proven to offer great benefits to both businesses and consumers, its naive deployment may cause unintended, sometimes harmful, consequences for your organisation, your customers and even society at large.
From a business perspective, however, there are ways to derisk your AI projects in the conceptualisation phase. In general, we recommend breaking down the end-to-end workflow into its moving parts: from the way you collect and preprocess data, to how you deliver the output, to how humans interact with the system. From there, you can more easily pinpoint the potential risks and constraints you must consider.
Traditionally, risk management professionals employ a process that consists of four key phases:
Understand. Identify and understand the nature and the underlying causes of your risks.
Evaluate. Assess their probability and potential adverse impact.
Control. Prioritise and implement risk mitigation controls.
Monitor and report. Continuously measure and evaluate your risks.
In the following paragraphs, we discuss two common risks in the discovery phase and how Superlinear has mitigated these in the past.
Data: Collecting enough reliable and high-quality data to build and train your model.
34% of surveyed business and IT leaders reveal that access to high-quality data remains a crucial barrier to AI adoption in their organisation. The adage that your AI system is only as good as the training data it has been fed is very much true. However, there are techniques to overcome the risks of limited or incomplete data:
Data annotation. Create your training data sets by having internal subject matter experts label (input, output) pairs using open-source web annotation tools (e.g. Scalabel for images and Doccano for text). You can also outsource your annotation efforts to trusted third parties, e.g. Amazon’s Mechanical Turk or Appen’s Figure Eight. For one of our clients in the automotive industry, we streamlined a data annotation pipeline to help label the training data for a computer vision application. Using Scalabel, we enabled the client to produce efficient and effective annotations of car damage in just two days.
Data augmentation. A technique to increase the diversity of your limited training data set by applying random transformations. In a computer vision application, we used data augmentation to boost the size of our training set and train the neural network without having to collect new data. Techniques such as image rotation, cropping and scaling allowed us to grow our training set tenfold.
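As a minimal sketch of what such an augmentation pipeline can look like, assuming a PyTorch/torchvision setup; the transforms and parameters below are common defaults, not necessarily the ones used in that project.

```python
from PIL import Image
from torchvision import transforms

# Illustrative augmentation pipeline: every time it is applied, it produces a
# slightly different random variant of the same source image.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                      # small random rotations
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),   # random crop, rescaled to 224x224
    transforms.RandomHorizontalFlip(p=0.5),                     # mirror half of the images
    transforms.ToTensor(),                                      # convert to a tensor for training
])

# Hypothetical example image; applying `augment` on every epoch effectively
# multiplies the training set without collecting new data.
image = Image.open("petri_dish.jpg")
augmented = augment(image)
```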
Active learning. A machine learning technique that equips your model with a way of choosing which training examples would be most informative to label next. Essentially, active learning speeds up your data annotation efforts at a fraction of the cost of traditional human labelling. For one of our clients in the public sector, we used active learning to point a subject matter expert to the next “best” job vacancy to label out of an extensive database, i.e. the one that would provide the most learning gain for the model. This ensured that the model was exposed to a high variety of job vacancies, and thus a high variety of skills, much faster than with random sampling. In short, the human labeller had to label 5,000 instead of 20,000 job vacancies to reach the 80-20 Pareto point.
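One common active learning strategy is uncertainty sampling: label the examples the current model is least confident about. Below is a minimal sketch on synthetic data; it illustrates the general idea rather than the exact setup used in that project.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_most_informative(model, X_unlabelled: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the indices of the k unlabelled examples the model is least sure about."""
    probabilities = model.predict_proba(X_unlabelled)
    uncertainty = 1.0 - probabilities.max(axis=1)  # higher means less confident
    return np.argsort(uncertainty)[-k:]

# One round of the loop on synthetic data: train on what is labelled so far,
# then ask the expert to label only the examples the model would learn most from.
rng = np.random.default_rng(0)
X_labelled, y_labelled = rng.normal(size=(50, 5)), rng.integers(0, 2, size=50)
X_unlabelled = rng.normal(size=(1000, 5))

model = LogisticRegression().fit(X_labelled, y_labelled)
to_label_next = select_most_informative(model, X_unlabelled, k=10)
print("Send these examples to the annotator:", to_label_next)
```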
Transfer learning. Another way to get the most out of existing labelled data without having to collect lots of new labelled data. Transfer learning is the idea of transferring the knowledge acquired on one task, for which you have a lot of labelled data available, to a different yet related task for which you only have little labelled data. At Superlinear, we have used transfer learning with word embeddings in NLP applications, fine-tuning pretrained embeddings on an unlabelled set of domain documents.
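A minimal sketch of one common transfer learning pattern: use a pretrained text encoder as a frozen feature extractor and train only a small classifier on your limited labelled data. The model name, texts and labels below are illustrative assumptions, not the setup used at Superlinear.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# The pretrained encoder carries knowledge from large general-purpose corpora,
# so a modest amount of labelled data can be enough for the task-specific part.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

labelled_texts = ["the invoice is overdue, please pay", "thanks for the great service"]
labels = ["complaint", "praise"]

features = encoder.encode(labelled_texts)                  # frozen pretrained embeddings
classifier = LogisticRegression(max_iter=1000).fit(features, labels)

print(classifier.predict(encoder.encode(["payment is late again"])))
```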
Ultimately, you can also use pre-trained models, such as OpenAI’s GPT-3, and leverage the existing data sets on which these models have been trained. These may do the job for generic applications. The downside, however, is that you may not always have access to the code base to tweak the performance for your very own edge cases.
At Superlinear, we can help you identify the best approach depending on your use case and your constraints.
Bias and fairness: Producing predictions that are considered fair and unbiased.
With great power comes great responsibility. At Superlinear, we believe we have a shared responsibility towards building fair and ethical AI solutions. We also think that human-centric AI systems involve a trade-off between fairness, privacy, accuracy and transparency. This trade-off needs to be considered on a use-case-by-use-case basis, depending on the context and environment in which your organisation operates.
At Superlinear, if we are concerned with a biased or discriminatory outcome of our AI, we approach it as follows:
Evaluate before committing. We carefully consider what could go wrong with automated decision-making systems, especially when deployed at scale and exposed to millions of users. What are the ethical implications, and what is the impact on people? For instance, a deep learning system that predicts whether a mortgage loan should be granted is a lot more prone to bias risk than a simple chatbot that provides general information to customers.
Take a moral stance and pick a measure. We involve the client stakeholders and the end-users to define what the desired outcome is and which fairness measure captures it (a minimal example follows this list). This is a team sport, requiring effort and knowledge from both the client side and our side.
Monitor and mitigate. We track the implemented fairness measures and correct where needed.
Deploy with intent. We evaluate fairness over time by verifying the potential performance degradation of the model. Did the underlying data change? Do the original fairness metrics still work?
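As an illustration of what “picking a measure” can mean in practice, here is a minimal sketch of one widely used measure, the demographic parity gap, computed on made-up loan decisions. Other measures (equalised odds, equal opportunity, ...) may be more appropriate depending on the moral stance you take.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups (0 = perfectly equal)."""
    rate_a = predictions[group == "A"].mean()
    rate_b = predictions[group == "B"].mean()
    return abs(rate_a - rate_b)

# Made-up decisions: 1 = loan granted, 0 = loan refused, for two demographic groups.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(f"Demographic parity gap: {demographic_parity_gap(predictions, group):.2f}")  # 0.20
```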
Ultimately, for humans, fairness is intimately related to explainability. We are more likely to trust predictions and decisions when they are accompanied by a plausible explanation or a relatable story. This is why we strongly believe in explainability by design, i.e. carefully crafting a user experience that creates trust in the system, provides ample opportunity to give feedback, and makes clear how that feedback will be used.
On the positive side, bias can be detected more easily in AI systems than in humans: once you probe the model with an input, it returns the output just as it was trained to do. But fairness in machine learning will remain a challenging problem, because fairness is an equally hard problem in the real world. In this fascinating case study, we explain the various techniques you can implement to measure bias and fairness, and the hard choices you need to make. Are you ready to make those choices?
5. Assemble a multidisciplinary team
Companies are starting to realise that designing, building and maintaining AI solutions requires a different set of skills than traditional IT projects do. AI solutions have their own pipeline, i.e. the series of tasks that turn raw data into useful predictions or inferences that end-users can use for more informed decision-making or action-taking. While you must assemble a team of experts that masters the different components of this pipeline, we advocate strengthening the team with individuals who understand:
The business and the end-users for your product or service because they provide focus and knowledge on the problem that needs solving.
The data because Machine Learning relies on high-quality and relevant data.
The IT landscape of your organisation because they will help you bring your AI to life.
The pitfalls of “throw-over-the-wall data science”
We have observed that client project teams sometimes overlook data and IT experts because they underestimate the effort that goes into sourcing data and making it available in the right format for algorithms to do their magic. There are pitfalls to this organisational pattern called ‘Throw-over-the-wall data science’, whereby there is a clear split between the data scientists who design and build the algorithms and the software or DevOps engineers who integrate the AI into existing company systems and processes.
In our collaboration with Brussels Airport Company, one of the reasons put forward for the success of the project is that we worked closely with data engineering and software engineering experts who helped us bring the solution into the hands of actual users. Remember: only once a solution is integrated can you say something meaningful about the outcome for your business.
Laurent Sorber, CTO of Superlinear: “At Superlinear, we prefer to break down the wall between machine learning engineers and software or DevOps engineering teams to increase the speed at which we can update and retrain our models.”
Partner up to accelerate AI adoption
Project sponsors and innovation managers must take stock of the skills that are available in-house and highlight the gaps. Given today’s war for data science and machine learning talent, it is attractive to partner up with experts to get direct access to top AI talent and their experience across various industries, and so accelerate AI adoption in your organisation. AI experts are more likely to have worked on similar business challenges for other clients and can piggyback on those insights to lower the risks of your implementation. In addition, an outsider will be able to help you validate or challenge your assumptions and come up with fresh ideas.
What do you do next?
You learned that setting up an AI project is not like setting up any other project. Here are the main takeaways you can refer back to:
Define the business case, holistically. You need stakeholder buy-in from the beginning, because even a single unconvinced stakeholder can delay the project by months. Don’t forget to set clear KPIs so you can measure the impact at the end of the project.
Engage with the end-users from the start. If the solution is not used, it’s not creating value for the business.
Start with a skateboard. Build the smallest version possible that adds value, and ship it as fast as possible to gather feedback from real users. With this approach, you can validate the problem-solution fit with as little code and effort as possible.
Evaluate the specific AI risks. An AI project has more points of failure than a traditional digital project, because the model learns from data and some risks are difficult to determine upfront. Experience in conducting AI projects is essential to identify and mitigate those risks.
Assemble a multidisciplinary team. If your team is very experienced in delivering AI projects across industries, you have a higher chance of success from the start.
If you are considering starting an AI project, you generally have three choices:
You outsource it. You work with expert partners that help you accelerate adoption by bringing in deep expertise and experience. This is often the most cost-efficient option in the short term.
You do it yourself. You internalise the project and build up a capability which will be useful for future projects. You also have greater control over your setup.
You collaborate with trusted partners. Partners will be able to advise you, give you a fresh outsider’s perspective on your problems, and leverage their expertise across clients and industries.
In the meantime, if you have some questions on the five steps we discussed or if you want to know how Superlinear can help you in that process, don’t hesitate to contact us and to check our Fast Discovery framework.
In this workshop, we co-identify the most impactful areas where AI can create real value for your end-users and your business. We sit down with decision-makers and influencers, domain experts, and data experts who are knee-deep in the business to map out their key ambitions and challenges, near and long term.
By the end of the workshop, you will be able to visualize what a working version of your AI solution delivered after just one sprint could look like. Wherever you are in your AI journey, we can help you get ahead.
Contact Us