What can be done to be more successful in Artificial Intelligence?

In previous articles in this series, we discussed examples of how industry has adopted AI both successfully and poorly. Some of the poor implementations saw industry trying to do too much too soon, or not truly understanding the environment in which their AI solutions were being deployed. We then looked at a possible recipe for success by examining what the successful implementations of the technology had in common. In this article we take this a step further and propose some concepts that can help industry follow this recipe and increase the chance of getting it right.

Two recurring factors that we see in successful implementations of AI, particularly in the examples we discussed in the previous article, are how well we understand the operating environment and how well we understand how the AI will respond or behave. For well-constrained operating environments, such as a robotic arm on an assembly line, we can understand and model the environment in the system’s design to a high degree of fidelity. The same can be said about the behavior of AI systems that are more deterministic, i.e., we know how the system will respond to a given stimulus. However, as complexity grows, in both the operating environment and the system’s behavior, it becomes increasingly difficult to be confident in the fidelity of our understanding and modelling.

For industry to successfully adopt and integrate these complex technologies in these complex environments, we need to become more confident that we understand the full extent of the environment we are deploying them into and how the technology will behave. This concept of confidence in design is nothing new; in fact, it is what design assurance is based on. So why can’t we simply apply traditional assurance techniques to AI? Traditionally, we conduct this assurance by developing test cases which form the wider assurance case for the system. This assurance case makes a justified argument that the system has been designed and built correctly, and provides us with confidence in its design. Unfortunately, this approach has some glaring deficiencies when it comes to AI.

When developing test cases, we try to develop representations of the ‘worst case’ for the system, which allows the results to be extrapolated to the wider operating environment. However, as we do not understand exactly how an AI system will respond to a given set of stimuli, developing these ‘worst case’ test cases is difficult. Furthermore, these test cases are quite often pass/fail in nature: either the system responds appropriately or it does not. But the complex decision making behind AI can lead to unpredictable or probabilistic responses. For example, a particular AI system may behave appropriately 99% of the time. It is then still expected to fail roughly once every 100 cycles, which may not be acceptable for safety-critical systems, yet a single pass/fail test has a 99% chance of passing it. Thus, for us to be confident we have designed these systems correctly, we need to take a different approach to assurance.
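To make that arithmetic concrete, here is a minimal sketch in Python. The 99% figure and the toy ‘system’ are assumptions for illustration only, not data from a real system; the point is simply that a one-off pass/fail test almost always passes, while repeated simulation exposes the underlying failure rate.

```python
import random

random.seed(0)

CORRECT_RATE = 0.99  # assumed probability that the system responds appropriately

def system_responds_correctly():
    """Toy stand-in for an AI system that behaves appropriately 99% of the time."""
    return random.random() < CORRECT_RATE

# A single pass/fail test case: roughly 99% of the time it simply passes.
print("Single pass/fail test passed:", system_responds_correctly())

# A probabilistic view: estimate the failure rate over many simulated cycles.
trials = 100_000
failures = sum(not system_responds_correctly() for _ in range(trials))
print(f"Estimated failure rate over {trials} cycles: {failures / trials:.4f}")
# Roughly 1 failure in every 100 cycles, even though an individual
# pass/fail test will almost always pass the system.
```

For safety-critical assurance it is that failure rate, not the result of a single pass/fail test, which matters.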

There has already been a range of academic research into approaches for assuring AI systems. One approach is to use a dynamic, or live, assurance case. This involves collecting data on the system in operation and comparing it to a model of the design environment used to develop the technology. If there are gaps between the ‘real’ operating environment and the design environment, the model can be updated and any required changes to the AI can be made. This technique works well when the consequence of failure is low; however, it is not appropriate for safety-critical systems. An ‘offline’ technique that would be more appropriate for safety-critical systems is to take a probabilistic approach to the assurance. One solution involves developing a model of the system and its environment, selecting a set of key environmental variables, and then running a series of simulations in which these variables are varied. This results in a more representative, probabilistic understanding of the system’s behavior.
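As a rough sketch of what such a simulation-based, probabilistic approach could look like, the snippet below samples a set of hypothetical environmental variables and runs many simulated cycles against a toy response model. The variable names, their ranges, and the model coefficients are invented purely for illustration; a real assurance case would rest on a validated model of the system and its operating environment.

```python
import random
import statistics

random.seed(1)

# Hypothetical environmental variables with assumed ranges (illustrative only).
def sample_environment():
    return {
        "sensor_noise": random.uniform(0.0, 0.2),  # fraction of the signal corrupted
        "lighting": random.uniform(0.3, 1.0),      # 1.0 = ideal lighting conditions
        "clutter": random.randint(0, 10),          # number of distracting objects
    }

# Toy stand-in for a model of the AI system: the probability of an appropriate
# response as a function of the environment. The coefficients are invented for
# illustration; a real assurance case would use a verified system model.
def probability_of_correct_response(env):
    p = 0.995
    p -= 0.5 * env["sensor_noise"]
    p -= 0.1 * (1.0 - env["lighting"])
    p -= 0.005 * env["clutter"]
    return max(0.0, min(1.0, p))

# Run many simulated cycles, each under a freshly sampled environment,
# and accumulate the outcomes into an overall failure-rate estimate.
trials = 50_000
outcomes = [random.random() < probability_of_correct_response(sample_environment())
            for _ in range(trials)]

failure_rate = 1.0 - statistics.fmean(outcomes)
print(f"Estimated failure rate across the modelled environment: {failure_rate:.4f}")
```

Sweeping or sampling the environmental variables in this way produces a distribution of outcomes rather than a single pass/fail verdict, which is the kind of evidence a probabilistic assurance argument can be built on.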

In the next article in the series we will take a deeper look at these assurance methodologies and propose a framework concept that has the potential to provide confidence that AI system designs can be successfully implemented. Follow us on LinkedIn to keep up to date and join the discussion.

Read the previous articles in this series:

Article one – The AI Revolution

Article two – Overcoming the AI Hype Curve

Article three – Adoption of AI in Industry: Successes and Failures