In our previous article in this series, Overcoming the AI Hype Curve, we discussed the build-up of excitement around Artificial Intelligence technologies in recent years. We are now moving into an era of realistic expectations about what AI will be able to achieve in the near to mid-term future. To better understand how far the AI field has come, we now examine some recent success stories (along with a few failures) and explore the key factors behind these outcomes.
Successes in Artificial Intelligence
Australia has one of the world’s leading success stories of adopting advanced automation into an existing industry: Rio Tinto and their Mine of the Future initiative. Since the introduction of autonomous haul trucks in 2008, Rio Tinto has expanded the use of automation technologies across their entire mining operations, including autonomous control of their trucks, trains and drilling equipment. Rio Tinto is now using deep learning models to optimise their rail operations. Developed in conjunction with researchers at Australian universities, these models are a great example of how innovative AI technologies, applied in a controlled operational environment like long haul rail, can deliver increased efficiency and cost savings to one of our largest industries.
Another example of innovative use of AI is seen in Australian law enforcement, where advanced computer vision methods have been applied to the problem of catching people illegally using mobile phones while driving. The system uses a machine learning image analysis model to review photos taken from overhead cameras as traffic passes underneath. If a possible offending driver is detected, the photo is then sent to a human reviewer before an infringement notice is issued. This example shows AI technologies successfully being used to support human decision making in an environment where the consequence of a failure is relatively low.
In the Defence technology space, powerful and lightweight sensors are enabling ever increasing amounts of real-time data to be collected on the battlefield to assist with tasks such as targeting. To process this raw data into a form that can aid tactical decision making, AI models are being developed as a core component in next-generation battlefield situational awareness systems. When combined with real-time displays, these systems allow soldiers to more accurately identify potential targets and enemy positions, while avoiding friendly forces and civilians.
Setbacks and failures are inevitable
Along with these success stories, there have also been setbacks and failures when AI systems are moved into complex production environments before they are ready. An infamous example of this is the much-reported tragic accident caused by the failure of an autonomous Uber vehicle to identify a pedestrian on the road. Rather than implementing the technology in a controlled operational environment, like the long haul rail example above, the autonomous Uber vehicle was operating in an environment with countless complex interactions, including non-automated road vehicles, highly variable road conditions and pedestrians.
In other instances, systems have been found not to deliver the expected level of performance when they are taken from the simplified R&D lab environment into the complex real world. When Google Health developed a sophisticated ML-based tool for screening diabetes patients for early signs of eye disease, they encountered several unexpected obstacles that meant the system did not perform as expected. One key factor was that the system was trained on a set of high-quality clinical images, and its performance did not translate to the more variable quality of the real-world images it later examined.
Recipes for success with Artificial Intelligence
So, how will we know beforehand whether a particular AI development effort is likely to succeed? Although it is impossible to predict with certainty, there are several key ideas that can help guide us. One important concept is that we may not want to replace human workers entirely, but rather use AI models to assist them. As an example, a computer vision workflow might involve using the AI model to pre-screen a collection of images and then having a human inspect the images that the AI selected as meeting some pre-determined criteria, as in the earlier mobile phone detection example. In this workflow, there is a good opportunity to use the human feedback to continue building the training dataset. In effect, the system will ‘continue to learn’. By taking this approach, and using real-world dynamic assurance, Google Health may have caught the obstacles facing their eye disease screening tool earlier and had a more successful implementation.
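This kind of human-in-the-loop workflow can be sketched in a few lines. The sketch below is purely illustrative (the class, threshold and reviewer interface are our own assumptions, not details of any system described above): a model pre-screens items, a human makes the final call on anything flagged, and each human verdict is banked as a new labelled example so the system can ‘continue to learn’.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class HumanInTheLoopScreener:
    """Pre-screen items with a model, route flagged items to a human
    reviewer, and keep the human's verdicts as new training labels."""
    model: Callable[[str], float]          # returns a confidence score in [0, 1]
    threshold: float = 0.8                 # items scoring at or above this are flagged
    labelled_pool: List[Tuple[str, bool]] = field(default_factory=list)

    def screen(self, items: List[str],
               reviewer: Callable[[str], bool]) -> List[str]:
        confirmed = []
        for item in items:
            if self.model(item) >= self.threshold:       # AI pre-screens
                verdict = reviewer(item)                 # human makes the final call
                self.labelled_pool.append((item, verdict))  # feedback becomes training data
                if verdict:
                    confirmed.append(item)
        return confirmed

# Toy usage: a stand-in "model" and a human reviewer who rejects one flagged image.
screener = HumanInTheLoopScreener(
    model=lambda name: 0.9 if "phone" in name else 0.1)
confirmed = screener.screen(
    ["phone_1.jpg", "clear_1.jpg", "phone_glare.jpg"],
    reviewer=lambda name: name != "phone_glare.jpg",
)
print(confirmed)                   # only the human-confirmed detection survives
print(len(screener.labelled_pool))  # both flagged images became labelled examples
```

Note that the human reviewer only sees the small fraction of items the model flags, which is what makes the workflow economical, while every reviewed item adds to the labelled pool.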
Another consideration is what the consequences of a failure in the AI model will be in its operating environment. Is the environment well controlled, with certain levels of error (e.g. false negatives or false positives) acceptable, or is the AI operating in an unknown, complex environment where an error could lead to a catastrophic outcome (as is the case for autonomous vehicles operating on public roads)? Even if we believe in the revolutionary new capabilities offered by AI, it is wise to proceed slowly and carefully check our assumptions about the AI system’s performance. Shoal will continue to investigate ‘The AI Revolution’. As the use of AI technologies expands into new areas of application, including mission-critical tasks, so does our expectation that these systems deliver safe and reliable performance. In our next article, we will look at strategies for achieving assurance of AI systems, including new engineering design approaches that address the unique challenges posed by AI-enabled technologies.
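The trade-off between false negatives and false positives can be made concrete with a toy threshold sweep. The scores and labels below are invented for illustration only: a strict decision threshold avoids false alarms but misses a true case, while a lenient one catches every true case at the cost of a false alarm. Which side of that trade-off is acceptable depends entirely on the operating environment.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives when items scoring at or
    above `threshold` are flagged as positive."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn

# Invented model scores and ground-truth labels for five items.
scores = [0.95, 0.70, 0.55, 0.40, 0.20]
labels = [True, True, False, True, False]

# A strict threshold produces no false alarms but misses one true case...
print(confusion_counts(scores, labels, 0.6))  # -> (0, 1)
# ...while a lenient threshold catches everything at the cost of a false alarm.
print(confusion_counts(scores, labels, 0.3))  # -> (1, 0)
```

In a low-consequence setting like the mobile phone detection example, the lenient threshold may be fine because a human reviewer filters the false alarms; in a safety-critical setting, neither error may be tolerable without further controls.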
Follow us on LinkedIn to keep up-to-date and join the discussion.