Modern AI technologies have their origins in research dating back to the 1950s. Over the decades, there have been several periods of renewed interest and optimism, interspersed with periods of disappointment and pessimism (so-called “AI Winters”).

The rebirth of Artificial Intelligence (AI) began around 2012, when academic researchers saw a leap in the performance of their machine vision models after combining modern neural network software with large training datasets. This initial result quickly led to similar data-driven breakthroughs across many other research areas, including natural language processing, medical diagnosis and robotic control. Realising the huge business potential of these new capabilities, investors poured billions of dollars into the space, leading to what some have dubbed the ‘AI gold rush’.

Gartner, a leading research advisory company, is renowned for its surveys of trends in emerging technologies. The Gartner Hype Cycle surveys provide a snapshot of the maturity and perception of different innovations and technologies.

The breakthroughs in the AI field over the past 10 years are remarkable, with new advances continuing to be made every day. But a sense of realism is also setting in regarding the pace of technical improvement we should expect to see in the future.

There is a widespread belief that since we have AI systems capable of beating the world’s best Go player, super-human abilities across a wide range of tasks must be just around the corner, right? An overlooked aspect of the AlphaGo result is that board games present a greatly reduced problem complexity compared with the majority of problems we encounter in the real world. In a board game, there is a well-defined set of rules that the players must strictly follow throughout the game. The real, physical world has far less structure, and the things within it are constantly changing. In fact, many failures of implemented AI systems are due to the system failing to recognise rarely occurring events that were not well represented in its training dataset. These events are labelled ‘edge cases’. A major lesson many AI developers have learned is that in the real world, edge cases are pervasive.

In the near future, it is likely that AI will be most successfully applied to tasks we might consider mundane. Well-trained AI systems offer huge potential for automating tasks that are Dull, Dirty or Dangerous – and for handling these tasks quickly and efficiently. With the deluge of data we increasingly encounter in the modern world, AI systems give us the ability to perform lightning-fast pattern matching, and to do so with an accuracy that approaches or exceeds that of humans (under optimal, well-defined conditions).

A key factor in achieving a successful outcome when implementing an AI system is to clearly understand the problem space and to design a solution that adequately addresses it. This involves incorporating engineering design principles, including system assurance testing.

Shoal will continue to investigate ‘The AI Revolution’. As the technologies begin to enter a ‘post-hype’ environment, we will explore the challenges of adopting them and the steps required to ensure the technology integrates with existing environments and delivers beneficial outcomes to society. Our next article will look at case examples where we can learn lessons from both successful and unsuccessful implementations of AI.

Follow us on LinkedIn to keep up to date and join the discussion.


Ref: Svetlana Sicular and Shubhangi Vashisth, Hype Cycle for Artificial Intelligence, 2020, Gartner, 27 July 2020.