What is Artificial Intelligence? Attendees at October’s Sphere Network event addressed this question, pooling their collective knowledge of machine learning, advanced analytics, AI applications, and ethics. This blog attempts to distill many pages of comprehensive notes on a broad and fascinating discussion into a few hundred words.
First, definitions. “Artificial intelligence is a composite of a number of areas of data science, psychology, and sociology,” one attendee contributed. “I personally think of it in terms of empathy. If a system is capable of inferring things you haven’t told it, or enhances your world in ways you haven’t thought of, then that’s artificial intelligence.” However, as the term becomes more mainstream, it is increasingly used to refer both to automated intelligence and to “automated non-intelligence”, as exemplified by robotic process automation (RPA). The lines are blurred by the application of advanced analytics to automated processes. A great example given was that of smart homes, where the response of the automated systems is increasingly informed by real-time data. Do the inferences drawn and actions taken by these systems shift a simple automation into the realm of AI?
This depends upon the degree of machine learning involved. Does the entity learn more from each iteration of data, in the same way that a human doctor reads increasing numbers of x-rays and thereby improves his or her ability to analyse them? Industry interest in machine learning and AI overlaps heavily. Rapid increases in both computational power and the amount of data available for learning bring in the third element of the evening’s discussion: advanced analytics. While we may not have reached the Holy Grail of “context-free grammar”, with the advent of quantum computation the level and detail of analytics we will be able to access will be truly scary.
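The idea of learning more from each iteration of data can be sketched as a toy online learner. Everything here is an illustrative assumption, not anything discussed at the event: a “learner” estimating an unknown rate, whose error shrinks as observations accumulate.

```python
import random

# Illustrative sketch only: a toy learner estimating an unknown value,
# refining its estimate with each new observation (an online running mean).
random.seed(1)
true_value = 0.7  # e.g. the (assumed) fraction of x-rays showing a condition

estimate, n = 0.0, 0
errors = []
for _ in range(1000):
    observation = 1 if random.random() < true_value else 0
    n += 1
    estimate += (observation - estimate) / n  # incremental mean update
    errors.append(abs(estimate - true_value))

# With more data, the estimate drifts towards the truth:
# errors late in the run are, on average, smaller than early ones.
print(round(errors[9], 3), round(errors[-1], 3))
```

Like the doctor reading ever more x-rays, each new data point nudges the estimate; the gain from any single observation shrinks, but the overall accuracy keeps improving.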
Attendees noted that analytics has four distinct aspects: Descriptive (what has happened); Diagnostic (why has it happened); Predictive (what will happen); and Prescriptive (what should we do about it). These definitions formalise our own learning experience: it hurts, I touched something, if I do it again it will hurt again, so I should change my behaviour. Machine learning should follow the same route, and these aspects of data are crucial to understanding the development of artificial intelligence and its application in business settings.
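The four aspects can be sketched in code. This is a minimal illustration only: the sales figures, the trend-based “diagnosis”, the naive forecast, and the promotion threshold are all invented for this example.

```python
# Illustrative sketch only: toy figures and thresholds are invented here.
sales = [120, 115, 108, 99, 95]  # units sold per week, most recent last

# Descriptive: what has happened?
total = sum(sales)

# Diagnostic: why has it happened? (here: a simple week-on-week trend)
weekly_change = [b - a for a, b in zip(sales, sales[1:])]
avg_change = sum(weekly_change) / len(weekly_change)

# Predictive: what will happen? (naive linear extrapolation)
forecast = sales[-1] + avg_change

# Prescriptive: what should we do about it?
action = "run a promotion" if forecast < 100 else "hold steady"

print(total, avg_change, forecast, action)
# → 537 -6.25 88.75 run a promotion
```

Each stage builds on the one before it, just as in the touch-it-and-it-hurts example: observe, explain, anticipate, then act.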
Challenges to overcome in Artificial Intelligence development
The AI journey is still in its infancy. Chatbots are prevalent but simplistic, and can make mistakes or simply fail to answer questions effectively. An artificial intelligence cannot yet adapt its learning across different spheres: it was once observed that Deep Blue may be good at chess, but lousy at Tic Tac Toe. Machine learning is qualitatively different from human learning, and we must recognise this in order better to understand what AI can deliver. The groups identified a number of challenges, including the quality of data, human insecurities, tolerance of mistakes, and underlying ethical considerations. Some surprising barriers to the adoption of new technology were also raised, although attendees shared experience of successful applications in 3D modelling and design, in identifying risk in the legal sector, and in data modelling across shared services.
The quality of data was a subject of lively debate. Some argued that improvements in data quality, together with the availability of cheap computing resources and machine-generated data, have given data a new lease of life. However, high-level macro analysis reveals that the level of intellectual capability contributing to data is decreasing. On the other hand, data produced by automated systems is improving, and while quantity may currently be preferred over quality, there is more structured data available to us. This is an essential discussion, as data quality directly influences the machine learning process. An artificial intelligence basing its learning on the skewed world of Facebook, for instance, would ultimately make decisions based upon incorrect assumptions about the world at large.
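The point about skewed data can be made concrete with a toy sketch. The “population” and the under-35 skew below are invented for illustration (standing in for a platform whose users skew young), not drawn from any real dataset.

```python
import random

# Illustrative sketch only: a synthetic population with a known average age,
# and a skewed sample standing in for a biased data source.
random.seed(0)
population = [random.randint(18, 80) for _ in range(10_000)]
skewed_sample = [age for age in population if age < 35]  # biased subset

true_mean = sum(population) / len(population)
learned_mean = sum(skewed_sample) / len(skewed_sample)

# A model "trained" only on the skewed sample systematically
# underestimates the population's average age.
print(round(true_mean, 1), round(learned_mean, 1))
```

However sophisticated the learning algorithm, conclusions drawn from the skewed sample describe the sample, not the world at large: garbage in, garbage out.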
Human insecurities are most evident in the current fear of AI’s impact on jobs. This contrasts with our general acceptance of the technology in everyday life. The same people who fear losing their jobs are comfortable with customer service chatbots, smartphone interaction, and connected devices. In terms of threats to employment, the group agreed that where basic tasks are absorbed by automation, there may be difficulties with human learning and career progression, which once relied upon knowledge accumulated through those very activities. A solution may be a hybrid model in which responsibility is shared and learning is maintained, while the advantages of efficiency and speed are realised.
These insecurities also lead to a reduced tolerance of mistakes. There is an expectation that AI will be perfect, and autonomous vehicles are a good example of this. While the 37 million drivers on Britain’s roads are far from perfect (and not all intelligent!), rare news of a self-driving vehicle crashing is treated with derision. Would you get into an autonomous vehicle programmed to weigh the value of your life against that of others to determine whom to save in a crash? Building ethics into artificial intelligence is essential to its acceptance.
The ethical landscape of artificial intelligence, data analytics, and machine learning is a rocky one. Some good guidelines on the ethics of AI development and use are emerging; however, this group focused on more practical considerations. Can you teach your computer ethics? Few people are thinking about how you might inspect and interrogate an autonomous vehicle should there be an accident. If the AI goes wrong, do you hold the vendor accountable, or the user, based on how they have subsequently influenced or programmed it?
The group believed that business leaders need greater commercial awareness of what advanced analytics and AI can do for them. “There’s a generational gap of people who just don’t want to know,” said one attendee. Sadly, it seems that leaders regularly block innovation, although there are some shining examples of good practice. Advanced automotive nailed it years ago, said some, and Greggs is one local success story, where supply chain forecasting from raw material to finished product has been remodelled using advanced analytics and AI. Younger companies are generally more data-aware; however, there are large companies that have managed well enough without data and analytics and see no need to change, even fearing the liability of data. This is a difficult topic as we count down to the implementation of the General Data Protection Regulation (GDPR), and businesses will be cautious about sharing data.
Where do we go from here?
Overall, our attendees felt that the discussion had been both insightful and scary. It was noted that the conversation tended towards the negative aspects of AI: a natural caution with a new and potentially game-changing technology. All agreed that artificial intelligence is rapidly evolving, and interestingly referred back to the topic of our previous event, Blockchain, as a key driver for even faster evolution. The timeframe of the adoption and disruption cycle is shortening with every new phase, and we must be ready to react.
We would like to open this discussion up to everyone in the region whose business or sector could feel the impact of AI. In particular, we are seeking case studies and opportunities for Sphere Network members to collaborate. If you can contribute to a future workshop on the topic, please contact us.