Artificial Intelligence and Machine Learning: An Introduction for Policymakers

July 4, 2017

Paul MacDonnell, Executive Director, Global Digital Foundation 

MYTH

For most people, machines that can think and act on their own have, until now, been futurist fantasy. Fritz Lang’s Metropolis (1927), Stanley Kubrick's 2001: A Space Odyssey (1968), Alex Proyas' I, Robot (2004), and Steven Spielberg's Minority Report (2002) have, along with many other creative works, variously portrayed fictive worlds profoundly altered by Artificial Intelligence and, especially, automata. The roots of these vivid tales reach down to a bedrock of Judeo-Christian folklore and Greek mythology from which, at least since the Middle Ages, have grown parables warning of the danger that comes from taking the place of the Creator.[1] One such is the golem of medieval Jewish folklore, an automaton-protector made from mud which, in one story, prefiguring Mary Shelley’s Frankenstein, runs amok.[2] As technology has evolved, stories in which the ambitions of its creators end in tragedy have evolved with it, lasting well into an age where an unchallenged scientific secularism rules our intellectual and moral worlds. Is this a residue of superstition in an enlightened age or a moral symbiosis? And if the latter, is its lesson that science should split the difference with superstition or that the humanities and religion, along with science, should retain this perspective: that good and evil live in man and not in his machines?

REALITY

The latest developments in Artificial Intelligence and Machine Learning would appear to confirm Arthur C. Clarke’s dictum that “any sufficiently advanced technology is indistinguishable from magic”.[3] A 70-year stop-start intellectual relay race run by two generations of scientists has carried Artificial Intelligence (AI) and Machine Learning to an inflection point. For the first time intelligent machines can match or exceed humans in complex tasks. Already they drive cars, beat humans at advanced games such as Go and poker, and outperform radiologists in spotting cancers on x-rays.[4] It is just the beginning. McKinsey has estimated that the activities occupying almost one fifth of workers' time in the United States are 78 percent technically feasible to automate with current technology.[5] Some sectors are especially susceptible. Seventy-three percent of the activities performed by workers in the food and accommodation services sector, which employs 13.4 million workers in the U.S., have the potential for automation.[6] Fifty-three percent of activities in retailing, which employs 16 million people in the U.S., could be automated.[7] New developments in AI and Machine Learning promise to extend intelligent automation into the more complex activities performed by middle managers and other white-collar workers. Activities that call for mid-level skills, such as data collection and data processing, are very susceptible to automation. And, increasingly, work that is nonrepetitive and unpredictable is likely to be performed, or at least assisted, with the help of AI.[8] The health sector offers promising examples, such as a deep learning algorithm developed by Google that can identify diabetic retinopathy, an eye disease associated with diabetes, and a chatbot developed by Your.MD, an AI startup, that uses language and machine-learning algorithms to show the probability that certain symptoms indicate specific diseases.[9] But AI and Machine Learning go far beyond the imitation of a single human mind. By applying the embedded knowledge and expertise of many minds to identify the most likely correct solutions to an ever widening range of problems, they are likely to find a use in every field of human endeavour, from social work to conflict resolution.

CREATION

Before considering what policies might best promote the benefits of AI and Machine Learning, and what rules might best prevent their abuse, we should have a reasonable understanding of what these technologies are and how they work. A brief history is in order.

In 1950 Alan Turing, the British computer scientist and mathematician, writing in the philosophical journal Mind, discussed the idea of a computer that could convince a person communicating with it that it was, itself, human.[10] For Turing, a computer that could fool its interlocutor in this way would indeed be intelligent. Philosophers, notably John Searle, have disputed Turing’s idea that mistaking a machine for a human endows it with intelligence, but computer technologists working in AI, interested above all in developing technology that works, have dismissed these objections as beside the point.[11]

The first use of the term “Artificial Intelligence” was by the computer scientist John McCarthy in August 1955, when he proposed that a conference to discuss the subject be held during the summer of 1956 at Dartmouth College, New Hampshire.[12] At the time researchers understood AI in terms of a machine’s ability to perform rules-based functions. So, for example, an AI language translation system for Russian-English would contain all the rules of grammar and a vocabulary drawn from both languages. This approach would treat sentences as logical statements that could be represented by symbols, and would assume that if the computer had a big enough inventory of these for each language then it would be able to translate from one to the other. This was the approach taken by Georgetown University and IBM in January 1954 in a joint experiment which demonstrated a Russian-English translation machine.[13] Though the experiment was quite limited, using just 250 words and six rules of grammar, it was promoted by its leading scholar, the linguist Leon Dostert, as a major breakthrough.[14] Encouraged by the project’s leaders, the worldwide press reported that easy machine translation could be achieved within ten years. The New York Times reported that the experiment could be "the culmination of centuries of search by scholars for ‘a mechanical translator.’"[15] An inherent flaw in this approach, however, was that meaning and structure in natural languages are not sufficiently precise or consistent to be captured by the system of symbolic logic at the heart of the experiment. Symbolic logic works where rules and definitions are clear; there is no strictly logical way to translate from Russian to English. As a public relations initiative the Georgetown-IBM experiment was a success, encouraging the U.S. government to invest in computational linguistics and to fund Dostert’s own subsequent academic career at Georgetown.[16] However, the scientists were much further from their goal than they thought, and this approach did not get far.
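
To see the spirit of the rules-based approach, consider the following sketch in Python. It is an invented toy, not the Georgetown-IBM system: the transliterated vocabulary below is a made-up miniature, and the real systems of the era encoded their rules in far more elaborate ways.

    # A toy illustration of 1950s rules-based machine translation:
    # a fixed bilingual dictionary plus hand-written rules. The
    # vocabulary here is invented for illustration; the actual
    # Georgetown-IBM system used 250 words and six grammar rules.

    VOCABULARY = {  # transliterated Russian -> English
        "my": "we",
        "peredayem": "transmit",
        "mysli": "thoughts",
        "posredstvom": "by means of",
        "rechi": "speech",
    }

    def translate(russian_sentence):
        english_words = []
        for word in russian_sentence.lower().split():
            # Rule: look each word up in the fixed dictionary; any
            # word outside the dictionary defeats the system.
            english_words.append(VOCABULARY.get(word, "<" + word + "?>"))
        return " ".join(english_words)

    print(translate("My peredayem mysli posredstvom rechi"))
    # -> we transmit thoughts by means of speech

The fragility is plain: the system knows nothing beyond its hand-coded inventory, which is why scaling the approach to the whole of two natural languages proved intractable.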

EVOLUTION

Another approach, discussed since the 1940s, took as its starting point the idea that Artificial Intelligence should be understood in terms of the brain of a growing child or that of an evolving species.[17] The system would not be programmed with a complete set of information but would instead take information from its environment. One of the first significant attempts to implement this approach was the Perceptron, a machine unveiled by the U.S. Navy in 1958. The Perceptron used a neural network, a kind of sandwich comprising three layers: a data input layer, a "hidden" layer to understand the data, and an output layer to provide or display the results. The project’s leader, Frank Rosenblatt, a cognitive-systems scientist at the Cornell Aeronautical Laboratory, made promises and predictions that were even wilder than the reports of the Georgetown-IBM experiment, claiming, for example, that the Perceptron could soon be used to fight America’s enemies and explore space.[18] The technology, of course, did none of these things. Though the basic principle behind the Perceptron was sound, it was limited both in its design and by its lack of computational power. The Perceptron had only one hidden layer. It also lacked a key ability that neural networks would have to develop if they were ever to be effective: the ability to learn.
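
The computation performed by a single perceptron-style neuron is simple enough to show directly. The Python sketch below is a generic textbook perceptron, a weighted sum passed through a hard threshold, rather than Rosenblatt's hardware; the weights and inputs are invented for illustration.

    # One perceptron-style artificial neuron: a weighted sum of
    # its inputs passed through a hard threshold. Weights and
    # inputs below are invented for illustration.

    def neuron(inputs, weights, bias):
        # Sum the weighted evidence from each input...
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        # ...and fire (output 1) only if it crosses the threshold.
        return 1 if activation > 0 else 0

    # Hand-chosen weights that make the neuron compute logical AND.
    weights, bias = [1.0, 1.0], -1.5
    for inputs in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(inputs, "->", neuron(inputs, weights, bias))
    # Only [1, 1] fires: with these weights the neuron computes AND.

Everything such a unit "knows" is carried in its weights and threshold, which is why how those weights get set matters so much.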

Any neural network that hoped to be truly intelligent would need more than one hidden layer to evaluate data. Data would need to be passed between a number of layers, becoming more refined and precise as it headed towards the output layer. Such a neural network would work as follows. Data would enter the input layer. Next to the input layer, the first hidden layer would search for patterns in the data, such as a basic shape or whether a perceived blob of colour had any edges. Any pattern it found would be presented to the next layer which, in turn, would seek a pattern in that pattern, indicating, say, whether the shape represented a face or a limb. A third layer could then determine whether the limb or face was that of a cat or a dog, and so on. There was a problem, however. If one hidden layer got it wrong then subsequent layers would not be able to correct the error, and nonsense would emerge from the output layer. This is why such a system would need to be able to learn.
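
A minimal sketch of this layered forward pass, in Python with NumPy, may make the idea concrete. The layer sizes are arbitrary and the weights are random placeholders rather than a trained model, so the output is meaningless; the point is only the structure, with each layer working on the previous layer's output.

    import numpy as np

    # Forward pass through a stack of layers. Each hidden layer
    # transforms the previous layer's output, so later layers can
    # respond to patterns-of-patterns (edges, then shapes, then
    # faces). Sizes and weights are placeholders, not a trained model.

    rng = np.random.default_rng(0)
    layer_sizes = [784, 128, 64, 10]  # input -> two hidden -> output

    # One weight matrix and bias vector per pair of adjacent layers.
    weights = [rng.normal(0, 0.1, (m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        for w, b in zip(weights, biases):
            # Weighted sum, then a nonlinearity (here ReLU), so each
            # layer can express patterns in the layer before it.
            x = np.maximum(0.0, x @ w + b)
        return x

    print(forward(rng.normal(size=784)).shape)  # -> (10,)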

This insight came from a team that included Geoffrey Hinton, a British cognitive and computer scientist who had worked on neural networks since the 1960s.[19] Their solution to the vulnerability of neural networks to errors was a system that could be trained to interpret data correctly. Their proposal was that individual artificial neurons in each hidden layer would "vote" for a certain interpretation of the data. Those that voted correctly would have their voting power increased whenever they afterwards voted in the same way. Hence neurons that voted correctly in response to specific data, such as correctly recognising a cat, would come to be treated as “experts” at seeing cats.
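
The "voting" metaphor corresponds, roughly, to the adjustment of connection weights during training, and the back-propagation method in the paper cited in note 19 is the standard way of making those adjustments. The Python sketch below is a minimal illustration under simplifying assumptions: one hidden layer, an invented learning rate, and a toy task (the XOR function).

    import numpy as np

    # Minimal back-propagation on a toy task (XOR). After every
    # answer, each weight is adjusted in proportion to its share
    # of the blame for the error: connections that "voted" well
    # are strengthened, the rest weakened.

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(5000):
        # Forward pass: the layers produce their interpretation.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: apportion the error to every connection.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Update: shift each weight against its share of the error.
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0]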

This approach aimed to mimic what scientists took to be the development of understanding in the human brain. It approached the problem from the more natural point of view—the brain uses partial information such as visual or aural patterns to reach conclusions.[20]

AN INSIGHT FOR POLICYMAKERS

A more straightforward way of grasping why a learning neural network is so powerful and potentially beneficial is found in a principle set out by the French philosopher the Marquis (Nicolas) de Condorcet in his 1785 "Essay on the Application of Analysis to the Probability of Majority Decisions".[21] Condorcet is famous for the Condorcet Jury Theorem, which states that where the probability of each voter in a deliberating group voting for the correct decision exceeds 50 percent, the probability that the group will reach a correct collective decision approaches 100 percent as its size increases.[22] Hence, using reliable artificial neurons that stand in for Condorcet’s jury members, AI and Machine Learning can, for the first time, apply this cognitive insight to a very wide range of human problems. This is why, properly managed, AI and Machine Learning will approach very high levels of reliability in their ability to recommend or, in controlled situations, take decisions. The implications for many areas of human activity, from diagnosing cancers to transport safety and possibly even more complex areas of public policy, such as telling social workers when to intervene to protect children in troubled households, are significant.
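
The theorem is easy to check numerically. The short Python sketch below assumes independent voters who are each correct with probability 0.6, a figure chosen only for illustration, and computes the chance that a majority of them is correct as the group grows.

    from math import comb

    # Condorcet's Jury Theorem, numerically: if each independent
    # voter is correct with probability p > 0.5, the probability
    # that a majority of n voters is correct rises towards 1.

    def majority_correct(n, p):
        # Probability that more than half of n voters are right
        # (n is kept odd so there are no ties).
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 11, 101, 1001):
        print(n, round(majority_correct(n, 0.6), 4))
    # Rises from 0.6 for a single voter towards 1.0 for 1,001.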

POLICY: THE IMPACT ON JOBS AND THE IMPLICATIONS FOR EDUCATION AND SKILLS

The impact of AI and Machine Learning is fuelling another generation of automation and is becoming an ever stronger factor in the appification, on smartphones and tablets, of routine tasks formerly done by the less skilled. Map software that anticipates your destination, navigates you there, and offers you places to visit or stay is replacing work previously performed by personal assistants, travel agencies, and even friends. These tasks, and others in offices and on factory floors that are routine or repetitive, will be the first to be taken over by machines. Tasks that are complex are more likely, at least for the foreseeable future, to be assisted, rather than replaced, by machine intelligence. Most of the one fifth of activities in the U.S. workplace that McKinsey has identified as highly susceptible to automation are performed by low-skilled or unskilled workers. Changes due, at least partly, to technology are already visible: the share of U.S. workers employed in routine office jobs declined from 25.5 percent to 21 percent between 1996 and 2015.[23] Far from being threatened by AI and Machine Learning, highly skilled professionals such as doctors, lawyers, and accountants will benefit significantly. Services in areas such as healthcare, law, and financial advice are usually in high demand and can be made more affordable through AI. Professionals will be able to do more with less.[24] But in areas like retail and services, jobs are already being replaced by machines.[25]

The implications of these developments for policymakers are stark. First, policymakers need to understand that “unemployment caused by technology” is simply another way of saying “unemployment caused by a lack of skills”; second, when workers’ skills fall behind, inequality follows.[26] In a highly educated and highly skilled society there is no reason for anyone to suffer unemployment due to displacement by technology for more than a short period. Beyond managing worker displacement, for which Denmark’s flexicurity system may offer a model, the best policy responses are likely to include measures for teaching people how to learn new skills. Workers who have learned how to learn are likely to need that skill throughout their careers.

POLICY: THE DATA CHALLENGE

Big data is an essential raw ingredient of AI and Machine Learning technologies, and these technologies are likely to intensify questions about how organisations use personal data. The perspective of European policymakers on this question has potentially far-reaching consequences. In particular, European privacy advocates are likely to seek to embody equality principles in their approach to regulating AI and Machine Learning. A key fear, for example, will be that algorithms, possibly using erroneous data or data taken to be a proxy for specific ethnic groups, will be used to support decisions about people that are discriminatory. Europe and the U.S. have a large body of legislation protecting people defined as belonging to minority groups or to a specific gender. If AI and machine intelligence come to support many of the services that citizens rely on, what is the scope for discrimination?

The short answer is that AI and machine intelligence are digital tools: the likelihood of discrimination lies with the users of these tools, not in the tools themselves. Regulators, therefore, should not assume that businesses are attracted to these technologies because they wish to discriminate against certain individuals. This does not alter the fact, however, that poor data will lead to poor decisions, and in a world increasingly reliant on AI and machine intelligence this could have serious consequences. Ultimately, organisations will need governance frameworks to minimise the risk of harm to critical systems from malfunctioning intelligent machines.

CONCLUSION

Artificial Intelligence and Machine Learning are products of both science and myth. The idea that machines could think and perform tasks just as humans do is thousands of years old. The cognitive truths expressed in AI and Machine Learning systems are not new either. It may be better to view these technologies as the implementation of powerful and long-established cognitive principles through engineering.

We should accept that there is a tendency to approach all important innovations as a Rorschach test upon which we impose anxieties and hopes about what constitutes a good or happy world. But the potential of AI and machine intelligence for good does not lie exclusively, or even primarily, in the technologies themselves. It lies mainly in their users. If we trust, in the main, how our societies are currently being run, then we have no reason not to trust ourselves to do good with these technologies. And if we can suspend presentism and accept that ancient stories warning us not to play God with powerful technologies are instructive, then we will likely free ourselves from unnecessary anxiety about their use.

End

Views expressed in this article are those of the author and not those of the Global Digital Foundation which does not hold corporate views.

NOTES

  1. Bruce Mazlish, 'The man-machine and Artificial Intelligence', Stanford Humanities Review, Stanford University, Volume 4, Issue 2, July 1995, http://web.stanford.edu/group/SHR/4-2/text/mazlish.html

  2. Alden Oreck, 'The Golem', Jewish Virtual Library, http://www.jewishvirtuallibrary.org/the-golem

  3. Arthur C. Clarke, 'Profiles of the Future: An Enquiry into the Limits of the Possible', 1962, rev. 1973, pp. 14, 21, 36

  4. Olivia Solon, 'Oh the humanity! Poker computer trounces humans in big step for AI', Guardian, January 30, 2017, https://www.theguardian.com/technology/2017/jan/30/libratus-poker-artificial-intelligence-professional-human-players-competition

  5. McKinsey, ‘Where machines could replace humans—and where they can’t (yet)’, July 2016, http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet

  6. McKinsey, July 2016. See also U.S. Bureau of Labor Statistics, Industries at a Glance, 2016, https://www.bls.gov/iag/tgs/iag72.htm

  7. U.S. Bureau of Labor Statistics, Industries at a Glance, 2016, https://www.bls.gov/iag/tgs/iag44-45.htm

  8. McKinsey, July 2016

  9. Varun Gulshan, Lily Peng, Marc Coram, et al, 'Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs', Journal of the American Medical Association, December 13, 2016, http://jamanetwork.com/journals/jama/article-abstract/2588763 and Dyllan Furness, Digital Trends, 'The chatbot will see you now: AI may play doctor in the future of healthcare', October 7, 2016, http://www.digitaltrends.com/cool-tech/artificial-intelligence-chatbots-are-revolutionizing-healthcare/

  10. Alan Turing, ‘Computing Machinery and Intelligence’, Mind 59 (1950), pp. 433–460, http://www.loebner.net/Prizef/TuringArticle.html

  11. Elhanan Motzkin, reply by John R. Searle 'Artificial Intelligence and the Chinese Room: An Exchange', New York Review of Books, February 16, 1989, http://www.nybooks.com/articles/1989/02/16/artificial-intelligence-and-the-chinese-room-an-ex/ and (for the reported views of some Google scientists), Gideon Lewis-Kraus, ‘The Great A.I. Awakening’, New York Times, December 4, 2016, https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html?_r=1

  12. John McCarthy, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html. See also John McCarthy, ‘What is Artificial Intelligence?’, Stanford University, http://www-formal.stanford.edu/jmc/whatisai/

  13. John Hutchins, ‘The first public demonstration of machine translation: the Georgetown-IBM system, 7th January 1954’, March, 2006, http://www.hutchinsweb.me.uk/GU-IBM-2005.pdf

  14. Leon Dostert had been the chief interpreter / translator at the Nuremberg Trials. He and Cuthbert Hurd, director of the Applied Sciences Division at IBM, led Georgetown’s Machine Translation Research Project.

  15. Jeremy Norman, 'The First Public Demonstration of Machine Translation Occurs', HistoryofInformation.com, December 26, 2016, http://www.historyofinformation.com/expanded.php?id=852

  16. Jeremy Norman, December 26, 2016

  17. Hans Moravec, ‘The Role of Raw Power in Intelligence’, unpublished manuscript, May 12, 1976, http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html

  18. Gideon Lewis-Kraus, December 4, 2016, and Daniela Hernandez, 'The thinking computer that was supposed to colonize space', Fusion, February 26, 2015

  19. David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams, 'Learning representations by back-propagating errors', Nature, Vol. 323, October 9, 1986

  20. Gideon Lewis-Kraus, December 4, 2016

  21. Jean-Antoine-Nicolas de Caritat marquis de Condorcet, ‘Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix’, 1785, https://archive.org/stream/essaisurlapplica00cond#page/n5/mode/2up

  22. Cass Sunstein, 'Infotopia: How Many Minds Produce Knowledge', Oxford University Press, 2006, p. 25

  23. ‘Lifelong learning: special report’, Economist, January 14, 2017

  24. See for example, Mark A. Cohen, 'How Artificial Intelligence Will Transform The Delivery Of Legal Services', Forbes, September 6, 2016, http://www.forbes.com/sites/markcohen1/2016/09/06/artificial-intelligence-and-legal-delivery/20ebaa842647

  25. David Pierce, 'This Robot Makes a Dang Good Latte', Wired, January 30, 2017, https://www.wired.com/2017/01/cafe-x-robot-barista/

  26. Economist, January 14, 2017.