By Kevin Cheung, Associate Professor, School of Mathematics and Statistics
AlphaGo’s recent victory over 18-time world champion Lee Sedol at the game of Go in the Google DeepMind Challenge Match stunned the world.
AlphaGo is a Go-playing computer program developed by Google subsidiary DeepMind. Its victory was a big deal because many people thought it would be another five to ten years before a computer program could beat a nine-dan professional Go player. Nature reported that South Korea would invest US$860 million in artificial intelligence (AI) research in response to the AlphaGo ‘shock’. The Chronicle of Higher Education ran an article describing the potential economic ramifications of AI after the historic event.
Since the publication of Benjamin Bloom’s landmark paper, The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring, in which Bloom reported that students tutored one-to-one performed about two standard deviations better than students taught in a conventional classroom, intelligent tutoring systems (ITS) have been viewed by some as the solution to truly personalized learning. Even though the benefits ITS could bring are undeniable, we have yet to see widespread adoption of such systems. A Wikipedia article on ITS outlines some of the limitations that could explain their current lack of popularity.
From my experience in teaching mathematics, the main tasks of a human math tutor are identifying a student’s weaknesses, selecting suitable practice problems, and providing appropriate hints or remedial instruction when needed. (A good tutor also provides emotional support and motivation.) Adaptive homework systems such as WebAssign can already perform the first two tasks reasonably well, and item response theory is likely the backbone behind such systems. Providing appropriate hints or remedial instruction, however, does not seem so simple.
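To make the item-response-theory idea concrete, here is a minimal sketch of how an adaptive system might pick a student’s next problem under the widely used two-parameter logistic (2PL) model. The item bank, parameter values, and function names are hypothetical illustrations of mine; I am not claiming this is how WebAssign actually works.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a learner of ability theta
    answers an item of difficulty b correctly, where a is the
    item's discrimination."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of an item at ability theta; adaptive
    systems typically serve the most informative unanswered item."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_next_item(theta, items):
    """Choose the item that is most informative about the student."""
    return max(items, key=lambda it: item_information(theta, it["a"], it["b"]))

# Hypothetical item bank: discrimination a, difficulty b
bank = [{"name": "warm-up",   "a": 1.0, "b": -1.0},
        {"name": "routine",   "a": 1.2, "b":  0.0},
        {"name": "challenge", "a": 0.8, "b":  1.5}]

theta_hat = 0.3  # current estimate of the student's ability
print(pick_next_item(theta_hat, bank)["name"])  # -> "routine"
```

A problem pitched near the student’s estimated ability carries the most information, which is why such a system neither bores a strong student with warm-ups nor buries a struggling one in challenges.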
One of the limitations listed in the Wikipedia article is that students may simply click through all the hints that a system is programmed to provide so that they can finish the problem at hand quickly. A human tutor would not hand out hints just because a student asked for them, though he or she might be tempted to do so if a big reward were at stake. A human tutor would take many factors into account. Sometimes, a human tutor might decide that it is counterproductive to keep a student on the same problem and instead introduce a different problem or an entirely different activity, such as talking about the weather. The question now is: can a machine become effective at assessing what to do when a student is stuck?
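For illustration only, here is one way a system might throttle hints rather than dispense them on demand. The signals and thresholds below (attempt count, time on task, recent hint requests) are hypothetical assumptions of mine, not features of any existing tutoring system.

```python
from dataclasses import dataclass

@dataclass
class StudentState:
    attempts: int          # attempts on the current problem
    seconds_on_task: int   # time spent on the current problem
    recent_hints: int      # hints requested over the last few problems

def next_action(state: StudentState) -> str:
    """Decide what to do when a student asks for a hint.
    A toy policy: reward genuine effort with a hint, redirect a
    student who is grinding fruitlessly, and push back on
    hint-clicking sprees."""
    if state.recent_hints >= 3:
        return "withhold hint"          # likely clicking through hints
    if state.attempts >= 5 or state.seconds_on_task >= 900:
        return "switch to a different problem or activity"
    if state.attempts >= 2 and state.seconds_on_task >= 120:
        return "give a hint"            # genuine effort shown
    return "encourage another attempt"

print(next_action(StudentState(attempts=3, seconds_on_task=300, recent_hints=0)))
# -> "give a hint"
```

A fixed rule table like this is exactly what students learn to game, of course; the interesting question is whether a machine can learn the thresholds, and the exceptions to them, the way a human tutor does.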
If one reads the many commentaries on AlphaGo’s behaviour in the five games that it played against Mr. Lee, one might get the feeling that the answer is “probably soon.” I can already imagine how a digital assistant like Apple’s Siri or Microsoft’s Cortana, tasked with helping a user deal with Internet addiction, would decide to lift a scheduled Internet ban when its user makes an emergency request to look something up online. Each time a ban is lifted, the digital assistant learns from the user’s subsequent actions and refines its judgment of when such requests are legitimate. This sounds like science fiction, but the technology enabling such intelligent behaviour is already here. For instance, digital assistants reportedly could infer a user’s emotional state by “looking through” the camera.
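The learning loop I am imagining could be as simple as an online logistic-regression update: predict whether a request is legitimate, grant or deny it, then adjust the model once the user’s follow-up behaviour is observed. Everything below, the features, the label, the learning rate, is a hypothetical sketch, not a description of any shipping assistant.

```python
import math

def predict(weights, features):
    """Probability that a request is legitimate (logistic model)."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, features, label, lr=0.1):
    """One online gradient step after observing the outcome:
    label = 1 if the lifted ban was used legitimately, 0 if abused."""
    error = label - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

# Hypothetical features: [bias, urgency of wording, bans lifted today]
weights = [0.0, 0.0, 0.0]
request = [1.0, 0.8, 2.0]

p = predict(weights, request)                # 0.5 with untrained weights
weights = update(weights, request, label=0)  # user abused the exception
print(predict(weights, request) < p)         # -> True: trust decreases
```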
At the moment, the main factors stopping the technology from taking over from humans are cost and the human-machine barrier. Fortunately (or perhaps unfortunately), both will inevitably come down over time. It is just a matter of when. And what will education look like when machines become faster learners than humans?