Over the next few days, Google’s Go-playing algorithm, AlphaGo, will take on the current world Go champion, Lee Se-dol. It is an event being watched closely by both Go players and coders, since until very recently Go was thought to be intractable for machines to play competently. (At the time of uploading to this blog, AlphaGo is 2-0 in the lead.)
I’ve worked in various ways with AI over a lot of years now, so thought it was high time I wrote about it here. Far from the Spaceports, and the in-progress follow-up By Default, have human-AI relationships at their heart. Mitnash, a thoroughly human investigator and coder, has Slate as his partner. Slate is an AI – or persona, as I prefer to call it in the books – and the two work together in their struggle against high-tech crime.
How far away are we from this? In my opinion, quite a long way. There have been huge advances in AI during my working life, largely made possible by corresponding advances in the speed and capability of the hardware on which it runs. Creative ideas for how to code learning algorithms and pattern recognition have also taken huge strides. Nevertheless, I don’t think we are very close to working with Slate or her fellow personas just yet.
Of course, you have to be mindful of a quote attributed to Bill Gates: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” But that said, I still think we’re some way off.
There are a lot of different, and very useful, ideas as to what constitutes intelligence, but for the purpose of this blog I am largely focusing on the abilities to learn and then detect meaningful patterns, to work usefully with inconsistent or poor-quality information, and to communicate about all this with another individual in such a way that both parties can revise their opinions.
Part of the difficulty is that most people are working on a very small part of the problem, and the organisation paying them only really wants quite a specific outcome. So one team might be working on machine health monitoring and fault prediction, to improve aviation safety. Another will concentrate on whatever is needed to identify objects in photographs. Another on voice recognition. Another on being able to beat human champions at a specific game. And so on. Comparatively few are integrating all this into a single entity.
Human intelligence is also noteworthy for being able to adapt flexibly to new situations calling for similar but not identical responses. My guess is that Lee Se-dol also plays an outstanding game of chess, or Senet, or any of dozens of board games, and that he could hold his own very well at some game he had never seen before, after a comparatively brief explanation of the rules. I have serious doubts as to whether Google’s codebase could make such a transition.
Another issue is repetition and predictability. If you’re coding a safety system, you really want to know that the same set of circumstances will lead to the same consequences. Quite apart from giving confidence to your immediate users, there is the whole matter of getting the system qualified for use. Imagine your system has failed to recommend replacement of a critical component. There has been a crash, and you are at the investigation. “Why did your system fail to recommend that the component be changed?” And you reply, “Oh, I don’t know – it says something different every time.” I can’t imagine this going down very well with the investigation committee. For the reaction of a friend, however, unpredictability is part of the fun.
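The contrast here can be sketched in a few lines of Python. This is purely illustrative – the function names, thresholds, and replies are all hypothetical, not drawn from any real monitoring system:

```python
import random

def maintenance_check(vibration_mm_s: float, temperature_c: float) -> bool:
    """Deterministic safety rule: the same inputs always yield the same
    recommendation, which is what makes the system auditable."""
    return vibration_mm_s > 7.1 or temperature_c > 95.0

def persona_reply(greeting: str) -> str:
    """A conversational persona, by contrast, may deliberately vary its
    response to the same input."""
    options = [
        "Hello again!",
        f"You said '{greeting}'? Nice.",
        "Back so soon?",
    ]
    return random.choice(options)

# The safety rule is repeatable: identical inputs, identical verdict.
assert maintenance_check(8.0, 60.0) == maintenance_check(8.0, 60.0)
```

An investigation committee can trace exactly why `maintenance_check` flagged (or failed to flag) a component; no such trace exists for `persona_reply`, and for a friend that is precisely the point.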
We find it difficult to define what intelligence really is, or which part of our being is responsible for it. Recent comparative studies contrasting bird and primate intelligence have questioned the idea that it is seated in the cortex: birds don’t have one. In the light of such basic uncertainty, corporate reluctance is understandable. It is hugely easier – and hugely more cost effective – for an organisation to say “build me a system which can identify patterns of word use by different authors” than “build me intelligent partners for my human staff.”
As someone working in a tech industry, I am keenly aware of, and excited by, the possibility of AI. But I also see the practical obstacles: how would my team carry out quality assurance for such a system? It’s often hard enough to do this for a complex but entirely rule-bound application. The challenges are immense.
But as an author, I am entirely free to suppose that all that has been done, and focus on the storytelling issues of how such a relationship would work.