Monday afternoon. The classroom projector announces: “In 2 minutes the projector will go into standby mode.” After 60 seconds, it changes to: “In 1 minutes the projector will go into standby mode.”
Was it really too hard to make that “1 minute”?
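It really wouldn't have been. The fix is one conditional on the number; a minimal sketch in Python (the function name and message wording are mine, not anything from the projector's actual firmware):

```python
def standby_message(minutes: int) -> str:
    """Announce impending standby, choosing singular or plural correctly."""
    unit = "minute" if minutes == 1 else "minutes"
    return f"In {minutes} {unit} the projector will go into standby mode."

print(standby_message(2))  # In 2 minutes the projector will go into standby mode.
print(standby_message(1))  # In 1 minute the projector will go into standby mode.
```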
Tuesday, early morning. No one else in the building. The elevator wakes as I press the “Up” button. But before the doors open, a synthesized voice inside announces: “Lift going up!”
The system has been idle, doors closed and no buttons pressed, all night. So who is the addressee? Some stray blind person sleeping in the elevator? Couldn’t the program have included a line meaning “if not idle”?
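It could have. The missing guard is a single conditional; a hypothetical sketch (the function, its parameters, and the idleness test are my invention, since I have no access to the elevator's control program):

```python
def announce_direction(direction: str, idle_overnight: bool) -> str:
    """Speak only when someone could plausibly be inside to hear it."""
    if idle_overnight:
        # Doors closed, no buttons pressed all night: there is no addressee.
        return ""
    return f"Lift going {direction}!"

print(announce_direction("up", idle_overnight=False))  # Lift going up!
print(announce_direction("up", idle_overnight=True))   # (silence)
```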
Wednesday. At the supermarket self-checkout machine I press “Pay with cash.” The electronic voice says: “Insert cash, or select payment type!” (I already selected payment type.) I feed in a £20 note, and again it says: “Insert cash, or select payment type!” (No condition saying “unless cash already inserted.”) The change due is less than the smallest banknote (£5, roughly $7.70), hence must be all coins; yet after “Please take your change!” the voice adds: “Notes are dispensed below the scanner!” (No condition meaning “if change due ≥ £5.”)
Thursday. My coffee reheats in the microwave, and just as the minute ends I open the door. Five long beeps nonetheless warn me that the minute is now up. (No condition saying “unless user has already opened the door.”)
Friday. “You have one new message,” says my voicemail service; and then it says “Message One: … ”
But if there’s only one, you don’t need to number them, do you?
Let me draw out the moral from this catalog of one week’s typical interactions with thoughtlessly programmed linguistically communicative machinery.
Since the 1980s I’ve been reading predictions that soon, very soon, intelligent and responsive machines will converse with us in our own language. I think not. I predict that future devices will be programmed by the same sort of people who now write scripts for elevators and self-checkout machines. It won’t be the Dilberts who program our robots; it will be the Wallys.
The Dilberts are busy writing sophisticated code to make Volkswagen diesel-emission controls operate only in laboratory tests. The programming of devices that interact with ordinary users like us will be left to the Wallys.
The general public swallows astonishingly naïve, overblown claims in the press about future artificial intelligence. People as intelligent and well-informed as Stephen Hawking and Elon Musk think robots might doom the human race by taking over.
They won’t. Our robots certainly won’t care whether we live or die: Forget Asimov’s laws. Robots will (as I mentioned in a previous post) occasionally kill us. (Self-driving cars will kill us too, of course, quite often. I plan to stay away from them.) But they will not be anywhere near smart enough to dominate us.
I’m not overlooking the great algorithms that have changed our lives in fields like web search, predictive texting, route-finding, etc. But issues of cooperative or sensible interaction in context don’t arise with these: They rely on superfast hardware and huge masses of stored data and statistics, and on our intelligence, but not on contextually appropriate responses.
Smartphones, for example, do seem to be clever enough to guess your next word during SMS message composition. But they’re just consulting a massive frequency table of short attested word sequences. When I type “walking the” my phone suggests I might want “dog” as the next word, and indeed I might. Cute. But it doesn’t know whether I have a dog, or what “dog” means. Notice that by accepting all suggestions about probable next words you can get the phone to reveal its ideas about complete texts. I tried that, on a Samsung Galaxy A3 running Android 4.4.4, and it composed this:
I am a beautiful person. The comments for your help. I have authorised and regulated by the way, and the tire.
So much for any fantasies about its grasp of sentences or meaning.
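The mechanism is easy to sketch: tally which words follow which in a mass of attested text, and always suggest the most frequent continuation. A toy version in Python (the training text is my own tiny stand-in; a real phone consults a vastly larger table, but the principle is the same):

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the phone's huge table of attested sequences.
corpus = "I am walking the dog . I am walking the dog again . she is walking the cat"
words = corpus.split()

# Count bigrams: for each word, tally which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def suggest(word: str) -> str:
    """Return the most frequent attested next word. No grasp of meaning involved."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else ""

print(suggest("the"))  # dog -- the commonest continuation, whether or not you own one
```

Run repeatedly on its own output, a table like this produces exactly the kind of locally plausible, globally incoherent text the Galaxy A3 composed above.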
Machines are great at data-searching and number-crunching in narrowly defined domains; but the user-interface software they currently use is artificial stupidity: canned announcements telling us to do things we’ve just done, boilerplate phrases blurted out pointlessly to nobody.
I’d love all this not to be true. Confound me, user-interface engineers: Emulate the achievements of your hardware colleagues. The device that I refer to as my phone fills me with awe. So does my MacBook Air: as fast and powerful as the scientific supercomputers of the 1980s, yet smaller and lighter than a copy of Vogue.
I hate having to connect such a wonderful laptop to an appliance so stupid that it says “In 1 minutes the projector will go into standby mode.”