It’s May of 1997. Garry Kasparov, the reigning world chess champion, loses to IBM’s supercomputer Deep Blue in their second match. Just another event? Hardly. The notion that certain levels of human cognizing could never be surpassed by a machine was shattered.
Okay, maybe you weren’t shocked at Kasparov’s defeat, or you were too young at the time to appreciate it. Or maybe, because it’s just a game, it didn’t occur to you to wonder: if a genius like Kasparov could be laid low by a device you plug into a wall, what might the future hold for the rest of us, engaged in jobs that don’t require one-third of his brain power?
News and events are always creating a turbulent front from which the more strategic-minded among us are obliged not to shrink, even if it means staring into the abyss, such as a dystopic future where the only jobs are servicing machines that do the thinking for us. That said, if Kasparov’s loss didn’t alarm you, there would still be opportunities ahead to have your mind blown.
Let’s flash forward 14 years from Kasparov’s defeat. It’s the winter of 2011. Computers have become many times more powerful. To demonstrate their prowess, IBM rolls out Watson, its latest machine-learning monster. Having conquered Kasparov and all the Queen’s men with Deep Blue, IBM sets its sights on Brad Rutter and Ken Jennings, two of Jeopardy’s multi-million-dollar champions, in a machine-versus-human Jeopardy challenge. The result? Another humiliating blow for humans against an unblinking silicon-based adversary in a game of wits, memory, and intelligence.
Harlow Shapley advised earnestly: “We must get used to the fact that we are peripheral.” That was 1958, and Shapley, an astronomer, was not alluding to AI but rather to our humble place in the universe — an undistinguished location in an ordinary galaxy among billions of star-rich galaxies. When I think of Shapley’s advice, what strikes me is the word “peripheral.” Because if today’s computer scientists working in AI achieve their alchemical dream (which is tantamount to creating “Artificial Life”), that may be our fate: winding up as peripheral devices attached to One Humongous Cloud run by Watson’s descendants.
Well, nothing new there, right? That theme has been a staple of sci-fi writers for decades. Except now one can argue that we are peripheral — we just haven’t reached the status of being replaceable.
When it comes to performing numerical calculations, humans were outclassed long ago. But that hasn’t stopped AI developers from assigning non-numerical tasks to willing participants. Recently, for example, that includes crowd-sourcing “analogies.” Here is why that is relevant to the creative process, and why it has larger implications than you might think.
Arguably, the heart and soul of creative thought is analogy. Kekulé’s vision of the benzene ring, for example, was served up in a dream by way of an image of a snake biting its tail. But computers cannot draw meaningful analogies since they cannot recognize meaningful distinctions. At least not yet. That is why they need humans to feed them information. If a digital machine says “such-and-such A is like such-and-such B,” it may appear to have meaning, but that is a result of whatever interpretation we bring to it.
Albert Upton, an early writer on semantic theory, noted that the use of metaphor represents “the most complex form of semantic growth” and “the most important linguistic consideration in the study of conscious life.” By programming computers to think metaphorically, we are entering yet another domain where the belief that machines can never out-perform us may soon be shattered. Still, with respect to the creative process, the idea of programming analogies seems — to use one — ass-backwards. Because it is not how inventive human minds actually work.
In his book, The Psychology of Invention in the Mathematical Field, the French mathematician Jacques Hadamard describes the creative process as entailing four steps: preparation, incubation, illumination, and verification. Analogy belongs to step three, illumination.
Henry Margenau, a physicist and philosopher of science, put it this way: “Concepts are the result of human processes of abstraction, sifting, reasoning; they emerge at the end of a long chain of activity in which man feels himself intelligently involved and responsible.”
There’s no question that creative solutions may coalesce as verbal or visual analogies. Or that analogy can spark insight. But as Hadamard notes, it requires a prepared mind to recognize what the analogy means in the first place. So while it’s putting the cart before the horse for humans, you can see how warehousing analogies might be a boon for AI developers, allowing machines to seem creative. But that’s still a long way (or maybe never) from actually being creative, as well as feeling involved, intelligently and responsibly, as Hadamard puts it. That will be the day when machines have crossed the threshold into the sentient world.
Let’s pause for a brief recap: IBM’s Deep Blue beats Kasparov. Then IBM’s Watson demolishes its flesh-and-blood trivia-game opponents, Rutter and Jennings. So who’s next on the hit list? Copywriters.
Yes, dear reader, that means some of you. You, the poets of commerce. The bards of brand. The wordsmiths who hammer out twenty headlines for every good one, and who bravely dig for another when the client inevitably rejects the ones you like. But that was then. Soon you may never need to experience such aesthetic pain in the execution of your trade.
As reported in Adweek (May 18, 2017), Saatchi engaged IBM’s Watson to work on an ad campaign for Toyota. The goal was to work with Watson so that it could “whip up thousands of pieces of copy that sounded like they were written by humans,” and create ads “for almost every single potential buyer of the car.” How do you make copy sound human and engaging? How about tossing in a few fresh analogies!
Does Watson-cum-copywriter spell the end of ad creation as a profession? And what happens to other creative pursuits like poetry, art, and theater when machines understand the nature of metaphor? Then again, perhaps the idea of truly creative machines is an absurd projection, an attribution of our own demiurge. What’s your opinion? Please let us know.
Meanwhile, until we find ourselves in a world where machine-to-machine dialogue becomes the dominant conversation, there is still time to honor our humanness and the Motivational Styles that do not so much define us as reflect our deeper needs and desires, and what we each find meaningful, with no need for a machine to express it.