The Most Human Human

This book describes the author's participation in The Loebner Prize, the competition that evaluates the current state of the Turing test: "pitting sophisticated software programs against humans to determine if a computer can 'think'. The machine that most often fools the judges wins the Most Human Computer Award. But there is also a prize ... for the 'Most Human Human'".

While the book was interesting and a worthwhile read, it ultimately left me slightly disappointed. What follows are the quotations that intrigued me, nuggets I found to be useful takeaways.

Statefulness

In 1989, twenty-one-year-old University College Dublin undergraduate Mark Humphrys connects a chatbot program he'd written called MGonz to his university's computer network and leaves the building for the day. A user (screen name SOMEONE) from Drake University in Iowa tentatively sends the message "finger" to Humphrys's account - an early-Internet command that acts as a request for basic information about a user. To SOMEONE's surprise, a response comes back immediately: "cut this cryptic shit speak in full sentences." This begins an argument between SOMEONE and MGonz that will last almost an hour and a half.

(The best part is undoubtedly when SOMEONE says, a mere twenty minutes in, "you sound like a goddam robot that repeats everything.")

Returning to the lab the next morning, Humphrys is stunned to find the logs, and feels a strange, ambivalent emotion. His program might have just passed the Turing test, he thinks - but the evidence is so profane that he's afraid to publish it.

Humphrys's twist on the age-old paradigm of the "non-directive" conversationalist who lets the user do all the talking was to model his program not on an attentive listener but on an abusive jerk. When it lacks any clear cue for what to say, MGonz falls back not on therapy clichés like "How does that make you feel?" or "Tell me more about that" but on things like "you are obviously an asshole", "ok thats it im not talking to you any more", or "ah type something interesting or shut up". It's a stroke of genius, because as becomes painfully clear from reading the MGonz transcripts, argument is stateless.
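What strikes me is how little machinery that insight requires. As a rough illustration (a hypothetical sketch, not Humphrys's actual code, with keyword rules and jabs invented here for the example), a stateless responder needs nothing more than a keyword lookup and a pile of canned provocations; because each reply depends only on the current message, the argument can run for ninety minutes without the program ever needing to remember what was said.

```python
import random

# Hypothetical sketch in the spirit of MGonz (not Humphrys's code).
# The responder keeps no conversation state: every reply is computed
# from the current message alone, and when no keyword matches it falls
# back on a canned jab rather than a therapist-style prompt.

FALLBACK_JABS = [
    "ah type something interesting or shut up",
    "ok thats it im not talking to you any more",
    "cut this cryptic shit speak in full sentences",
]

# Illustrative keyword -> retort pairs, invented for this sketch.
KEYWORD_RULES = {
    "sorry": "stop apologising and say something",
    "robot": "takes one to know one",
}

def respond(message: str) -> str:
    """Reply using only the current message; no history is carried over."""
    lowered = message.lower()
    for keyword, retort in KEYWORD_RULES.items():
        if keyword in lowered:
            return retort
    return random.choice(FALLBACK_JABS)

if __name__ == "__main__":
    print(respond("finger"))                          # no match -> random jab
    print(respond("you sound like a goddam robot"))   # keyword match -> retort
```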

Reacting Locally

Micromanagement and out-of-control executive compensation are odd in a way that dovetails precisely with what's odd about our rationalist, disembodied, brain-in-a-vat ideas about ourselves. When I fight off a disease bent on my cellular destruction, when I marvelously distribute energy and collect waste with astonishing alacrity even in my most seemingly fatigued moments, when I slip on ice and gyrate crazily but do not fall, when I unconsciously counter-steer my way into a sharp bicycle turn, taking advantage of physics I do not understand using a technique I am not even aware of using, when I somehow catch the dropped oranges before I know I've dropped them, when my wounds heal in my ignorance, I realize how much bigger I am than I think I am. And how much more important, nine times out of ten, those lower-level processes are to my overall well-being than the higher-level ones that tend to be the ones getting me bent out of shape or making me feel disappointed or proud.

A Robot Will Be Doing Your Job

I would also like to note something, though, about the process by which jobs once performed by humans get taken over by machines, namely, that there's a crucial intermediate phase to that process: where humans do the job mechanically.

Note that the "blue collar and white" workers complaining about their robotic work environments in Terkel's 1974 book Working are bemoaning not jobs they've lost, but the jobs they have.

This "draining" of the job to "robotic" behavior happens in many cases long before the technology to automate those jobs exists. Ergo, it must be due to capitalist rather than technological pressures. Once the jobs have been "mechanized" in this way, the much later process by which those jobs actually get taken over by machines (or, soon, AIs) seems like a perfectly sensible response, and, by that point, perhaps a relief. To my mind, the troubling and tragic part of the equation is the first half - the reduction of a "human" job to a "mechanical" one - and less so the second. So fears over AI would seem to miss the point.

Micromanagement; the kaizen-less assembly line; the over-standardization of brittle procedures and protocols ... these problems are precisely the same problem, and pose precisely the same danger, as does AI. In all four cases, a robot will be doing your job. The only difference is that in the first three, the robot will be you.

Defining Games

In real life, and this cuts straight back to the existence/essence notion of Sartre's, there is no notion of success. If success is having the most Facebook friends, then your social life becomes a game. If success is gaining admittance to heaven upon death, then your moral life becomes a game. Life is no game. There is no checkered flag, no goal line. Spanish poet Antonio Machado puts it well: "Searcher, there is no road. We make the road by walking."

...

Games have a goal; life doesn't. Life has no objective. This is what existentialists call the "anxiety of freedom." Thus we have an alternate definition of what a game is - anything that provides temporary relief from existential anxiety. This is why games are such a popular form of procrastination. And this is why, on reaching one's goals, the risk is that the reentry of existential anxiety hits you even before the thrill of victory - that you're thrown immediately back on the uncomfortable question of what to do with your life.

...

Bertrand Russell: "Unless a man has been taught what to do with success after getting it, the achievement of it must inevitably leave him a prey to boredom."

Leveraging the Medium

Apparently the world of depositions is changing as a result of the move from written transcripts to video. After being asked an uncomfortable question, one expert witness, I was told, rolled his eyes and glowered at the deposing attorney, then shifted uncomfortably in his chair for a full fifty-five seconds, before saying, smugly and with audible venom, "I don't recall". He had the transcript in mind. But when a video of that conversation was shown in court, he went down in flames.

Rivals; Purgatory

Somehow, even during my Sunday school days, hell always seemed a little bit unbelievable to me, over the top, and heaven, strangely boring. And both far too static. Reincarnation seemed preferable to either. To me the real, in-flux, changeable and changing world seemed far more interesting, not to mention fun. I'm no futurist, but I suppose, if anything, I prefer to think of the long-term future of AI as neither heaven nor hell but a kind of purgatory: the place where the flawed, good-hearted go to be purified - and tested - and to come out better on the other side.