Your Lying Cheating Robot Heart

About a month ago, researchers at the Georgia Tech School of Interactive Computing garnered a lot of attention when their press officer announced they'd made robots that lie.
Creative Commons License photo credit: Corie Howell

It's early days in the robot-deception research space, so the technique on display is pretty rudimentary.

To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to locations where the robot could hide. The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider's location to the seeker robot.

"The hider's set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left," explained Wagner.

The hider robots were able to deceive the seeker robots in 75 percent of the trials, with the failed experiments resulting from the hiding robot's inability to knock over the correct markers to produce the desired deceptive communication.

Abby Vogel Robinson, Researchers give robots the capability for deceptive behavior

The impression you get is that the seeker robot really didn't try all that hard. Like when you were 4 and playing hide and seek and your mom spent 5 minutes making a big production of not being able to find you even though your feet were clearly sticking out from behind the sofa.

Like I say, early days.

The real meat of Wagner and Arkin's paper is their work on the theoretical models of robot deception. When to deceive, why to deceive, how to decide how to deceive. The challenge is that deception is a game of move and countermove in communication and interpretation. I want to do X; you want to stop me, so I indicate that I'm going to do Y. You know I want to stop you, so you know that I'm likely to try to deceive you about what I want to do. I know that you know that I know that you know that I know that you know...
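To make the mechanics concrete, here's a minimal sketch of that marker trick in Python. It's a toy reconstruction, not Wagner and Arkin's code: the corridor names and the uniformly random choice of decoy are my assumptions.

```python
import random

PATHS = ["left", "center", "right"]

def hider_turn():
    """Hider picks a real hiding spot but knocks over markers on a decoy path."""
    true_path = random.choice(PATHS)
    decoy = random.choice([p for p in PATHS if p != true_path])
    return true_path, {decoy}  # the knocked-over markers are the false communication

def seeker_turn(markers_down):
    """A trusting seeker searches wherever the knocked-over markers point."""
    return next(iter(markers_down))

true_path, markers_down = hider_turn()
guess = seeker_turn(markers_down)
print(f"hider in {true_path}, seeker searched {guess}: "
      f"{'deceived' if guess != true_path else 'found'}")
```

Against a seeker that trusts the markers, the only way this fails is mechanically, which is exactly the failure mode the Georgia Tech trials reported.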
Alice in Wonderland: White Rabbit - Who Killed Time?
Creative Commons License photo credit: Brandon Christopher Warren

Think too long about deceptive behaviour and you risk falling down the kind of rabbit holes that Gladwell details in his essay about World War 2 spycraft.

At one point, the British discovered that a French officer in Algiers was spying for the Germans. They "turned" him, keeping him in place but feeding him a steady diet of false and misleading information. Then, before D Day—when the Allies were desperate to convince Germany that they would be invading the Calais sector in July—they used the French officer to tell the Germans that the real invasion would be in Normandy on June 5th, 6th, or 7th. The British theory was that using someone the Germans strongly suspected was a double agent to tell the truth was preferable to using someone the Germans didn't realize was a double agent to tell a lie. Or perhaps there wasn't any theory at all. Perhaps the spy game has such an inherent opacity that it doesn't really matter what you tell your enemy so long as your enemy is aware that you are trying to tell him something.

Malcolm Gladwell, Pandora's Briefcase, The New Yorker

Wagner and Arkin discuss this problem in their paper.

The study of deception and deception avoidance presents unique methodological challenges. Because the success or lack of success of a deception algorithm hinges not just on the deception algorithm itself, but also on the means of controlling the mark, deception results may not be indicative of successful deception per se, but rather of a weak mark. The challenge then becomes how to gauge the success of one's deception algorithm relative to a mark.

Alan R. Wagner & Ronald C. Arkin, Acting Deceptively: Providing Robots with the Capacity for Deception, International Journal of Social Robotics

Indeed, there comes a point where being a weak mark becomes an advantage. In one set of experiments, they run their deception algorithm against seeking robots (marks) with different configurations. One configuration has no sensors at all, searching for the deceiver at random. With no ability to discern the false signals being given off, this mark is immune to deception and ends up doing better than its more sophisticated but foolable compatriots.

I'm reminded of the problem of bluffing in poker and similar games. For a bluff to work, you need the other player to have a minimum level of competence at the game. They need to know enough to recognize the signals you are trying to send, but they can't be so knowledgeable that they see past your bluffing strategy.
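A toy version of that comparison, under the same assumptions as the sketch above (this is my illustration, not the paper's experimental setup):

```python
import random

PATHS = ["left", "center", "right"]

def find_rate(seeker, trials=10_000):
    """Fraction of games in which the seeker finds the deceptive hider."""
    found = 0
    for _ in range(trials):
        true_path = random.choice(PATHS)
        decoy = random.choice([p for p in PATHS if p != true_path])
        if seeker(decoy) == true_path:
            found += 1
    return found / trials

# A competent-but-credulous mark reads the false signal and believes it.
credulous = lambda signal: signal

# A sensorless mark can't perceive the signal at all, so it searches at random.
sensorless = lambda signal: random.choice(PATHS)

print(f"credulous mark finds the hider {find_rate(credulous):.0%} of the time")
print(f"sensorless mark finds the hider {find_rate(sensorless):.0%} of the time")
```

The credulous mark never wins, because it believes a signal that is false by construction; the blind mark at least stumbles into the right corridor a third of the time.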
K-9 Andy
Creative Commons License photo credit: The U.S. Army

When they want to ground their work in a concrete example, Wagner and Arkin imagine a scenario where a valuable robot must hide itself from an invading force in a compromised military base. It is valuable, and laden with useful intelligence, by way of being an artifact of the technology that enables its construction.

In our running example, the hiding robot might create muddy tracks leading up to the center corridor (the false communication) while in fact the robot is actually hiding in the left corridor.

Alan R. Wagner & Ronald C. Arkin, Acting Deceptively: Providing Robots with the Capacity for Deception, International Journal of Social Robotics

This is a good strategy if your enemy has limited time and knows you are there somewhere, but it's a significant gamble. In addition to providing the false communication that you are in a different room, you have provided the true communication that there is a robot somewhere to be found.

If surveillance focuses on communication patterns rather than communication content, some acts of deception make you easier to find. Earlier this year the EFF ran an experiment called Panopticlick, aimed at measuring how uniquely identifiable people's web browsers are. They discovered that the particularly privacy-conscious individuals who used browsers like Browzar were the easiest to track.
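The arithmetic behind that finding is self-information, the measure Eckersley's paper uses: the rarer your configuration, the more identifying bits it leaks. A rough sketch in Python; the 20 percent figure is invented for contrast, while the 7 Browzar users and the roughly 470,000-fingerprint dataset size come from the paper quoted below.

```python
import math

def surprisal_bits(matching_users, total_users):
    """Self-information of a trait: rarer traits leak more identifying bits."""
    return -math.log2(matching_users / total_users)

TOTAL = 470_000  # approximate size of the Panopticlick dataset

# A browser shared by, say, 20% of users reveals little (invented figure).
print(f"common browser: {surprisal_bits(0.20 * TOTAL, TOTAL):.1f} bits")

# The 7 Browzar users from the quote below: near-unique, about 16 bits each.
print(f"Browzar:        {surprisal_bits(7, TOTAL):.1f} bits")
```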

Sometimes, technologies intended to enhance user privacy turn out to make fingerprinting easier. Extreme examples include many forms of User Agent spoofing and Flash blocking browser extensions. The paradox, essentially, is that many kinds of measures to make a device harder to fingerprint are themselves distinctive unless a lot of other people also take them.... All 7 users of the purportedly privacy-enhancing "Browzar" browser were unique in our dataset.

Peter Eckersley, How Unique Is Your Web Browser? [PDF], Electronic Frontier Foundation

For more deception from machines, see also Bandit. Bandit is a socially assistive robot that helps people exercise, all the while spouting phrases of encouragement.

Of particular note is the moment at 1:00 when Bandit declares, "I'm having fun." No you aren't, Bandit, you really aren't. You actually, literally, do not know what fun is.

When Bandit follows that false communication with "I could do this all day," it sounds ominous. A threat from an intractable robot dad that's going to put you through your paces until you shape up.

There are deeper deceptions to Bandit, evidenced by the fact that the pronoun I have been studiously avoiding applying to the robot is "he" and not "it". Bandit is not male, but the voice and face are designed to make me feel like it is.

I'm fascinated by the white-lies aspect of deception on display here. This is very different from the wartime agents-in-conflict model that Wagner and Arkin consider. It's a kind of user-interface problem: having a robot say things or look in ways that are untrue so as to make interactions with it more pleasant. There are good reasons to do this, as it turns out.

Researchers in Japan set up two ATMs, "identical in function, the number of buttons, and how they worked." The only difference was that one machine's buttons and screens were arranged more attractively than the other. In both Japan and Israel (where this study was repeated) researchers observed that subjects encountered fewer difficulties with the more attractive machine. The attractive machine actually worked better.

Stephen P. Anderson, In Defense of Eye Candy, A List Apart

Matt Jones of BERG explored this idea right around the time that Wagner and Arkin were releasing their findings. His shorthand for it is BASAAP: "Be As Smart As A Puppy"

It was my term for a bunch of things that encompass some 3rd rail issues for UI designers like proactive personalisation and interaction, examined in the work of Reeves and Nass, exemplified by (and forever-after-vilified-as) Microsoft's Bob and Clippy (RIP). A bunch of things about bots and daemons, conversational interface.

And lately, a bunch of things about machine learning – and for want of a better term, consumer-grade artificial intelligence.

BASAAP is my way of thinking about avoiding the 'uncanny valley' in such things. Making smart things that don't try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies.

Matt Jones, B.A.S.A.A.P., BERG London

This is a strategy exploited to great success in Pleo, the robot dinosaur which looks like a baby. Clumsy behaviours are transmogrified from a bug into a feature.

lurking in the Shadows
Creative Commons License photo credit: Don Solo

Deception shows up over and over in nature, much of it the result of the slow, grinding trial-and-error process of evolution, which has given us camouflage, mimicry, infidelity, and cuckoos. Wagner and Arkin note that in primatology circles, there is an ongoing debate about whether deception by non-human primates is indicative of theory of mind. Being able to lie may not be sufficient to indicate intelligence, but it may well be necessary.

In times of conflict, deception may be the key to survival or even cooperation. In times of concord, deception may smooth out interactions, making things go better. Humans lie all the time. If we are going to live in society with robots, must they be able to lie to us?
