Jul 08 2010
 

I recently wrote about ‘What is pragmatics?’, but perhaps it’s best explained by what it isn’t. I think pragmatics is probably why this conversation doesn’t work. But what makes it go awry – or more specifically, what is it that this robot can’t do?

Click here to watch the video and enjoy!

PS

I made such a poor job of this post the first time, I thought I’d better add a big PS and have another go!

A lot of the meanings we transmit and receive in conversation are hidden or secret because they’re not decipherable from just the words we use. They get transferred via context and shared understandings about what’s relevant and appropriate. Grammar and semantics are important – really important – but they have their limits. And a great place to see where the limits lie is robots.

By robots I mean the answerphone machines that greet us and try to decipher where to route us when we call large companies, and the avatars we meet around the web like IKEA’s Anna, and robots like the one in the video above. These robots often have us pulling our hair out, crying out for an ‘agent’, ‘operator’, ‘representative’, ‘human being’ or whatever. But actually, if you think about it, human beings often have us pulling our hair out too.

And robots are getting pretty sophisticated these days. Speech recognition software has come on a bundle and a lot of grammar and semantics is programmable. But while they can recognize the words, they only work up to a point. I reckon they have mostly been programmed for grammar and semantics – because those are the easy things to program. If they aren’t programmed to grasp the hidden meanings we work with, they’ll never pass the Turing test. Use a metaphor, for example, and things are likely to go kaput.
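To make that concrete, here’s a toy sketch of the kind of surface word-matching a lot of these robots do. The rules and canned replies are made up for illustration – this isn’t how any particular real system is built – but it shows how a bot can recognize every word and still miss the intended meaning:

```python
# A minimal keyword-matching responder, sketched to show why a bot that
# only matches surface words misses pragmatic meaning. The rules and
# replies below are hypothetical, not taken from any real system.

RULES = [
    ("free", "Our catalogue is free of charge."),   # literal sense only
    ("price", "Prices start at $10."),
    ("hello", "Hello! How can I help?"),
]

def respond(utterance: str) -> str:
    """Return the first canned reply whose keyword appears in the input."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return "Sorry, I don't understand."

# Every word here is recognized, but the intended meaning is lost:
# 'Are you free on Wednesday?' asks about availability, yet the bot
# matches the keyword 'free' and answers about cost.
print(respond("Are you free on Wednesday?"))
# -> "Our catalogue is free of charge."
```

The grammar and semantics are all there in the rule table; what’s missing is any grasp of context or the speaker’s intention – which is exactly the gap pragmatics describes.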

So I was thinking that’s what pragmatics is about: the study of the secret meanings. And so what pragmatics isn’t about is the study of the stuff that that robot in the video has been programmed with.

 Posted at 5:53 am

  12 Responses to “What isn’t pragmatics?”

  1. […] This post was mentioned on Twitter by Karenne Sylvester and BELTfree, Baiba. Baiba said: What isn’t pragmatics? via @AddToAny […]

  2. You are going to have to help me here Vicki, because I’m missing something.
    Why would anyone want to speak with a robot?
    I listened to the interview and the only clue I found was that the real person behind the robot refused to answer the journalist’s request for an interview.
    So I conclude there will be a huge market for celebs and politicians who just can’t be bothered to answer difficult questions in person.
    Otherwise, if you really want to have a conversation with a robot, I recommend coming here to France and phoning any public service operator.
    Even better, try making a complaint at the Leclerc supermarket in Gaillac.

  3. How very, very weird. We’ll see how this turns out. Do you remember the TV show ‘Tomorrow’s World’? I loved that as a kid… but I guess they had to give up on it because technology changes so fast. I used to read a comic called 2000 AD when I was a kid, too.

  4. Yeah, I made a really bad job of explaining my thoughts in that post, Chris. Tell you what, I’ll add a post script. Is that allowed in blogging etiquette? Gosh, I hope so.

  5. I used to enjoy Tomorrow’s World too, Darren. Have they stopped it?

  6. Thanks for the post-script, that helped muchly…

    Vicki, I have a question though: this happens across cultures, doesn’t it? So shouldn’t/wouldn’t a comparison help?

    A past modal tucked into a “how to sell” video raised a question of both structure and meaning with some advanced students, and one of my students said to the other, “it’s the same in German” – he was being ironic… At the end of the day, we’ve decided to travel down the rabbit warren of past modals to explore their meanings and non-meanings… but I keep thinking context – I need good context!

  7. I agree with Karenne that this kind of thing happens across cultures as well.

    I also think that tone of voice can play a huge part in conveying meaning.

    To illustrate, consider the various ways in which tone of voice could be employed to alter the meaning of this sentence if it was used in a business meeting:

    1) Thank you for that contribution.

    (an excellent point, well made!)

    2) Thank you for that contribution.

    (mediocre suggestion, but I feel obliged to thank you out of politeness anyway)

    3) Thank you for that contribution.

    (daft idea! Please give the rest of us a break and keep any further thoughts you have on this to yourself.)

    I could be wrong, but I would doubt that you could program a machine to pick up on nuances like this… they are way too subtle for some humans to figure out, let alone robots!

  8. I spent hours text chatting with an AI robot last November and had the impression that she was rather like the more communication-challenged people I’ve met. With those guys, we’ve figured out where the communication constraints lie: they’re not reading much of our (incredibly complex) reaction to them.
    I wonder what the outer limits of pragmatics are. There must be so many aspects to reading each other beyond the world of speech, sound, even what we’d immediately recognize as body language.
    Just think of how we sense rather than smell hormones and make split-second decisions about our relationship with a person.
    Funny how the reporter started treating the robot like someone needing help. The same thing happened to me with BELL: http://www.innocentenglish.com/for-kids/chat-bot-introduction.html

  9. Thank you all so much for these thoughtful comments.

    Yes, there are many subtle ways in which we convey meaning beyond grammar and semantics – tone of voice and intonation are important ones, along with facial expressions, eye contact, body language, gestures, timing and pausing, not to mention how we structure our thoughts at a larger discourse level, etc. etc. etc. It’s really complex, as you point out, Sue, and yes, as you say, Karenne, context is crucial to understanding intentions.

    To the extent that culture is part of the software of our mind, some of the things we do when we communicate will be related to where we were raised, or our gender, or our education, the different social groups we belong to, etc. I think that would fall more into the domain of sociolinguistics than pragmatics, though there’s a lot of overlap. But human beings are incredibly varied – different people with their own original thoughts. To reach an understanding we often have to go through a process of negotiating meaning even with people from our own culture, so I think the challenges AI programmers face encompass but go beyond culture.

    There’s an interesting article here about some AI bots that are being used to teach children. Researchers are quoted as saying that one of the problems they are having is the unpredictability of the kids and the impossibility of anticipating everything that can happen.
    http://www.nytimes.com/glogin?URI=http://www.nytimes.com/2010/07/11/science/11robots.html&OQ=_rQ3D1Q26emcQ3Deta1&OP=986ce38Q2Fml_8my9Q250Q7B99(Q5CmQ5CQ27kQ27mQ273mkkm0Q25Z_Q20Q25_mkkQ7B989(0MW(jG

    I thought it was interesting that they seem to be having some success with autistic children. I wonder if the more limited communication signals might make interacting with a bot less overwhelming than interacting with a human. And maybe gaze-following with a bot might feel less threatening for autistic kids.

    I thought that reporter’s reaction to the bot in the video was very interesting as well, Anne. It reminded me of the way we might talk to a child. On a rational level we know that a bot isn’t going to develop ‘theory of mind’ like a human baby will, but perhaps instinctively, we treat them like they will. Put a baby in our laps and we lock eyes and start talking to it. As I understand it, many AI researchers are tapping into that innate ‘help-the-child’ instinct to train their bots.

    Here’s a conversation I had yesterday with an AI robot called Splotchy (‘you’ was me in this conversation):

    you ==> Hi Splotchy, you’re looking good today
    splotchy ==> I understand
    you ==> Good. Would you like to chat?
    splotchy ==> I wish I could talk to a wombat
    you ==> I’m afraid I’m not a wombat
    splotchy ==> I bet you really are a wombat
    you ==> How much are you willing to bet?
    splotchy ==> enough
    you ==> $500, for example?
    splotchy ==> aaaaaah

    The last part of that conversation was getting quite human-like – a conversation we could have with a child, perhaps? I wondered if there might be a lesson here. Give the students samples of real human conversations and bot conversations and get them to spot which are which – so, spot the pragmatic failures. I fear it would take a long time to collect good samples though.

  10. I tried it: I took a longish part of my dialogue with BELL (copy and paste, copy and paste!) into my in-house course at a biotech company (upper intermediate/advanced) and had them read it and respond, saying which exchanges seemed natural to them. Their findings then went into this exercise:
    http://www.spotlight-online.de/language/grammar/the-chatbot-and-i-grammar-and-pragmatics
    It’s just a start – I hadn’t seen it done before – and I’m sure it could be done with greater sophistication. I like the idea of trying out ways into our more natural dialogues. BELL got those beginnings quite wrong. And my students loved this mix-up:
    Human: Are you free on Wednesday at 2 p.m.?
    BELL: All entities are created equal.
    Human: “Free” can mean “available”.
    BELL: Define “free”.
    Human: “Are you free on Wednesday” means “Do you have time on Wednesday?”
    BELL: All entities are created equal.
    Human: You just don’t want to learn this phrase.
    BELL: Yes, I am trying to learn as much as possible.
    Human: But I am trying to teach you how to make an appointment.

    Once a teacher, always a teacher 😉

  11. Ha! Ever the teacher indeed! Anne, yet again you’ve inspired me! I was on the point of giving up, but I’m gonna go back and have a go at it. Thank you!

    I loved your exercises. I got a lot of the answers wrong, but it’s actually totally irrelevant because it’s the discussion that’s interesting – what sounds natural and why and what doesn’t sound natural and why. I also loved the way you approached it from the point of view of teaching the bot to do things.

    Have you seen the transcripts over at the Loebner prize site? http://loebner.net/Prizef/2009_Contest/loebner-prize-2009.html
    It’s a bit complicated as I recall because you have to download their plug-in to view them, but I’ll see if I can remember how to do it and start there. I’ll let you know how I get on.


  12. […] What is Pragmatics, https://www.merican.vickihollett.com/?p=2551 Vicki Hollett, What isn’t Pragmatics, https://www.merican.vickihollett.com/?p=2595 Steve Flinders & Ian McMasters, Communicating Internationally, IATEFL 2011 […]
