Language and Artificial Intelligence

Linguist David Adger on the Facebook chatbot experiment, the history of getting AI to understand and produce human language, and the difference between how humans and computers use language

July 28, 2020

A few years ago, there was a very weird incident: a group of researchers at Facebook developed a couple of chatbots and got them to talk to each other. The chatbots then started to change the way they were talking to each other, and it began to look really, really strange: they started off with English words, but the grammar changed, and the chatbots were no longer really using English sentences. At some point, the researchers at Facebook were like, ‘Oh, what is this? This doesn’t look anything like English’. But interestingly enough, the chatbots were still doing what the researchers wanted them to do.

What the researchers wanted them to do was to negotiate: they were interested in seeing if they could build a system where a chatbot would be able to negotiate with a human. The way they did this was essentially to get zillions of humans to negotiate with each other about something silly, like ‘I want three balls, and you want two boxes’, where they had to figure out ways of satisfying each other’s needs through a conversation. They recorded all of those negotiations, fed that data into the chatbots, and then got the chatbots to start talking to each other.

The plan for the chatbots was to win the negotiation, to get the best deal. As I’ve said, what happened is that the chatbots went a bit crazy, and the researchers were like, ‘That doesn’t really look very much like English at all’, so they turned off the program. It was quite amusing in the press at the time, because the press went crazy. You can imagine: ‘Oh my god, these AIs are going to take over the world; they’re creating their own language’ and stuff like that. That wasn’t really what was happening. Well, it was sort of what was happening: they were creating their own way of communicating. But the researchers weren’t worried that the chatbots were going to take over the world; the issue was that they hadn’t asked the chatbots to keep to English, they had only asked them to win the negotiation. So they redid the experiment with that constraint, and the chatbots spoke more like English again.

But it was a very interesting experiment because it raised two interesting questions, one of which is: what would happen if AIs became so good at language that they could just use it to communicate with each other? Would that leave us behind? That’s the question I want to address.

Could AIs like Alexa or Siri become as good at language as we are? And moreover, would they do it in the same way as we do? Would they be using language the way human beings use it?

The answer I’m going to give is that they could get pretty damn good, as good as us or better (whatever that means), but they would be doing it in quite a different way. If we look back at the history of how artificial intelligence has dealt with language, back to the 1970s, which is when this really started to happen, AI engineers were interested in whether they could get AIs to talk to human beings like the computer on Star Trek. So people started building systems, and the most famous one had the really weird name SHRDLU. It was built by Terry Winograd, and it was a very simple system in which you had a world consisting of coloured blocks of different sizes. The program knew the world, and you could interact with it: you could say something like ‘Put the blue block on the red block’, and the system would reply, ‘I have put the blue block on the red block’, and it would work like that.

To make that system work, Winograd took everything he knew about the grammar of English and turned it into computerised rules; he then connected those rules with the meanings of the various words, and with the meanings of the ways the words were put together in this little world, and used all of that to build the program. It’s quite an incredible piece of work, actually. You could get the program to do things, a bit like on Star Trek, but it was just this world of blocks, and that’s not really very useful: blocks are not great.

So the question was, could you take something like SHRDLU and expand that and make it completely general, make it work like a human being works? That’s what people tried all through the 70s and 80s, and it’s really difficult. It’s really difficult because you never know what human beings are about to talk about: human beings talk about all sorts of stuff, and they talk about things that are not present in front of them; they talk about things that are in their imaginations… To get a computer program to do that is just incredibly difficult.

So what people did during the 70s and 80s was attempt to get the grammar better: bigger sets of rules connected to bigger domains. In fact, they started to computerise that effort as well: they would get loads and loads of text, get hundreds of linguistics graduate students to analyse all the texts, then get a computer program to trawl through the text and extract the rules, and then use those rules to build another computer program to try to understand the language. That’s basically what people did in the 80s. People are still doing it; these gigantic collections of analysed sentences are called treebanks, and they’re still very, very useful. But it never really worked, because of the same problem: you never know what people are going to talk about, so things just break, basically. These systems are very fragile; they don’t work the way humans work, or at least they’re not as flexible as we are.

Something happened in the 80s, though, that created a new kind of artificial intelligence, one that does things in a very different way, and that’s what surrounds us right now. Alexa, Siri and ‘Hey Google’ are all using this new approach. In these systems, you don’t try to work out what the rules are: instead, you build a system that is very general and very good at analysing whatever you throw at it. These systems are called neural nets. Essentially, you feed the system some data, you tell it what output you want, and then you just let it go: it turns and turns and turns, trying to match the data it’s got to the output that you want. Once it’s turned a lot, you’ve got a new system, one structured in a way that gives you the output you want for the data you give it. That’s how most AI systems work these days.
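To make that concrete, here is a minimal sketch of that train-by-example loop: a toy network nudged, pass after pass, towards the output we asked for. Everything in it (the data, the network size, the learning rate) is invented for illustration and has nothing to do with any real assistant.

```python
# A toy version of "feed it data, tell it the output you want, and let it
# turn": a tiny two-layer network trained by gradient descent. All numbers
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and the outputs we want the net to produce (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Let it go, and it just turns and turns": each pass nudges the weights
# so the net's output moves a little closer to the output we asked for.
for step in range(20000):
    h = sigmoid(X @ W1 + b1)      # hidden layer
    out = sigmoid(h @ W2 + b2)    # the net's current guess
    err = out - y                 # how far from the desired output
    d_out = err * out * (1 - out)          # backpropagate the error...
    d_h = (d_out @ W2.T) * h * (1 - h)     # ...through both layers
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # should now sit close to [[0], [1], [1], [0]]
```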

They’re really quite brilliant: if you say, ‘Oh Siri, play me Rachmaninoff’, then Siri will probably play you some Rachmaninoff. They do that essentially by using these neural net systems. It’s quite interesting, because what they really do is not much like what a human being does at all.

When I say ‘Play Rachmaninoff’ to you, it goes into your brain, you work out what it means, and you go and get a Rachmaninoff record. What these systems do is hear what you’re saying and parse it statistically using these neural nets, sending it off across the Internet to Apple or Google servers in Ireland or California or wherever they are, which crunch all that data and spit the answer back to you.

It’s like a global intelligence, really, which is quite fascinating.

So they manage all those sounds, and that works really well, but then they’ve got to manage the grammar: they’ve got to figure out what you mean by saying ‘Siri, play me some Rachmaninoff’, and they’re pretty bad at that. All they essentially do is make a good guess by looking at what those words most probably mean: looking across the whole of the Internet, what is the most likely meaning of those words? The answer is ‘Play Rachmaninoff’. All you’re doing is taking the various words and putting them together: it’s as if you’ve got a soup of words, you’re stirring them all around, and you’re just making the best guess. That’s sometimes called the ‘semantic soup’ strategy. You’re using keywords.
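A minimal sketch of that keyword strategy might look like this; the intents and keyword sets are entirely made up, and real assistants are far more elaborate, but the blindness to word order is the point.

```python
# "Semantic soup": ignore word order, treat the request as a bag of words,
# and pick the intent whose keywords overlap it most. All intents and
# keywords below are invented for illustration.
INTENTS = {
    "play_music":  {"play", "music", "song", "rachmaninoff"},
    "set_alarm":   {"set", "alarm", "wake", "morning"},
    "get_weather": {"weather", "rain", "forecast", "today"},
}

def guess_intent(utterance: str) -> str:
    words = set(utterance.lower().split())   # stir the words into a soup
    # Best guess: the intent sharing the most keywords with the request.
    return max(INTENTS, key=lambda intent: len(words & INTENTS[intent]))

print(guess_intent("play me some rachmaninoff"))   # play_music
print(guess_intent("rachmaninoff play"))           # also play_music:
# word order never mattered, which is exactly the strategy's weakness.
```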

That’s not how humans do it, but there are newer systems. A couple of years back, Google started to use a different approach, mainly in Google Translate at first, though it’s now being used everywhere. This approach doesn’t just use keywords to work out what’s going on: it uses not only the likelihood of a bunch of words occurring together but also which word is most likely to follow or precede another. So if I say to you something like ‘the cat sat on the…’, you’re probably going to say ‘mat’, because you’ve heard that a thousand times. If I said ‘the cat sat on the cloud’, that’s a bit more unusual. If I say ‘the cat sat on the independence’, that’s really unusual, and if I say ‘the cat sat on the the’, you’re like: what?! These are all sentences of English; you can work out what they might mean. For the last one, you can imagine there’s a big book with words on it, and the cat sat on the word ‘the’. Perfectly reasonable. But ‘the cat sat on the the’ is really rare; it’s pretty much never going to happen, whereas in ‘the cat sat on the mat’, the word ‘mat’ after ‘the’ is really common.
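Here is a minimal sketch of that counting idea, using a toy corpus invented for the purpose: tally which word follows which, and continuations like ‘the the’ simply never show up in the counts.

```python
# Count how often each word follows each other word in a pile of text,
# then use the counts to judge how expected a continuation is. The tiny
# "corpus" is invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the mat . "
    "the cat slept on the sofa . the cat sat on the chair ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def p_next(prev: str, word: str) -> float:
    """Estimated probability that `word` comes right after `prev`."""
    total = sum(follows[prev].values())
    return follows[prev][word] / total if total else 0.0

print(p_next("the", "mat"))    # fairly likely: 'mat' often follows 'the'
print(p_next("the", "cloud"))  # 0.0 in this corpus: unusual continuation
print(p_next("the", "the"))    # 0.0: 'the the' essentially never happens
```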

So you can use how commonly certain words follow other words to make a good guess at what the grammar of the sentence is going to be. That’s what these new systems do: if you’ve noticed Google Translate getting better in the last few years, it’s because it started to pay attention to this idea that you can look at the statistical likelihood of one word following another. There are really sophisticated versions of this now, called long short-term memory (LSTM) neural nets, and what they do is also keep a memory of what they’ve already heard. So, in my example ‘the cat sat on the mat’, they’ll remember the cat, and they’ll treat mats as pretty likely places for cats to be. Essentially, they’ve got a sort of memory, and you can use that to make the grammar work.
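As a rough sketch of what ‘keeping a memory’ means mechanically, here is a toy LSTM reading a sentence one word at a time. PyTorch is my choice here, not something the talk names, and the net is untrained, so this only shows the shape of the machinery: the state carried from word to word is the memory a trained model would use to score continuations.

```python
# A toy LSTM stepping through a sentence word by word, carrying a hidden
# state forward: by the time it reaches the final 'the', that state still
# "remembers" the earlier 'cat'. Untrained and illustrative only.
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
embed = nn.Embedding(len(vocab), 8)              # word -> vector
lstm = nn.LSTM(input_size=8, hidden_size=16)     # the memory machinery

sentence = ["the", "cat", "sat", "on", "the"]
ids = torch.tensor([[vocab[w]] for w in sentence])  # (seq_len, batch=1)

state = None
for step in embed(ids):              # feed one word at a time
    _, state = lstm(step.unsqueeze(0), state)
# `state` now summarises the whole prefix; a trained model would use it
# to rate 'mat' as a likely next word, having remembered the cat.
```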

So let’s take an example like ‘the foxes in my greenhouse are jumping around’. If you think about it, ‘foxes’ and ‘are’ are quite far away from each other, and the noun right next to the verb is ‘greenhouse’. But you don’t say ‘the foxes in my greenhouse is jumping around’; you say ‘are’, because it’s the foxes you’re talking about, not the greenhouse. That has always been really difficult for these neural nets. The new sophisticated ones can get it: they can figure out which noun the verb is most likely to agree with.
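The test itself can be sketched very simply: give a model the sentence up to the verb and ask whether it rates the correctly agreeing form above the distractor. Everything below is a hypothetical shape for that test, with a deliberately naive one-word-back model included for contrast.

```python
# A Linzen-style agreement test: does the model prefer 'are' (agreeing
# with the distant subject 'foxes') over 'is' (agreeing with the nearby
# 'greenhouse')? `model` stands in for any trained next-word predictor.
from typing import Callable, List

# A next-word model: given the words so far, return P(word | prefix).
NextWordModel = Callable[[List[str], str], float]

def passes_agreement(model: NextWordModel) -> bool:
    prefix = "the foxes in my greenhouse".split()
    return model(prefix, "are") > model(prefix, "is")

# A deliberately naive baseline that only looks at the previous word:
# it sees 'greenhouse' and bets on 'is', the exact mistake in question.
def last_word_model(prefix: List[str], word: str) -> float:
    return 0.9 if (prefix[-1], word) == ("greenhouse", "is") else 0.1

print(passes_agreement(last_word_model))  # False: no memory of 'foxes'
```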

I’m going to finish by saying: wow, that looks incredible, but it turns out that these neural nets don’t actually do this in the same way that we do. People have done experiments: a researcher called Tal Linzen has run experiments on these sophisticated neural nets, treating them as though they were humans, getting them to do tasks with language while humans do the same tasks. Both the nets and the humans make mistakes, but the mistakes the neural nets make are quite different from the mistakes the humans make. So even though their performance, their outward behaviour, is very similar, what’s going on deep down is quite different for each. It turns out that even though neural nets, the kind of AIs we have just now, can be absolutely brilliant at our language, what they’re doing is something quite different from what we human beings are doing with language.


Professor of Linguistics at Queen Mary University of London