Yorick Wilks is a British computer scientist who has contributed to a wide range of academic fields, including philosophy, artificial intelligence, computational linguistics, natural language processing, and machine translation. He is Emeritus Professor of Artificial Intelligence at the University of Sheffield, and Visiting Professor of Artificial Intelligence at Gresham College in London, a position created for him. He is a Research Associate at the Oxford Internet Institute, Senior Scientist at the Florida Institute for Human and Machine Cognition, and a member of the Epiphany Philosophers. He is a Fellow of the British Computer Society, the Association for Computing Machinery, the European Association for Artificial Intelligence, and the American Association for Artificial Intelligence.
Yorick Wilks was born in 1939 in Buckinghamshire. His father was a builder and furniture maker who, during the war, built bombers and wooden planes in London. His father died when Yorick was eleven. Yorick has a younger brother who is a retired accountant.
Yorick describes his childhood as a classic working-class childhood. His father was a great influence and, despite having left school at thirteen and been educated by the Army, he knew the value of education and told Yorick quite firmly: ‘You are going to go to Cambridge.’ Yorick adds: “Now why he said that, and what he thought going to Cambridge meant, I don’t know. He was a very intelligent man, but uneducated. But he was right, I did as he said.”
Having gone to the local primary school, Yorick passed his Eleven Plus and went on to grammar school – the only boy from his school to do so. After the death of his father, the family moved to Devon and Yorick went to Torquay Boys’ Grammar School, where he earned a scholarship to Cambridge. He says: “I went to Pembroke College, Cambridge and I went there to do physics, but on arrival I didn’t want to do physics, so I changed to mathematics, and after a year or two of mathematics, I changed to philosophy. My career has been in artificial intelligence, so in some sense my life has been going down from more difficult subjects to easier ones all the way; physics, mathematics, philosophy, artificial intelligence. It’s been steadily downhill.”
Yorick says that he didn’t work very hard at Cambridge and instead spent much of his time doing theatre and politics. Speaking about the influences on his life at that time, he says: “I suppose my influences were largely literary at the time. People like Bertrand Russell and George Bernard Shaw were huge influences. They were intellectual giants in those days. I wasn’t that influenced by teachers locally. Margaret Masterman, who was the philosophy tutor for my college, she became the most important influence in my life intellectually, but as a teacher she was terrible. … So really, I was entirely self-taught.”
Cambridge Language Research Unit
Following his degree, Yorick started on his PhD at Cambridge and went to work for Margaret Masterman, his ex-philosophy tutor, who ran a small research unit called the Cambridge Language Research Unit, which did research in machine translation, information retrieval, and language and machines. Yorick says: “She was an utterly inspiring person with enormously strong views on language machines and religion. She was a charismatic character; the kind of person who founds religions, she was that kind of force. She made me believe that metaphysics was very important.”
Having studied under the philosopher Wittgenstein, Margaret believed that metaphysics was tied up with the nature of language. Yorick adds: “What she had carried away from him was this idea that metaphysics was strange and bizarre, but whatever it was, it was (a) important, and (b) tied up with language and how language works. Unlike Wittgenstein, Margaret had become a more practical person, and thought that work with computers on language would show you how language worked, and therefore could show you how metaphysics functioned, and that’s what I wanted to do my PhD on.”
Yorick’s thesis was ‘Argument and Metaphysics’, and it took him several years to complete, including a trip to America to gain access to sufficiently powerful computers which weren’t available in Margaret’s unit.
The first computers that Yorick used he describes as “absurdly primitive.” He explains: “Initially all we had was a Hollerith card sorting machine. It looks like a horse on four legs with a range of slots. It was built by the Americans to sort punch cards for elections. It sorted on the holes in the cards. But a very, very ingenious man at this unit called Fred Parker-Rhodes worked out how you could use repeated sortings of cards to do any computing at all.” Yorick used it to do the syntactic analysis of some sentences using a theory of syntax which he says “we believed in at the time, but which nobody believes in anymore.”
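Parker-Rhodes’s general construction is not given in the interview, but the core trick he exploited – that repeated stable sorting passes, one hole column at a time, compose into something more powerful than any single pass – survives today as radix sort. A minimal sketch in Python (the function names are illustrative, not drawn from the interview):

```python
# Illustrative sketch: a card sorter drops cards into one slot per hole,
# preserving their relative order -- i.e. one stable sort on one column.
# Repeating that pass, least-significant column first, sorts whole numbers.

def card_sorter_pass(cards, column):
    """One pass of the machine: stable-sort cards on a single digit column."""
    slots = [[] for _ in range(10)]          # one slot per possible hole
    for card in cards:
        digit = (card // 10 ** column) % 10
        slots[digit].append(card)            # cards keep their relative order
    return [card for slot in slots for card in slot]

def radix_sort(cards, columns=3):
    """Repeated sortings of the cards, one column per pass."""
    for column in range(columns):
        cards = card_sorter_pass(cards, column)
    return cards

print(radix_sort([329, 457, 657, 839, 436, 720, 355]))
# -> [329, 355, 436, 457, 657, 720, 839]
```

Sorting is, of course, only the simplest case; Parker-Rhodes’s claim was that such passes suffice for computation in general.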
In 1966, he was invited to go to California by one of the directors of a US Air Force research organisation that provided grants to Margaret’s research unit. Yorick was given a one-year contract by the Air Force and moved to Los Angeles, where he was able to use IBM 360 machines at Systems Development Corporation.
After the initial year, Yorick stayed on; he says: “This was the era of Los Angeles being the centre of sex, drugs and rock ’n’ roll, so I had no urge to leave. People were wearing wonderful clothes, sitting in parks, carrying flowers, taking drugs. I was doing all those things, but I was also working and writing books. It was such a wonderful time.” He supported himself for another two years while he wrote his thesis by taking a job in TV as a small-part comedian, drawing on his acting experience at Cambridge.
Stanford Artificial Intelligence Laboratory
He returned to the UK and submitted his thesis in 1968 before returning to the US to join the Stanford Artificial Intelligence Laboratory run by John McCarthy, which he describes as “exactly the place to be at the time. If you couldn’t be in MIT, you should be in the Stanford AI Lab, if you wanted to be in AI.”
John McCarthy had invented the programming language called Lisp which drove artificial intelligence for decades. Yorick adds: “He made logic as acceptable as a basis for reasoning and thought as anybody ever has. He ran this amazing lab on American defence money where people did anything they wanted. … I was there trying to develop my thesis, away from the metaphysics, into what had been the core of it, which was semantic representation of language, I was trying to turn that into a machine translation project.”
Institute for Semantic and Cognitive Studies, Lugano, Switzerland
In 1974 Yorick moved back to Europe and became a research associate at the Institute for Semantic and Cognitive Studies in Lugano, Switzerland, which had been established by a millionaire called Dalle Molle to invent a universal language of reasoning. Yorick says: “Our director, who was a Swiss German, kept the owner happy by claiming to invent this language, but he wasn’t really, he was directing an institute that was doing artificial intelligence. We were having a good time and I developed some of my ideas there, away from meaning representation and more towards belief structures. I became interested in belief, what it is to believe, what human belief models are, and how we detect what other people believe, things like that.”
When the Institute moved to Geneva in 1975, Yorick chose to return to the UK and went to Edinburgh for a fellowship year, where he worked on developing belief models with a Polish scholar in Warsaw.
University of Essex
After completing his year in Edinburgh, Yorick decided to stay in the UK and get a “real job”. He took a readership in language and linguistics at the University of Essex and became Chair of Theoretical Linguistics one year later. He admits that he didn’t know much about linguistics, saying: “I read a book on introduction to linguistics on the train on the way to the job interview, but there wasn’t a very big gap between what I was doing and computational linguistics.”
The Eurotra machine translation system
In 1977, Yorick became part of the Eurotra machine translation project, which was funded by the European Commission and aimed to devise a translation machine. Yorick says: “They needed it because the volume of translations inside the Commission was so great, it was greater really than they could find humans for, particularly to translate between the major languages of French, German and English. They were using SYSTRAN, which they didn’t like largely because it was American.”
SYSTRAN had originated during the cold war as a Russian translation machine allowing US scientists to read Russian scientific theses.
The EU project cost 70 million euros and ended up as a failure. Yorick says: “The good thing about Eurotra was that it served to create a very large group of trained people in the European countries on computational linguistics. …We all agree Eurotra was a failure, but it was a sort of grand failure in a way. What it showed was that the American method of doing things was simply better. One of the deep mysteries in Europe is it has no big software companies that can rival Apple, Facebook, Amazon, Microsoft. Why is that? Every time we create one, the Americans buy it. But we can’t do it, there’s nothing in Europe as influential and powerful. The Chinese have got them in Baidu. Eurotra, in a way, is almost a microcosm of that. It’s a failure in design and execution, although it had so much expertise, and yet the big American machine translation companies went on and took over the world. It’s a great mystery.”
Yorick took part in the Eurotra project while at Essex where he had moved from the linguistics to the computer science department. He says: “I found that being in the linguistics department was not good for getting government grants into artificial intelligence so I moved to the computer science department.”
In 1985, realising that the Eurotra project was going to fail, and with his Cuban wife getting bored of the English climate, Yorick applied successfully to become director of a state-funded artificial intelligence laboratory in Las Cruces, a city in southern New Mexico.
He says of life there: “It was great. The laboratory was really quite successful. We had some good people, we got some good grants, we did some really good work.”
Yorick was director for eight years, during which time the lab helped to found a new technology called information extraction. He says: “I’ve been the proponent, through my thesis and my research life, of deep analysis of language by computer, but suddenly the wind changed, and we realised that that wasn’t producing the results we hoped for and that maybe there was more to be got by a more superficial skimming of language for certain limited purposes. That was a technology which DARPA, the American funding agency, helped to fund.”
University of Sheffield
In 1993, following his divorce from his first wife, Yorick and his son returned to the UK. He had also started a new relationship and had another child. Yorick became the Professor of Computational Linguistics at the University of Sheffield where he stayed for almost ten years. He says that many of his students from his ten-year tenure are now professors in the field.
At Sheffield, Yorick set up a group and a government-funded software platform called GATE, which anybody could download to do their own language understanding. He explains: “I, together with Rob Gaizauskas and Hamish Cunningham, set up GATE and that has been amazingly successful. It now has its own tutorials and has been downloaded thousands and thousands of times. It’s quite complex, but it enables any laboratory, any person if they have the time, to do computational linguistics on texts and it embodies all kinds of functionalities and different modules. Modularity in a platform; it’s the classic model, but we were one of the first to do a modular platform that anybody could use for nothing. The core of it was information extraction which I had started in New Mexico.”
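The modular design Yorick describes – interchangeable processing modules that each add annotations to a shared document – can be sketched in a few lines. This is a toy illustration of the idea only; GATE itself is a Java platform, and none of these names are drawn from its actual API:

```python
# Toy sketch of a modular annotation pipeline: each module reads the
# document, adds its own annotations, and passes the document on.
from typing import Callable, Dict, List

Document = Dict[str, object]             # text plus accumulated annotations
Module = Callable[[Document], Document]  # every module has the same shape

def tokenise(doc: Document) -> Document:
    doc["tokens"] = str(doc["text"]).split()
    return doc

def find_names(doc: Document) -> Document:
    # Crude stand-in for information extraction: capitalised tokens
    # are treated as candidate names.
    doc["names"] = [t for t in doc["tokens"] if t[0].isupper()]
    return doc

def run_pipeline(text: str, modules: List[Module]) -> Document:
    doc: Document = {"text": text}
    for module in modules:               # modules are swappable and reorderable
        doc = module(doc)
    return doc

result = run_pipeline("Yorick Wilks founded the lab", [tokenise, find_names])
print(result["names"])  # -> ['Yorick', 'Wilks']
```

Because every module shares one interface, a laboratory can replace any stage – a better tokeniser, a different named-entity finder – without touching the rest of the pipeline, which is the point of the design.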
In 1997 Yorick led the team that received the Loebner Prize for machine dialogue. He explains: “This was another thing I was doing in parallel with GATE. I had spent quite a number of years working on belief systems and how humans in order to communicate have to have a model of the beliefs of the other person. This led to a theory of dialogue, a quite deep complex theory, which was very hard to implement in computers because it required so much complexity and power. So just as I shifted from a deep theory of senses to a more superficial theory, I then did the same move for dialogue, and realised that you could probably model human dialogue with more superficial methods than these belief models.”
The Loebner competition is based on the Turing test: can you hold a dialogue with something and tell whether it is a machine or a person? In the competition, journalists talk to a range of laptops and machines and then decide whether there is a person or a machine behind each one. Yorick adds: “Every year the machines got better. They never actually beat the people, but they got closer to them.” Yorick led a team at the request, and with the funding, of David Levy, a Scottish chess International Master who became interested in dialogue and wanted to win the competition. The team created a system called Catherine, with the persona of a British journalist. Yorick says: “It was the biggest Loebner program they’d ever loaded. It was full of dictionaries and thesauri and it had a lot of knowledge of things. It wasn’t superficial. Catherine knew a lot of things and she remembered what you had said, and if you went back, or changed your mind, or said something, she spotted this. Catherine was really quite a good conversationalist.”
Yorick also managed a large-scale EU project called Companions. He explains: “It came out of the Loebner work. I got interested in dialogue and I wanted to go on with machine dialogue, but I wanted more than dialogue. I began to write about the idea that what we needed were computer companions that had dialogue, but it was more than dialogue; they would know about us and be our personal companion. It sounds obvious now because we live in the world of Siri and Alexa and Cortana, but 20 years ago, in the late nineties, this was not common talk.”
Yorick’s motivation for the original EU project was to provide company for old people. He adds: “We wanted a good conversationalist in Companions that would talk to people, know about them, be their companion. It’s personal. It wouldn’t be owned by the State or a company. Your data would be safe with it. It wouldn’t just give you company, but ideally of course it would do all the things they do now, call the restaurant, order food, call the taxi. In a dream world it would deal with forms for you.”
Unfortunately, the project became riven by politics, and the EU Commission, which was funding it, requested that Yorick be removed from the project.
Oxford Internet Institute
In 2003 Yorick moved to the Oxford Internet Institute on sabbatical, where he began to consider further ideas about companions. He wrote a paper called ‘Death and the Internet’ in which he envisaged companions being able to provide family and friends with additional information after a person’s death. He explains: “The idea that, properly understood, your companion could become your substitute after death. If it knew all about you and if your relatives could talk to your companion, they’d find out more about you than talking to you. … I’ve seen a gravestone in Italy with a tiny solar panel where you can press a button and the person talks to you on the gravestone. I thought, what a brilliant idea. … and now with fake video, of course you can talk to dead people. Talking to dead people’s going to become normal.”
Institute for Human and Machine Cognition
At the age of 70, Yorick moved to Florida to work at the Institute for Human and Machine Cognition, founded by Ken Ford. While there, Yorick founded a new AI group to research metaphor, cybersecurity, belief, and emotional propagation in groups.
Yorick says: “Somebody in navy research had read my papers on belief and liked them. So, I managed to raise some substantial grants on models of belief in others and how individuals communicate. … One of the things I think I can say we established is that when you detect changes in belief in people, computationally, it’s often preceded by changes in their emotional level in the language they use. So, emotion can become a signal, if not a trigger, for belief change. I thought that was interesting because, it’s much cheaper computationally to detect emotional change in language than change in belief. Change in belief is very complex and difficult; to find out that someone’s belief has shifted is a very big computational task. If change in emotion is a trigger or a clue to change in belief, that will make it much easier to detect.”
Yorick has also run grant-funded projects on understanding metaphors and the role of emotion. He continues: “I’ve come to realise in the last decade a thing I hadn’t seen before, how important emotion is in artificial intelligence. Once upon a time there was one man in this country called Aaron Sloman, in Brighton, who said emotion was important in AI and everybody thought he was mad. Emotion? Artificial intelligence? Are you crazy? But he’s right. Everyone now agrees, emotion is crucial to the understanding of language and understanding of people.”
As he approaches his 80th birthday, Yorick has turned to writing a popular book on AI – ‘Artificial Intelligence: Modern Magic or Dangerous Future?’. He enjoyed the process so much that he’s now planning a second. He has also become interested in artificial intelligence and religion and is working with a group of people in Cambridge who want to do some interesting and different things on AI and religion, which he says might be the subject of another book.
Yorick says: “What I did wrong, and I advise against, can be summed up in a simple slogan: if you are in a very good university, stay in it. I have been in a very good university several times, Stanford, Cambridge, but I always moved. I’ve been peripatetic, I’ve moved around for all kinds of personal and crazy reasons. That wasn’t a good idea. The other thing I did wrong is similar; I lacked the stamina that some of my friends have had to stick with a theory, or a claim, or a research idea. Stick to it and develop it thoroughly for your life.”
On the things that he is most proud of, Yorick says: “I was lucky; I got picked up and put into the very place, like Cambridge, where I had every chance to do well. I’m not sure I’ve fulfilled it at all, so I don’t know I’m proud. I think this thing I call preference semantics is a bright idea. I don’t think I own it, other people had parts of it before me, but I pushed that for some distance. I think the stuff on belief models is still a pretty good idea. I think the idea of companion, although, companions are now everywhere, and I don’t own the word, but I was one of the first people to use it. So, I think there were three reasonably good ideas, but I don’t think I had the stamina to see them through to the end.”
Biggest challenges for AI in the next ten years
On the general public’s attitude to AI, Yorick says: “I think the educated public, and even the uneducated public, are quite suspicious of AI now. … They’re double-minded. They see the benefits; they can’t imagine living without intelligent phones. People are aware that there are dangers and there are positives; this is always true of science. Lots of science is both frightening and enormously beneficial and AI is just like that. As always with these things, science, AI, engineering, have given us the tools that are dangerous, but it will also, I believe, I’m an optimist, give us the tools to correct it. So however dangerous we think Facebook and political advertising are, it will also give us the tools to correct it.”
Yorick says the biggest challenge for AI concerns the limits of deep learning. He explains: “The best people in deep learning can already see what the limitations are. They know that much of the success of deep learning is because of very careful choice of what are called priors. They choose very carefully which features to probe the data with. You get some more naïve deep learning people who say, ‘Well, give me lots of data, more data. Data will do everything for us.’ This is nonsense. Data tells you nothing. One thing we know is that human beings do not learn in the way deep learning systems do. Human beings learn intuitively and immediately, and we don’t know how. … Deep learning has had some great advances, but it’s narrow and it’s fragile. We’re still going to have to go back at some stage to other methods that try to mimic how it is we do things.”
“I think it’s still important to go and live in America for a bit. I did that. I think in artificial intelligence, America is still, for now, so far ahead of Europe, at least in the applications of AI, and still so far ahead of Japan, and China. I think they (America) have the right way to do AI and to think about it.”
Roles and Honours
Emeritus Professor of Artificial Intelligence at the University of Sheffield.
Visiting Professor of Artificial Intelligence at Gresham College in London.
Research Associate at the Oxford Internet Institute.
Senior Scientist at the Florida Institute of Human and Machine Cognition.
Member of the Epiphany Philosophers.
Fellow of the British Computer Society, the Association for Computing Machinery, the European Association for Artificial Intelligence, the American Association for Artificial Intelligence.
In 1997 he led the team that won the Loebner Prize for machine dialogue.
In 2008 he received the Zampolli Prize of the European Languages Research Association and the Lifetime Achievement Award of the Association for Computational Linguistics.
In 2009 he got the Lovelace Medal of the British Computer Society for contributions to meaning-based understanding of natural language.
Interviewed by: Elisabetta Mori on the 5th September 2019 at The Reform Club
Transcribed by: Susan Hutton
Abstracted by: Lynda Feeley