Dr Noll has been active in the Internet and computing industries for over 60 years, and a theme of this conversation is that there is nothing new under the sun. Michael is an entertaining speaker and a great communicator: I am indebted for his kind and clear explanation that his birthplace of “Nork, New Jersey” is not to be linguistically confused with our town of “New-Ark on Trent.”
He started his career with an incisive paper in 1961 about the opportunities and dangers that computers might bring, and has seen much of both come to pass. Michael’s career has spanned a huge range of technologies, and it is fascinating to see that many big ideas were conceived in the very early days of commercial computing but had to wait to realise their potential. Michael, for his PhD, built a haptic or force-feedback device (and patented the concept) 50 years ago, and still has a vision of how more could (and will) be done with the technology. He can cite at least two false starts to the realisation of what we now know as the Internet, to which he made his own contributions as well as being an acute observer of its potential and use.
Michael is an outstanding scholar, Professor Emeritus and former Dean of the Annenberg School for Communication and Journalism, an enthusiast for the Arts, including his own role in them, and a world-renowned pioneer of Digital Computer Art.
Interviewed by Tom Abram on 13 April 2022 via Zoom.
Early Life

Michael Noll was born in 1939 in Newark, New Jersey. He is an only child. He says: “Back in the 40s and 50s, Newark was called the centre of New Jersey, with large insurance companies like Prudential and many more, a great museum and concert hall, great shopping, buses galore, people running around the whole city. It was very much an exciting place. It was near New York City, but Newark had its own culture, its own sense of being.” Noll’s parents did not graduate from high school: his father was a carpenter and builder, and his mother was a secretary/typist and a writer. Noll adds: “I think my interest in engineering, building and making things probably comes from my father as he was always doing that. My sense of writing came from my mother. They both valued education and they very much cared about me getting a good education.”
Education

Noll says that Newark was renowned for great schools. He says: “Newark is still a great education hub with wonderful institutions such as Rutgers, which has a major campus, and Seton Hall Law School.” Noll’s schooling was through the private education system. He explains: “My mother and family were very religious, so I went to my grammar school and was taught by nuns, the Sisters of St Joseph. My high school was taught by Benedictine monks, some of whom came from Hungary. The teachers cared a lot about teaching and about people learning. My parents were strict and I was expected to study and take things very seriously.” One of the monks sparked in Noll a lifelong love of classical music, and he has used that passion to write columns on concerts and concert halls, including the Disney Concert Hall, which he criticised at the time. He adds: “I was extremely critical of the acoustics and apparently, it was the only negative review of that hall. Nowadays, it’s been more accepted that there were indeed some problems with the hall.” After school, Noll went on to study electrical engineering. He says of his choice: “At one time, I thought I wanted to be a nuclear physicist, until I read about the horrors of the bombs that were dropped on Japan and what they did to people, and I decided I wanted nothing more to do with that field. Then, because of my interest in classical music, I started to get an interest in hi-fi, high fidelity, audio. While at school and through college, I became a salesperson working in a little local store in Newark, New Jersey. That led me into audio, sound, electronics, and loudspeakers, and Bell Labs, because that’s where all that kind of research and work was going on. So, my career objective in college was to someday get a job at Bell Telephone Laboratories, in New Jersey, which I finally did.”
First computer

Noll’s first experience with computers was in the sixties via a summer job at Mutual Benefit Life, an insurance company. His next encounter was in college, where there was a computer programmed with punched tape. Noll says: “I wrote a little program for that computer, so that started to get me interested in it.” It was working on the IBM 7090 and 7094 at Bell Laboratories that Noll discovered Fortran. He says: “It was so easy to use, and I started writing programs in Fortran to do data analysis in the Human Factors department, and that got me more interested. Then, I had a summer job working in the research part of Bell Laboratories that exposed me to using computers and programming to analyse and simulate various electronic devices.” He adds: “Associated with the IBM machines was a high-speed printer. It was considered a printer, but it could also do graphics. This meant I could use it, rather than sitting with tables of numbers and manually plotting things. I’m very graphical, I have to see things, so being able to program the computer and look at the graphical output was great, and it got me into this whole world of computer graphics. Back then, we called this area of communication man-machine communication. On a summer assignment, somebody’s program went wild, and that’s when I started using elements of randomness and graphics to start doing computer art.”
Bell Laboratories

Having achieved his dream and gained a job at Bell Laboratories, Noll was put on the company’s in-house master’s degree programme, which then inspired him to continue and gain a PhD from the Polytechnic Institute of Brooklyn by studying at night school. Noll’s dissertation project was on what he describes as ‘the tactile project, the man/machine communication feely thing’. He adds: “That was my doctoral dissertation at Brooklyn Poly and also, I realise now, an important piece of work. Back then, the term haptics did not exist; that wasn’t what it was called. In 1971, when I got my doctorate, I also got an important patent that was, in essence, broad enough to include the whole field of what today concerns haptics: the idea of touching something, feeling something that doesn’t exist, being able to bump into something and feel around it.” Noll became involved with the project through his boss Peter Denes, a Hungarian. Noll explains: “Peter got the idea of getting his own laboratory computer, a DDP-24 and later a DDP-224. I then had to construct a 3-dimensional input device so you could draw in 3D, moving a 3-dimensional joystick around while you’re seeing stereo left and right images on the screen. You could actually be drawing in a 3D space. Maurice Constant from Canada visited Bell Labs and said, ‘Wouldn’t it be great if you could feel things in the computer?’ The idea was that you could mould virtual clay with your hands, though the term virtual wasn’t used then. So, that gave me the idea of motorising the device, and that became the tactile project. It meant, for example, that if you were building a new telephone handset, you would be able to pick up and feel something; you would have a head-mounted display so you could see in 3D and actually pick it up and feel it and design and get a shape for it.
That was the ultimate idea; the practical relevance of it to the Bell System would be industrial design, using it for that.” Asked about the metaverse versus virtual reality, Noll says: “It keeps stretching for all sorts of ideas. In the end, there is a lot of hype. We still do not have a 3-dimensional input device; we don’t have a 3D joystick. We still have the mouse; Doug Engelbart’s great invention is still the major way of inputting information. We have touch screens, another way of doing it. … This is amazing to me, decades, decades, decades later. We get so caught up in the hype of these things, and most of the hype doesn’t go very far; it sort of peters out. After a while, I got bored by the project and went on to other things.”
Computer Art

In the early sixties, while at Bell Labs, Noll started to pioneer computer art. He explains what computer art means: “The computer is just a medium; it can be programmed to do anything. At that time, the art was not interactive; today it would be called algorithmic art. You write a program, maybe have some computations that look random and mathematical, and do graphical types of things. Back then, we always had to put the adjective digital in front of the word computer, because there were analogue computers back then, and analogue computers were being used to do artistic patterns too. The idea of using a digital computer to do this was somewhat innovative, a new approach. There were some things that could be very graphical, for example something like a Bridget Riley op art image, which is very mathematical; you could easily write a program to produce those kinds of patterns. Op art and abstract art were just a natural for the digital computer.” Noll’s passion for creating digital computer art also extended to exposing others to the ideas and potential of computers in art, including 3D art. He gives the example of having programmed a computer-generated ballet to inspire choreographers about the opportunity. He says: “The inter-relationship of the dancers gave me the idea of using stick figures, and I programmed a little computer-generated ballet; it was originally in 3D and there is a 2-dimensional version of it. The BBC did a documentary about it that aired in 1968. The idea again was to interest choreographers and dance groups in the possibility of getting themselves involved with the computer to help during the creative process, to help with inter-relationships, looking at things, simulating what things might look like at each stage. Those ideas are now decades old, but the idea of computers and dance is still a fresh topic area for research and innovation.”
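The algorithmic art described here, a program whose computations look random producing graphical output, can be sketched in a few lines. The example below is a modern stand-in rather than anything Noll wrote: it joins pseudo-random points into a single polyline and emits SVG markup. The function name, canvas size and fixed seed are all illustrative assumptions.

```python
import random

# A minimal sketch of 1960s-style algorithmic art in modern Python:
# join pseudo-random points with straight lines, in the spirit of the
# early line drawings described above. Sizes and seed are illustrative.
def random_line_art(n_points=100, size=500, seed=42):
    rng = random.Random(seed)  # fixed seed makes the "artwork" reproducible
    points = [(rng.uniform(0, size), rng.uniform(0, size))
              for _ in range(n_points)]
    path = " ".join(f"{x:.1f},{y:.1f}" for x, y in points)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">'
            f'<polyline points="{path}" fill="none" stroke="black"/>'
            f'</svg>')

svg = random_line_art()
```

Saving the returned string to a .svg file and opening it in a browser shows the pattern; changing the seed yields a different composition, which is the essence of the approach.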
Mondrian Experiment

While at Bell Labs, Noll experimented with recreating a Mondrian-inspired piece of digital art. He explains: “Somebody at Bell Labs mentioned a series of paintings by Mondrian that featured vertical and horizontal little bars. I looked at it and decided black and white was easy to do on the computer, so I had the computer produce its own versions of it. I showed the painting and the computer version to 100 people and asked which they preferred. The majority preferred the computer version. I asked which, did they think, had been made by a human and which with the aid of a computer; the majority got it wrong. It was a classic experiment. In effect, it was a Turing example. This is the idea of having a computer in one room and a human in another; you ask them to do things, and can you, therefore, guess which was the computer and which was the human? It wasn’t designed with that in mind, but in the end, it was something of a Turing Test. It was an interesting experiment that carried on to a second experiment, which asked: do people who have artistic training have a different sense of aesthetics from those who don’t; does the artist have a special aesthetic sense that we normal people don’t? The idea was to use these Mondrian-like patterns as stimuli, show them to people who had taken some artistic courses and training versus those who hadn’t, and determine whether the preferences for the patterns were different. Although the results of the experiment were statistically difficult to analyse, the conclusion was that there was no real difference between them.”
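An experiment of this shape, 100 respondents answering a binary question, is commonly analysed with an exact binomial test against the chance hypothesis of no preference. The sketch below is illustrative only: the interview does not give the exact counts, so the 59/41 split is a hypothetical number, and this is a generic significance test rather than Noll’s actual analysis.

```python
from math import comb

# Exact two-sided binomial test: how surprising is a given split of
# n yes/no answers if respondents were really choosing at random?
# The 59-out-of-100 count fed in below is hypothetical, not Noll's data.
def binomial_two_sided_p(successes, n, p=0.5):
    # probability of each possible count under the null hypothesis
    pmf = [comb(n, k) * (p ** k) * ((1 - p) ** (n - k)) for k in range(n + 1)]
    observed = pmf[successes]
    # two-sided p-value: total probability of outcomes at least as extreme
    return sum(q for q in pmf if q <= observed + 1e-12)

p_value = binomial_two_sided_p(59, 100)
```

For 59 of 100 the p-value comes out a little under 0.1, so a majority of that size would not, on its own, be strong evidence of a real preference; a more lopsided split would be.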
MULTICS, Unix and C

While Noll was at Bell Labs, the MULTICS project was set up to implement the idea of a large computer that could be time-shared by hundreds of different users via dumb terminals. General Electric were involved in creating the hardware, Bell Labs worked on the software, and MIT worked on the systems engineering. Noll says: “We put these three together, who had never, ever before done a big real-world project together, and the critic in me said it was doomed from day one. In the end, nobody figured out that if you had a hundred users, with all the overheads that go with that, you would have an ultimate bit rate of one bit per second. It just wasn’t going to work, and of course, it collapsed and died. The people who worked on the software at Bell Labs were sent up to the attic of building 3, where they came forth with Unix and C. So, the Unix operating system and the C programming language were born from the failure of MULTICS. The person responsible for the project was an engineer at Bell Labs, Ed David, Edward E David Jr. Ed ended up going to Washington to be science advisor to President Nixon, and I got hired by Ed to work for him for two years in Washington.”
Science Advisory Team, Washington

He joined the science advisory team for President Nixon under his ex-Bell Labs colleague Edward E David Jr. While working with Ed, Noll was responsible for negotiating with the Soviet Union on a science agreement. Noll says: “I had to negotiate the specifics of that section of the agreement with the Soviets. That was very difficult; I was a young kid, in over my head, scared to death. The difference was, I could fail and go to my boss and say, ‘I’ve failed’; the Soviet could not. So, in the end, everything got done the way we wanted, because the system imposed things on him that didn’t give him flexibility. It was scary.”
ARPANET

“The person in the early 70s, when I was in Washington, who I associate with ARPA and the ARPANET, was Larry Roberts, who was the project manager in charge of the whole thing. He was the driving force for it all, in my mind. The question was what to do with it, how to commercialise it. My little role in that history was via a call from Dick Bolt of Bolt Beranek and Newman. The science community, the national science people, the academics, all wanted access to the ARPANET, but they weren’t part of that defence department constellation of people. So, ARPA was going to sell access, which would have made the government a common carrier, and that would have been opposed to the Nixon administration’s idea of telecom and private industry. So, the issue was what to do about that, and I remember there was a meeting when the White House Office of Telecom Policy met with the ARPA people.” The result was that neither AT&T nor Bolt Beranek and Newman wanted to get involved with packet switching; however, Dick Bolt told Noll that they were starting a company called Telenet to carry packet switching. Noll continues: “That’s how Telenet was started, and that was the movement that took it away from ARPA and made it something available for the whole world.” Noll adds: “It was a scary experience; Washington was a scary place. Once you have been in Washington, you’re different, so the idea of going back to basic research at Bell Labs was not exactly the best thing for me.”
AT&T

Noll moved to work with AT&T, looking at videotex. He explains: “The idea was to use your TV screen with a little terminal to get access to databases. We got into it with a Knight-Ridder newspaper in southern Florida.” The idea involved a giant computer database that was accessed via a tree menu. Noll continues: “The problem was that most people got lost in the tree and never got to the end; it wasn’t the way to do it. I realised it had to be millions of decentralised computers. We were right back to the early days of MULTICS, the idea of one giant computer being shared by everybody versus decentralised computers everywhere. We’re back to this basic idea of centralisation versus decentralisation. In the end, it becomes politics, big government versus decentralised government. There were those who believed in the idea of big government, the dictatorship controlling everything, and there were those who believed in everybody trying different things and having their own. So, we had a lot of computer databases; how do you find anything, how do you search it? And that is where the Google people came in; they solved that great problem.”
Reflections on his career

Noll says of his career: “I’ve always jumped around. The longest amount of time I spent anywhere was 25 years at the Annenberg School, but before that it was at Bell Labs doing speech research, human factors research, graphics research, tactile research. I spent a few years in Washington working on computer security and privacy and negotiating a science agreement with the Soviets. Then marketing at AT&T, getting involved with videotex and teletext, and also new services, Picturephone revisited, but usually very much with a human factor, always looking at the human dimension and trying to understand that better.

“Then I got involved in teaching technology to non-techies. My general sense was that people going into business or communications should have a basic understanding, a basic literacy in the basic principles of modern technology, particularly electronics and computer technology.”
The history of computing – it’s older than you realise

Looking at the history of computing, Noll highlights how much was done in the early days, citing computer art as an example, with work by Jasia Reichardt, who put on a show called Cybernetic Serendipity at the Institute of Contemporary Arts, one of the first shows about technology in art. He also highlights work by Howard Wise in 1965, and by Nees and Nake. He says: “A lot of things go back earlier than most people realise, but then again, when you think you’ve found that, there is always something earlier too. There’s usually not one person, one thing; there’s usually a time and environment when it’s right for the flowers to emerge, for the innovation to occur. It’s not just one flower, there are many, although sometimes one flower grabs all the attention.” With regard to haptics, Noll highlights that work started long before the mid 1980s, when most people appear to think it started. He adds: “The patent on it was applied for in 1971. It has a drawing in it that shows a tactile input, a computer, 3D output, force feedback, everything. That drawing is broad enough to cover all of the world’s haptics, and that’s what’s covered by this patent. So, it was not the mid-80s; this goes back many, many years before.”

AI

On the subject of AI, Noll says: “I’ve never figured out what AI meant. The term is back in vogue again; it was in vogue in the early 60s too. John Pierce used to talk about artificial intelligence versus the natural stupidity of humanity; he thought the two went together. Computers had artificial intelligence, humans had natural stupidity. John used to love these plays on words. One of the first publications (Friend or Foe) I ever did was for a college magazine, and it was talking about computers and the absolutely horrible risks of computers being allowed to make an ultimate decision and act on it.

“For example, a local bank checks everything. If you try to do some money transfers, it checks your patterns and sees: did you ever do this before, does it make sense? And then it throws up a red flag and prevents you from doing it. I have no problem with the red flag, but the computer is now making an ultimate decision of preventing me from doing something, and that becomes a problem. You then have to call to get a human to intervene, and the human cannot override the decision of the computer. That doesn’t make good sense to me, and for those who know how to break into computers it is a serious security risk. You ultimately want a human to be responsible, not a machine. You don’t want to go to war because some computer has decided there is a pattern that looks like we’re under attack and it was a false pattern. If the computers see a pattern, warn somebody so that a human can get involved, but let the human make the ultimate decision.” He points to the Y2K bug as a case of “humanity trying to scare itself”. He continues: “It certainly did a great job. In Los Angeles, they were predicting dams would break, fire engines wouldn’t roll, aeroplanes would fall from the sky; it was like Halloween all over again, and in the end it was no problem at all. We scare ourselves, and artificial intelligence, in the end, is nothing more than computers being used just like they’ve always been used. It’s a new term, it attracts the investors; that’s what the game really seems to be, attract the investors.”
Technology of the internet

Asked where he thinks the technology of the internet is going next, Noll says: “I would look at old ideas and how they could be done more easily and less expensively with newer technology. It’s hard to find something really new. When I was doing the tactile project, one of the things I was envisioning was the idea of a machine with which I could be here in New Jersey and somehow feel and touch a piece of cloth maybe in China; we still can’t do that. These are old ideas, but still not that easy to do yet.

“Some ideas keep rolling back: 3-dimensional movies, stereo movies for example. Every 10 or 15 years they come back again, and then somebody does a movie, everybody goes and sees it, and the novelty quickly wears off, because what attracts people to entertainment isn’t that it’s in colour, it isn’t that it’s in 3D, it’s the story that’s being told, the excitement of the story.

“What I tried to teach my students was that there are 5 factors involved in understanding the future, and one of them is technology. You’ve also got to be able to finance it: how are you going to pay for it, and what is going to be the profitability? The government policy issue has got to be looked at too. How are you going to get this idea to the consumer; do you have a business structure to get it there, to get it out into the real world through a supply chain? Lastly, do consumers really want it; is it something that is going to make their lives different or not? You have to keep trying things, but be prepared that when you drop a new idea into that funnel, very few make it all the way through and finally come out with success. Most of them die inside. It doesn’t mean don’t try, but the other thing is, don’t try something that’s been done before, and all things do change.”
Interview Data
Interviewed by Tom Abram
Transcribed by TP Transcription
Abstracted by Lynda Feeley