Robin Mansell, Panel Chair: Thank you to AIT and Bill Dutton for inviting me to be in this room. I began to study telecommunications in the 1970s in Manitoba, Canada. I worked with AT&T, MCI and Trust Pace and learned something about technology.

Later I worked at the Science Policy Research Unit at Sussex, where my research interest was the evolution of IT and the internet. I was so interested that I went to night school and got a degree in engineering in public communications, even though I’m a social scientist.

So, in a nutshell, that’s me. I’m delighted to chair this session, and our first speaker is Jonathan Aylen, who is from the University of Manchester.

Jonathan Aylen – Telecommunications and Computing: British Rail’s Nationwide Train Operating System and its Evolution

OK, this is really by way of an elevator pitch for a much longer paper which, I’m delighted to say, is online at the conference. Something about me: I’m an economist who drifted into mechanical engineering, then became a management scholar and latterly a historian.

I’m satisfying my plans for total world domination by working on the early, unwritten history of nuclear weapons, and also missile guidance systems. So, if anyone would like to join me in taking over the world, I have the expertise if you have the motive.

What I want to do is to be messianic and persuade you geeks in this room that there was what amounts to an internet before packet switching. There was something called circuit switching of telecoms links, way back when, and there were large centralised computer networks linked by circuit switching. And I can see the more mature members of the audience nodding and the younger members looking completely bewildered.

In this method of telecommunications, believe it or not, you had to keep an end-to-end connection open while the data was being transmitted. That is the key point: the data was not being sent in little packets and reassembled at the destination.
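To make that contrast concrete, here is a minimal, hypothetical sketch in Python (not from the talk; the messages and function names are invented for illustration): a circuit holds one end-to-end connection open for the whole transfer, while packets travel independently and are reassembled from sequence numbers at the destination.

```python
# Minimal sketch (illustrative only): circuit vs packet switching as data flow.
import random

def send_over_circuit(message: str) -> str:
    """Circuit switching: hold one end-to-end connection open and stream
    the bytes in order; no reassembly is needed at the destination."""
    circuit_open = True           # dedicated path reserved for this transfer
    received = ""
    for ch in message:
        assert circuit_open       # the circuit must stay up end to end
        received += ch            # bytes arrive in transmission order
    circuit_open = False          # release the path only when finished
    return received

def send_over_packets(message: str, size: int = 4) -> str:
    """Packet switching: chop the message into numbered packets that may
    arrive out of order, then reassemble by sequence number at the far end."""
    packets = [(seq, message[i:i + size])
               for seq, i in enumerate(range(0, len(message), size))]
    random.shuffle(packets)       # packets may take different routes
    return "".join(chunk for _, chunk in sorted(packets))

assert send_over_circuit("wagon 123 at Sheffield") == \
       send_over_packets("wagon 123 at Sheffield")
```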

I want to just very quickly discuss one example of a large telecommunications-cum-computing network before the internet, and that is, remarkably, British Rail’s nationwide train operating system. I bet you thought that British Rail was an exercise in curled-up sandwiches and steam trains.

Pioneering huge nationwide computer networks

While they were running those steam locos, they were also pioneering huge nationwide computer networks. For those of you who are train geeks: when I gave a longer version of this paper to the Computer Conservation Society, every single question during and after the discussion related to trains and not to computers.

This is probably the first time someone from a preserved steam railway has given a computer paper. It’s a joint paper with Bob Gwynne, who is a very well-known commentator and has written with me on the history of the railways and computing.

TOPS (Total Operations Processing System)

What we want to emphasise here is that TOPS (Total Operations Processing System) kept track of every single freight loco and every wagon across British Rail’s network by 1975.

It was rolled out across Britain over a period of about three years. The key point to take away is that this was a command-and-control system. As I will explain, it worked on a hub-and-spoke basis: the data came in from the spokes to a centralised hub, essentially a large computer centre near Marylebone, and was then radiated back out to the points on the spokes that needed to know.
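As an illustration only (the class and names below are invented, not TOPS terminology), the hub-and-spoke flow he describes amounts to something like this in Python: every report goes from a spoke to the hub, and the hub decides which other spokes need to know.

```python
# Hedged sketch of a hub-and-spoke command-and-control flow.
# All names (Hub, report, subscribers) are illustrative, not from TOPS.
from collections import defaultdict

class Hub:
    """Central site: collects reports from spokes, then radiates them back
    out only to the spokes registered as needing that kind of information."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> interested spokes

    def subscribe(self, topic: str, spoke: str):
        self.subscribers[topic].append(spoke)

    def report(self, topic: str, message: str, origin: str) -> dict:
        # Data travels spoke -> hub -> spokes, never spoke -> spoke directly.
        return {spoke: message for spoke in self.subscribers[topic]
                if spoke != origin}

hub = Hub()
hub.subscribe("wagon-movements", "Sheffield AFC")
hub.subscribe("wagon-movements", "Peterborough AFC")
print(hub.report("wagon-movements", "wagon 123 departed", origin="Sheffield AFC"))
# {'Peterborough AFC': 'wagon 123 departed'}
```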

It was a huge data-handling exercise: hub and spoke, centralised command and control. And it worked. It was able to work because British Rail were advanced in telecommunications and signalling; basically, they knew what they were doing. They had their own large-scale coaxial network and, with apologies to those who worked for the GPO, of whom I understand there are quite a few in the audience, they were probably well ahead in many areas at this time.

So, you think of them as steam trains, I think of them as a leader in telecommunications and signalling worldwide during the 1960s.

Chain Home radar masts

In this way they handled 4,500 trains a day. Okay, where does the story begin? The story begins with the Chain Home radar masts, including two of the four remaining just outside Dover. I’m partisan, as I was born in Dover and brought up with these Chain Home masts. This was an early warning system against Luftwaffe aircraft coming into the UK.

It worked by people phoning in over those circuits to a centralised command: the aircraft are coming, this is their likely range, elevation and direction, and someone then processed that.

However, that ultimately suffered from one major fault. It was far too labour intensive. Indeed, towards the end of the war, they closed down a number of stations simply because they didn’t have enough people to operate them.

What was the way forward? Well, as enemy aircraft got faster with jets during the Cold War, the Americans thought we can computerise warfare.

Relying on computers to handle this complex, fast moving information

The idea was to take the human out of the communication system by relying on computers to handle this complex, fast-moving information from a wide range of radar towers and observation posts, bring it into a central data centre, and then deploy aircraft and guide the missiles as a result. This was SAGE; you can read in the paper about the early Cape Cod experiments and the development of SAGE.

But the essential point here is again that it was a hub-and-spoke system: centralised command and control through regional data centres. The breakthrough in the Cape Cod system was twofold. Firstly, conversion of digital data to analogue for telephone transmission, with conversion back at the other end. The second development was making better use of circuit-switched telecoms through multiplexing, in that case time division multiplexing, although British Rail was later to use frequency division multiplexing.
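A hedged, toy illustration of that second idea (the framing below is invented, not SAGE’s actual format): time division multiplexing interleaves several digital channels onto one line by giving each channel a fixed slot in every frame, and the receiver recovers each channel by counting slots.

```python
# Toy time division multiplexing: interleave N channels into fixed time slots.
# The frame layout here is invented for illustration; it is not SAGE's format.

def tdm_mux(channels: list[list[str]]) -> list[str]:
    """Interleave samples round-robin: one slot per channel per frame."""
    line = []
    for frame in zip(*channels):   # one frame = one sample from each channel
        line.extend(frame)
    return line

def tdm_demux(line: list[str], n_channels: int) -> list[list[str]]:
    """Receiver recovers each channel by taking every Nth slot."""
    return [line[i::n_channels] for i in range(n_channels)]

radar_a = ["a1", "a2", "a3"]
radar_b = ["b1", "b2", "b3"]
line = tdm_mux([radar_a, radar_b])   # ['a1','b1','a2','b2','a3','b3']
assert tdm_demux(line, 2) == [radar_a, radar_b]
```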

This did not involve IBM as a software supplier, only as a hardware supplier. What it taught IBM was about the interaction between telecommunications and computing. If you like, it represented the nexus between two technologies: circuit-switched telecommunications and large-scale mainframe computing.

As a result, IBM set out to sell teleprocessing as a commercial opportunity. Let’s not go into this; I don’t think they were the best. I think they were probably pretty pedestrian about it all. They weren’t the only people trying this: I happen to have a weakness for GE in cheque clearing.

Sabre Travel Reservation System

Of course, the one system that has become famous, and Martin here has done much to pioneer its history, was the Sabre travel reservation system.

But at the same time as Sabre, IBM was also working on TOPS with Southern Pacific. Over ten years they developed a system which was effectively an early warning system for freight trains: converting Cold War technology to coal trains.

Southern Pacific was actually quite an easy problem because let’s face it, American freight trains may be long but they’re pretty slow and there are few of them. So, if you’re only going to dispatch one freight train a day, you’ve got plenty of time to punch some cards, haven’t you, before you send it on its way?

British Rail decided, after a worldwide search, not to develop their own computer-based freight system but to adopt TOPS. That was not straightforward; there were political obstacles. British Rail was supposed to buy ICL equipment, not IBM.

There were technical obstacles: British Rail freight was much faster moving and much more complex than Southern Pacific’s. And there were adoption problems: the language of American cabooses had to be translated into the language of British guards’ vans.

Space age Blandford House

So, what happened? They converted a large engineering workshop at Marylebone’s Blandford House into what was described by one of our interview respondents as ‘space age in its day’.

They had centralised data processing using IBM 370s, always two, because with real-time computing you always have to have a backup. They picked up that idea from SAGE, for instance.

It was real-time access, and it was all controlled by the comms on the first floor, which housed British Rail’s own telephone switching.

So Blandford House, ‘space age in its day’, was the hub of this command-and-control system. One of the great breakthroughs was actually not so much the IBM 370s, which were the usual pedestrian stuff. I should mention the software programmers on the top floor: quite a nice environment to work in, I gather from the interviews.

Another of the breakthroughs was the hardware that was used. Out in the sticks there were 152 area freight centres. Half of these were in Portakabins, and if anyone from Portakabin is listening, please do not send me another email saying I breached copyright on the word Portakabin, as you literally supplied them, right?

Using mini-computers to dispatch the information

Of these 152, around 70 were in Portakabins at remote freight yards, many of which of course operated at night when the passenger trains were quiet, and all of which had huge quantities of super-rich tea available throughout the night. One of the breakthroughs was to use mini-computers to dispatch the information. They used UniData terminals, which were a British Rail adaptation.

We are talking here about computer interfaces. The mini-computers produced digital data that then had to be converted into analogue for onward transmission, which was done by these linker modems. Very clever, very beautiful pieces of engineering, because these linker modems also provided the frequency stacking, the frequency division multiplexing, needed to get up to eight different channels of communication down one circuit-switched telephone line to Blandford House and back.
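By way of illustration only (the carrier frequencies, sample rate and signals below are made up, not the linker modems’ real design), frequency division multiplexing stacks each channel onto its own carrier frequency so several conversations share one line simultaneously; the receiver mixes a channel back down and filters out the rest.

```python
# Toy frequency division multiplexing with NumPy: stack channels on separate
# carriers, sum them onto one "line", then recover one channel by mixing and
# low-pass filtering. All frequencies are invented for illustration.
import numpy as np

fs = 8000.0                                   # samples per second (made up)
t = np.arange(0, 1.0, 1 / fs)
carriers = [500.0, 1000.0, 1500.0]            # one carrier per channel (made up)
signals = [np.sin(2 * np.pi * f0 * t) for f0 in (5.0, 7.0, 11.0)]  # baseband data

# Multiplex: each channel amplitude-modulates its own carrier; the sum shares
# a single telephone line.
line = sum(s * np.cos(2 * np.pi * fc * t) for s, fc in zip(signals, carriers))

# Demultiplex channel 0: mix back down with its carrier, then apply a crude
# low-pass filter (moving average) to strip the other channels' bands.
mixed = line * np.cos(2 * np.pi * carriers[0] * t)
kernel = np.ones(200) / 200
recovered = 2 * np.convolve(mixed, kernel, mode="same")

# The recovered waveform tracks the original 5 Hz channel closely.
print(float(np.corrcoef(recovered[400:-400], signals[0][400:-400])[0, 1]))
```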

Now what I want to do is sum up and say: before the web, before packet switching, there was circuit switching, there was computer-to-computer communication, there were large-scale systems that relied upon digital-to-analogue conversion, and they relied upon multiplexing to get the capacity from the phone lines.

I assure you, before the web, there were these large-scale systems. And I’d be delighted if anyone in the room can then say: oh yes, but what about the gas network? What about those that organised the power stations? Or above all, given our plans for total world domination, if anyone knows about the defence network, please tell me.

Robin: Thank you for summarising an excellent paper. It’s a great story about dual use technologies and all of these transitions going from one country to another but also through different technological standards.

I would now like to welcome the next speaker, who is Ed Smith who has worked in the telecommunications and computer industries for BT and several other organisations. He is going to present a joint paper written with Chris Miller and Jim Norton on Evolving and Exploiting Packet Switched Networks.


Ed Smith – Evolving and Exploiting Packet Switched Networks

I started life as a chemist; Mike, Jim and Chris, you’re the real engineers. I then progressed into computers and telecommunications and was with BT for about 30 years.

Hence I’m speaking on this particular topic, Evolving and Exploiting Packet Switched Networks, covering the timeframe between 1982 and 2000.

This was an interesting time for BT, who were the main network provider at the time and had rather a lot on their plate. They had just separated from the Post Office, were about to be privatised, and had to break into new markets to see off the competition and keep revenues up.

Among those markets was mobile communications. At the beginning of the period, most data networks were customer provided and made use of the IT vendors’ architectures and equipment.

These would be linked by the customer organisation using basic telecommunications circuits. A standardised approach didn’t become available until about 1976; one was X.25, which was standardised as a packet-switching technology.

BT adopts national X.25 service

And in 1981 BT replaced its experimental packet-switched service with the national X.25 service, based on Telenet equipment: first because Telenet was headed up by a chap called Larry Roberts, and second because it was a subsidiary of BBN [Bolt Beranek and Newman], both of which are pretty significant in the development of the internet.

Speaking of the internet: at the beginning of our story it was the Arpanet. TCP/IP had been around for a little while, but it hadn’t been separated into its two components for very long, and it wasn’t actually made mandatory on the Arpanet until 1983.

And we didn’t see much growth in the internet until towards the end of the 1980s and the 1990s, for a variety of reasons, including ownership changes in the US, the popularity of TCP/IP as a technology in the academic world, the establishment of ISPs in the UK, general awareness of online services and the rise of the PC.

So, this is what PSS national coverage looked like in 1981 [slide of UK networks], but one of its weaknesses was that it didn’t reach all high streets very easily: providing coverage to some locations was quite expensive.

When BT first launched the service, they would sell connectivity, and that evolved into selling connections to networks. Generally, when you priced up a private network, you tended to ignore key elements to do with the running of the network, as well as the management and infrastructure to support it.

Shared infrastructure

This network was provided more cheaply because it used shared infrastructure, but most cost models ignored that. What the commercial people in BT had to get across to customers was a full costing model that tilted the argument in favour of a managed network.
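As a toy illustration with invented figures (none of these numbers come from the talk), the full-costing argument works like this: a naive comparison counts only line rental, while a full costing also counts operations and support, which tilts the decision toward the shared, managed network.

```python
# Invented numbers, purely to illustrate the full-costing argument.
private_line_rental    = 100_000  # what a naive private-network quote includes
ops_staff_and_mgmt     = 60_000   # running the network: often ignored
support_infrastructure = 25_000   # spares, upgrades, support: also ignored

naive_private_cost = private_line_rental
full_private_cost  = private_line_rental + ops_staff_and_mgmt + support_infrastructure
managed_service    = 150_000      # shared infrastructure spreads these costs

print(naive_private_cost < managed_service)  # True: private "wins" on a naive model
print(full_private_cost > managed_service)   # True: full costing favours managed
```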

As things evolved, the product itself evolved into a managed network services offering rather than a platform, and started to address corporate network sales.

That led to aligning resources with clients through customer-centric teams, then project management and improved commercial packaging, and the business became much more requirements focused.

The next stage was finding a low-cost way of extending the network to high street locations. This was achieved in the late 1980s; the retail sector picked it up in a very big way, and this reinforced the customer-centric approach.

The financial services area was largely IBM dominated and required high speeds and LAN services. Fortunately for BT, they had acquired an outfit called Tymnet in 1991, which offered technology based on frame relay for the core network; frame relay’s low protocol overhead could give you faster data transport.

‘Snazzy’ bridge routers

And there were these snazzy new things from Cisco Systems called bridge routers, which allowed you to interwork LAN technologies with WAN technologies through the mechanism of TCP/IP.

As time went by, it became clear that the real value the managed services area was adding as a provider was at the edge, and ultimately it became more economical to transfer responsibility for transport to core networks such as BT’s, certainly the frame relay network.

What that meant was that, as we went through generations of networking, by keeping up to date with bridge router technology and taking into account IP developments, managed network service providers could take advantage of any core network that BT chose in order to offer customer solutions.

IP solutions

So this greater emphasis on IP solutions led to getting into things like managed LANs, managed intranets and managed voice-over-IP services.

That takes us from being a managed network service provider to a solution provider, and ultimately with this area of business you start to scale up to become a global services operation looking after big customers in general.

So, I don’t think that is a considerably different evolution from any of our competitors at the time. But the main point is really that BT and other providers adopted TCP/IP and internet product strategies at the time when it became commercially advantageous to build on expertise accumulated in the field of corporate networking; this was supported by a willingness to engage with new technologies such as local area networks, bridge routers and frame relay.


Robin: Thank you very much. Thank you. Again, a fascinating paper, this time making me think about the possibility of different pathways and different interactions between the technical standards that were coming along, like TCP/IP and SNA, and the corporate strategies and services.

Our next speaker is Simon Rowberry, from the Department of Information Studies at UCL. His paper is:


A Revisionist History of Videotex in Britain: The Importance of Connecting Editorial and Engineering in Teletext/Videotex Adoption in Britain


Simon Rowberry: Since everyone else has given some background on where they’re coming from: I was a literature graduate who ended up falling into publishing and looking at the history of digital publishing, and this question of reading on screen, where there’s a dominant narrative of people reading long-form fiction and nonfiction. So I’ve done a lot of work on ebooks and things like that.

But actually, most people do not read most of their digital content in long form. We have social media and things like that, and an early form of this would be teletext and videotex.

There is a terminological difference between those two in terms of how I’m using them. Teletext, with a lowercase t, refers to the overall service rather than the specific ITV implementation, and is a one-way transmission through the television signal.

Videotex, on the other hand, covers things like Prestel, which BT worked on, where the transmission was two-way. Often histories of these formats focus on Minitel, the very successful French implementation, while Prestel is often not considered within the same framework, primarily because of what I’m terming the difference between engineering prowess and editorial prowess.

Ceefax, 4-Tel and Prestel

In the full paper I talk through this in greater detail. The three services I’m looking at, whose archives I have visited, are: Ceefax, the BBC service from 1974 onwards; Channel 4’s 4-Tel, which is an interesting example because Channel 4 was a new channel and its teletext service was part of it from launch, so it’s more integrated into how people consider Channel 4 as a platform; and finally Prestel, something that built on the strengths of BT but never really took off because of those editorial considerations. Ceefax is the gold standard: if I say ‘do you remember teletext?’ to most people, they will say, yes, I remember Ceefax, because it ran for longer than most of these other services and was supported by the BBC.

So it really comes down to the editorial identity of Ceefax and teletext in general. And this is something that’s often ignored in histories of innovation. Talking about teletext, it’s often a case of ‘oh, that’s not interesting, it was just using the pre-existing television signal and making more of the vertical blanking interval’.

It wasn’t doing something completely revolutionary, so why should we care? Well actually, from an editorial perspective, the question is how you use that very low-resolution text mosaic to create interesting and engaging content. How can you use the fact that this refreshes over time? There was a lot of creativity, and Ceefax can be seen as the pinnacle of this.

They understood how the format worked and how to use it effectively, although there were some mistakes because it was a live service; the most memorable one was when they announced the Queen Mother’s death in the 1980s.

Proto form of Twitter

It was very strong in terms of how this was a brand and how it was used to convey information effectively and rapidly. So we can see it as a proto form of something like Twitter in its broadcast mechanism: not in terms of interaction, but in very quickly saying ‘these are the headlines’, to the extent that you could have a TV channel with the Ceefax breaking news strip appearing along the bottom of the screen.

So, you’ll get that instant breaking news story that you otherwise wouldn’t get on TV unless you changed to another channel.

On the other hand, Channel 4 did not own any of the servers or any of the hardware; they were borrowing the IBA’s equipment. That meant they didn’t have much in the way of engineering prowess, but they had a very clear understanding of how teletext should be embedded within a television service.

They were very keen on pairing content: here’s what we’re going to run in terms of TV programming, and here’s the content that should run alongside it in our teletext offering. But they were using the same teletext bandwidth as ITV and similar television stations, which meant they were in competition with teletext as a service. There were lots of tensions around, for example, information about the top 40 singles: the idea was that this was part of the Teletext service and therefore couldn’t appear on Channel 4, because it would eat up their audience and limit their advertising revenue accordingly.

Channel 4 and 4-Tel never really achieved long-term success, even though they innovated in things such as digital or electronic storytelling, for instance serialising a cartoon.

So they were very good editorially, but unfortunately didn’t have the engineering. Conversely, Prestel did not have that editorial angle: the idea that you have far more capacity for lots of businesses and organisations to use it as their platform didn’t really work at the time, because there wasn’t a strong, cohesive editorial understanding behind it. So, unfortunately, Prestel as a service never reached the same place as Minitel.

This demonstrates that you can’t just look at the technical aspects of these historical platforms; you need to figure out how they worked from the social perspective as well, that socio-technical analysis. That will allow you to understand more effectively why this approach was successful in some cases but not in others. Thank you.


Panel Discussion

Robin: I think you do an excellent job in highlighting a term which is one of my favourites: socio-technical. Sometimes it sounds a bit awkward, but it is really needed, because where would we be without the interfaces between content, engineering and technical standards?

This is what the world we live in today is about and historically there must be huge numbers of lessons. We can start by seeing whether any of the panellists want to ask each other questions?

Is there anything that you would like to put to one of your colleagues? If not, I do have one question to get us going and then we’ll open it up to the audience.

And my question is really to ask each of you to talk about the transitions you’ve explored in your papers that were really significant, both in terms of advancing technology and in terms of business strategy. And I was wondering if you could pick out of those any insights for today’s struggles over next-generation technologies?

Jonathan: These are the big questions. One person described TOPS as trying to put a patch on a failing system. British Rail’s freight network was facing acute competition from road transport, particularly with the opening of the motorways from 1959 and the increasing reliability and size of diesel lorries.

So individual wagonload freight traffic was in severe decline as TOPS went in, and I suppose one lesson you learn from this is that you can’t rely on a technological patch to solve your fundamental management problems.

Having said all that, as I say in my paper, TOPS is still running, but in a much larger form, using packet switching and controlling the whole train network. In particular, TOPS is responsible for delay attribution between all the private operators under Network Rail.

So, they did make tremendous use of their computer expertise, but that was almost an accident: their skills developed into other areas and now dominate the rail network. That was not the original intention, which was to fix wagonload freight traffic.

Ed Smith: Our eye-opener was that commercial colleagues would come to us and say, ‘we need to do this with this customer; how are we going to do it?’ That picked us up and drove us forward.

I think generally in the early days of the high street network there was not a lot of knowledge about what our customers were trying to do and what their concerns were, and therefore learning that was important, particularly in terms of reliability. So it’s a case of allowing customer requirements to drive design and technology.

Chris Miller: I’m the co-writer of that paper with Ed and Jim. I think it’s worth mentioning that we weren’t being passive at the time. We saw that customers had a problem: they had real difficulty getting sales data back to their centre because they didn’t have a nice set of lines with coax wires along them. They were relying on putting tape cassettes or floppy disks in courier pouches.

And they were wringing their hands as to what they could do about this appalling delay in their re-fulfilment cycle. At that time data networks were going to be a big part of clients’ operations, but the clients didn’t know it yet. So we were simultaneously educating customers about the possibilities while frantically building the networks out behind the scenes. It was, if you like, a value-added transition. It wasn’t purely evolutionary; it was a conscious decision to see value for the client and grab that as a market.

Simon: For me, this draws on the previous panel and the problem of gaps in archives: if you go to the institutional archives of any of these teletext providers, they have very little direct evidence of their service.

But actually, people can be very creative and reconstruct things in interesting ways. In terms of teletext, if you have a VHS recording of a television programme that carried a teletext transmission, you can reverse engineer the cassette to extract a partial reconstruction of that material.

In the same way with web archives: some of them might be lost, but there might be some way of reconstructing material, and there is hope that in a few generations’ time hobbyist archivists will be extracting content in interesting and innovative ways.

Robin: One of the things that struck me when I saw the curve of the internet explosion was that quite often we’re told that with the internet everything was disruptive and very rapid, and yet the stories these histories uncover are relatively gradual: not so much a huge exponential take-off as a process of learning, sometimes making mistakes, sometimes correcting for those mistakes, and learning from the United States as well as continental Europe, alongside the innovations in Britain of pushing forward with new services and technologies. I think that’s what’s fascinating about these histories, and why they really do need to be preserved.

I’d like to open the floor to questions to all of our panellists.

Questions from the audience

Tim Johnson in the audience: Thanks so much, I was hoping this would be interesting, but I didn’t expect it to be exciting.

I wrote what was claimed to be the first article in the press about packet switching, for the Sunday Times in the late ‘60s, and started Ovum reports. I wanted to say thank you for such a stimulating account from so many different directions.

Jim Norton: Can I just link the two, circuit switching and packet switching? Probably, with hindsight, the biggest mistake we made in PSS packet switching was that we were hooked on the idea of creating a virtual circuit.

Because we had all that circuit-switching heritage, the telecoms people were probably saying: we know circuit switching is the way to do it, so could you just imitate circuit switching using packet technology? And that in a sense put us years behind what was happening in the US. So there is an interesting connection between the circuit-switched legacy and packet switching.

Member of the audience: Complementary to that: when was X.25 actually finally abandoned?

Jim: Not for ages. It just kept running in banking.

Another member of the audience: I am not a historian, but I can remember that X.25 was certainly used in several core networks for banking until relatively recently.

Ed: The last time I had anything to do with it was 2014. There were a few people left in BT who knew about it, and they were in heavy demand. At the time I was working in the finance division looking at our projects, and they were active then. They would have some very interesting stories to tell you. I think it went on a bit beyond that.

Member of the audience: It’s interesting you should say that, because when I was telecoms director of a big investment bank, technologies in the networking field lasted a lot longer than you’d think. They tried to get rid of Telex in the early 1990s, and then along came the investment banking problem, and they did everything on Telex for the next ten to 15 years.

Ed: People like Cisco were trying to kill off Token Ring, but a whole number of banks were not prepared to get rid of it, because they had some xyz application that was written years ago by ‘Fred’, who has left, and they needed Token Ring to keep it going.

Jonathan: But doesn’t this make a general point about the fact that we know very little about the legacy software still out there?

I mean, one of the reasons I studied TOPS was because I couldn’t get any inside access to banks, which have huge amounts locked up in legacy software, or so my stepson, who has IT involvement with them, tells me.

And TOPS is still running 50 years on and still has elements of IBM assembly language in it, so one almost needs an archaeological dig to establish what’s going on in current uses of software, and I’m sure you would find many horizons and layers in any given organisation.

John Carrington: Just to comment on the point that was made. Something that fascinates me is technological development versus customer aspiration, what the customer wants, because sometimes the two don’t necessarily run together.

The point about Telex reminds me of the situation in the early ‘80s when I was responsible within BT International for commercial strategy. The world then was divided between voice and Telex, and they were two different relationships, into the States as elsewhere. And I remember we were looking at something called SatStream, which could provide the required fast digital service end to end, certainly from London to other capitals and to the States, and so on and so forth.

And the top ten BT International customers at that time, whom most people had never heard of, were foreign exchange dealers, and we rolled out a trial so that they could use this, because from the point of view of who knows first, it’s important for their transactions.

And I always remember that the answer came back: ‘we’ll stay with Telex, because you’ve got the answerback and that gives you a legal contract.’

So you have to remember that with all this technological development there are other things that play upon it, and they’re not always thought about by the organisations providing the service. Once the legal situation had been sorted out, Telex died a death quite quickly. But that’s an interesting piece of industrial archaeology, technical archaeology.

Paul Excell: Can I just build on that, because I had the joy of running Telex; it was the best business I ever ran. Everything worked, and it was being used by people mostly in manufacturing for contracts and legal contracts, as John said.

The point I want to make is a bit of a love-in really, as I want to thank Chris and Professor Norton, because taking on BT’s core systems, and bringing in the other excellent speakers on circuit switching, shows that all the pre-work done on X.25, on the data networks and on the frame relay network then led into the CellStream and core networks and indeed the Colossus IP network.

And because we were then able to leverage the circuit-switched ISDN network with dial IP and so on, which at the time ran at 64 kilobit/s and then 128 kilobit/s, obviously so slow now, we could then bring it over the copper line into everybody’s consumer home.

I suppose what I’m saying is that the speakers pull together one big theme: learning from what’s gone on before. All that work in the past was so useful, and carrying it forward is so important.

Alison in the audience: My notes are very interesting because they’re about infrastructure: I wrote down command and control, coax networks, TOPS. So my question is about the socio-technical, particularly about TOPS and about work. I’m curious to hear from the panel what kinds of work sustained these sorts of networks, and how the different kind of work now expected in our telecom-driven organisations makes this kind of networking either very limited in its application or no longer possible. I also noticed that some of the data processors on your slides were women, and I wondered if you had any commentary about the different kinds of class and gendered labour that sustained TOPS and also Ceefax and the other systems. Thanks.

Jonathan: I could give a complete paper on that, because I’ve done a large number of interviews. Let’s start with the women. They were predominantly the TOPS clerks who put in the data entry on these Ventech machines, as they were called, supplied by UniData.

They were predominantly drawn on the basis of their ability. So in principle they were British Rail clerks, but some of the shunters turned out, despite their lowly origins and lack of education, to be very adept at this kind of work.

So it was a great opportunity for social climbing, particularly because you got a premium wage rate for working at night. Secondly, there were some women, some of whom were formidable. In fact, I would love to interview them, because the comments about them are a bit scary and I’d love to meet these scary women; certainly the area freight centre in Sheffield was run by a forewoman, as was another one near Peterborough.

These were predominantly male environments where the women succeeded, and boy did they succeed. So it was a very clear case of upward mobility, of people seizing the opportunity to get into a novel area of work that was better paid, namely computing.

And there was a very strong social structure around those night-time area freight centres.

There were also lots of young programmers on the top floor at Blandford House, in their twenties, who had just come out of degree courses in computing at places like the University of Kent, were trained by British Rail at the Grove, and were sent in to sort out this new computer system.

Simon from the panel: There’s not much about the people behind Ceefax in the archives or anywhere else; the news stories weren’t given bylines, of course, so that labour is hidden.

The only thing in the BBC Written Archives that’s interesting is around labour disputes, particularly the classification of the people who input material onto Ceefax as journalists, their membership of the NUJ (National Union of Journalists), and disputes around that.

So there’s some higher-level labour history, but beyond that it’s very difficult to reconstruct who was doing a lot of this work from the archives I’ve looked at so far.

Ed from the panel: As far as BT was concerned, in the data networking division the main contribution from women was in marketing and customer service. There were a few women engineers, but we really didn’t have that many.

Question from a female member of the audience: What is the understanding, from anybody who has studied teletext, of these services being degraded or withdrawn, in terms of their legacy and the huge digital divide, as not all of those users would have moved on to the internet or to social media?

Simon: I’m not aware of anyone working on that, and I haven’t looked at it myself, but certainly the other big thing about Ceefax is that once we got to a certain point in time, every television had a teletext receiver built in.

So it had more mass reach than the early years of the internet, and what comes out of some of the stories around this is that on September 11th it was far more reliable to go to teletext than to the web, because the Guardian website just collapsed under the strain, while teletext, being broadcast, could serve as many televisions as were watching.

That kind of late, almost extinct medium was still useful up until the point where it was turned off. And then there’s an interesting afterlife: when the BBC introduced the digital text service behind the red button, it never really captured the public’s imagination, largely because that editorial function disappeared into a more generic television-based web interface.

Member of the audience (Jane?): One thing that occurs to me in what we’ve heard about the channels of communication for IT is that there are other technical stories that might have great implications for the socio-technical side, such as the story of the growth of memory technology.

With the internet, I’m minded of a conference at MIT where they said of NASA data that it was read once only, because it was on tapes that were deteriorating. The shift to the internet is perhaps giving us a different type of resource where memory is a very large part, and yet that’s something that’s been relatively silent in the discussions we’ve had.

Ed: I think memory was largely the problem of the chip manufacturers, and it came almost as a given that PCs came with more memory as time went by, although it was not always used responsibly; and in communications devices you need more memory because you have more sophisticated software.

There’s an interesting inflection where Cisco went from being completely multi-protocol to actually trying to kill off its support for SNA, DECnet and other things, to focus on IP and new services using the memory the old protocols occupied.

Jonathan: To which one’s inclined to say: what memory? What was it, 4 MB on an IBM 370? They relied heavily on those massive IBM disk drives, and every time they updated, they rewrote the files. So it was a massive file store, updated all the time; there was no memory of the past. So if you wished, as a geek, to recreate TOPS, you would actually have to do it from scratch.

Simon: In terms of the BBC and Ceefax, they only wanted to keep the data for two weeks, so it was very ephemeral, a live service by nature. There were proposals in the 1980s to use video cassettes and similar formats to store lots of content, so you could go forward one frame at a time and get a new screen of teletext-style material. So there were efforts, but they never really led anywhere.

Vassilis: This is a follow-up to Jane’s question. I spoke to Robert Kahn for 20 minutes last year, and the discussion revolved around memory. His latest writings have a little to do with internet history, and he’s trying to impress on people that it’s all about memory; the rest is search and digital object identifiers. So based on this discussion, the question that comes to mind is: whose memory, and for what purpose?

Ed: What I would say is that this is about the wider datasphere; in other words, the information we were carrying but neither creating nor interpreting. For the network providers, that was the question. And certainly a lot of the debate about social media and AI and other factors drives a wider discourse.

Simon: I think this idea of permanence, in any form, for the web or anything else, is very difficult, and teletext is just one example of why we should make more of an effort to have more substantial web archives.

Robin: With that I think I’d like to thank our speakers. And that’s why history matters and then some.
