Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI, trained as an astrophysicist. He did pioneering work in X-ray and laser-generated plasmas at King’s College, London, for his doctorate. His first computer was a BBC Micro, which he programmed in BBC BASIC to control external devices through the micro’s port. He worked at Logica for 11 years, where he developed AI techniques using early neural networks for signal and image processing.
He spent the next 13 years at five companies in the UK. These included Charteris, a government-focused consultancy practice, where his work included advising on the best ways to deliver complex IT systems on budget. This gave him valuable insights into why large IT projects often fail. He moved back to Logica, by then renamed CGI, and worked in cybersecurity for space and defence clients.
Dr Rogoyski joined the AI Institute at Surrey in 2020, from where he makes numerous sallies into the wider world to comment on the development of AI. He has experienced several AI winters, when interest in the field nearly evaporated, and is excited now to be in a summer of interest as big tech pours fortunes into development, some fruitful and some futile.
Andrew Rogoyski was born in the sixties. His father was an antiquarian book dealer at Hodgson’s auction house in London, which was later acquired by Sotheby’s. Andrew says: “My earliest memory was being taken to Hodgson’s in Chancery Lane, and being allowed to go in the book lift to the basement where they worked cataloguing and assessing the ancient tomes. There’s a definite sort of smell to old books, the vellum and the various accumulations of age, that I certainly remember from the auction house, and at home, because my father then went on to become an independent antiquarian book dealer, so I lived with old books as part of the family for many, many years in my childhood.” His father was born in Poland and came to the UK as a refugee with his family in 1939. Andrew says: “He told the harrowing tale of arriving by ship in Liverpool on his own, having been shipped off from Portugal, I think it was, when he was just a twelve-year-old boy. The idea of making that escape, and arriving in a strange new country, not speaking the language, at that age, is something that I think is profoundly terrifying, and something that I never properly understood, other than to think I have no idea what I would have done in those circumstances.” Andrew’s mother was born in Cairo when her father, a brigadier in the Army, was serving in Egypt. He says: “My mother spent most of the war at a boarding school, St Swithun’s, near Winchester, and didn’t see her parents for five years, I think it was.”

Andrew says of his first experiences with computers: “My earliest languages were 6502 assembly, BBC BASIC, and FORTRAN, FORTRAN IV and then FORTRAN 77. I didn’t really start playing with computers until my teens, when they became accessible to us at home. At school we had a computing club which ran punch cards, and we had to send them off to Imperial for processing, which took the zing out of it because you had to wait a week to find that you had completely mis-programmed something. Allegedly, one of our teachers at the time was one of the very first hackers and had been prosecuted as such. I always thought this actually made him ideally qualified for the job of teaching computing to us at that stage.”

Andrew attended Latymer, an independent school in Hammersmith, where Hugh Grant was in the year above him. Andrew says that he does not have particularly fond memories of school. He says: “As a small boy I used to irritate the teachers, I was one of those disruptive types; nowadays I would probably have been classified as ADHD. I have often suspected that I am slightly on the spectrum.” He developed an interest in science because he enjoyed its practical nature and he says he could understand the world through it. Both of his parents were very keen for Andrew and his sister to have a good education. He adds: “I don’t think they received a particularly good education in the formal sense, because of the war that had disrupted their upbringing, but they saw the value in it, and historically and culturally they were very keen that we had that good education.” After school, Andrew attended Queen Elizabeth College, based in Kensington, which later became part of King’s College, London, in a merger. Andrew says: “I went on to study astrophysics at university, and that was a conflation of the interest in science but also the wonder of science. 
I ended up being a King’s person for my first degree and my PhD, which I stayed on to do at King’s.” Andrew’s PhD thesis was entitled ‘Characterisation and Optimisation of a Laser-Generated Plasma Source of Soft X-rays’. He explains: “It was about using very high-energy lasers to create plasmas of substances, very, very hot energetic gases, and then using those as sources of soft X-rays for things like lithography, VLSI, chip design, chip lithography, X-ray microscopy, and other applications. At the time, you could only do it with a synchrotron, so we used to work at Daresbury with the synchrotron that they had there. They later built the Diamond Light Source at Harwell. The idea was to come up with something that we could build in a lab rather than have a national facility. We had a very large Class 4 laser that took up half the room, and then we had a vacuum chamber. I spent several years building this system, then using it to characterise the source and also in lithographic applications. We did a lot of work with Rutherford Appleton Lab, the Central Laser Facility, a laser called Vulcan and a laser called Sprite. Vulcan was a huge infrared laser, and Sprite was an ultraviolet laser.”

In the early stages of his PhD, Andrew used a BBC Micro to control his experiments. He says: “That’s where I ended up learning 6502 assembly, to drive things like stepper motors, and to activate high-voltage sensors and also the laser firing as well. I rather enjoyed that because it had a very real physical manifestation. I also did some hydrocode development. We ran some big codes on the Cray supercomputer at the Rutherford, which you had to talk to through an IBM 3090, through JCL. That’s where my FORTRAN skills got honed. As the facilities got more powerful we were able to do more interesting things with them, including the visualisation and graphical side of things, so that we were able to interpret the outputs in a more meaningful fashion. That also took me into the world of X-ray lasers. At Rutherford we were doing ICF, inertial confinement fusion. There were two competing technologies for fusion: one is ICF and one is the torus, which we also had at Culham. So, those were my experiences of computing, very much applied physics.”

Speaking about the people who influenced him during his research, he highlights his supervisor at King’s, Professor Alan Michette. He says of him: “He was always very good as a supervisor because he let me make my own mistakes and didn’t try to tell me what to do in the lab. He was very encouraging and supportive. There was another, Professor Geoff Pert, at York University, for whom I worked for a year or two when I was at the Rutherford labs; it was his grant funding. He was a very respected plasma physics expert, had written all sorts of complicated software, and was the originator of many of the hydrocodes that we went on to develop. There was Professor Steve Rose, at the Rutherford Lab, who was a lovely guy, just a very kind and gentle but thoughtful scientist who was very encouraging of my ideas.”

In the 1990s, with many of his contemporaries going into the City, Andrew felt a bit lost and lonely and decided to move into industry, joining Logica. He says that he spent most of his time there doing contract research for different organisations, including the Admiralty Research Establishment and the Royal Signals and Radar Establishment, which became DRA and then DERA before splitting into QinetiQ and DSTL. He says of the time: “I’ve had a bit of an orbit around that community. 
It was quite a nice introduction into all sorts of interesting scientific problems that were getting solved using computers, before Logica became more a mainstream IT services company. It was very much sort of beard and sandals then. We had people working on speech recognition systems, on knowledge-based systems, expert systems rather, and that was my first touchpoint with proper AI, or at least what’s now seen as AI.” Speaking of some of the projects he worked on, Andrew adds: “I was involved in quite a lot of work in what was called passive sonar, the interpretation of sound as received by underwater sensors, listening for submarines and ships at enormous distances. It was a combination of signal processing and a variety of different heuristics, what we later went on to recognise as sort of early AI algorithms. That then went on to some of the image processing applications. We got involved in a whole variety of different projects.

“One of the most interesting projects was an expert system we built in the sonar domain which used Lisp. It used a Symbolics coprocessor, a dedicated Lisp processor. That was in the days when we believed that AI was expert-based: we would have deep elicitation interviews with human experts, abstract that knowledge, codify it into our knowledge-based system, and then use that to perform the task as an automated task. The great thing about that was that it was built on explainability, so that the expert operators could come along, see this machine churning away, and follow its reasoning. So when it made a conclusion, you had a textual description of how it made that conclusion and the things that then triggered it to do that, which is one of the great challenges of current large language models, for example: the black box problem.”

Andrew describes the black box problem, adding: “The big problem is that with neural networks at scale it is very difficult to understand how knowledge is codified within the thousands, millions, billions now, of weights within the complex mathematical structure that is the neural network. So you end up with a system that becomes a black box, i.e. you know what information you put in, you know what’s coming out, but you’ve no idea actually what went on within it. A bit like the human brain. We know what we’re seeing, what we’re feeling, what we’re touching and so on, but we don’t know what’s going on within the individual neurons. … That is a problem for current very large neural network AI implementations. If you imagine an AI system making a decision about whether you get a mortgage, or driving your car: if something goes wrong with that process, explaining how that went wrong and correcting it is not straightforward at all, and in many cases it’s not possible.”

Andrew also relates the acceleration of AI: “There’s a new law on AI, although it hasn’t got a name. Whereas Moore’s Law said compute power doubles every eighteen months, over the last ten years the compute required for a large AI model has doubled every three and a half months. It’s gone up by a factor of 300,000 in the last ten years, which is why the large language models, the generative AI, are at the moment out of reach for most people. There are very few people who can afford the scale of AI that’s currently needed for those types of AI, apart from the hyperscalers like Google, Amazon, Microsoft and so on.”

Andrew stayed at Logica for eleven years and became an operations manager and a programme manager. 
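The explainability Andrew describes in that sonar expert system, a conclusion accompanied by a textual account of the rules that produced it, can be illustrated with a small sketch. The following is a minimal, invented example of a forward-chaining rule engine in Python; it is not the Logica system itself, and the rule names and sonar-flavoured facts are made up purely for illustration.

```python
# A toy forward-chaining rule engine that keeps an explanation trail.
# The rules and facts are invented for illustration; they are not the
# rules of the sonar expert system described in the interview.
RULES = [
    ("R1", {"narrowband_tonals"}, "machinery_noise"),
    ("R2", {"machinery_noise", "blade_rate_harmonics"}, "propeller_contact"),
    ("R3", {"propeller_contact", "low_bearing_rate"}, "possible_submarine"),
]

def forward_chain(initial_facts):
    """Apply rules until nothing new can be inferred, recording each firing."""
    facts = set(initial_facts)
    trail = []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trail.append(f"{name}: {sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, trail

facts, trail = forward_chain({"narrowband_tonals", "blade_rate_harmonics",
                              "low_bearing_rate"})
print("Conclusions:", sorted(facts))
print("Reasoning trace:")      # the textual account an operator could inspect,
for step in trail:             # unlike the learned weights of a neural network
    print("  ", step)
```

Because each conclusion carries the chain of rules and inputs that produced it, an operator can audit how the system reached its answer, which is the property Andrew contrasts with the billions of opaque weights in a modern large model.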
He ran a business unit called Surveillance Systems and, in his last year, was a business manager for the Retail Banking unit. He says of being a manager: “The thing that motivated me as a manager was always getting the best out of people. Making them feel valued, making their work understood, and showing that I understood it and valued it. That required me to keep in touch with the technical side of things, with the language of what they were building. It’s what’s enabled me to keep close to technology and then return to academia many years later, as I still try to speak the languages, and I can communicate about the complexities and the opportunities of these technologies as they develop.”

Andrew left Logica because he wasn’t enjoying working in the financial services arm of the business. He says: “Purely by happenstance I got approached by a small company in Guildford called ESYS, who were a consultancy in space, and had done a lot of work with the European Space Agency. So that brought me back to my astrophysics first degree and I thought it could be fun and interesting. They were recruiting me with a view to succeeding the MD, and the idea of running a small company of about thirty people appealed to me. It was a move from a big company to a small business and that was a great experience. Logica was becoming a big corporate; it had grown enormously under the leadership of Martin Read, but it had also changed its nature enormously, and was becoming a mainstream IT services company as opposed to the beard-and-sandals outfit doing really interesting stuff, which I had joined. It felt that we were on different trajectories.” Andrew continues, describing some of the interesting work: “We had great fun. We did all sorts of interesting thought leadership work across Europe, from science missions to Jupiter to the idea of introducing mobile phones in passenger aircraft, which is now a thing. We were collaborating with a variety of European partners and that was great fun. Working with German, French, Italian, and other partners gave me a wider view of how these organisations work together.”

After three years with ESYS, Andrew was approached by QinetiQ, who were looking for someone to run part of their space business. Andrew says: “I quite liked the idea of joining QinetiQ because it was doing some hard science, it had facilities, it had real things again. While the consulting side at ESYS was great fun and we had a lot of influence, I still liked the idea of getting back into engineering and science.” Two weeks after joining QinetiQ, the MD who had recruited Andrew left, and Andrew applied for the role. He says: “I put myself forward as the MD, and got accepted, so I took over the managing directorship of the QinetiQ Space Division. Then I realised what it was like to run an organisation of people who were previously civil servants, many of whom didn’t want to be privatised and had joined as lifetime scientific civil servants. The cultural problems that that led to, combined with the Carlyle Group’s relentless drive to promote QinetiQ as a major commercial organisation, created some really challenging environments.” Andrew spent two years in the role and then decided to leave.

Andrew joined Charteris, a consultancy business which specialised in the delivery of large technology systems. He explains: “There were about 100 people in Charteris. 
We had a practice of Microsoft technologies, but we also had a whole collection of very senior, very experienced people who had been there and done it, quite a few ex-Logica people among them, and we hired them out as expert witnesses, as senior programme managers, as strategists and so on, across a number of different market sectors, but notably in Government, notably in areas like GCHQ.

“I ended up running their government business with clients like GCHQ and the Home Office, and so forth, doing some very influential work from innovation to policy, the very early days of cyber, and then cyber security.”

In 2010, after four years working at Charteris, Andrew moved to Defence Strategy and Solutions, another consultancy working in a similar way to Charteris. Andrew says of the company: “We were a very influential, very high-powered consulting business, but as is the case with small businesses, we lost an anchor client. There was a big retraction of the defence industry at that time, and we just couldn’t sustain ourselves on that business. We tried to find new customers to fill that gap but didn’t. It was great fun while it lasted, but the company eventually folded some fifteen, eighteen months after I joined it.”

In 2011, Andrew joined Roke, working on cybersecurity. He says: “I joined Roke Manor and was looking at their national security business, and I ended up being seconded into the Cabinet Office for a couple of years, into the Office of Cyber Security and Information Assurance, which oversaw the national programme for cybersecurity. That programme spent £860 million across various government departments to create a joined-up approach, so that Government took cybersecurity really seriously, right the way across the piece.

“We had serious ministerial support and an international profile. It was a good example of good Government policy-making that got cybersecurity as an emerging problem, and saw the importance of things like international leadership and the importance of making different Government departments aware. It wasn’t just GCHQ and the CESG banging the drum about it. It applied to everything from tax to healthcare, and I think it made a difference. That was great fun to be involved in, for the two years that I was at the Cabinet Office, because I was there as a proxy for industry, and as a sort of strategic consultant.

“I was directly involved in a number of different initiatives, and I ended up writing the UK strategy for cyber export, because we decided that the UK was quite good at this stuff, and that we ought to be exporting our expertise across the world, and the ministers at the time were very keen on progressing this as a thread for economic growth.”

In 2020, Andrew joined the AI Institute at the University of Surrey. Andrew tried to sign the recent letter calling for a moratorium on the development and use of GPT-5. He says: “Unfortunately, the servers crashed before I could sign it. However, those that did sign the letter weren’t saying we should stop development or research. They were saying that before we let things get too far advanced, we need to have a cold hard look at what we’re about to do. We’re getting closer to the point where some breakthrough may occur. Some argue that we’re getting closer to AGI, to artificial general intelligence; we’re getting closer to the singularity, the point at which AIs start to design themselves and run away; we’re getting closer to the idea that we may not be able to control an AI that we develop. 
There’s a lot of debate about the pros and cons and the realities of those, but I do think we need to treat those topics very, very seriously.”

Asked about built-in bias in AI systems, Andrew highlights that while the current focus is on large language models, there are many different types of AI system, from expert systems to search-style AI, fuzzy logic, and so on. He explains: “There are a lot of different disciplines within AI, and we’ve got this hotspot at the moment which started with machine learning and deep learning, and what we’re currently looking at with large language models. It started with Geoff Hinton in 2012, when he showed that neural networks could be used to good effect for image recognition with his ImageNet work. This created a whole buzz of excitement; neural networks were back, having been dropped and forgotten about for at least ten years previously. Then in 2017, 2018, Google’s ‘Attention is All You Need’ paper defined the transformer architecture, which has since informed these large language models.

“The interesting thing about the release of ChatGPT in November ’22 is that it made it possible for anyone who had an interest to have a play with an AI system in a very accessible way; the fact that you were exchanging messages with a very advanced chatbot gave everyone pause for thought. Now we’re moving into image creation, sound creation, and other forms of generative AI. I think that’s been the tipping point; it’s not just the advance of these very large AI models, which are a subsection of the wider AI work, but it’s also the engagement: people for the first time realise that this AI could have my job, or, if I’m more forward-thinking, I’m going to use this AI as part of my job, and enable me to work faster, which is already how some people are adapting.

“They’re built on an enormous corpus of text that’s trawled from the Internet, books, and all sorts of other textual sources, and they contain bias because they are what human beings have written over many years.” He goes on to highlight that some of these sources, such as the Reddit feed, which is one of the feeds used in GPT, can contain “some fairly extreme pieces of human writing which then get absorbed into the model.” He continues: “Part of the model building is then to try and adapt and modify the grand model to de-emphasise and to catch some of those occurrences when users are interacting. So yes, bias and also toxic elements are built in because of who wrote the material that these systems are being trained on. Currently you can’t build those large models without those datasets, and without those datasets having some stuff that you don’t think represents humanity particularly well. So bias exists, yes, among other problems.”

On the subject of copyright, Andrew says: “That’s for the courts to decide. … The law is inadequate for the age of creative machines. We need to have some really new thinking on what intellectual property and what copyright look like, not just on a national basis, because of course it has to operate on an international basis for it to mean anything.” Asked about the infringement of copyright in the data being used by generative AI to create new material, Andrew adds: “Some of the models have been able to reverse-engineer the inputs and show that the model is drawing upon specific pieces of copyrighted or intellectual property. 
With the larger models, that abstraction gets more interesting, and there’s starting to be a realisation that there are emergent properties of the very large models. It’s no longer just a rehash of what’s already been written, drawn or photographed, but actually a genuine creation of something new based on that input data, in the same way as you can argue that human beings create text and images and so on based on their experience of life. There is definite evidence emerging that these models are generating stuff that doesn’t exist in the human sphere. More importantly, there is something being created within these large language models, an understanding of the structure of human communication, in a way that actually we haven’t yet realised how to do. There’s a level of abstraction starting to emerge that suggests there’s something serious going on here.”

Asked if AI will actually be able to create itself in the future, Andrew says: “I like the Douglas Adams creation in which there is a supercomputer built to answer the question of life, the universe and everything, that went on to design its successor, and so forth. I think that’s the most likely line: increasingly intelligent AIs will be used to design subsequently more intelligent, or more advanced, AIs, and at some point the capabilities of that AI will become indistinguishable from intelligence as we understand it within human beings. That’s the singularity, where machines become too complex for us to understand and capable of designing new, more advanced AIs; they become black boxes writ large. One of the fascinating points of that is: will we recognise the intelligence, and will that intelligence actually be close to human intelligence, or will it be a different kind of intelligence with different motivations or alignment, that thinks and behaves in slightly different ways, or very different ways? I don’t think we can predict that at this moment.” He adds: “There have been many writers in the science fiction community who have said that AI is just the inevitable natural succession of any intelligent life form: at some point you will design your evolutionary successor.”

Asked about the UK’s proposed business rules for AI, which are believed to be more lenient than those in Europe, Andrew says: “We’ve contributed to Government consultations on this and been voicing opinion in public. This is misjudged, which has surprised me, because I think they were doing quite a lot of serious thinking about this, but there are two things wrong with their thinking in the recent White Paper, with its innovation-friendly approach to AI regulation. One is that, structurally, it’s kicking the can down the road, because it’s saying that the problem is for the individual regulators in different market sectors to deal with, and the Government may put this on a statutory basis to make sure that the regulators are regulating AI in their individual sectors. There’s one more aspect to that, which is that if you look at the shape of AI as it is at the moment, it doesn’t operate within sectors; it operates across multiple sectors. So, for example, healthcare versus finance versus whatever, using the same technology: it’s a nonsense to think that you might have different regulatory regimes across these different sectors for the same technology. I think that’s problematic, and that approach was specifically dismissed by the work that was done on the EU AI Act. 
They looked at the more federated approach to regulation and dismissed it for a variety of reasons, including the sheer impracticality. I think it’s out of step with Europe, it’s out of step with the US, and it’s even out of step with China, who are all developing much stronger lines on the control and regulation of AI.

“While the political instinct may be not to regulate things, I think in this case they’ve misjudged it. We actually need guide rails in industry, we need people in academia advising, to really make sure that we don’t end up with AI disasters, and that we’re using this technology for the benefit of human beings and humanity, not just allowing a free-for-all which is entirely profit-driven at the expense of people.”

Asked about trust in the big IT companies, Andrew says: “As a philosophical question, I think there is an unfortunate track record of organisations being driven by the realities of making money and profit, being driven by the need to be first to market to dominate a sector. … Accountability is one of the critical issues that’s missing from the development of AI. As we create these abstract systems, it becomes the black box problem, the transparency problem, and that makes it hard to hold individuals accountable for taking shortcuts, for negligence and so on. That’s one thing we have to get right: there need to be people who are responsible and can be held accountable for mistakes, in order that we take a more considered approach to these very powerful technologies.”

Asked why the UK does not produce and sustain large IT companies, Andrew says: “I think it’s a cultural problem, it’s an institutional problem. We don’t have the same attitude to risk in the UK as exists in America. We don’t have the same culture towards entrepreneurialism. We have some of the world’s greatest inventors. We’re great at being creative. It seems to me that in the US being an IT entrepreneur is similar to being a rock star; it’s one of those aspirational things that as a youngster you look at and think, I want to be that next billionaire, in the same way as kids might look at Harry Kane and want to be a Premiership footballer, or want to be a YouTube influencer earning millions, and so on. But the attitudes are very different between the US and the UK among people in their late teens and early twenties, who are the lifeblood of these start-ups. … In the UK, that rock-star aura around computing, AI, that kind of tech entrepreneurship, just doesn’t hold the same attraction as it does in the US, and that’s a lot of the reason why we don’t see that progression.”

Asked why big IT systems often appear to fail, Andrew says: “There are lots of different reasons. One is a divergent understanding of what it is you’re trying to build. There is a fashion for writing your requirements and saying, ‘This is what our system should do; go away and build it.’ You spend many years and many millions of pounds building it, and then realise that the requirements written at the beginning weren’t right for what was wanted, and that actually what’s been built is not what’s needed. The art is to keep those potentially diverging threads aligned through very close work with the client and very good contract management as you go along. So every time you realise that something isn’t going to work out, that it’s not really what they should have been asking for, you can work with the customer to realign, make sure it’s in the contract, and make sure you progress on it. 
“All the problems I remember seeing over many years, not just at Charteris, were where that divergence occurred, where you thought you were doing the right thing as a supplier, but the customer had effectively asked for the wrong thing, or misunderstood what it was they were trying to build. With big complex systems, it is very hard to start off with a blank sheet of paper and say, ‘This is what it must do,’ with no real evidence or understanding of how this system will behave in real life with real users and real data and real customers.”

Asked about the apparent failure of public sector IT projects in particular, Andrew adds: “I think it happens all over. The commercial sector is more flexible about how it corrects those issues and how much risk it takes. The public sector does seem to have run into a lot of problems, I think partly because they are bound by rules about public sector spending and the amount of risk that they can take on, and by their emphasis on the very onerous contracts that they place on their suppliers, all of which makes for a much more brittle way of developing software and complex systems.”

On the subject of diversity in technology, Andrew explains that he is working with local government and businesses in Surrey to look at what influences young girls not to pursue a career in technology and, more broadly, in STEM subjects. He says: “If you look at companies, a male-to-female ratio of fifteen per cent women is not untypical. That is such a huge waste of potential talent. But what is it that turns young women and young girls away from careers in tech, computing and so on? It does seem to go back to very early decision-making. The UK Government has made a number of interventions to try and encourage people to study STEM subjects, because the shortage of STEM talent is a UK problem, and has been for many years; it is actually an international problem, and no country is doing it right. So what is it that’s driving people away from following those careers? The stereotypes that permeate at very early ages in primary school are influencing people of all shapes and sizes and colours away from STEM, but particularly making it unattractive for women to be represented. It has been damaging, and will continue to be damaging, not to have that representation and that contribution from all sectors of society in these important technologies. It goes for both race and gender, but I am concentrating on gender because racial diversity, certainly at the university, and anecdotally in companies, is better. Gender is probably the single characteristic that is most out of kilter.

“It’s a diversity question overall, and we are very interested in how we design AI to be inclusive, not just in ethnicity and gender, but also for people with different needs, neurodiversity, people with accessibility problems and so on: how do we bake in new approaches to developing these technologies that represent the entirety of humanity, so that we can use these technologies to ensure that they benefit the entirety of humanity.”

Asked about mistakes he has made, Andrew says: “The thing that hurts the most is when trying to be decent and run organisations in a decent way doesn’t align with the corporate realities, and you end up diverging from those. Having a humane approach to building businesses, having an ethical approach where you value the people, you value what they do, and you understand that they have their friends and families and so on: you only need to get the best out of them to get the company to perform at its best. 
Putting profit before those people is a very corrosive thing to do, and I don’t like doing it. I wish in a way I could, because I would have been more successful, I think, on the corporate front, but in a way I’m glad that I can’t, because I can sleep at night.”

Interview Data
Interviewed by Richard Sharpe
Transcribed by Susan Hutton
Abstracted by Lynda Feeley