
10 European experts who have been paving the way to modern AI

When asked why he robbed banks, Willie Sutton famously replied, “Because that’s where the money is”. And so much of artificial intelligence evolved in the United States – because that’s where the computers were. However, with Europe’s strong educational institutions, the path to advanced AI technologies has been cleared by European computer scientists, neuroscientists, and engineers – many of whom were later poached by US universities and companies. From backpropagation to Google Translate, deep learning, and the more advanced GPUs that have permitted faster processing and rapid progress in AI over the past decade, some of the greatest contributions to the field have come from European minds.

Modern AI can be traced back to the work of the English mathematician Alan Turing, who in 1939–40 designed the bombe – an electromechanical precursor to the modern computer (itself based on earlier work by Polish scientists) that broke German military codes in World War II. Turing went on to become a leading computer scientist and, for many, “the father of artificial intelligence”. In 1950, he famously proposed testing whether a machine had achieved true intelligence by whether it could carry on a natural-language conversation indistinguishable from a human’s. Only his untimely death in 1954 precludes him from our list of top ten living European AI scientists.

Scientists began comparing the human brain to computers as early as 1943. The human “computer”, however, is different from machines in important ways. It utilises analogue states, not just ones and zeros, and involves parallel processing on a scale that is still well beyond the capacity of our machines. Yet AI has come a long way in mimicking the human brain – enabling computer vision and image recognition for self-driving cars, producing speech recognition and generation, and allowing us to use existing data to predict the future.

Below, we take a look at ten often overlooked European experts who have led the way to our modern AI capabilities.

Ingo Rechenberg – In the 1960s, Rechenberg pioneered evolution strategies. His work wasn’t initially seen as AI, but over the years his evolution strategies evolved into what is now called genetic programming. A persistent criticism of AI has been that computers can’t come up with new ideas, but in Rechenberg’s model a machine starts with a few inadequate ideas and a few evolutionary rules. Over generations, the ideas evolve until the fittest survive. Genetic programming is still used in problem-solving, data modelling, feature selection, and classification, and in modern AI it is often applied to neural networks themselves, selecting the most appropriate program, network, or system.
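
To give a flavour of the idea – a minimal sketch rather than Rechenberg’s original formulation – a (1+1) evolution strategy can be written in a few lines of Python: one candidate “idea” is repeatedly mutated, and the mutant replaces its parent only if it scores better on a toy objective invented purely for this example.

```python
import random

def fitness(x):
    # Toy objective: how close the candidate is to an arbitrary target value.
    return -abs(x - 42.0)

def one_plus_one_es(generations=1000, step_size=1.0):
    parent = random.uniform(-100, 100)               # start from an "inadequate idea"
    for _ in range(generations):
        child = parent + random.gauss(0, step_size)  # mutate the parent
        if fitness(child) >= fitness(parent):        # keep whichever is fitter
            parent = child
    return parent

print(one_plus_one_es())  # converges towards 42.0
```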

Since 1972 Rechenberg has been a full professor at the Technical University of Berlin. Now in his 80s, he is still researching and publishing, and has received many awards for his contributions.

Teuvo Kohonen is the most-cited Finnish scientist and is currently professor emeritus of the Academy of Finland. Since the 1960s, he has introduced several new concepts relevant to AI, including distributed associative memory – a fundamental contribution to the development of computer networking, CPUs, and artificial neural networks. He also invented the learning vector quantization algorithm and, in 1982, introduced self-organizing maps (SOMs), which became one of the first neural network algorithms to see widespread practical use. Because they readily generate visualizations, SOMs are often used in financial, meteorological, and geological analysis, as well as in the creation of artwork.
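
A self-organizing map can be sketched compactly: a grid of weight vectors is gradually pulled towards the training data, with the best-matching unit and its grid neighbours updated at each step, so that nearby cells end up representing similar inputs. Below is a bare-bones, illustrative 1-D SOM in Python/NumPy, with sizes and learning rates chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points, and a 1-D map of 10 units, each with a 2-D weight vector.
data = rng.random((500, 2))
weights = rng.random((10, 2))

def train_som(data, weights, epochs=20, lr=0.5, radius=3.0):
    for epoch in range(epochs):
        for x in data:
            # Best-matching unit: the weight vector closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Neighbourhood function: units near the BMU on the grid are pulled too.
            dist = np.abs(np.arange(len(weights)) - bmu)
            influence = np.exp(-(dist ** 2) / (2 * radius ** 2))
            weights += lr * influence[:, None] * (x - weights)
        lr *= 0.9       # decay the learning rate
        radius *= 0.9   # shrink the neighbourhood
    return weights

print(train_som(data, weights))
```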

Geoffrey Hinton – After completing his studies at the University of Cambridge and the University of Edinburgh, where he received his PhD in AI, Geoffrey Hinton bounced back and forth between UC San Diego and Cambridge before settling in Pittsburgh as an assistant professor at Carnegie Mellon University in 1982. In 1986, Hinton and his colleagues published a paper proposing an improved backpropagation algorithm, which remains a foundational technique in modern AI. Hinton has made several further contributions to neural networks, including the co-invention of Boltzmann machines, distributed representations, time delay neural networks, and capsule neural networks.
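
Backpropagation itself is compact enough to sketch: the output error is propagated backwards through the network with the chain rule, and each weight is nudged in the direction that reduces that error. Below is a minimal NumPy toy (a two-layer network learning XOR) for illustration – not the 1986 formulation itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR problem: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error towards the input layer.
    out_err = (out - y) * out * (1 - out)
    hid_err = (out_err @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ out_err
    b2 -= 0.5 * out_err.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ hid_err
    b1 -= 0.5 * hid_err.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```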

Hinton relocated to Canada in the 80s, in part due to objections surrounding the military funding of AI in the US. He became a professor at the University of Toronto, and later joined Google in 2013. Hinton remains critical of the applications of AI, having stated “I think political systems will use it to terrorize people”, citing the NSA as an example. Nevertheless, he is often regarded as the “Godfather of Deep Learning”.

Jürgen Schmidhuber did his undergraduate studies at the Technische Universität München, Germany, and is now a professor of AI at the Università della Svizzera Italiana in Lugano, Switzerland. While Hinton was implementing backpropagation at Carnegie Mellon, Schmidhuber was further developing Rechenberg’s evolutionary strategies with meta-genetic programming. In the genetic model, each generation of data structures called “chromosomes” undergoes “crossover” (recombination) and “mutation” under “environmental constraints”. Schmidhuber took the next logical step in the evolution of genetic programming – proposing that the structure and rules of chromosomes, crossover, and mutation can evolve on their own, rather than strictly along lines determined by a human programmer. With his team at IDSIA in Switzerland, Schmidhuber was also among the first to use convolutional neural networks on GPUs, dramatically speeding up image recognition.
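
The flavour of this can be illustrated – loosely, and not as Schmidhuber’s actual system – with a small genetic algorithm in which each chromosome carries its own mutation rate, so that the “rules” of evolution are themselves subject to selection. Everything below (the bitstring target, the rates, the population sizes) is invented for the example.

```python
import random

TARGET = [1] * 20  # toy "environment": the ideal chromosome is all 1-bits

def fitness(ind):
    # Environmental constraint: reward chromosomes that match the target pattern.
    return sum(b == t for b, t in zip(ind["bits"], TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a["bits"]))
    return {"bits": a["bits"][:cut] + b["bits"][cut:],
            # the mutation rate is inherited (averaged) like any other gene
            "rate": (a["rate"] + b["rate"]) / 2}

def mutate(ind):
    ind["rate"] *= random.choice([0.8, 1.25])  # the mutation rule itself evolves
    ind["bits"] = [1 - bit if random.random() < ind["rate"] else bit
                   for bit in ind["bits"]]
    return ind

population = [{"bits": [random.randint(0, 1) for _ in TARGET], "rate": 0.1}
              for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                  # selection: only the fittest reproduce
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print(fitness(max(population, key=fitness)), "out of", len(TARGET))
```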

Schmidhuber’s interests and applications in AI are broad, even including a “Formal Theory of Fun, Creativity, & Intrinsic Motivation”. His webpage is an instructive read on all things deep learning since 1991.

Yann LeCun received a Diplôme d’Ingénieur from ESIEE Paris in 1983, and a PhD in Computer Science from Université Pierre et Marie Curie in 1987. He is a pioneer in deep learning, especially known for his contributions to computer vision and the development of convolutional neural networks, which have applications in image and video recognition, image classification, medical image analysis, and speech recognition. During his PhD, LeCun worked on backpropagation, and after graduating he joined Geoffrey Hinton’s group at the University of Toronto as a postdoctoral researcher. From there, LeCun moved to Bell Labs in Holmdel, New Jersey, where he developed new computational and machine learning methods for image recognition; his technology was adopted early on by banks in the 90s to read handwriting. He has since worked in a number of roles, and is currently a Professor of Computer Science and Neural Science at New York University and Chief AI Scientist at Facebook. In 2018 LeCun received the Turing Award alongside Geoffrey Hinton and Yoshua Bengio.

Sepp Hochreiter – In 1991, Hochreiter’s diploma thesis proposed the long short-term memory (LSTM) neural network design, and in 1997, with Schmidhuber as his co-author, he presented an improved LSTM design, which gives artificial neural networks a longer “memory” and allows them to solve problems more effectively – a major breakthrough in AI. Hochreiter has made many other significant contributions in the fields of reinforcement learning, drug discovery, toxicology, and genetics. Meanwhile, LSTM networks remain among the most effective AI models for many applications, including drug design, music composition, machine translation, speech recognition, and robotics, and are used by companies from Facebook to Google. Currently, Hochreiter heads the Institute for Machine Learning at Johannes Kepler University Linz.
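
At the heart of an LSTM is a cell state that is carried along from step to step and edited by learned gates – this is what gives the network its longer “memory”. A single forward step can be sketched in NumPy roughly as follows; the weights here are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 8, 16

# One weight matrix and bias per gate, acting on [previous hidden state, input].
W = {g: rng.normal(scale=0.1, size=(hidden_size + input_size, hidden_size))
     for g in ("forget", "input", "candidate", "output")}
b = {g: np.zeros(hidden_size) for g in W}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([h_prev, x])
    f = sigmoid(z @ W["forget"] + b["forget"])        # what to erase from memory
    i = sigmoid(z @ W["input"] + b["input"])          # what to write to memory
    g = np.tanh(z @ W["candidate"] + b["candidate"])  # candidate new content
    o = sigmoid(z @ W["output"] + b["output"])        # what to expose as output
    c = f * c_prev + i * g                            # updated cell state (the "memory")
    h = o * np.tanh(c)                                # new hidden state
    return h, c

h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):            # a short input sequence
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)
```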

Franz Och studied computer science at the University of Erlangen-Nürnberg, graduating with a Dipl.-Ing. degree in 1998. In 2002 Och received his PhD in Computer Science from RWTH Aachen University, and then went to the University of Southern California, where he wrote the landmark paper on phrase-based machine translation with his former fellow student at Erlangen-Nürnberg, Philipp Koehn. In 2004 Och went on to Google where, for the next ten years, he launched and led the development of Google Translate. While Och’s version of Google Translate could not match a skilled human translator in any single language pair, no human translator could cover the dozens of language pairs it handled. In 2018 Och became a Director at Facebook.

Philipp Koehn received a Diplom in Computer Science from the Universität Erlangen-Nürnberg in 1996. He then moved to the US to complete his studies, earning his PhD from the University of Southern California in 2003. In the same year he wrote the landmark paper “Statistical Phrase-Based Translation” with Och and Daniel Marcu as co-authors, which showed how to build translation systems from phrase pairs extracted from word-aligned parallel text. Koehn won first prize at the 2011 conference of the Multilingual Europe Technology Alliance for his work on machine translation. He continues to maintain his machine translation toolkit, Moses, as an open-source resource for AI researchers.
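
In phrase-based translation, a source sentence is segmented into contiguous phrases, each phrase is translated using probabilities learned from parallel text, and candidate outputs are scored together with a target-language model. The sketch below is a drastically simplified, monotone (no reordering) greedy decoder with a hand-made toy phrase table – illustrative only, not the Moses decoder.

```python
import math

# Toy phrase table: source phrase -> list of (target phrase, translation probability).
# In a real system these entries are extracted from word-aligned parallel text.
phrase_table = {
    ("das", "haus"): [("the house", 0.8), ("the building", 0.2)],
    ("ist",): [("is", 0.9)],
    ("klein",): [("small", 0.7), ("little", 0.3)],
}

def language_model(sentence):
    # Stand-in for a real n-gram language model: mildly prefer shorter outputs.
    return -0.1 * len(sentence.split())

def translate(words):
    """Greedy, left-to-right, monotone phrase-based decoding."""
    output, score, i = [], 0.0, 0
    while i < len(words):
        # Take the longest source phrase the phrase table knows about.
        for length in range(len(words) - i, 0, -1):
            src = tuple(words[i:i + length])
            if src in phrase_table:
                tgt, prob = max(phrase_table[src], key=lambda e: e[1])
                output.append(tgt)
                score += math.log(prob)
                i += length
                break
        else:
            output.append(words[i])  # pass unknown words through untranslated
            i += 1
    sentence = " ".join(output)
    return sentence, score + language_model(sentence)

print(translate("das haus ist klein".split()))
# -> ('the house is small', ...)
```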

Koehn is now a Professor in the Language and Speech Processing Group at Johns Hopkins University in Baltimore and Professor of Machine Translation in the University of Edinburgh School of Informatics.

Demis Hassabis began his career as a computer game designer before going on to graduate from Queens’ College, Cambridge. He then returned to gaming, first at Lionhead Studios and then as a founder of Elixir Studios. In 2006, Hassabis went to University College London to study cognitive neuroscience. His work on the hippocampus – finding that amnesiacs could not envision themselves in new experiences, thereby linking memory and imagination – received celebrity attention and was listed by the journal Science among the top ten scientific breakthroughs of 2007. After obtaining his PhD in 2009, Hassabis co-founded DeepMind in 2010. DeepMind set out to connect neuroscience with machine learning and advanced hardware to create increasingly powerful AI algorithms. In 2014 Google acquired DeepMind for £400 million, although DeepMind continues to operate independently in London.

In 2015, DeepMind made the news with AlphaGo, the first computer program to defeat a professional player at the game of Go – and in 2016 it beat world champion Lee Sedol. And in December 2018, DeepMind’s AlphaFold system won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP), besting the field by 20 points and demonstrating the company’s promise in using AI to understand and cure diseases.

It’s no coincidence that Hassabis got his start in computer gaming. The most recent breakthroughs in deep learning have depended on the powerful GPUs (graphics processing units) developed for gaming machines. GPUs complement standard CPUs with blazingly fast parallel matrix computations, allowing many previously intractable problems to be solved simply by adding more layers to deep learning networks. The next frontier is even faster, more powerful chips.

Simon Knowles co-founded Bristol-based Graphcore in 2016 and serves as its CTO. With an MA in electrical engineering from Cambridge, he may not have invented any algorithms – but hardware engineers are the unsung heroes of artificial intelligence. Even though Graphcore has not yet released its promised product, the IPU (or “Intelligence Processing Unit”), which it says will be 10 to 100x faster than current GPUs, there is a bright future for specialised, EU-based “fabless” chip designs. As Donald Trump hinders US chip giants like Intel and AMD with tariffs on Chinese silicon, hardware startups like Graphcore – and the software startups that code to IPU-like architectures – have a window of opportunity to build unprecedented AI in Europe. Graphcore anticipates its new chip will be “transformative, whether you are a medical researcher, roboticist, online marketplace, social network, or building autonomous vehicles”.

Mary Loritz
Mary served as Head of Content at EU-Startups.com from November 2018 until November 2019. She is an experienced journalist and researcher covering tech and business topics.