AI tools like ChatGPT are currently taking the world by storm, but these systems come with risks of their own. They can, for example, reinforce racial prejudice, be used for novel forms of cyberattack, or simply generate inaccurate responses and misleading information. Due to the complexity of these systems and their lack of transparency, not even their developers know why their systems make such mistakes. A new Research Training Group (RTG) at Saarland University will develop approaches aimed at making Artificial Intelligence more trustworthy and comprehensible. The German Research Foundation (DFG) will fund the new RTG over the next five years with around €7.5 million.
Researchers in the new Research Training Group ‘Neuroexplicit Models of Language, Vision, and Action’, based in Saarbrücken, Germany, want to lay the systematic foundations for an approach that some experts call “the third wave of AI”: so-called “neuroexplicit models”. Such models seek to combine the best aspects of previous approaches in order to create an AI that is safer, more reliable and easier to interpret. Partners in the new research network include the Departments of Computer Science, Language Science and Technology, and Mathematics at Saarland University, the German Research Center for Artificial Intelligence (DFKI), and the Max Planck Institutes for Informatics and for Software Systems. The nearby CISPA Helmholtz Center for Information Security is also a project partner. ‘Our Research Training Group is the first collaborative research project in Europe that studies neuroexplicit models. We are doing pioneering work in a field that is receiving rapidly growing attention from experts around the world,’ said Alexander Koller, Professor of Computational Linguistics at Saarland University and spokesperson for the new Research Training Group.
The first wave of Artificial Intelligence is referred to as ‘explicit’ or ‘symbolic’ AI. Symbolic AI systems are given precise rules and knowledge about the world that enable them to act autonomously. The second wave of Artificial Intelligence has been based on ‘deep neural networks’. These systems have led to a quantum leap in AI capabilities and – thanks to ChatGPT – have recently generated massive public interest in AI. Models based on deep learning are trained on huge amounts of data from which they independently learn the rules and concepts that they need to be able to act autonomously. ‘Neither of these approaches is without its problems. Given the complexity of the world, it is simply impossible to feed a symbolic AI system with a sufficiently large set of predefined rules,’ explained Koller. Because of their lack of transparency, neural networks are often referred to as ‘black boxes’. ‘As these systems develop their own response patterns based on the gigantic quantities of data they are exposed to, they are often so complex that no human can really understand why a neural network system acts in the way it does.’
‘Neuroexplicit models offer a very promising third way,’ says Alexander Koller, since they can capture domain knowledge that then no longer needs to be learned from data, or structure a task so that the neural learning problem becomes easier. ‘In the case of an autonomous vehicle, for example, the vehicle would not only be fed the necessary traffic rules and regulations; we would also encode the relevant physical formulas, such as how braking distance increases in wet conditions, how the appearance of certain objects can change depending on light conditions, or how pedestrians and other road users typically behave. Only then would the AI system be exposed to the training data,’ explained Alexander Koller. One of the benefits of this approach is that it leads to safer and more reliable systems, as we are better able to understand what a system knows and how it arrived at that knowledge. ‘A further advantage is that because neuroexplicit models can use the predefined concepts to generalize across observations, they don’t have to be exposed to every possible situation in training and can therefore learn from a reduced data set,’ said Professor Koller.
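As a purely illustrative sketch of this idea (not code from the Research Training Group; the formula, function names and numbers below are hypothetical simplifications), the explicit part of such a model could be a known braking-distance formula, while a small learned component only has to account for what the formula leaves out, such as the effect of a wet road:

    import random

    def physics_braking_distance(speed_mps, friction=0.7, g=9.81):
        """Explicit domain knowledge: idealized braking distance v^2 / (2 * mu * g)."""
        return speed_mps ** 2 / (2 * friction * g)

    def learned_correction(speed_mps, wetness, w):
        """Stand-in for the neural component: a single learned weight models
        the extra distance that the physics formula does not capture."""
        return w * speed_mps * wetness

    def neuroexplicit_prediction(speed_mps, wetness, w):
        """Prediction = fixed explicit knowledge + learned residual."""
        return physics_braking_distance(speed_mps) + learned_correction(speed_mps, wetness, w)

    # Synthetic data: the 'true' world adds extra braking distance on wet roads.
    random.seed(0)
    data = []
    for _ in range(200):
        speed = random.uniform(5.0, 40.0)      # metres per second
        wetness = random.uniform(0.0, 1.0)     # 0 = dry road, 1 = soaked road
        target = physics_braking_distance(speed) + 0.9 * speed * wetness
        data.append((speed, wetness, target))

    # Only the residual weight is trained; the physics formula stays fixed.
    w = 0.0
    learning_rate = 1e-4
    for _ in range(2000):
        grad = 0.0
        for speed, wetness, target in data:
            error = neuroexplicit_prediction(speed, wetness, w) - target
            grad += 2 * error * speed * wetness / len(data)
        w -= learning_rate * grad

    print("learned residual weight:", round(w, 3))   # approaches the true value 0.9
    print("predicted braking distance at 30 m/s on a wet road:",
          round(neuroexplicit_prediction(30.0, 1.0, w), 1), "metres")

Because the physics is supplied rather than learned, only a single residual parameter has to be fitted in this toy setting, which is why a small synthetic data set suffices; this mirrors, in miniature, the reduced data requirements that Koller describes.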
The Research Training Group will pursue research in three key areas in which AI finds frequent application: language (e.g. language modelling à la ChatGPT), vision (e.g. automated image recognition and processing) and action. The latter area, ‘action’, is crucial, for example, in autonomous driving, when the car has to decide how to respond in a particular traffic situation. Research in the RTG will also address foundational questions, such as how symbolic AI models and neural network-based models can be combined most effectively for specific applications. The researchers want to use the experience gained in these different areas to identify general design principles that can then be used to help develop neuroexplicit models more efficiently in the future.
The new Research Training Group will train a new generation of AI researchers who will have the knowledge, skills and confidence to address these challenging issues. ‘The risks associated with AI are significant, which is why it will be compulsory for all of our doctoral students to attend our award-winning course on “Ethics for Nerds”,’ said Alexander Koller. A total of 24 doctoral research positions and one position for an experienced senior researcher will be created over the next five years. The deputy spokespersons of the Research Training Group are Vera Demberg, Professor of Computer Science at Saarland University, and Bernt Schiele, Director at the Max Planck Institute for Informatics.
‘The creation of this new Research Training Group is further striking proof of the outstanding quality of the research work being conducted in Saarbrücken and the high international standing that our computer scientists enjoy within the scientific community. At the same time, it underscores the exceptionally fruitful and long-standing cooperation that exists between university departments and the computer science research institutions and partners on the Saarbrücken campus,’ said University President Manfred Schmitt.
‘I want to congratulate all of those involved – at Saarland University and the participating research institutes – on their joint success. The Research Training Group underscores the competitiveness of Saarland’s computer science ecosystem and will undoubtedly attract even more talented young computer scientists to Saarland,’ said Jakob von Weizsäcker, Saarland’s Minister of Science. Saarland’s Ministry of Science contributes €12 million annually to the German Research Foundation through the joint federal-state funding programme.
Further information:
Press release of the German Research Foundation
https://www.neuroexplicit.org/
Questions can be addressed to:
Professor Alexander Koller
Department of Language Science and Technology
Saarland University
Email: [email protected]
Tel.: +49 681 302-4345
Background information – DFG-funded Research Training Groups
Research Training Groups are established by universities to promote early career researchers. They are funded by the German Research Foundation (DFG) for a period of up to nine years. They offer a defined research programme and a structured training framework within which young researchers can pursue doctoral research. Research Training Groups with an interdisciplinary approach to research are generally preferred. The aim is to prepare doctoral researchers for the complexities of the job market in science and academia while also encouraging and supporting their early academic independence.
https://www.dfg.de/foerderung/programme/koordinierte_programme/graduiertenkollegs
Background information – Saarland Informatics Campus
900 scientists (including 400 PhD students) and about 2,500 students from more than 80 nations make the Saarland Informatics Campus (SIC) one of the leading locations for computer science in Germany and Europe. Four world-renowned research institutes – the German Research Center for Artificial Intelligence (DFKI), the Max Planck Institute for Informatics, the Max Planck Institute for Software Systems and the Center for Bioinformatics – together with Saarland University, with its three departments and 24 degree programmes, cover the entire spectrum of computer science.
About CISPA
The CISPA Helmholtz Center for Information Security is one of Germany’s federally funded ‘big science’ institutions within the Helmholtz Association. Scientists at CISPA conduct research into all aspects of information security. By carrying out cutting-edge basic research as well as innovative applications-driven research, they are tackling the pressing challenges faced in the fields of cybersecurity, artificial intelligence and data protection. The results of CISPA’s research are being used in industrial applications and products around the world. CISPA is thus helping to strengthen competitiveness in both Germany and Europe. By fostering talented young researchers and acting as an incubator for highly trained, elite-level specialists and executives, CISPA is ensuring that its expertise and professionalism are being used to meet future challenges in business and industry.
Editor:
Philipp Zapf-Schramm
Saarland Informatics Campus
Tel.: +49 681 302-70741
Email: [email protected]