Cross-border pilot project on AI

The Saxony-Bavaria AI cooperation project gAIn (Next Generation AI Computing) was launched in 2025 with a grand opening ceremony. In this project, scientists from universities in Dresden and Munich are developing innovative AI systems.


The AI systems developed at Dresden University of Technology (TU Dresden), Ludwig Maximilian University of Munich (LMU), and the Technical University of Munich (TUM) are not only extremely energy-efficient, but also meet the highest standards of reliability and security.

Saxony and Bavaria are providing a total of six million euros for this cross-state scientific cooperation until 2027, of which three million euros will go to TU Dresden.

gAIn is a forward-looking project with a clear vision: Germany and Europe should not only keep pace in the field of artificial intelligence, but also take the lead in global development and secure their technological independence. The project is led by experts who have a deep understanding of the challenges of AI and who are also visionary thought leaders. They combine various scientific disciplines into an overall concept that tightly integrates AI, hardware, and software.

TU Dresden is participating in gAIn with Prof. Frank Fitzek, holder of the Deutsche Telekom Chair for Communication Networks and spokesperson for the Centre for Tactile Internet with Human-in-the-Loop (CeTI) cluster of excellence, and Stefanie Speidel, Professor of Translational Surgical Oncology at the National Center for Tumor Diseases Dresden (NCT / UCC) and CeTI spokesperson. From Bavaria, Prof. Holger Boche (TUM), Chair of Theoretical Information Technology, and Prof. Gitta Kutyniok (LMU), Chair of Mathematical Foundations of Artificial Intelligence, are involved in the project.

Background:

Despite rapid progress in the field of artificial intelligence, increasingly serious problems with computing, i.e., with IT infrastructures and networked systems, have become apparent worldwide in recent years. AI applications require enormous amounts of energy, and these problems can severely limit the further development of AI and the future technologies based on it, particularly in communication, medicine, and robotics, or even bring them to a standstill if the energy supply to the systems fails.

In the long term, AI applications therefore face challenges in the areas of energy consumption, predictability, reliability, and compliance with legal requirements (such as the EU AI Act and the EU Data Act). According to renowned scientists at TU Dresden, LMU Munich, and TU Munich, these challenges can no longer be fully overcome with current AI hardware (CPU/GPU clusters). At the current state of the art, central processing units (CPUs) and graphics processing units (GPUs) serve almost exclusively as the hardware platforms for AI applications worldwide. Applications in the field of AI require massive computing power, as does the calculation of virtual worlds, while robotics and communication technologies rely on distributed computing. In particular, there are no data centers in the gigawatt range anywhere in the world, and there is as yet no international experience with the energy supply for such large-scale infrastructures.
Solving the energy problem in computing via CPUs/GPUs is therefore one of the most important challenges on the path to exponential growth in the field of AI applications. Other challenges associated with computing via CPUs/GPUs as hardware platforms include:

  • Predictability: For many problem classes, the behavior of AI solutions on these hardware platforms cannot be predicted.
  • Reliability of AI: AI applications are currently still unreliable in many respects, as demonstrated, among other things, by the unexpectedly slow progress in the development of autonomous driving despite massive investments by large, well-known companies. Scientific studies have shown that the hardware used to date (CPUs/GPUs) is a root cause of the problem.
  • Legal problems: AI applications trained on current hardware platforms cannot meet the legally required "algorithmic transparency" and "right to explanation" for various critical problem classes.