
Facilitating a European brand of trustworthy, ethical AI that enhances human capabilities and empowers citizens and society to deal effectively with the challenges of an interconnected, globalized world.

There is a strong consensus that artificial intelligence (AI) will bring changes far more profound than those of any previous technological revolution in human history. Depending on the course that this revolution takes, AI will either empower our ability to make more informed choices or reduce human autonomy; expand the human experience or replace it; create new forms of human activity or make existing jobs redundant; help distribute well-being for many or increase the concentration of power and wealth in the hands of a few; expand democracy in our societies or put it in danger. Europe carries the responsibility of shaping the AI revolution. The choices we face today are related to fundamental ethical issues about the impact of AI on society, in particular how it affects labor, social interactions, healthcare, privacy, fairness, and security. The ability to make the right choices requires new solutions to fundamental scientific questions in AI and human-computer interaction (HCI).

There is a need to shape the AI revolution in a direction that is beneficial to humans both individually and societally, and that adheres to European ethical values and social, cultural, legal, and political norms.

As the core challenge, we have identified the development of robust, trustworthy AI systems capable of what could be described as “understanding” humans, adapting to complex real-world environments, and interacting appropriately in complex social settings. The overall vision is to facilitate AI systems that enhance human capabilities and empower individuals and society as a whole while respecting human autonomy and self-determination.

A key message is that this cannot be achieved through regulation that hinders innovation. Instead, regulation must be a motor of innovation, driving European researchers to develop new, unique European AI solutions, which will therefore also have a unique potential to spin out innovation from the project and thus generate increased economic activity.

Humane AI

A key challenge is that such solutions cannot be found by working within the traditional AI silos; they require breakthroughs at the interfaces of various areas of AI, HCI, cognitive science, social science, complex systems, and others. HumanE AI thus brings together a unique community that has expertise both within these silos and at the interfaces between them and can address these challenges. We are in a unique position to address the key gaps in knowledge that prevent the political vision of a human-centric, European brand of AI from becoming reality.

The core challenge is the development of robust, trustworthy AI capable of what could be described as “understanding” humans, adapting to complex real-world environments, and interacting appropriately in complex social settings. The aim is to facilitate AI systems that enhance human capabilities and empower individuals and society as a whole while respecting human autonomy and self-determination. The HumanE AI Net project will mobilize a research landscape far beyond direct project funding, involve and engage European industry, reach out to relevant social stakeholders, and create a unique innovation ecosystem that provides a manifold return on investment for the European economy and society.

The main mechanism for implementing the research agenda is collaborative microprojects. These involve researchers from several partners working jointly for a period of up to a few months at a single location (the location of one of the involved partners, the host). This is a much stronger collaboration mechanism than the typical project, in which each partner’s researchers remain at their own labs and only come together for occasional meetings.

We will make the results of the research available to the European AI community through the AI4EU platform and a Virtual Laboratory; develop a series of summer schools, tutorials, and MOOCs to spread the knowledge; build a dedicated innovation ecosystem for transforming research and innovation into economic impact and value for society; establish an industrial Ph.D. program; and involve key industrial players from sectors crucial to the European economy in defining the research agenda and evaluating results in relevant use cases.

Artificial Intelligence & Decision Support, Learning, Development and Adaptation, Knowledge Representation and Reasoning, Robotic Perception, Natural Language Processing, Human-Computer Interaction, Human-Centric AI, Ethical AI, Ubiquitous Computing

Scientific partners include 53 institutions

1 – Deutsches Forschungszentrum für Künstliche Intelligenz GMBH (Coordinator), DE

2 – AALTO Korkeakoulusaatio SR, FI

3 – Airbus Defence and Space SAS, FR

4 – Algebraic AI S.L., ES

5 – Athena-Erevnitiko Kentro Kainotomias Stis Technologies Tis Pliroforias, Ton Epikoinonion Kai Tis Gnosis, EL

6 – Vysoke Uceni Technicke V Brne, CZ

7 – Barcelona Supercomputing Center – Centro Nacional De Supercomputacion, ES

8 – Közép-Európai Egyetem (CEU), HU

9 – Consiglio Nazionale Delle Ricerche, IT

10 – Centre National De La Recherche Scientifique CNRS, FR

11 – Agencia Estatal Consejo Superior De Investigaciones Cientificas, ES

12 – Univerzita Karlova, CZ

13 – Consorzio Interuniversitario Nazionale Informatica, IT

14 – Eotvos Lorand Tudomanyegyetem, HU

15 – Eidgenoessische Technische Hochschule Zuerich, CH

16 – Fondazione Bruno Kessler, IT


18 – Fraunhofer Gesellschaft Zur Foerderung der Angewandten Forschung E.V., DE

19 – Generali Italia S.p.A., IT

20 – German Entrepreneurship GMBH, DE

21 – INESC TEC – Instituto De Engenharia De Sistemas E Computadores, Tecnologia E Ciencia, PT


23 – Institut National De Recherche En Informatique Et Automatique, FR

24 – Instituto Superior Tecnico, PT

25 – Institut Jozef Stefan, SI

26 – Knowledge 4 All Foundation, UK

27 – Ludwig-Maximilians-Universitaet Muenchen, DE

28 – Orebro University, SE

29 – Philips Electronics Nederland B.V., NL

30 – SAP SE, DE

31 – Sorbonne Universite, FR


33 – Thales Six Gts France SAS, FR

34 – Telefonica Investigacion Y Desarrollo SA, ES


36 – Technische Universitat Berlin, DE

37 – Turkiye Bilimsel Ve Teknolojik Arastirma Kurumu, TR

38 – Technische Universiteit Delft, NL

39 – Technische Universitaet Kaiserslautern, DE

40 – Technische Universitaet Wien, AT

41 – University College Cork – National University of Ireland, Cork, IE

42 – Kobenhavns Universitet, DK

43 – Universite Grenoble Alpes, FR

44 – Universiteit Leiden, NL

45 – Umeå Universitet, SE

46 – Alma Mater Studiorum – Universita Di Bologna, IT

47 – Universita Di Pisa, IT

48 – University College London, UK

49 – Uniwersytet Warszawski, PL

50 – The University of Sussex, UK

51 – Universitat Pompeu Fabra, ES

52 – Vrije Universiteit Brussel, BE

53 – Volkswagen AG, DE


  • WP1 Human-in-the-Loop Machine Learning, Reasoning and Planning
  • WP2 Multimodal Perception and Modeling
  • WP3 Human-AI Collaboration and Interaction
  • WP4 Societal AI
  • WP5 AI Ethics and Responsible AI
  • WP6 Applied Research with Industrial and Societal Use Cases
  • WP7 Innovation Ecosystem and Socio-Economic Impact


Our vision is built around ethics, values, and trust (Responsible AI). These are intimately interwoven with the impact of AI on society, including problems associated with complex dynamic interactions between networked AI systems, the environment, and humans. With respect to core AI topics, fundamental gaps in knowledge and technology must be addressed in three closely related areas:

(1) learning, reasoning, and planning with humans in the loop,

(2) multimodal perception of dynamic real-world environments and social settings, and

(3) human-friendly collaboration and co-creation in mixed human-AI settings.


Thus, our research agenda is built on five pillars, as described below:

1. Human-in-the-loop machine learning, reasoning, and planning. Allowing humans not just to understand and follow the learning, reasoning, and planning processes of AI systems (being explainable and accountable), but also to seamlessly interact with them, guide them, and enrich them with uniquely human capabilities, knowledge about the world, and the specific user’s personal perspective. Specific topics include:

a. Linking symbolic and sub-symbolic learning

b. Learning with and about narratives

c. Continuous and incremental learning in joint human-AI systems

d. Compositionality and automated machine learning (Auto-ML)

e. Quantifying model uncertainty
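As an illustration of topic (e), one common way to quantify model uncertainty is to train an ensemble on bootstrap resamples of the data and treat disagreement among ensemble members as an uncertainty estimate. The sketch below is purely hypothetical (toy data, simple polynomial models) and is not taken from the project itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + noise (hypothetical example)
x = rng.uniform(-3, 3, size=80)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)

# Train an ensemble of small polynomial models on bootstrap resamples.
ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(x), size=len(x))
    ensemble.append(np.polyfit(x[idx], y[idx], deg=5))

def predict_with_uncertainty(x_new):
    """Mean prediction and spread (std. dev.) across ensemble members."""
    preds = np.array([np.polyval(c, x_new) for c in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

# Disagreement is small inside the training range and grows outside it,
# flagging predictions that should not be trusted.
_, std_in = predict_with_uncertainty(np.array([0.0]))
_, std_out = predict_with_uncertainty(np.array([6.0]))
```

In a human-in-the-loop setting, such an uncertainty signal is what would let the system defer to, or ask for input from, its human partner when its own confidence is low.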

2. Multimodal perception and modeling. Enabling AI systems to perceive and interpret complex real-world environments, human actions and interactions situated in such environments, and the related emotions, motivations, and social structures. This requires enabling AI systems to build up and maintain comprehensive models that, in their scope and level of sophistication, should strive for more human-like world understanding and include common-sense knowledge that captures causality and is grounded in physical reality. Specific topics include:

a. Multimodal interactive learning of models

b. Multimodal perception and narrative description of actions, activities, and tasks

c. Multimodal perception of awareness, emotions, and attitudes

d. Perception of social signals and social interaction

e. Distributed collaborative perception and modeling

f. Methods for overcoming the difficulty of collecting labeled training data

3. Human-AI collaboration and interaction. Developing paradigms that allow humans and complex AI systems (including robotic systems and AI-enhanced environments) to interact and collaborate in a way that facilitates synergistic co-working, co-creation, and enhancement of each other’s capabilities. This includes making AI systems capable of computational self-awareness (reflexivity) with respect to their functionality and performance, in relation to the relevant expectations and needs of their human partners, including transparent, robust adaptation to dynamic, open-ended environments and situations. Overall, AI systems must above all become trustworthy partners for human users. Specific topics include:

a. Foundations of human-AI interaction and collaboration

b. Human-AI interaction and collaboration

c. Reflexivity and adaptation in human-AI collaboration

d. User models and interaction history

e. Visualization, interactions, and guidance

f. Trustworthy social and sociable interaction

4. Societal awareness. Being able to model and understand the consequences of complex network effects in large-scale mixed communities of humans and AI systems interacting over various temporal and spatial scales. This includes the ability to balance requirements related to individual users against the common good and societal concerns. Specific topics include:

a. Gray-box models of society-scale, networked hybrid human-AI systems

b. AI systems’ individual versus collective goals

c. Multimodal perception of awareness, emotions, and attitudes

d. Societal impact of AI systems

e. Self-organized, socially distributed information processing in AI-based techno-social systems

5. Legal and ethical bases for responsible AI. Ensuring that the design and use of AI is aligned with ethical principles and human values, taking into account cultural and societal context, while enabling human users to act ethically and respecting their autonomy and self-determination. This also implies that AI systems must be “under the Rule of Law”: their research design, operations, and output should be contestable by those affected by their decisions, and liability should rest with those who put them on the market. Specific topics include:

a. Legal Protection by Design (LPbD)

b. “Ethics by design” for autonomous and collaborative, assistive AI systems

c. “Ethics in design”—methods and tools for responsibly developing AI systems