EXPLAINABLE AI IN HEALTHCARE
The next AI Monday Berlin will be a satellite event to the DMEA conference, which runs in parallel. We will therefore focus on the healthcare sector, specifically on the topic of Explainable AI.
Four crisp presentations incl. Q&A. Some speakers will also demo their AI solutions. Followed by networking. Please bring your own snacks and drinks as long as we are virtual.
AI-curious people, change leaders, and businesses with a passion for data and disruption.
Share AI knowledge, exchange ideas, and encourage each other on our change journeys.
Please register below to receive details.
19:30 – Berlin time zone
No sales pitches. No math lectures or deep tech dives. No shallow consulting or marketing talks.
Dr. Wojciech Samek
Head of the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI), AI for Good
Dr. Sven Schmeier
Chief Engineer and Associate Head of Language Technology Lab Berlin of DFKI
The XAINES project aims not only to ensure explainability but also to provide explanations in the form of narratives. The central question is whether the AI can explain in one sentence why it acted the way it did, or whether it must explain itself interactively to the user. To clarify this, one focus of the project is exploring narratives and interactive storytelling, forms that are particularly well suited to how humans assimilate knowledge, in their application to AI systems. To obtain explanatory narratives, speech-labeled sensor data streams and predictive models are used: sensor information is combined with speech information, from which the AI system develops a so-called scene understanding, which in turn generates the explanations.
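The pipeline described above could be sketched roughly as follows. This is a minimal illustrative example, not the actual XAINES implementation: the class names, the time-window alignment, and the template-based sentence generation are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    timestamp: float
    signal: str       # e.g. a detected action from a sensor stream

@dataclass
class SpeechLabel:
    timestamp: float
    utterance: str    # speech annotation accompanying the sensor data

def build_scene(sensor_events, speech_labels, window=1.0):
    """Combine sensor and speech streams into a simple 'scene understanding':
    pair each sensor event with utterances occurring within a time window."""
    scene = []
    for ev in sensor_events:
        matched = [s.utterance for s in speech_labels
                   if abs(s.timestamp - ev.timestamp) <= window]
        scene.append((ev.signal, matched))
    return scene

def narrate(scene):
    """Generate a one-sentence explanation per scene element (template-based,
    standing in for whatever generation model a real system would use)."""
    sentences = []
    for signal, utterances in scene:
        if utterances:
            sentences.append(
                f"The system registered '{signal}' while the user said "
                f"'{utterances[0]}'.")
        else:
            sentences.append(
                f"The system registered '{signal}' without accompanying speech.")
    return sentences

# Example: one sensor event aligned with one speech label
events = [SensorEvent(timestamp=1.0, signal="grasp detected")]
labels = [SpeechLabel(timestamp=1.2, utterance="picking up the cup")]
print(narrate(build_scene(events, labels))[0])
```

In this toy version the "scene understanding" is just temporal alignment and the narrative is a filled-in template; the real project would use learned predictive models for both steps, but the data flow (labeled streams → scene → explanation) is the same.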