Welcome to AI Monday in Berlin
Since September 2018, #aimonday has also been taking place in Berlin. This is only natural: as a start-up capital and home to 54% of all German AI companies, Berlin is the fourth-largest global AI hub. Since then, AI Monday has happened every 5-6 weeks at changing locations.
We are looking forward to welcoming you to one of the next AI Monday Berlin events – as a guest or speaker.
Four crisp presentations incl. Q&A. Some speakers also demo their AI solutions. Followed by snacks, drinks and networking.
AI-curious people, change leaders and businesses with a passion for data and disruption.
Share AI knowledge, exchange ideas and encourage each other on our change journeys.
Always Monday after work.
May 20th, 2019
Doors open at 6:30pm; talks start at 7pm. Networking starts around 8:30pm.
No sales pitches. No math lectures or deep tech dives. No shallow consulting or marketing talks.
Machine Learning Platform Lead @ Careem
Yoda: Scaling Machine Learning @ Careem
At Careem, our platform solves different challenging problems every day, affecting the lives of our users across 120+ cities. Each of these problems requires a local, optimized solution, which creates a strong need for AI. In this talk, you will be walked through the journey of building our machine learning platform and the challenges addressed while trying to build a scalable, usable and cost-efficient platform that helps democratize machine learning usage across different teams.
Research Scientist PhD with Focus on Deep Learning @ Zalando
Security and AI
Calvin will talk about the risks associated with AI-driven algorithms and the measures he and his team at Zalando have taken to overcome them.
CEO & Founder – Deep Neuron Lab
Professor for Machine Learning at Beuth University and the Einstein Center for Digital Future, Berlin
Data Quality in Machine Learning Production Systems
Machine learning (ML) algorithms have become a standard technology in production software systems. This imposes new challenges on the maintainers of software systems featuring ML components. While classical software systems can be tested before being put into production, such testing is difficult for machine learning systems: depending on the data ingested during the training or prediction phase, the behaviour of a system that learns from data can differ. Thus, ensuring the robust and reliable functioning of ML systems requires careful monitoring and improvement of various data quality aspects, which can be difficult to automate. This talk summarizes some recent work on leveraging ML technology to automate the measurement and improvement of data quality for ML production systems.
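To make the idea of automated data quality checks concrete, here is a minimal illustrative sketch (not the speaker's actual system – the column names, thresholds and check types are all hypothetical) of the kind of batch validation that could run before an ML system ingests new data:

```python
# Hypothetical sketch: automated data quality checks on a batch of records
# before it reaches an ML training or prediction pipeline. Column names
# ("age") and thresholds are illustrative assumptions, not a real system.

def check_batch(rows, max_null_rate=0.1, age_range=(0, 120)):
    """Return a list of data quality violations found in a batch of records."""
    violations = []
    if not rows:
        return ["empty batch"]
    # Completeness check: flag columns whose null rate exceeds the threshold.
    for col in rows[0].keys():
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            violations.append(f"{col}: null rate {rate:.0%} exceeds threshold")
    # Validity check: flag out-of-range values for a known numeric column.
    for r in rows:
        age = r.get("age")
        if age is not None and not (age_range[0] <= age <= age_range[1]):
            violations.append(f"age out of range: {age}")
    return violations


batch = [
    {"age": 34, "city": "Berlin"},
    {"age": None, "city": None},
    {"age": 200, "city": "Munich"},
]
for violation in check_batch(batch):
    print(violation)
```

In practice such checks are learned or configured per dataset and wired into monitoring, but even this simple form shows why data-dependent behaviour makes classical pre-deployment testing insufficient on its own.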