About the AI Congress
WeAreDevelopers AI Congress Vienna will focus on human-machine interaction and will bring together two sides: academia and industry. We will try to answer questions such as: Can we trust computers’ decisions? How do we deal with decision bias? How can we improve the user experience of machine learning software? And many, many more.
The Artificial Intelligence Congress revolves around the interaction between humans and machines: from trusting and rationalizing the black box to improving the interfaces through which we communicate with our models.
AI Congress Topics
Machine vs Human
Trusting Computers’ Decisions
GDPR and Privacy
In the Era of Big Data
Decentralized Artificial Intelligence
The Future of AI for Personal Usage
Demystifying the Black Box
Security and Safety
Meet AI Congress Speakers
Check back for speaker announcements.
Get a sneak peek at some of the talks
- Intelligent Assistant
- Artificial Neurons
- Natural Language Processing
- Ethical Crisis in Computing
- Acumos AI
- Deep Learning
- Machine Learning
- Conveying messages using NLP
Bixby: A New Take on the Intelligent Assistant
Adam Cheyer, Co-Founder and VP of Engineering, Viv Labs
Bixby is a new assistant created by Samsung with the goal of providing a unified conversational interface soon accessible across an ecosystem of hundreds of millions of devices. Designed from the ground up with developers in mind, Bixby offers the most sophisticated platform and tools available today, featuring technologies such as: Dynamic Program Generation, where artificial intelligence works with developers to create on-the-fly responses to every unique user request; preference learning and selection learning to help a specific user more efficiently accomplish complex tasks; and advanced natural language, dialog, and conversational contextual support to enable powerful multi-modal interfaces. In this talk, we will discuss why Bixby is a compelling proposition for developers looking to gain additional reach for their services, and through live coding sessions, demonstrate the power of this new approach to building intelligent interfaces.
Sabria Lagoun, Co-Founder, The Brainstorms
If the first artificial neurons were inspired by the brain, A.I. soon diverged to become a discipline of its own. Evolution, however, produced high-performing, ultra-light, energy-efficient biological neuronal networks. Insects and worms possess only a few hundred neurons, yet they are able to navigate, make decisions, adapt their behavior and communicate. Through cutting-edge techniques, we are now able to explore these natural networks in vivo, during the execution of extremely demanding cognitive tasks. I will describe fantastic encoding solutions that emerged from the brain’s biophysical constraints, give examples of bio-inspired A.I. algorithms for learning and spatial navigation, and explain how artificial neuronal networks can spontaneously show behavior identical to that of the live brain.
Natural Language Processing
Navid Rekabsaz, Post-doctoral Researcher, Idiap Research Institute
Recent advances in word embedding models (representations of words as high-dimensional vectors) provide promising results in capturing the semantics of language, making them essential building blocks of many Natural Language Processing (NLP) applications, from search engines and job recommendation platforms to automatic machine translators. Since these models are often trained on large amounts of historical data, they automatically capture the inherent bias in that data, which can in turn introduce ethical bias into our decision making. In this talk, Navid first explains the fundamentals as well as some interesting qualities of the word2vec algorithm, an effective and efficient neural network-based approach to word embedding. He then discusses a recent study showing how the representations of several occupations, captured by word2vec from English Wikipedia text, are biased towards either the female or the male gender.
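The kind of bias measurement the talk describes can be illustrated with a minimal sketch. The vectors below are tiny hand-made toys (real word2vec embeddings have hundreds of dimensions and are learned from a corpus); the idea is to score an occupation by the cosine similarity between its vector and a “gender direction” (the vector for “she” minus the vector for “he”):

```python
import math

# Toy 3-dimensional "embeddings"; real word2vec vectors are learned
# from text such as Wikipedia and have hundreds of dimensions.
vectors = {
    "he":       [0.9, 0.1, 0.3],
    "she":      [0.1, 0.9, 0.3],
    "nurse":    [0.2, 0.8, 0.4],
    "engineer": [0.8, 0.2, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A simple "gender direction": she minus he.
gender_dir = [s - h for s, h in zip(vectors["she"], vectors["he"])]

def gender_bias(word):
    """Positive = leans female, negative = leans male (in this toy data)."""
    return cosine(vectors[word], gender_dir)

print(gender_bias("nurse"))     # positive in this toy data
print(gender_bias("engineer"))  # negative in this toy data
```

With real embeddings, the same one-line score applied to occupation words is what reveals the gender skew the study discusses.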
An Ethical Crisis in Computing?
Prof. Moshe Vardi, Professor in Computational Engineering, Rice University
Computer scientists think often of “Ender’s Game” these days. In this award-winning 1985 science-fiction novel by Orson Scott Card, Ender is trained at Battle School, an institution designed to turn young children into military commanders against an unspecified enemy. Ender’s team engages in a series of computer-simulated battles, eventually destroying the enemy’s planet, only to learn afterwards that the battles were very real and a real planet has been destroyed. Many of us got involved in computing because programming was fun. The benefits of computing seemed intuitive to us. We truly believe that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Like Ender, however, we recently realized that computing is not a game, it is real, and it brings with it not only societal benefits but also significant societal costs, such as labor polarization, disinformation, and smartphone addiction. The common reaction to this crisis is to label it an “ethical crisis,” and the proposed response is to add ethics courses to the academic computing curriculum. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology’s impact on society. Technology is driving the future, but who is doing the steering?
How We Democratized Artificial Intelligence with Acumos AI
Eyal Felstaine, CTO, Amdocs
Many Artificial Intelligence tools today are difficult to implement and require significant domain expertise, which is why Acumos AI is going to have such a large impact. As the first tool to give users a visual workflow for designing AI and machine-learning applications, as well as a marketplace for freely sharing AI solutions and data models, the Acumos framework is user-centric and simple to explore. In this session, Dr. Eyal Felstaine takes you through the Acumos AI distributed platform and explains its benefits, including how it will free up data scientists, developers and model trainers across different industries and fields (from network and video analytics to content curation and threat prediction) so they can focus on their core competencies and accelerate innovation.
Are we there yet? Remaining Challenges in Deep Learning based Natural Language Processing
Deep Learning has changed the face of Natural Language Processing (NLP) in recent years. The state-of-the-art performance in nearly every NLP task today is achieved by a neural model, often with a large gap over previous models, which were based on domain knowledge. The availability of pre-trained word embeddings (such as word2vec), trained on massive amounts of text, has given our models a good starting point for meaning representation. The ability to learn vector representations of arbitrarily long texts using Recurrent Neural Networks (RNNs) has made it possible to reuse the same architectures across various NLP tasks. Relying on latent neural representations obviated the need to hand-engineer features, allowing machine learning practitioners to join the party without any knowledge of the domain. Popular media announces every few days that AI has solved language. But looking beyond the performance metrics, did Deep Learning actually solve NLP? In this talk, I will present some of the remaining challenges in Deep Learning based NLP: the need for large amounts of training data, which yields algorithms that handle only certain domains and languages well while performing much worse on others; performance metrics that can lie when you don’t actually know what your black-box network is learning; the lack of robustness of our models; the shaky ground on which we base our meaning representation, namely distributional word embeddings; our unreasonable expectations of sentence- and document-level representations; and finally, the lack of interpretability, which not only killed debugging but also causes major fairness issues.
When humans teach machines – the algorithmic challenges in creating a Machine Learning DIY tool
Shai Hertz, Algorithms Team Leader, Refinitiv
Organizations face a need to identify many topics within huge amounts of unstructured documents. Oftentimes these topics do not occur frequently, so users need to find many different types of needles in one big haystack. Take, for example, topics like “Drug Trafficking” or “Race Relations”: such topics are not mentioned very frequently in the news, but each occurrence could be important to a customer. In addition, these topics are complex, and it is hard to build a search query that would bring back all relevant documents without too much noise. It is common wisdom to turn to Machine Learning (ML) solutions in such cases, but typical users don’t have the technical skills needed to build an ML solution on their own. On the other hand, data scientists often lack the domain expertise to create and evaluate such classifiers. Our Self-Service Classification solution is an interactive Do-It-Yourself tool that allows a user who is not a data scientist to train and deploy a classifier by herself. Several similar solutions have been suggested; they, however, typically require significant data collection efforts in order to create train & test sets, and these efforts can be a showstopper. Our solution, in contrast, provides a framework for creating a training set with minimal effort (a “labeling tool”), the ability to train an initial model very quickly, and the ability to improve an initial model via an interactive tuning phase. During the development of this tool we realized that reducing the data collection effort is key, and built a workflow that simplifies the data collection process and requires much less work from the user. This process is built on the observation that we can identify ‘areas’ in the corpus of documents where the topic at hand is more prevalent. We help the user build and expand a query in order to retrieve highly relevant documents.
After she labels an initial set of positive documents, we identify keywords that enable us to define triage terms to discard completely irrelevant documents. As a result, the user is required to tag only a rather small set of positive documents, and we automatically identify the negative documents from an uploaded corpus of (unlabeled) production data. Our approach also offers model-tuning capabilities, which allow the user to intervene in the internal algorithmic steps and customize them to her needs. This system enables domain experts to train a topic classifier in a couple of days. It is used at Thomson Reuters by several teams to expand TR Intelligent Tagging’s topic tagging capabilities.
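The workflow described in this abstract can be sketched roughly as follows. Everything here is an illustrative assumption, not the actual Refinitiv implementation: a user-built query retrieves highly relevant candidates, the user confirms a few positives, keywords frequent in those positives become triage terms, and remaining corpus documents matching no triage term are auto-labeled negative:

```python
from collections import Counter

# A tiny stand-in corpus of unlabeled production documents.
corpus = [
    "police seized drugs at the border in a trafficking bust",
    "quarterly earnings beat analyst expectations",
    "drug trafficking ring dismantled after smuggling probe",
    "new smartphone launches with improved camera",
]

def retrieve(corpus, query_terms):
    """Step 1: a user-built query pulls highly relevant candidates."""
    return [doc for doc in corpus if any(t in doc for t in query_terms)]

def triage_terms(positives, top_n=3):
    """Step 2: keywords frequent in labeled positives become triage terms."""
    counts = Counter(w for doc in positives for w in doc.split())
    return {w for w, _ in counts.most_common(top_n)}

def split_corpus(corpus, terms):
    """Step 3: documents containing no triage term are auto-labeled negative."""
    positives = [doc for doc in corpus if any(t in doc.split() for t in terms)]
    negatives = [doc for doc in corpus if doc not in positives]
    return positives, negatives

candidates = retrieve(corpus, ["trafficking"])
labeled_positives = candidates          # the user confirms these by hand
terms = triage_terms(labeled_positives)
pos, neg = split_corpus(corpus, terms)
```

The payoff is the one the abstract claims: the user hand-labels only the small positive set, while the negatives fall out of the triage step automatically.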
How to Best Convey the Message Using an NLP Approach (Big Five Modeling)
Tomislav Krizan, CEO & AI Evangelist, Atomic Intelligence
In a data-driven world where every business entity relies on data to make better decisions, correct interaction with customers becomes imperative. One of the key imperatives for successful interaction with a customer is applying the approach that best suits that customer. Every personality trait demands a different communication approach, and thus our intention is to show how interaction over social media, text, questionnaires or other channels can help us model and predict personality traits for every individual who gives the appropriate consent. Based on that personality trait prediction, combined with other personal information (demographics, interest in our services, etc.), we can create a unique strategy for approaching each person on the individual level. Knowledge of personality traits can help any industry proactively take appropriate action. For example, there is a study showing that teenagers with specific traits are more prone to drug abuse, which can inform preventive and corrective action at the school level. The focus of this talk will be on using NLP modeling to predict personality scores along the Big Five dimensions (Norman, 1963): Extraversion, Emotional Stability, Agreeableness, Conscientiousness and Openness to Experience.
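One crude way to ground trait prediction from text is a lexicon-based scorer, a minimal sketch under heavy assumptions: the word lists below are invented for demonstration and are not a validated psychometric lexicon, and production models use far richer NLP features than raw word matching:

```python
# Toy lexicon-based scorer for two of the Big Five dimensions;
# the word lists are illustrative, not a real psychometric lexicon.
TRAIT_LEXICON = {
    "Extraversion": {"party", "friends", "talk", "exciting"},
    "Openness":     {"art", "imagine", "curious", "novel"},
}

def trait_scores(text):
    """Score each trait as the fraction of words matching its lexicon."""
    words = text.lower().split()
    total = len(words) or 1  # avoid division by zero on empty input
    return {trait: sum(w in lexicon for w in words) / total
            for trait, lexicon in TRAIT_LEXICON.items()}

scores = trait_scores("I love to talk and party with friends")
```

Here the sample sentence scores higher on Extraversion than Openness, which is the shape of signal a real Big Five model would extract, albeit with far more sophisticated features.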
Tickets are up for grabs!
AI Congress Pass includes
- Admission to both days and access to all keynotes.
- Entry to the expo area and admission to the official afterparty.
- Free meeting area and access to the networking zone.
- Access to the official recorded talks and congress goodies.
- Free coffee, tea & water.
AI Congress Venue
1010 Vienna, Austria