Deep Learning for Mobile Devices
Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Many mobile applications running on smartphones and wearable devices would potentially benefit from the new opportunities enabled by deep learning techniques. However, CNNs are by nature computationally and memory intensive, making them challenging to deploy on a mobile device. We explain how to practically bring the power of convolutional neural networks and deep learning to memory- and power-constrained devices like smartphones. We'll illustrate the value of these concepts with real-time demos as well as case studies from Google, Microsoft, Facebook and more. You will walk away with various strategies to circumvent obstacles and build mobile-friendly shallow CNN architectures that significantly reduce memory footprint.
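One widely used strategy for shrinking a CNN's memory footprint is post-training weight quantization. The sketch below is illustrative only (it is not taken from the talk): it simulates linear 8-bit quantization of a small float weight list in plain Python, which cuts storage to a quarter of float32 at a bounded accuracy cost.

```python
def quantize_8bit(weights):
    """Linearly map float weights to signed 8-bit integers in [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.50, -1.27, 0.031, 0.9]
q, scale = quantize_8bit(weights)
restored = dequantize(q, scale)
# int8 storage is 4x smaller than float32; the rounding error per weight
# is at most scale / 2.
```

Real deployments (e.g. TensorFlow Lite) quantize per layer and often calibrate activations too, but the core idea is this linear rescaling.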
The Fake News Arms Race: How AI Can Create – and Detect – Fakes
Recent advancements in AI have made powerful content-manipulation tools that can create realistic images and videos accessible to the public. Deepfakes have already undermined trust in democracy, incited violence and damaged the reputation of brands and individuals. This has motivated research into technologies that can catch the fakes, such as AdVerif.ai. In this session, Or Levi, founder of AdVerif.ai, will take us through the emerging Fake News arms race between "Bad AI" for generating fakes and "Good AI" for detecting them. Learn how AI can help fight Fake News with a spectrum of tools, ranging from machine vision for detecting manipulated images to Natural Language Processing for identifying psycho-linguistic features, and data pipelines for deep learning at the scale of billions of items.
From CLI to APIs: You can easily talk to your network now
How you manage your IoT network infrastructure is changing from the classic CLI towards a more fun, API-driven way! This session shows what is possible nowadays with tools like RESTCONF, NETCONF, gRPC and YANG data models to monitor, configure and manage gateways and other network components. It will also cover how you can leverage these technologies in real-life use cases, e.g. connected machines in IIoT. To make it more fun, there will be a demo of how you can talk to and manage your devices via a chatbot.
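To give a flavor of the API-driven approach: with RESTCONF, reading interface configuration is just an HTTP GET against a YANG-modeled path. The sketch below uses the standard `ietf-interfaces` model, but the hostname and credentials are placeholders; it only constructs the request (no network call), so it runs anywhere.

```python
import base64
import urllib.request

HOST = "router.example.com"  # hypothetical device address

# RESTCONF exposes YANG-modeled data under /restconf/data/<module>:<container>
url = f"https://{HOST}/restconf/data/ietf-interfaces:interfaces"

req = urllib.request.Request(url, method="GET")
req.add_header("Accept", "application/yang-data+json")
token = base64.b64encode(b"admin:admin").decode()  # placeholder credentials
req.add_header("Authorization", f"Basic {token}")

# urllib.request.urlopen(req) would return the interface list as JSON;
# we stop at building the request so the example works offline.
```

The same path-per-resource pattern makes configuration changes a PUT or PATCH with a JSON body, which is what makes chatbot or pipeline integration straightforward.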
Leveraging Machine Data for Business Success
The rise in the use of sensors and IoT devices in factories and on production floors has transformed how operations are conducted. Their impact on efficiency and cost savings has been so significant that the shift has been dubbed Industry 4.0. Upgrading your existing data workflow incorrectly can defeat the purpose of real-time actionable guidance and can have huge cost implications. In this talk, you will learn why predictive maintenance is the holy grail of Industry 4.0 and how you can use CrateDB with an open-source stack to handle, enrich and derive valuable insights from the data. We will top this off with a real-world case study of a smart factory that made six-figure savings in 2018 through efficiency improvements. Join me and we shall put your machine data to work!
Data Breaches As A Necessary Evil
Bob Diachenko is a Cyber Threat Intelligence Director and journalist at the SecurityDiscovery.com consultancy. To discover data breaches, leakages, and vulnerabilities on the Internet, he uses the Shodan search engine (and similar ones, like BinaryEdge and ZoomEye) and simple dorks: no special software or active scanning, just 'bare hands' and some luck. If he can find your data, then anybody in the world can. It is not only legitimate companies and businesses that forget to properly configure their datasets: over the past two years, Bob has spotted at least three cases in which malicious actors or criminals inadvertently exposed stolen assets to the public. Focusing on these real-life cases and on breaches in unsecured databases managed by criminals, Bob presents the key checklist that keeps your data safe and shows how to protect your personal data from being stolen.
Automatic Spend Classification - An End-to-End Solution
Recent advances in fast neural network prototyping make it much easier for data scientists to try out different approaches to solving real-life challenges for various businesses.
In this talk, Emir will focus on a working solution that he developed using TensorFlow/Keras, which resolves one of the challenges that large companies face: spend management and analytics for unclassified articles.
The solution maps articles to a product category schema, processing them with NLP techniques and matching them via a hierarchical model structure. This can be done on demand or via batch processing.
Once companies are able to match articles to categories, it is much easier to track spend per category and spot increasing procurement costs. This is especially true for large corporations, where orders are made in large volumes and range from paper and printers to laboratory equipment and specialized parts.
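The hierarchical matching idea can be sketched in miniature. The toy taxonomy, keyword lexicon, and scoring below are invented stand-ins for the real schema and the TensorFlow/Keras pipeline the talk describes; they only illustrate the two-level structure (top-level category first, then a leaf within it).

```python
# Toy two-level taxonomy: top-level category -> subcategory -> keywords.
TAXONOMY = {
    "Office Supplies": {"Paper": {"paper", "a4", "ream"},
                        "Printers": {"printer", "toner", "inkjet"}},
    "Lab Equipment":   {"Glassware": {"beaker", "flask", "pipette"},
                        "Instruments": {"centrifuge", "microscope"}},
}

def classify(article):
    """Score each leaf by keyword overlap with the article description
    and return the (top-level, subcategory) pair of the best match."""
    tokens = set(article.lower().split())
    best = ("Unclassified", "Unclassified", 0)
    for top, subs in TAXONOMY.items():
        for sub, keywords in subs.items():
            score = len(tokens & keywords)
            if score > best[2]:
                best = (top, sub, score)
    return best[:2]

print(classify("A4 copy paper 500 ream"))
```

A learned model would replace the keyword overlap with embeddings and per-level classifiers, but the on-demand vs. batch trade-off mentioned above applies the same way.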
What we need to do to continue advancing Deep Learning
Machine learning algorithms beat top humans at complex games like Go, Poker, and Dota 2, but to do so they need many times more gameplay experience than a human could ever get. AI speaks our languages, but to do so it needs to read gigantic amounts of text. And so on. Recent progress was made possible by a 300,000x increase in AI compute over six years. This can't continue; even Google hits a limit at some point. How can we continue AI research without ever-increasing compute and data?
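A quick back-of-the-envelope calculation shows why a 300,000x increase over six years cannot continue: it implies a doubling time of only about four months, far faster than Moore's law.

```python
import math

growth = 300_000          # total increase in AI training compute
years = 6

doublings = math.log2(growth)             # how many doublings 300,000x implies
doubling_days = years * 365 / doublings   # days per doubling at that pace

print(f"{doublings:.1f} doublings -> one every {doubling_days:.0f} days")
```

Sustaining that pace for another six years would require another 300,000x, which is exactly the wall the talk is about.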
Achieving financial freedom using cryptocurrencies (2019 edition)
Real digital privacy starts with protecting your financial transactions: leaving no traces, making it impossible to see or interfere with your voluntary economic interactions. With the rise of anonymous cryptocurrencies, for the first time in human history we can do global business and stay anonymous. Liberate yourself.
The Future is already here – Open Sesame, Alibaba!
Alibaba is the heart of a global digital economy that touches people's lives every day. Whether it is cashless supermarkets blurring the lines between online and offline experiences, the world's biggest e-shopping festival generating more than 25 billion USD in revenue on a single day, or smart cities and smart factories: this is not a concept, it is already a reality, and much of it is powered by Alibaba Cloud, the world's fastest-growing public cloud provider. Let's take a glimpse into the Alibaba Cloud cave. Open Sesame!
Sphynx: PUBG's Data Platform Powered by Apache Spark on Kubernetes
Kubernetes has become one of the dominant platforms for container-based infrastructure. Many projects now support Kubernetes as a first-class citizen, and Apache Spark, the analytics engine for large-scale data processing, is one of them: since version 2.3, Spark can run on clusters managed by Kubernetes. PUBG Corporation, which serves an online video game to tens of millions of users, decided to migrate its on-demand data analytics platform to Spark on Kubernetes. In this talk, Jihwan Chun and Gyutak Kim will describe the challenges and solutions encountered while building a brand-new data platform powered by Spark on Kubernetes. Sphynx, the project discussed in the talk, is a platform for managing on-demand Spark clusters and connected Jupyter Notebooks as containerized applications on Kubernetes.
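For readers unfamiliar with the Spark 2.3+ Kubernetes mode: a job is launched by pointing `spark-submit` at the Kubernetes API server with a `k8s://` master URL. The invocation below is a generic illustration (the cluster URL, image, namespace, and job are placeholders, not Sphynx's actual configuration), assembled as a Python list so it can be inspected or passed to a process runner.

```python
# Illustrative spark-submit invocation for Spark >= 2.3 on Kubernetes.
# All concrete values here are placeholders for the example.
spark_submit = [
    "spark-submit",
    "--master", "k8s://https://kube-apiserver.example.com:6443",
    "--deploy-mode", "cluster",
    "--name", "sphynx-example-job",
    "--conf", "spark.executor.instances=5",
    "--conf", "spark.kubernetes.container.image=example/spark:2.4.0",
    "--conf", "spark.kubernetes.namespace=analytics",
    "--class", "org.apache.spark.examples.SparkPi",
    "local:///opt/spark/examples/jars/spark-examples.jar",
]
command = " ".join(spark_submit)
# subprocess.run(spark_submit) would submit the job; the driver and
# executors are then scheduled by Kubernetes as ordinary pods.
```

Because drivers and executors are just pods, an on-demand platform can create and tear down whole Spark clusters per user or per notebook, which is the model the talk explores.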
Ocean Protocol - Empowering a New AI Ecosystem with an Open-Source Blockchain Protocol
The open-source and decentralized Ocean Protocol project has evolved from a proof of concept into a production deployment. The simplest use case is unlocking data assets to empower artificial intelligence: more data is better! This talk will go beyond data and explore the future of decentralized AI. Ocean Protocol supports advanced access control and orchestration, built on our smart contracts and Service Execution Agreements (SEAs). These mechanisms enable more complex use cases, such as aggregating data assets (the whole can be greater than the sum of the parts!) and executing a complete data science workflow. Ocean Protocol empowers data scientists with an open and cryptographically secure platform for building applications such as compute over private data and federated learning, to unlock the true potential of AI.
Real-world blockchain apps: challenges and solutions
With the first blockchain and Distributed Ledger applications going live this year, Cees will explain the technical challenges the ING blockchain team faced and how they resolved them. The talk includes code examples and a live demo using Corda Distributed Ledger Technology (DLT) and Kotlin. Specifically, the audience will learn:
- How Corda works, how it is both similar to and different from blockchain technology, and why we like it so much.
- How we do Continuous Deployment of smart contracts (which by definition are immutable and are operated by multiple organizations).
- How we keep confidential information secret in a distributed ledger (which needs to share information with many organizations in order to allow decentralized validation).
- How we meet our performance goals (using a technology that needs to reach consensus on a globally deployed network)
- How we reliably integrate Blockchain applications with our traditional IT systems.
Attacking Machine Learning Methods Used for Detection of Cyber Attacks
The number of users of connected devices and the complexity of communication networks are increasing. This raises the interest of attackers, since there is more information to gain access to. At the same time, it makes network traffic analysis for the detection and prevention of cyber attacks inherently complex.
Many machine learning methods have been proposed to enable effective detection of cyber attacks. The main reasons are that they scale well with the increased amount of information, they are fast, and they offer the opportunity to protect network users not just from known threats but also from unknown variants of previous attacks (a big advantage over previously used signature-based detectors). However, recent research on the security of machine learning shows that various inherent properties of machine learning methods allow attackers to bypass these methods once deployed in practice.
In this talk, I will first discuss machine learning methods suggested for the detection of cyber attacks and examine their detection performance. Then, I will show how to perform a few recently proposed attacks on these machine learning methods and how such attacks can degrade the previously observed detection performance. Finally, I will outline current open problems in the security of machine learning and the detection of cyber attacks.
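To make the evasion idea concrete, here is a minimal gradient-sign (FGSM-style) attack on a linear detector in plain Python. The detector, its weights, and the feature values are toy stand-ins invented for this sketch, not the methods covered in the talk: the attacker nudges each feature slightly against the model's gradient until the malicious sample slips under the decision boundary.

```python
# Toy linear "detector": flags traffic as malicious when w . x + b > 0.
w = [2.0, -1.0, 0.5]
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def detect(x):
    return score(x) > 0

x = [1.0, 0.2, 0.4]            # a malicious sample the detector catches
sign = lambda v: 1.0 if v > 0 else -1.0

# FGSM-style evasion: perturb each feature by eps against the gradient sign,
# lowering the score while keeping every change small.
eps = 0.3
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(detect(x), detect(x_adv))
```

Against deep models the gradient is computed by backpropagation rather than read off the weights, but the bypass mechanism is the same: many small, targeted feature changes.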
Say hello to the hardware of the future!
Quantum computing is something everybody has heard of, but nobody really knows about. Do we need it?
From lab data problems and AI, I stumbled into optimization and qubits. Interestingly, the methods chemists use daily are tightly bound to quantum computing!
This talk should give the audience a feeling for what quantum computers are, the chemistry and physics behind how they operate, and how they are proposed to enhance AI. It will finish with a live glimpse into IBM Q (a free portal for 'programming' one of the world's first quantum computers).
Testing smarter, not harder with AI – a realistic overview of what is possible today
To meet the quality needs and challenges of a fast-paced future driven by ever shorter release cycles and trends like IoT and robotics, our testing approaches need to match up.
Continuous Testing is currently bridging the gap between development and operations, but we are fast approaching a time when Continuous Testing will be unable to keep pace with shrinking delivery cycle times, increasing technical complexity, and accelerating rates of change. AI, imitating intelligent human behavior for machine learning and predictive analytics, can help us to overcome these challenges.
In this talk, we will highlight application areas in which AI can help across the software quality lifecycle, from test design, redundancy prevention, and defect detection up to finding and steering the right controls and providing resilient test automation.
We will discuss the areas in which we are already using basic forms of AI and what is still to come in the field of software quality assurance. We need to continue the testing evolution in order to achieve the efficiency needed for testing robotics, IoT, and similar trends. We will give a realistic perspective of what can and cannot be achieved in the near future, and we will discuss how we can make use of advanced algorithms like automated test portfolio optimization, self-adjusting risk assessment, automated defect diagnosis, and smart environment provisioning. These techniques may still lack the self-learning component needed to be classified as AI, but they can help bridge the gap between the current state of continuous testing and a future state of AI-based testing.
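As a taste of what "automated test portfolio optimization" can mean in its simplest form, the sketch below greedily picks the tests with the best defect-risk-per-minute ratio under a time budget. The test names, risk scores, and runtimes are entirely made up for illustration; real systems would learn the risk scores from change and defect history.

```python
# Toy risk-based test selection: maximize covered risk within a time budget.
tests = [
    ("login_flow",     0.90, 5),   # (name, estimated defect risk, minutes)
    ("checkout",       0.80, 8),
    ("profile_update", 0.20, 6),
    ("search",         0.60, 3),
    ("legacy_report",  0.10, 12),
]

def select(tests, budget):
    """Greedy knapsack: take tests in order of risk-per-minute until the
    time budget is exhausted."""
    chosen, used = [], 0
    for name, risk, minutes in sorted(tests, key=lambda t: t[1] / t[2],
                                      reverse=True):
        if used + minutes <= budget:
            chosen.append(name)
            used += minutes
    return chosen

print(select(tests, budget=15))
```

A greedy ratio heuristic is not optimal for the underlying knapsack problem, but it already demonstrates the shift from "run everything" to "run what buys the most risk reduction per minute".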
Kandinsky Patterns for Visual Concept Learning
Starting with the necessity and relevance of Explainable AI methods, we will introduce the notion of causability and its additional value over explainability.
Kandinsky figures and Kandinsky patterns are mathematically describable, simple, self-contained, and hence controllable test data sets for the development, validation and training of explainability in artificial intelligence. With the help of Kandinsky patterns, we can develop explanations and IQ tests for AI in the medical domain. We will present an already worked-out case in which a neural network implemented in TensorFlow has learned a visual concept. Real-time interaction with a prepared Colab notebook and an explanation of the learned internal representations will be shown during the presentation.
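The "mathematically describable and controllable" property can be illustrated with a symbolic toy generator. The shapes, colors, and the example concept below are simplifications invented for this sketch (real Kandinsky figures also encode size and position): because the ground-truth concept is a known predicate, every generated figure comes with a provably correct label, which is what makes such data sets suitable for validating explanations.

```python
import random

SHAPES = ("circle", "square", "triangle")
COLORS = ("red", "blue", "yellow")

def random_figure(n_objects, seed=None):
    """A Kandinsky-style figure as a list of (shape, color) objects."""
    rng = random.Random(seed)
    return [(rng.choice(SHAPES), rng.choice(COLORS)) for _ in range(n_objects)]

def concept_holds(figure):
    """Example ground-truth concept: 'the figure contains a red circle'."""
    return ("circle", "red") in figure

# Labels are generated by construction, so they are correct by definition.
dataset = [(fig, concept_holds(fig))
           for fig in (random_figure(4, seed=s) for s in range(100))]
positives = sum(label for _, label in dataset)
```

A classifier trained on such data can then be probed: if its explanation does not mention red circles, we know the explanation (or the model) is wrong, independent of accuracy.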
Are you on the Edge? Or still in the Cloud?
This talk is about Edge Computing. As with the transition from mainframes to desktop computers, in the upcoming years a lot of processing will move from the cloud to the edge of the network, i.e. closer to the user. This will particularly affect areas with high data volume (IoT, AI) and low latency requirements (IoT).
In this session, I will give a short introduction to this exciting new area, its benefits, and its use cases. Furthermore, I'll show which frameworks and tools developers can use right now (AWS IoT Greengrass, Microsoft Azure IoT Edge, ...) and where we might be headed. Finally, I'll address the upcoming 5G networks, where edge computing will be a first-class citizen.
How we almost delivered 100 tons of Stracciatella Mousse
It's not a bug, it's a feature!
How often do we read or hear this saying?
At what point in time are you confident to say that it's a feature?
Is it when your service has very good test coverage, a user-focused design, and never acts out of line because it utilizes fancy front-end testing tools and canary releases?
Even with seemingly perfect coverage along all levels of the test pyramid, we wonder why bugs appear.
This talk is about how we almost delivered 100 tons of Stracciatella Mousse. On time!
This was not a bug in production. A store employee confirmed that specific order.
Nevertheless, this behaviour sounded strange enough to investigate.
That's why we stepped back. Took a deep breath and had a look at our metrics.
Can we actually detect anomalies before someone drowns in a pool of tasty Stracciatella Mousse? Join the talk to find out!
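One simple way such an order could have been caught automatically is a robust outlier check on historic order quantities. The numbers below are invented for illustration (the talk's actual metrics and thresholds may differ); the sketch uses the median absolute deviation, which, unlike a plain mean/stdev z-score, cannot be masked by the huge outlier itself.

```python
import statistics

# Made-up historic order quantities in kg; the last entry stands in for
# the infamous 100-ton mousse order.
orders = [120, 95, 150, 110, 130, 105, 140, 100_000]

def mad_outliers(values, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold.
    The modified z-score uses the median and the median absolute
    deviation (MAD), so one extreme value cannot inflate the baseline."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

print(mad_outliers(orders))
```

Notably, a naive 3-sigma rule on the same data would let the 100-ton order through, because the outlier drags the mean and standard deviation up with it; robustness of the baseline is the whole trick.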
Self-Supervised Learning - Towards Autonomously Learning Machines
Supervised machine learning is powerful, but has severe limitations. For example, supervised learning requires extensive labeling of data sets, which is expensive. Also, learned representations are tailored to a very specific task, which often comes at the expense of model robustness. Overcoming these weaknesses, self-supervised learning has gained significant attention in the recent past. Self-supervised learning is inspired by the way infants learn to understand the world around them. Even in the absence of a "teacher" (i.e. labels), infants quickly grasp new concepts and apply them to other domains. They do so by autonomously creating learning tasks (e.g. grabbing a toy) from sensory input of the world around them. Self-supervised learning mimics this behavior. It is a concept centered around automatically creating labeled data sets (learning tasks) from the input. For example, we could automatically and repeatedly remove parts of facial images and train a model to estimate the missing parts (labels) based on the surroundings. We can then use the learnt understanding of the structure of faces for other, purely supervised learning tasks. This typically results in better prediction quality and improved model robustness, as well as reduced training time and a lower number of required labels.
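The core mechanism, turning unlabeled data into (input, label) pairs by hiding part of it, fits in a few lines. The sketch below applies the idea to token sequences rather than facial images (a simplification chosen for this example): each position is masked in turn, and the masked-out token becomes the label, so no human annotation is needed.

```python
def make_pretext_pairs(sequence, mask_token="<MASK>"):
    """Turn one unlabeled sequence into many (input, label) training pairs
    by masking each position in turn; the labels come from the data itself."""
    pairs = []
    for i, target in enumerate(sequence):
        masked = list(sequence)
        masked[i] = mask_token
        pairs.append((masked, target))
    return pairs

sentence = ["infants", "learn", "by", "exploring"]
pairs = make_pretext_pairs(sentence)
# One resulting pair: (["infants", "<MASK>", "by", "exploring"], "learn")
```

Replace tokens with image patches and you get the facial-inpainting example from the abstract; the representations a model learns while solving this pretext task are then reused for downstream supervised tasks.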
Real Data vs. Anonymization - Prerequisites for AI and Machine Learning
Quality assurance for anonymization software is a critical factor for the sustainable use of Artificial Intelligence. For the adequate use of anonymization software, questions must be raised about how anonymization efforts compare and about the risk of over-anonymizing data. BRZ experts will discuss these questions and present anonymization principles and technical validation methodologies.