My partner is a freelance photographer. Over the years she has gathered a ~5 terabyte collection of photos from over a hundred different occasions. All the photos have to be backed up in at least two physical locations and organized for browsing. This talk is about how I built the solution for her in a serverless way, costing less than 10€ per month. We will walk through my solution, starting from the local network and a Raspberry Pi up to AWS S3 and Rekognition.
The session will start with a brief introduction about myself. Then I will talk about what it was like to be a web developer when social media was in its early stages and how that affected my day-to-day development tasks, including my work on software development in the adult industry. Next, I will go through how the monetization of websites changes the architecture and design of your web system, especially in high-risk industries. Finally, I will describe a coding problem my team at Shopify faced and how we resolved it. The problem relates to money objects: developers will come away understanding how rounding and precision affect money transactions, and why it is important to identify where rounding occurs within money formulas to minimize costs for all parties involved. There are lots of calculations involved, so this coding problem will take up the majority of the session.
_Cryptocurrency & Blockchain Author, Evangelist & Consultant
The talk provides comprehensive insight into the emergence and evolution of cryptocurrencies, starting from early concepts such as Digicash, Bitgold, B-money, and Hashcash in the ’90s up to the latest cutting-edge tech trends in cryptocurrencies. The talk further aims to outline future trends in the cryptocurrency space and their impact on our society.
Dr. Halil-Cem Gürsoy
_Senior Solution Architect
In many projects, the decision to use Kubernetes is made without a detailed requirements analysis, often at higher management levels, because it is the latest craze, talked about at every IT business event and developer conference. Though Kubernetes has obviously won the “orchestrator wars”, there are still alternatives that would sometimes have been a better match. If critical project or organization members point this out, it is not uncommon for them to be confronted with behavior resembling Stockholm Syndrome, named after the condition in which hostages develop a psychological bond with their captors. This is a talk about how Kubernetes betrays the DevOps community, deceives the DevOps culture and why it is not possible to talk about this in an open way. I would like to shed light on the psychology of the hype that has evolved around Kubernetes and the impact of such a development on the DevOps culture.
_Full Stack Designer
At xHUB, we have a comprehensive design system built in Figma. A little while ago, I decided to have a look at how we could use the Figma API to improve and maybe automate our workflow around this. The problem I instantly ran into is that there are no resources out there demonstrating how to use it, especially at a lower technical level. So I thought I would talk about how to get up and running and using the API.
_AI & NLP Software Developer
Artificial Intelligence (AI) is getting more and more involved in our daily lives, often without us even noticing or being aware of its presence. It may come hidden in intelligent smartphone cameras, it may influence us while shopping or it may even decide whether we get hired by a company or not.
In order for an AI to function as one, it has to be trained on data before it can act intelligently. It may behave in odd, unexpected ways, but one has to remember: maybe the situation is completely new and it simply has not learned the correct behavior yet. Humans constantly learn and adapt over their lives; an AI, however, is trained only on a very limited set of data and may thus develop a completely different and biased view. This might be fine or even desired for encapsulated industry use cases where the AI analyzes and adjusts machine behavior. But when it comes to the public sector, or when decisions have a direct impact on humans, there is a high chance that bias learned from past data leads to the unintended discrimination of certain groups of people.
What causes such bias, what influences it, and how can we attempt to reduce it? Based on real-world examples, this talk sheds some light on bias in AI and how to overcome it when developing AI solutions.
_Senior Tech Lead
_Senior Full-Stack Developer
In the last few years, Confluent has been pitching Kafka Streams not only for Big Data and analytics but also for data-centric, event-driven enterprise application architectures. However, there are very few publicly known examples of systems that fully leverage Kafka as both a messaging system and a primary source of truth for business-critical data, as Confluent suggests.
What does such a system conceptually look like and how does it behave? This talk provides some answers. Can it be done – and should it be? We claim that yes, it can be done, but should be done only when an organization is committed to the use of Kafka as part of a wider strategy.
We describe a microservice-based platform built fully on Kafka Streams that we designed and built for our client – a global German brand –, and on top of which we built a CRUD-centric B2B inventory application. First, we take the audience through the evolution of the design during the concept phase, explaining how we went from the client problem to the solution. We explain our decision to make Kafka our system’s single source of truth and its interesting consequences. Via several examples, the talk walks step-by-step through patterns we applied using Streams, such as Event Sourcing to persist history, Command and Query Responsibility Segregation (CQRS) via command streams and materialized views and integration with secondary data stores for full-text search and other use cases. There are musings about data normalization vs. denormalization in Kafka Streams, and what it could mean for GDPR considerations. Furthermore, we use examples from our work to propose how traditional APIs could be replaced with published data streams and their opportunistic consumption across an organization. Expect a lot of diagrams.
As with any journey worth taking, there are also some painful lessons we learned along the way. For example, we dive a bit deeper into eventual consistency problems we faced with Kafka Streams and how we solved them by introducing state into our stream processors. There are also lessons learned from our initial experiences of running Kafka Streams on top of Kubernetes in production. There is a steep learning curve and a fundamental switch required in the mindset of development teams from an object-oriented way of approaching problems to a functional, data-centric one. This is why a system based on Kafka Streams should only be considered when an organization is committed to the long-term use of Kafka and to an investment in the build-up of the required expertise.
Many people have played around with neural networks and trained models using MNIST and similar toy datasets. The internet is full of tutorials on how to train such models and it is quite easy to get started. In the real world, however, we typically have more data than we can fit in memory, the data set may be constantly changing, and it typically contains noise. There is therefore a huge gap between being able to train models in playground environments and in the real world.
In this talk, I will explain how to train deep learning models in a realistic environment. I use self-driving cars as an example, but the methods can be applied to other applications too. I will present a deep learning pipeline that starts by converting raw captured data into TensorFlow records, trains different network architectures, and tracks their performance by harnessing Valohai and GPU instances in AWS.
In this talk, participants will learn what it takes to move from Jupyter Notebook playgrounds into the real world with real-world problems. They will learn about TensorFlow records, experiment tracking with Valohai, and model evaluation and inspection.
“Containers are the new ZIP format to distribute software” is a fitting description of today’s development world. However, it is not always that easy and this talk highlights the development of Elastic’s container strategy over time:
- Docker images: A new distribution model.
- Docker Compose: Local demos and a little more.
- Helm Chart: Going from demo to production with maximum flexibility.
- Kubernetes Operator: Full control with upgrades, scaling, and operational best practices.
Besides the strategy, we will also discuss specific technical details and hurdles that appeared during development, and why the future will (for now) be a combination of Helm Chart and Operator.
Have you ever asked yourself how you could best serve your customers’ needs? Should you offer an app? A responsive website? Should you do cross-platform development, a progressive web app or a native app? There are several patterns and ways to create mobile applications. If you want an app on your smartphone, which frameworks or approaches are worth looking at, where should you start, or is your responsive website already enough? Answers will be given in this talk!
In this talk we explore different ways of writing apps and provide an overview of state-of-the-art frameworks and techniques. I will share my experience as a former Android developer and do live coding with Google’s cross-platform framework Flutter on a small quiz app, which you can also try out at our booth.
_Requirements Engineer & Scrum Master
I was so discontent with our waterfall software projects. I knew Agile was the right thing. These values just fit my beliefs too well. Agile Manifesto? Totally agreed! I had asked my dev friends, had been to conferences and had read the Scrum Guide over and over. But still… somewhere deep down a feeling of insecurity rose, knowing that I did not truly understand.
I would have had the power to change the way we worked. But I had never seen Agile in practice and I did not know how to start or where to go. I also did not know that much about Open Source. But as we worked with Drupal I somehow stumbled into this amazing community. Now that a few years have passed I can see clearly why I have been so enthusiastic about both Open Source and Agile. The Agile Manifesto’s first sentence says it all: “We are uncovering better ways of developing software by doing it and helping others do it”.
In my talk, I will share my insights and lessons learned: how I failed to implement an agile culture, and how my Open Source experiences are helping me understand why I failed and inspire me on what to do differently today. It’s for people who want to understand and reflect on the agile mindset, for everybody interested in Open Source, and for anybody wanting to learn more about (non-code) contribution and how to create a diverse and empowering community. No previous knowledge required.
What you will take away:
- Another viewpoint on agile values and principles
- Insights into open source and (non-code) contribution
- Where Agile overlaps with the open source way and where it differs
- Hands-on examples and what we can learn from each other
_Chief Technology Officer
Data is the most important thing in the world today. The number of data technologies has exploded in the past decade. There are relational databases, non-relational databases, various analytical tools, numerous data warehousing solutions, data lakes, and various stream analytics technologies. SQL technologies have been with us for many years and we can openly say that they are among the most popular software products in the world. NoSQL databases are not yet well-known at this level, although this is changing practically day by day. The purpose of this lecture is to go through the various types of NoSQL technology for storing operational data and their practical applications. Unlike SQL databases, which are intended (or unintended) for every business problem alike, each type of NoSQL database has its own specific niche and purpose. We will go through key/value stores, which work on the principle of storing values indexed by key(s); column-family stores, which are quite similar but with a bit more structure; document stores, the so-called JSON document databases; and graph databases. This lecture will also touch on the topics of partitioning (sharding) and replication, as well as provide practical tips on when to use (and more importantly when not to use) a NoSQL database.
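To make the first category concrete, here is a toy in-memory sketch of the key/value idea. This is illustrative only, not code from the lecture; real stores such as Redis or DynamoDB add persistence, replication and sharding on top of the same access pattern.

```typescript
// Toy key/value store: values are opaque blobs indexed only by their key.
class KVStore {
  private data = new Map<string, string>();

  put(key: string, value: string): void {
    this.data.set(key, value);
  }

  // Lookup is by exact key only, which is what makes these stores fast.
  get(key: string): string | undefined {
    return this.data.get(key);
  }
}

const kv = new KVStore();
kv.put("user:42", JSON.stringify({ name: "Ada" }));
console.log(kv.get("user:42")); // {"name":"Ada"}
// The trade-off: you can only ask "what is stored at this key?" —
// a query like "all users named Ada" needs extra indexes or another store.
```

The niche follows directly from this shape: perfect for caches and session data addressed by a known key, a poor fit when you need ad-hoc queries over the values.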
So, you have established your product-driven tech organization, including cross-functional product-driven teams that have end-to-end responsibilities… but the business still complains about features that take too long to develop, and tech debt keeps rising…
In this talk, we will share the essential ingredients of how developers can use a product-driven tech organization to finally reduce tech debt and drive true DevOps across the organization. We will use three common issues to give actionable examples:
1. Upgrade Infrastructure:
Very often, legacy infrastructure is the main impediment to implementing DevOps. At the same time, upgrading the infrastructure is both a cross-team effort and not at the top of anybody’s priority list. We will share effective ways to focus on infrastructure upgrade projects (like cloud migration, database migrations, and consolidation of logging infrastructure) and get the whole organization aligned on this topic.
2. Consolidate Frameworks:
Everybody understands that cross-team collaboration would be much easier if teams used fewer frameworks. However, migrating from one framework to another is a huge undertaking, both in rewriting the software and in upskilling the developers on the new framework. How do you prioritize a migration that at first glance does not provide any value to the organization? And when is a good time for it?
3. Housekeeping tasks:
Product-driven teams prioritize tickets based on their business value. Often this means that short housekeeping tasks without an immediate benefit are postponed and keep piling up. An example of such tasks is misclassified error messages: they pop up frequently, everybody ignores them, but they still make bug fixing harder. These unresolved housekeeping tasks contribute to tech debt. What are effective ways to enable the team to work on them?
We will share the findings from the journey of Aroundhome. The background is our shift from a strongly sales-driven organization towards a product-driven organization within the last twelve months. For example, we moved from Angular to React after just having finished our initial prototype. Additionally, we got active business support to make our migration to the cloud top priority. We conducted an IT-wide training day to teach DevOps basics to everybody and established a noisy error day. All of this while growing the business.
Along this journey, we will share actionable insights about what everybody can do to effectively reduce technical debt. Furthermore, we will give insights into what did not work for us, for example why we still do not use Kafka, although we strongly believe it is a cool technology.
This talk focuses on how developers can actively work on reducing technical debt. It will also give insights for Product Owners on how to better understand the developer’s needs and how to align and prioritize them with business requirements. Finally, it will give CTOs and CPOs tools on their journey towards a world-class tech organization.
In recent years, vulnerabilities in large software projects have become a leading cause of many security breaches, such as data leaks and DoS attacks. Fuzzing is a powerful testing technique that helps find bugs in software projects effectively. For example, with the help of OSS-Fuzz, over 16,000 bugs have been discovered in Google Chrome and 11,000 bugs in a further 160 open-source projects.
Haven’t you applied fuzzing yet? You are not alone. While several open-source solutions for modern fuzzing are available (e.g., AFL or libFuzzer), fuzzing has not yet established itself in software testing. One of the main reasons is the difficulty of integrating it into the project environment. Modern fuzzing tools like our solution CI Fuzz reduce the complexity of fuzzing, making it more usable. This allows development teams to confidently push new releases, continuously tested with fuzzing, to their users faster than ever.
In this talk, we present an overview of fuzzing and its origin, the recent advances in fuzzing, and its current state of the art. We discuss why modern fuzzing is the future of software testing and will have an enormous influence on code quality. Every company can benefit from this technology as soon as it is easier to use.
You will learn about state-of-the-art fuzzing methods, including:
- The importance of software testing
- Evolution of fuzzing and modern fuzzing techniques
- How to build a continuous fuzzing framework
Since Pac-Man was originally released in the ’80s, it has been a beacon of fun and joy for people of all ages. What few people know is that this game can also be used to inspire developers on how to build event streaming applications. In this near-zero-slides talk, attendees will see the game deployed to AWS straight from the GitHub repository, and once it is deployed they will play the game to generate events. As they play, the presenter will write a scoreboard from scratch using ksqlDB – an event streaming technology built on top of Kafka Streams. Finally, a serverless API will be used to monitor in near real time the aggregated statistics computed from the game, revealing who is the most proficient Pac-Man player in the room.
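The scoreboard built in ksqlDB is, in spirit, a per-player aggregation over the event stream. A plain TypeScript stand-in (not ksqlDB, and with invented event names) might look like this, where the grouping mirrors what ksqlDB expresses as `GROUP BY player`:

```typescript
// Each game event carries a player name and the points scored.
interface GameEvent {
  player: string;
  points: number;
}

// Materialize a scoreboard by folding the event stream into per-player totals.
function scoreboard(events: GameEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    // Accumulate into the player's running total, starting from 0.
    totals.set(e.player, (totals.get(e.player) ?? 0) + e.points);
  }
  return totals;
}

const events: GameEvent[] = [
  { player: "blinky", points: 10 },
  { player: "pinky", points: 50 },
  { player: "blinky", points: 30 },
];
const totals = scoreboard(events); // blinky → 40, pinky → 50
```

The difference in the talk’s setup is that ksqlDB maintains this materialized view continuously as events arrive, rather than over a finished array.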
_AngularJS Developer, AVP
/Barclays Investment Bank
Mercedes-Benz /developers is the developer portal to access car data APIs and SDKs. We aim to enrich the business models of start-ups to improve customer experience of our car owners and drivers. We empower developers to accelerate their business and offer everything they need:
- Valuable credits
- Comprehensive training
- Exclusive support
- Promotive spotlight
_Google Developer Expert, Speaker, Dev & Trainer
Ever heard about ephemeral state? Redux and other state management tools did a great job of managing global state in SPAs. However, nobody talks about the complexity and pain of managing local component state.
There is this quote from the “Gang of Four”: “If you stick to the paradigms of OOP, the design patterns appear naturally”. This will be the fundamental motivation for this research. As a result of my studies, you will get an overview of terms and ways to categorize state. You will get to know the tricky problems and challenges and learn how to craft component state reactively.
_Software Development Analyst
Data analysis is on everyone’s lips when it comes to gaining new insights from business data. The question is: Why don’t we, as software developers, use data analysis to analyze our data from our software systems?
In this session, I will present approaches and best practices to mine software data based on the many ideas from the Data Science field. We’ll also look at the standard tools used in this area to analyze and communicate software development problems in an understandable way. With tools such as computational notebooks, data analysis frameworks, visualization and machine learning libraries, we make hidden problems visible in a data-driven way.
Attendees will learn how to leverage scientific thinking, manage the analysis process and apply literate statistical programming to analyze software data in a comprehensible way. The main part will be hands-on live-coding with Open Source tools like Jupyter notebook, Python, pandas, jQAssistant and Neo4j, where I will show which new insights we can gain from data sources such as Git repositories, performance measurements or directly from source code.
_Head of Data
No matter where you look, technical and non-technical companies alike are trying to figure out whether using Machine Learning could help their products and/or operations. Potentially with transformative impact.
That was exactly the reason why about a year ago I joined Slido, a company that builds an audience interaction platform that allows you to run Q&A sessions and do things like live polling. My long-term goal, as well as the short-term mission, was to see whether many of the new and old Machine Learning (ML) and in particular Natural Language Processing (NLP) tools and techniques could be applied in this interesting “sub-field”.
To paraphrase great military leaders, however, my plan did not survive first contact with reality. The data infrastructure I found was either in the first stages of its infancy or virtually non-existent. Recognizing this as both a significant obstacle and an opportunity to build a modern data infrastructure from the ground up, I put my original goals on hold and have spent the better part of my tenure at Slido working on this “greenfield project”.
As a result, I would be happy to share with you what it took us to get from zero (essentially no data infrastructure whatsoever) to one (running ML models in production) in about a year. Throughout this story we’ll see how none of this would have been possible without extensive use of cloud technologies (mostly AWS in our case), various well-established open-source tools (Apache Airflow, Apache Superset, Pandas, m2cgen, sqlfluff, …) as well as tools we built internally and open-sourced (such as sqvid), good engineering practices applied to the data context, and above all a (small) team of very dedicated people.
And as a bonus, I would also like to share a few lessons learned from our experience, so that the community does not need to live through them again.
_Head of Marketing & Business Development Northern Europe
I will talk about why knowledge transfer is an issue in the digital age and about Quora’s approach to sharing knowledge via its real-name, be-nice-be-respectful policy and the merging of questions, in order to create a centralized go-to page for each question of humankind. I will also talk about the mission of Quora: to share and increase knowledge worldwide.
_(Frontend) dev loving UX
“Let’s add an icon.” But… where do icons come from? What about icon fonts and sprites in 2020? Shouldn’t we just use SVGs? And what about consistency? Do we still need icons on the web? We do! This talk covers the strengths (and weaknesses) of icons in digital products and shows modern ways of using icons on the web. It will provide the audience with some ready-to-use techniques and approaches for their daily frontend development/user interface related work.
/Corvid by Wix.com
During the Cold War, the CIA knew how to expose Russian spies disguised as American citizens with 100% certainty. They used only a piece of paper and a few questions. How did they do that? Hacking your mind is easier than you think.
Let’s explore how these mental hacks affect the code we all read and write. We’ll take a stroll through the world of cognitive and social psychology, and shed some light on some of our industry’s best and worst practices.
We will have a few interactive examples of our mind’s limitations, examine how these limitations manifest themselves in real code samples and engineering practices, and take away scientifically backed techniques on how to write better code.
_Consultant, Developer & Architect
I like pizza. I like it a lot. I want to use the sensation of ordering a pizza to demonstrate how easy it can be to develop microservices with contemporary and modern methodology – that being the API First! approach.
In this lightning session, I will show on the projector how easy it is to write a small OpenAPI spec for a tiny microservice. I will use the mighty OpenAPI Code Generator to generate initial projects from this spec file for a couple of programming languages. The generator automatically does a lot of work for you, taking project setup and the writing of boilerplate code off your shoulders. This demonstration will show that developing the “API First!” way can save you a lot of time and forces you to think of a clean and understandable API before you implement it, enabling you to design awesome APIs to be used by consuming applications.
The developers who are going to use your API to build their own applications will thank you for that. You should bring a basic understanding of microservice development. Furthermore, it will help if you can read YAML, as I will write it live on the projector.
IoT projects are getting more and more concrete and Industry 4.0 is actually here. But what does the connected factory look like in detail? How can you connect industrial machines to your software? And what, especially, does the industrial architecture look like? What protocols are there? This session aims to answer these questions and also dives into real-life use cases on how we at Cisco solved these problems with our customers.
Basically, it sounds very simple to just connect devices and gain insights and data from them, for a dashboard for example – but in practice it is not, especially in the industrial world, where the architecture and protocols such as PROFINET, Modbus etc. differ from the IT world. An overview of the industrial architecture and protocols is therefore definitely helpful.
Once the hardware is connected, the data still needs to be normalized and converged before it can be sent to a database, for example. This is exactly the crucial point this session will emphasize. What open and closed source tools are currently used in the industry, and where is the trend going? Real-life use cases will be shared on how data from industrial protocols can be read and insights gained from these assets. To make it more tangible, there will also be a live demo.
_Founder, Lead Front-end Developer & SCRUM Master @ OnePointFour Consulting
/DIV:A Initiative, LLC
Our daily lives are filled with experiences that relate to a lot of programming principles, and for me, African games come to the fore. A lot of the games that I grew up playing back home have been passed down from generation to generation. These games have made it easier for me to teach some of my students using something they would otherwise find extremely difficult to understand. For example, a game we call “nhodo” (Shona) easily demonstrates how a while loop works, and “tsoro” illustrates how an if-statement works.
Using these teaching techniques, i.e. something children can relate to, helps them understand programming fundamentals much more easily and quickly; above all, they learn through play.
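As a hedged illustration only (the real rules of nhodo are richer than this sketch), the game’s “keep playing until you miss or the stones run out” structure maps naturally onto a while loop:

```typescript
// Hypothetical, simplified round of a pick-up-stones game:
// keep taking turns while stones remain and the player keeps catching.
function playRound(stones: number, catches: boolean[]): number {
  let taken = 0;
  let turn = 0;
  // The loop condition is the game's "still in" rule.
  while (stones > 0 && catches[turn]) {
    stones -= 1; // pick up a stone
    taken += 1;
    turn += 1; // next throw
  }
  return taken; // stones collected before the first miss
}

console.log(playRound(5, [true, true, false])); // 2
```

A child who knows the game already understands the loop: you repeat the same action, and a condition decides when your turn ends.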
_Microsoft MVP, Senior Consultant
Svelte is a radical new approach to building user interfaces. Whereas frameworks like React & Vue do the bulk of their work in the browser, Svelte shifts that work into a compile step that happens when you build your app. And rather than applying techniques like virtual DOM diffing, Svelte writes code that surgically updates the DOM when the state of your app changes.
How good are your tests? Would they still pass if the tested code was changed? If so, there may be problems with your code, your tests, or both! Mutation Testing helps reveal these cases. It makes lots of slightly altered versions, called “mutants”, of each of your functions, and runs each function’s unit tests, using each of its mutants instead. If a mutant makes any test fail, that mutant “dies”. “Survivors” imply flaws: your code might not be meaningful enough that a slight mutation would change the behavior, your tests might not be strict enough to catch the difference that the mutation made or both!
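A hand-rolled miniature of the idea (real mutation testing tools such as Stryker generate and run mutants automatically; all names below are invented for illustration):

```typescript
// Function under test.
function isAdult(age: number): boolean {
  return age >= 18;
}

// A "mutant": the >= operator has been flipped to >.
function isAdultMutant(age: number): boolean {
  return age > 18;
}

// A weak test suite never checks the boundary value 18,
// so the mutant passes every case and "survives".
const weakCases: Array<[number, boolean]> = [[30, true], [5, false]];
const weakSurvives = weakCases.every(([age, want]) => isAdultMutant(age) === want);

// A stricter suite that includes the boundary "kills" the mutant:
// at least one case now fails when run against it.
const strictCases: Array<[number, boolean]> = [[30, true], [18, true], [5, false]];
const strictKills = strictCases.some(([age, want]) => isAdultMutant(age) !== want);

console.log(weakSurvives, strictKills); // true true
```

The surviving mutant is the signal: it tells you exactly which behavior (the boundary at 18) your tests never pinned down.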
You will come away equipped with a powerful new technique for making sure your tests are strict and your code is meaningful.
I will talk about qubits, how qubits work, and the massive difference in algorithmics on quantum computers.
_Student and Robot Enthusiast
/Technical University Berlin
If you ask people to draw a picture of a robot, the results will reach astonishing uniformity. This consistency stands in striking contrast to the actual diversity of robots. In 2020, a robot can be a vacuum cleaner in our apartments maltreating our furniture, a worm floating through our bloodstreams, or a monstrous machine on the shop floor scaring even the toughest metalworkers.
Since the invention of robots, humankind has come up with bizarre robotic inventions. These creations oftentimes mimic animals and humans and reveal a lot about their fleshy creators. People tend to ascribe human characteristics to robots to save energy and time, which leads to bizarre phenomena.
Some scientifically based fun facts and cool social robots that make the world a better place:
- We have feelings towards our Roombas and see them as part of the family
- Robots in war zones have been decorated
- We refuse to hurt robots, even ones that look like bugs, if someone has told us a sweet story about them beforehand
- We show racial biases towards robots that we racialized as black
- People prefer being replaced by robots rather than by other human workers
- We like robots if they appear human (think Wall-E’s googly eyes) but not if they appear too human (the animated film “The Polar Express” was horrifying)
- MIT researchers created a fluffy “Dragonbot” that helps school children to read
- Fleecy baby seal robot “Paro” comforts patients with dementia and even increases human-human social interactions in nursing homes
- A robot that looks like a grumpy cat can nudge us toward environmentally friendly energy use
End of story: Developers love to save time and effort. But they are not alone. Most humans use mental shortcuts most of the time. People behave as if robots had human characteristics, too, to make it easier to interact with them. Companies and individual engineers can make use of these behaviors and tendencies when creating their products to make them more engaging.
_Senior Solutions Architect
In this workshop, we will teach you hands-on how easy it is to build React applications, so-called Nerdlets, that reside on New Relic and feature customized data visualizations and interactive interfaces. Integrated APIs, the New Relic Query Language (NRQL), and GraphQL define a programming model that gives you access to all your New Relic data. You will get an SDK, CLI, and APIs designed to speed up building and deploying applications on top of New Relic – all with no additional software to run or operate.
The main idea behind this workshop is to take someone from knowing next to nothing about New Relic programmability to being able to develop their own Nerdlet from scratch. The course covers many topics, from the basics of getting a Nerdlet up, running, and configured to custom components and user interaction handling.
Developers love the idea of having safety nets when they work. The feeling that a stable framework, backed by top software companies and supported by community developers, will ensure they cannot go wrong. There is one excellent framework everybody forgets: the web browser.
Using modern web standards, we can add new features/powers into the browser in a snap. Is this too good to be true? Can it be that we are actually at the point where all the shiny component frameworks are disposable? Can we all be freed from the framework fatigue?
This opinionated session will cover the basic ideas of messaging, data binding, component authoring and routing – without dependencies – and compare them with the same features provided by the browser. A DIY approach with real code will be presented and compared with features that simply cannot be provided without external tooling.
The following topics will be covered:
- Observables (Using proxies, getters, and setters)
- Messaging (Publish-Subscribe)
- Dependency Injection (Using native class mixins)
- Runtime environment variables solution (HTML meta-tags injections)
- Components (Web Components)
- Routing (With Web Components)
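As a taste of the dependency-free approach, the first topic on the list can be sketched with a native `Proxy`. This is an illustrative minimal example, not code from the talk, and the names are invented:

```typescript
// Minimal observable store using a native Proxy:
// subscribers are notified whenever a property is set. No framework needed.
type Listener<T> = (key: keyof T, value: T[keyof T]) => void;

function observable<T extends object>(target: T, listeners: Listener<T>[]): T {
  return new Proxy(target, {
    set(obj, key, value) {
      (obj as any)[key] = value; // apply the write
      listeners.forEach((fn) => fn(key as keyof T, value)); // then notify
      return true; // tell the runtime the write succeeded
    },
  });
}

// Usage: every assignment triggers the subscribers.
const log: string[] = [];
const state = observable({ count: 0 }, [(k, v) => log.push(`${String(k)}=${v}`)]);
state.count = 1;
state.count = 2;
// log is now ["count=1", "count=2"]
```

The same trap-based pattern underlies reactive data binding in several frameworks; the point here is that the primitive ships with the browser.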
Conferences can be quite crowded, and it is simply not possible to read all of the abstracts and meet all of the speakers and attendees. But AI can help: recommender systems such as those of Netflix or Amazon are already making our lives so much easier! We are extending this experience to conferences by introducing a RecSys that recommends the most relevant talks and speakers based on your interests.
CUDA is both the programming language and the platform which powers all of NVIDIA’s parallel GPU processors. Even if you have never heard of CUDA, you probably touch it or use it every single day. GPUs run the AI revolution and everything that depends on it: CUDA drives your voice assistants, your social media; it makes movies, plays computer games, drives autonomous cars, identifies music on your phone, translates web pages. It runs on everything from microcontrollers in robots to more than half the supercomputers in the world. It creates your weather forecasts, models climate change, simulates rocket engines, runs the Large Hadron Collider and the Fermi gamma-ray telescope up in orbit.
But what is really interesting about CUDA is that it doesn’t just run “on” the GPU that does all these things – it “defines” the processor that runs it. That is the really unusual thing about CUDA and the GPU: the state of the art GPU that runs all these things evolves because of the way developers want to program it. Instead of developers building on hardware, the hardware is built for us developers.
So in this talk, I will show you what the language looks like and how the whole stack is connected down to the hardware that runs it. I will tell you what we’ve been building recently for CUDA, and give you a view of the directions we’re heading in. It’s the direction that state of the art AI, supercomputing, graphics and cloud systems will be heading in because, even if you didn’t know it, CUDA sits inside all of these things.