Over 30 years ago, Guido van Rossum was struggling with one of his projects, a login program written in C. Although he had around 10 to 15 years of experience coding in C, he missed the power of the ABC language he had worked on before: there were still a lot of bugs, and progress was slow. He wanted something new and intuitive, yet powerful and versatile. So he sat down, turned on the TV to one of his favorite Monty Python shows, and started to code… and the rest is history! If you want impactful insights into the world of Python and how it is used today, tune in to our WeAreDevelopers LIVE - Python Day.
Together with BOSCH we invite you to a full day of learning more about the intersection of mobility and code. Get to know more about how modern mobility is defined by an intricate interplay of hardware and software, and how cars are connected not only to the road but also to the cloud.
Coding the Future of Mobility features a variety of talks and a workshop that give you valuable insights into the world of mobility - whether you join in person or online.
Would you like to do a natural language processing project but feel overwhelmed by all the talk of ChatGPT, transformer models and text embeddings? Would you like to understand how you can take a set of raw texts and put them into a form that an AI model will understand?

In this talk, you'll learn some of the theory behind two of the most widely used techniques in natural language processing today: word embeddings and large language models. You'll follow a practical demonstration of how you can use these techniques yourself, in which you'll see how to build a clickbait headline classifier in Python with user-friendly packages like `gensim` and `transformers`.

By the end of this talk, you'll have an understanding of why each technique works, and the advantages and disadvantages of using each of them. Even if you haven't done any machine learning before, you'll gain enough knowledge to go home and start experimenting with your own natural language processing project.
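To give a flavor of what the talk covers, here is a minimal sketch of the idea behind embedding-based classification. The vectors below are hand-made toy values, not real embeddings — in practice you would obtain them from `gensim` or `transformers` — and the headlines and labels are hypothetical examples, not from the talk itself:

```python
# Toy sketch: classify a headline by comparing its embedding to
# labelled examples. Real embeddings would come from gensim or
# transformers; these tiny hand-made vectors just illustrate the idea.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 3-dimensional embeddings for labelled headlines.
training = {
    "You won't believe what happened next": ([0.9, 0.1, 0.2], "clickbait"),
    "Ten tricks doctors don't want you to know": ([0.8, 0.2, 0.1], "clickbait"),
    "Central bank raises interest rates by 0.25%": ([0.1, 0.9, 0.8], "news"),
    "Parliament passes new budget bill": ([0.2, 0.8, 0.9], "news"),
}

def classify(vector):
    """Label a headline vector by its most similar training example."""
    best = max(training.values(), key=lambda vl: cosine(vector, vl[0]))
    return best[1]

print(classify([0.85, 0.15, 0.2]))  # close to the clickbait examples
```

The same nearest-neighbour intuition underlies much of modern NLP: texts whose embeddings point in similar directions tend to mean similar things.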
Dr. Jodie Burchell is the Developer Advocate in Data Science at JetBrains, and was previously a Lead Data Scientist at Verve Group Europe. She completed a PhD in clinical psychology and a postdoc in biostatistics, before leaving academia for a data science career. She has worked for 7 years as a data scientist, developing a range of products including recommendation systems, search engine improvements and audience profiling. She has held a broad range of responsibilities in her career, doing everything from data analytics to maintaining machine learning solutions in production.
There has been a huge rise in the use of graphics processing units (GPUs) for general-purpose computing outside of purely visual applications, be it accelerating scientific research simulations or training artificial intelligence models in an extensive range of fields, from analysing medical images to predicting new proteins. This talk will look at some of the Python-based software, tools and frameworks that are available to help developers and researchers quickly utilise the full abilities of GPU acceleration technology in their work.
At NVIDIA, Paul is responsible for supporting customers and partners in delivering accelerated solutions for the Higher Education, High Performance Computing and AI communities in the UK. As well as providing advice on making the best use of NVIDIA hardware, Paul teaches how to program for GPUs, mentors at hackathons and regularly engages with research software engineering groups. Paul is an advocate for using accelerated computing in HPC, and for the use of AI as a powerful tool for researchers.
Let's demystify Gen AI and see how we can apply it to fun, approachable solutions, with a magical twist. We'll first explore how vector embeddings and LLMs work, before we set off to build our search solution (with a live demo).
Using the Elastic Python clients, we first create indexes for Harry Potter characters and film subtitles. We can import compatible third-party LLMs through an enrichment pipeline, allowing us to add sentiment analysis and embeddings to our text. We'll build a semantic search engine that can browse the books better than the ultimate fan.
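The core of such a semantic search engine can be sketched without any infrastructure: rank documents by the similarity between a query embedding and each document's embedding. In the talk this happens inside Elasticsearch with real LLM-generated vectors; the passages and vectors below are hand-made stand-ins for illustration only:

```python
# Minimal sketch of semantic search: rank documents by cosine
# similarity to a query vector. In a real deployment the vectors
# come from an LLM via an Elasticsearch ingest pipeline.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Hypothetical embeddings for a few passages.
documents = {
    "Harry catches the Golden Snitch": [0.9, 0.2, 0.1],
    "Hermione brews Polyjuice Potion": [0.2, 0.9, 0.3],
    "Hagrid tends to a dragon egg":    [0.1, 0.3, 0.9],
}

def search(query_vector, top_k=1):
    """Return the top_k document titles most similar to the query."""
    ranked = sorted(documents.items(),
                    key=lambda item: cosine(query_vector, item[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

print(search([0.15, 0.85, 0.25]))  # nearest to the potion passage
```

A vector database like Elasticsearch does exactly this ranking at scale, with approximate nearest-neighbour indexes instead of a full scan.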
After working on many sides of tech (as a cloud architect at MSFT, and as a sales engineer and PMM at Dataiku), I've finally found my perfect match as a developer advocate at Elastic - focusing on creating content for, and learning from, the tech community. I specialise in data science & AI solutions and love creating videos, blogs, demos and sessions to get people excited about the possibilities of tech. I love to talk about anything NLP, cloud, data science 101, MLOps, generative AI, and more.
The advent of real-time data processing has redefined how businesses operate, but the journey to real-time has its challenges. Many organizations find themselves juggling separate workflows and teams for batch and streaming data processing, each presenting unique difficulties. Batch processing, while effective for large datasets, can be costly, slow, and not well-suited for API integration. Streaming, on the other hand, despite its speed and low latency, often has restricted functionality. This talk explores Pathway, an open framework in Python for real-time data processing that provides a unified platform for batch and streaming. You will see with examples how simple it is to make your batch code run in streaming, and learn how to build a data processing pipeline that ingests data from various sources, then processes, analyzes, and sends it to output streams in real time.
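The "one pipeline for batch and streaming" idea can be illustrated, very loosely, with nothing but the standard library: define the transformation once as a function over an iterable, then feed it either a finite batch or records arriving one at a time. This is not Pathway's API, just a sketch of the underlying concept:

```python
# Loose illustration of unified batch/stream processing: the same
# pipeline definition is consumed from a finite batch or from an
# incremental stream. (Not Pathway's API -- a stdlib analogy only.)
from typing import Iterable, Iterator

def pipeline(records: Iterable[dict]) -> Iterator[dict]:
    """Filter and transform records one at a time, regardless of source."""
    for record in records:
        if record["value"] >= 0:                 # drop invalid readings
            yield {**record, "value": record["value"] * 2}

# Batch mode: the whole dataset is available up front.
batch = [{"id": 1, "value": 3}, {"id": 2, "value": -1}, {"id": 3, "value": 5}]
print(list(pipeline(batch)))

# Streaming mode: the same pipeline over records as they arrive.
def sensor_stream():
    for value in [4, 7]:                          # stand-in for a live source
        yield {"id": value, "value": value}

for out in pipeline(sensor_stream()):
    print(out)
```

Frameworks like Pathway take this much further — incremental recomputation, connectors, and consistency guarantees — but the appeal is the same: you write the logic once, not twice.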
Who I am
Bobur is a developer advocate and speaker specializing in software and data engineering. With over 10 years of experience in IT, he blogs about open-source technologies and the community around them.
What I do
Bobur works with companies at different scales to build awareness, drive adoption, and engage with the community around developer-targeted products. He also creates inspiring content and builds use cases, projects, and demo apps that help people learn these products.