- You have 2+ years of experience in software engineering.
- You have a firm knowledge of Linux-based systems (or similar), ideally in a server/headless environment, wielding your shell as a weapon that strikes fear into GUI users.
- You have some knowledge of computer networking.
- You have experience with Python, including knowing what it's great at and when it's time to search for a different tool.
Preferred Requirements (extra)
- Containers: You have some experience with Docker and containers, understand their purpose and how they can play a role in a modern infrastructure.
- Cloud: You understand the key concepts behind cloud platforms such as Amazon Web Services or Azure.
- Go: You have some experience with Go, or alternatively with Object-Oriented Programming (OOP) methodologies.
- (Bonus) Networking: Your networking knowledge goes beyond the basics, covering routing, VPNs, NAT, and other more advanced concepts.
- (Bonus) Terraform: You have experience using Terraform.
- (Bonus) Configuration Management: You have experience using Ansible, Chef, Puppet, SaltStack, or similar.
- You are curious and won't stop searching until you find the answer.
- You work meticulously. People around you trust your results, and rightly so.
- You're pragmatic; you know when to trade a deep dive for a quick fix.
- You are able to communicate effectively with your peers.
Running a flexible Machine Learning engine at scale is hard. We must ingest and process large volumes of data without interruption and store it in a scalable manner. The data needs to be prepared and served to hundreds of models constantly. All model predictions, as well as the output of other data pipelines, must be stored and reachable so our web application(s) can present the generated insights to our customers.
We work on the system that delivers this functionality and also lets Machine Learning engineers deliver new and improved models with ease, manage existing models, monitor them, and perform many other interactions, all of which are crucial to day-to-day operations.
You will work and interact with a wide array of technologies that constitute Jungle's core systems (data handling/processing, serving ML models, etc.) and build the backend systems that provide access to all this functionality. You will have the opportunity to work on and enhance the different stages of an end-to-end Machine Learning system at scale, with a focus on the first step of the entire pipeline: data ingestion.
Why do we need you?
- You’ll make use of modern open-source technologies in a practical use case to improve usability, performance and robustness of our internal systems.
- You’ll work together with the engineering team to maintain and improve existing systems, and overcome the difficulties that arise from scaling our systems to ever-growing volumes of data.
- You’ll make architectural decisions on how to solve our engineering challenges and keep us future-proof.
- You’ll integrate new systems with our existing data pipeline, with a focus on data ingress.
Jungle develops and applies Artificial Intelligence to increase the uptime and performance of renewable energy sources. Built on existing sensors and data streams, the company’s technology enables solar and wind energy owners to squeeze more out of their assets, accelerating the world’s transition to renewable energy sources.
Why: Operational complexity - such as that of wind turbine performance - has reached a level beyond what our minds can grasp. Our tools enable the world to conquer this operational complexity and give power back to the people who manage it.
How: Through solid Artificial Intelligence and Machine Learning expertise, the company’s technology leverages massive streams of data to understand the normal behaviour of an electro-mechanical asset, picking up on opportunities for better performance and risks of downtime, and continuously informing its users on what to prioritise.
What: Based in Lisbon, the Dutch-Portuguese deeptech company has productised its services into a web application - Canopy - and is continuously improving it to ensure that the best analyses and visualisations help its users get the maximum energy out of their assets. Jungle operates at large scale - billions of data points per day - providing always-on predictive models, alarms and metrics visualisations for some of the largest and most sophisticated customers in the global renewable energy space.
We hire remotely and globally (for candidates who are willing to work in the GMT+1/GMT+2 time zone)!