Full-Stack Data Engineer - Energy Management
Job description
As a Data Engineer, you will play a crucial role in setting up and leading technical decisions for our cloud-based data platform. We are specifically looking for someone who will contribute across cloud infrastructure setup, API server maintenance, and streaming/batch data processing pipeline development. You will be working on an exciting IoT product (smart thermostat, energy insight, smart charging) for our consumers.
- Setting up projects and leading technical decisions involving real-time time-series data in Databricks (Scala) environments.
- Empowering other departments by making data accessible and usable, driving Eneco's digital innovations forward.
- Designing and implementing cloud solutions to handle product requirements.
- Shaping the product by providing technical advice to the product manager and other teams.
- Ensuring our solutions are robust, scalable, cost-efficient, and ready to meet future challenges.
This is where you'll work
You will be working together with other Data Engineers, Machine Learning Engineers, Data Scientists, and Data Analysts. Together, you will shape IoT products that will transform how our consumers use their energy. Within the team, we encourage learning, actively seek out collaboration, celebrate successes, and learn from failures.
Requirements
- Previous experience with REST API development (e.g. Spring or FastAPI).
- Understanding of streaming data ingestion and processing.
- Previous experience working with MPP data platforms such as Spark. Experience with Databricks and Unity Catalog is a plus.
- Proficiency in programming languages (Java, Scala, and Python).
- Knowledge of software engineering best practices: code reviews, version control, testing, and CI/CD.
- Genuine interest in DevOps/SRE principles for production deployment.
Nice to have
- Working experience with high-volume time-series data.
- Knowledge of data modeling and architecture patterns.
- Experience deploying applications to Kubernetes, with skills in monitoring (Grafana) and debugging.
- Knowledge of cloud providers (e.g. AWS). Infrastructure as code (IaC) experience is a plus.
- Experience with NoSQL databases (e.g. DynamoDB) and RDBMS (e.g. Postgres).
- Proficiency in SQL and dbt (data build tool) with Snowflake.
- Familiarity with or interest in MLOps and data science techniques.