(Senior) Data Engineer
Role details
Job description
As a Data Engineer at GALVANY, you'll architect and maintain the data infrastructure that powers our AI-based Operating System. You'll process real-time energy data from heat pumps and integrated systems, build pipelines that fuel AI models and analytics, and ensure data quality across our entire ecosystem - from individual homes to our Virtual Power Plant.
- Design and build scalable data pipelines using Kafka, Benthos, Clickpipes and related tools.
- Ensure data quality, consistency, and freshness across the energy ecosystem and internal processes.
- Collaborate with ML engineers to deliver the right data in the right format for AI models.
- Build monitoring and validation into pipelines from the start.
- Translate business questions into data requirements and vice versa.
- Leverage LLMs and AI tools to innovate data solutions and accelerate development.
Tech Stack: Data tools including Kafka, Benthos (data pipeline), Python, SQL; Backend with GoLang; ML with Python, PyTorch, and LLMs; General tools including Azure, GitHub, Linear, Notion.
- Outcome-Led. You focus on value over volume. You embrace iteration, adapt quickly to new information, and prioritize what moves the needle for the business and users.
- Systems Thinker. You see the bigger picture. You understand how your work connects to the wider ecosystem - ensuring features contribute to a cohesive, scalable whole.
- Pragmatic. You choose the simplest effective path to solve problems - especially by leveraging AI tools.
- Customer Champion. You keep the end-user central in all decisions. You seek direct exposure to how customers experience the product.
- Data Quality Guardian. You obsess over accuracy, freshness, and consistency. You build validation and monitoring into pipelines from the start.
- Pipeline Architect. You design scalable data flows from source to insight. You balance real-time needs with batch efficiency.
Benefits
- Strong Growth & High Impact. A unique opportunity to join during a hypergrowth phase and actively contribute to company success.
- Compensation. Competitive salary and flexible perks (sports, mobility, learning) tailored to your needs.
- Real-World Impact. Your work drives decarbonization - measurable in CO₂ savings, energy efficiency (kWh), and cost reductions (€).
- Office. Prime location in Berlin Charlottenburg, regular company events and all-hands. We value in-person collaboration and connection, while partial remote work remains an option.
- No Corporate Theater. Skip endless alignment meetings, politics and waiting for permission. You talk to the people who matter and ship.
Requirements
- Experience. 4+ years of experience in high-performance environments (e.g. top-tier consulting, fast-scaling startups, or similar).
- Track Record. Proven end-to-end responsibility in data engineering, from ideation to release.
- Background. Strong background in relevant fields, e.g. Computer Science, Mathematics, Data Engineering, or similar.
- Technical Skills. Strong coding skills with proficiency in Python and SQL. Experience with streaming architectures and data pipeline tooling. Fluent in spec-driven, AI-assisted development (e.g. Claude Code).
- Mindset. Self-driven problem-solving mindset - no need for micromanagement or specific tickets.
- Technical Aptitude. Technical mindset with a passion for understanding systems, data flows, and integrations, coupled with enthusiasm for continuous learning and problem solving.
- Language. Fluent in English; German is a plus.