Senior Data Engineer
Job description
You will work with our EMEA and US customers to help them solve their data engineering, ML, MLOps, and cloud migration puzzles.

"As a Data Engineer at DATAPAO, you will have the unique opportunity to push the boundaries of technology and lead the way in how data transforms the world. We work with customers who want to innovate and change their industries, so it never gets boring! Sorting out a complex technical puzzle, setting up a new data architecture from scratch, or guiding our customers through adopting the latest technologies is our routine here. Hence, we'd expect you to be more than just a (data) engineer. We'd expect you to breathe and live data, to strive for knowledge and excellence, and to be keen to build something enduring and impactful for our customers. We'd expect you to be a (data) Paoneer!" (Máté Gulyás, CEO)

What will you do?

As a Senior Data Engineer, you are expected to deliver on some of our most complex projects across industries, individually or by leading small delivery teams. Although we work for the biggest multinational companies, where years-long behemoth projects are the norm, our projects are fast-paced, typically 2 to 4 months long. Most are delivered using Apache Spark/Databricks on AWS/Azure and require you to manage the customer relationship directly, alone or in collaboration with a Project Manager.

Additionally, at this seniority level, we expect you to compound your impact across the team and the organization by supporting our pre-sales process, mentoring more junior engineers, and contributing to hiring best-in-class data talent.

What does it take to fit the bill?
Requirements
Technical Skills

- You (ideally) have 5+ years of experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP);
- You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog);
- You have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, and Step Functions);
- You're proficient in SQL and Python, using them to transform and optimize data like a pro;
- You know your way around CI/CD pipelines and Infrastructure as Code (Terraform, CloudFormation, or Bicep);
- You have hands-on experience integrating Databricks with BI tools (Power BI, Tableau, Looker).

Consulting & Client-Facing Skills

- Ideally, you bring a proven history in consulting, from scoping and gathering requirements to designing solutions and communicating effectively with stakeholders. A product company background is also fine, as long as you can demonstrate a strong consulting mindset and a desire to be customer-facing;
- You've successfully delivered projects such as Data Lakehouse buildouts and cloud data migrations;
- You excel at explaining technical concepts to non-technical audiences and driving decision-making.

Operational Readiness & Soft Skills

- You're (almost) ready to hit the ground running, immediately contributing to live client projects;
- In a fast-paced consulting environment, where no two days are the same, flexibility and problem-solving come naturally to you. Resilience and thriving on challenges are essential traits in our industry;
- Your communication skills are on point: whether you're writing, speaking, or collaborating with stakeholders, you know how to keep everyone in the loop.