Senior Data Engineer
Ultra Tendency
Municipality of Vigo, Spain
2 days ago
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English, Spanish
Job location: Remote (Municipality of Vigo, Spain)
Tech stack
Unity Catalog
Airflow
Amazon Web Services (AWS)
Data analysis
Azure
Computer Programming
Information Engineering
Data Systems
Distributed Systems
GitHub
Python
Machine Learning
Open Source Technology
Performance Tuning
Data Processing
Spark
Data Lake
PySpark
Machine Learning Operations
Terraform
Data Pipelines
Databricks
Job description
Our Engineering department is growing, and we're now looking for a Senior Data Engineer - Databricks (m/f/d) to join our team in Spain and support our global growth. As a Senior Data Engineer (m/f/d), you will design and optimize data processing algorithms on a talented, cross-functional team. You are familiar with the Apache open-source suite of technologies and want to contribute to the advancement of data engineering.
What We Offer
- Flexible work options, including fully remote or hybrid arrangements (candidates must be located in Spain)
- A chance to accelerate your career and work with outstanding colleagues in a supportive learning community split across 3 continents
- Contribute your ideas to our unique projects and make an impact by turning them into reality
- Balance your work and personal life through our flexible workflow organization, and decide for yourself whether you work at home, in the office, or in a hybrid setup
- Annual performance reviews and regular feedback cycles, creating value by connecting colleagues through networks rather than hierarchies
- Individual development plan, professional development opportunities
- Educational resources such as paid certifications, unlimited access to Udemy Business, etc.
- Local, virtual, and global team events where UT colleagues get to know one another
What You'll Do
- Design, implement, and maintain scalable data pipelines using the Databricks Lakehouse Platform, with a strong focus on Apache Spark, Delta Lake, and Unity Catalog (see the sketch after this list)
- Lead the development of batch and streaming data workflows that power analytics, machine learning, and business intelligence use cases.
- Collaborate with data scientists, architects, and business stakeholders to translate complex data requirements into robust, production-grade solutions.
- Optimize performance and cost-efficiency of Databricks clusters and jobs, leveraging tools like Photon, Auto Loader, and Job Workflows.
- Establish and enforce best practices for data quality, governance, and security within the Databricks environment.
- Mentor junior engineers and contribute to the evolution of the team's Databricks expertise.
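To give candidates a flavor of the pipeline work described above, here is a minimal sketch of a Databricks ingestion job using Auto Loader and Delta Lake. It is an illustration only, not Ultra Tendency's actual codebase: the paths, table name, and event_id column are hypothetical placeholders.

```python
# Minimal sketch: Auto Loader incrementally ingests raw JSON files and writes
# them to a Delta Lake table registered in Unity Catalog. All paths, the table
# name, and the event_id column are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw = (
    spark.readStream.format("cloudFiles")                         # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/events")   # hypothetical path
    .load("/mnt/raw/events")                                      # hypothetical landing zone
)

cleaned = (
    raw.withColumn("ingested_at", F.current_timestamp())
       .dropDuplicates(["event_id"])                              # hypothetical key column
)

(
    cleaned.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")      # hypothetical path
    .trigger(availableNow=True)            # process available files, then stop
    .toTable("main.bronze.events")         # Unity Catalog three-level table name
)
```

A production version of this would add schema enforcement, data quality expectations, and monitoring, per the best-practices responsibilities listed above.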
Requirements
- Deep hands-on experience with Databricks on Azure, AWS, or GCP, including Spark (PySpark/Scala), Delta Lake, and MLflow.
- Strong programming skills in Python or Scala, and experience with CI/CD pipelines (e.g., GitHub Actions, Azure DevOps).
- Solid understanding of distributed computing, data modeling, and performance tuning in cloud-native environments.
- Familiarity with orchestration tools (e.g., Databricks Workflows, Airflow; see the sketch at the end of this posting) and infrastructure-as-code (e.g., Terraform).
- A proactive mindset, strong communication skills, and a passion for building scalable, reliable data systems.
- Professional Spanish & English communication skills (C1-level, written and spoken).
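For a flavor of the orchestration tooling mentioned in the requirements, here is a minimal Airflow sketch (assuming Airflow 2.4+ with the apache-airflow-providers-databricks package) that triggers an existing Databricks job on a schedule. The DAG name, connection ID, and job ID are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: an Airflow DAG that triggers a pre-existing Databricks job
# nightly via the official Databricks provider. DAG name, connection ID, and
# job ID are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="nightly_events_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",               # nightly at 02:00
    catchup=False,
) as dag:
    run_ingest = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",  # assumes a configured Airflow connection
        job_id=12345,                             # hypothetical Databricks job ID
    )
```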