Data Engineer
Role details
Job description
- Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).
- Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).
- Implement and manage data orchestration and dependency management using Dagster or similar tools.
- Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.
- Optimize data models and storage strategies for performance, scalability, and cost efficiency.
- Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.
Requirements
- 3+ years of experience in data engineering or analytics engineering.
- Strong hands-on experience with Databricks, including Spark-based data processing.
- Experience building data pipelines in Microsoft Azure.
- Proficiency with SQL and Python.
- Experience with modern data orchestration tools (e.g., Dagster, Airflow, Prefect).
- Familiarity with data warehousing concepts, dimensional modeling, and ELT patterns.
- Experience working in Agile or DevOps-oriented environments.

All applicants applying for U.S. job openings must be legally authorized to work in the United States.
Benefits & conditions
Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.
Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity - whenever you choose - even on the go. Download the Robert Half app (https://www.roberthalf.com/us/en/mobile-app) and get 1-tap apply, notifications of AI-matched jobs, and much more.