Data Engineer
Job description
We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.
- Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.
- Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.
- Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.
- Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.
- Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.
- Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.
- Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.
- Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.
- Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.
- Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.
Requirements
- Hands-on experience with Azure Data Factory, Azure Databricks, and Azure Data Lake in a data engineering environment.
- Strong programming ability in Python and PySpark for large-scale data processing and transformation.
- Proficiency in SQL, including writing and optimizing queries for analytics and data integration workloads.
- Experience building and maintaining ETL or ELT pipelines that combine data from multiple structured and semi-structured sources.
- Familiarity with data modeling concepts, curated dataset design, and preparation of data for BI or analytics consumption.
- Understanding of data governance, lineage, and security practices within modern cloud data platforms.
- Experience using GitHub or similar tools for source control, code review, and deployment automation.
- Ability to communicate effectively with business partners and translate functional needs into scalable technical solutions.
Technology Doesn't Change the World, People Do.®
All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.
Benefits & conditions
Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity, whenever you choose, even on the go. Download the Robert Half app (https://www.roberthalf.com/us/en/mobile-app) to get 1-tap apply, notifications of AI-matched jobs, and much more.