Data Engineer
Robert Half
Houston, United States of America
3 days ago
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Job location: Houston, United States of America
Tech stack
Artificial Intelligence
Amazon Web Services (AWS)
Batch Processing
Big Data
Cloud Engineering
Data Architecture
Information Engineering
Data Governance
Data Infrastructure
Data Integration
ETL
Data Transformation
Data Security
Metadata Management
Data Streaming
Enterprise Data Management
Data Lake
Data Lineage
Data Management
Data Pipelines
Databricks
Job description
We are seeking an experienced Data Engineer with strong expertise in AWS, Databricks, and data governance to design, build, and modernize enterprise data platforms. This role will focus on creating scalable data lake, lakehouse, and data mesh architectures, enabling trusted data access across the organization, and supporting advanced analytics and AI/ML initiatives.
- Design and implement scalable data architectures, including Data Lake, Lakehouse, and Data Mesh frameworks.
- Lead cloud migration and data modernization initiatives to improve performance, scalability, and reliability.
- Build, architect, and optimize batch and streaming data pipelines using AWS-native tools such as AWS Glue and related services.
- Partner with business leaders, data consumers, and technology teams to ensure solutions align with enterprise architecture and business goals.
- Establish and manage data governance practices, including metadata management, data lineage, cataloging, quality controls, and compliance frameworks.
- Develop and maintain data solutions in Databricks to support analytics, reporting, and large-scale data processing.
- Integrate enterprise data platforms with AI/ML ecosystems, including Amazon SageMaker, Amazon Bedrock, and Databricks.
- Ensure data platforms are secure, reliable, and optimized for both operational and analytical use cases.
- Promote best practices for data engineering, cloud architecture, governance, and platform scalability.
Requirements
- Proven experience as a Data Engineer in AWS cloud environments.
- Hands-on experience with Databricks and modern data platform design.
- Strong experience building and supporting ETL/ELT pipelines, batch processing, and streaming data workflows.
- Experience with AWS Glue and related AWS data services.
- Knowledge of data governance, metadata management, lineage, and compliance requirements.
- Experience supporting AI/ML data integration and enabling platforms such as SageMaker and Bedrock.
- Strong understanding of modern data architecture patterns, including Lakehouse and Data Mesh.
- Excellent stakeholder communication and cross-functional collaboration skills.