Data Engineer

Trebecon LLC
Denver, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Denver, United States of America

Tech stack

Airflow
Amazon Web Services (AWS)
Data analysis
Apache HTTP Server
Unit Testing
Software Quality
Continuous Integration
Information Engineering
Data Governance
Data Infrastructure
ETL
Data Warehousing
DevOps
Dimensional Modeling
Python
Open Source Technology
SQL Databases
Enterprise Data Management
Cloud Platform System
Real Time Systems
SQL Optimization
Snowflake
Spark
Git
Event Driven Architecture
Containerization
Data Lake
Integration Tests
Infrastructure Automation Frameworks
Information Technology
Kafka
Video Streaming
CloudWatch
Terraform
Stream Processing
Software Version Control
Data Pipelines
Redshift
Databricks

Job description

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable cloud-based data platforms and pipelines. The ideal candidate will have strong expertise in AWS data services, Databricks, Snowflake, and modern data engineering practices for enterprise-scale analytics, data warehousing, and real-time processing environments. This role requires hands-on experience developing robust ETL/ELT pipelines, implementing data lake and data warehouse architectures, and ensuring high standards for data quality, testing, and operational excellence.

* Design, develop, and maintain scalable batch and real-time data pipelines on AWS and Databricks platforms (a minimal sketch follows this list).
* Build and optimize enterprise data lake and data warehouse solutions using Redshift, Snowflake, Delta Lake, and Apache Iceberg.
* Develop ETL/ELT workflows using Python, SQL, Spark, and cloud-native technologies.
* Work with AWS services including S3, Step Functions, EventBridge, CloudWatch, Glue, Lambda, Kinesis, and EMR.
* Implement and manage data governance and metadata solutions using Unity Catalog and Glue Catalog.
* Create performant data models and dimensional schemas to support analytics and reporting needs.
* Integrate streaming and event-driven architectures using Kafka and AWS streaming services.
* Collaborate with cross-functional teams including Data Analysts, Architects, DevOps, and Business Stakeholders.
* Ensure high code quality through unit testing, integration testing, and detailed testing documentation.
* Build and maintain CI/CD pipelines and version control processes using Git and automation tools.
* Support Infrastructure as Code (IaC) practices using Terraform or similar technologies.
* Monitor, troubleshoot, and optimize data workflows for reliability, scalability, and performance.
* Participate in architecture discussions and recommend best practices for modern data engineering solutions.
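To make the first responsibility concrete, here is a minimal batch-pipeline sketch in PySpark that writes to Delta Lake. It assumes a Spark environment with the delta-spark package configured; all bucket names, paths, and column names are hypothetical.

```python
# Minimal batch ETL sketch: raw S3 JSON -> cleaned Delta Lake table.
# Assumes the raw records carry order_id, order_ts, and order_date fields.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

# Read raw landing-zone data (the path is a placeholder for illustration).
raw = spark.read.json("s3://example-landing/orders/2024/")

# Light cleanup: drop duplicate orders, standardize the timestamp column,
# and keep only records that pass a basic validity check.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Write to a Delta table partitioned by date for downstream analytics.
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-curated/orders_delta/"))
```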

Requirements

Experience with Delta Lake, Apache Iceberg, and Unity Catalog.

Strong experience with Snowflake data platform.

Advanced SQL and Python programming skills.

Experience building batch and real-time data processing pipelines.
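As an illustration of the real-time side, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic into a Delta table. It assumes a Spark cluster with the Kafka connector and delta-spark available; the broker address, topic name, schema, and paths are hypothetical placeholders.

```python
# Minimal real-time sketch: Kafka topic -> Delta table via Spark
# Structured Streaming. All names and addresses are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Hypothetical payload schema for the incoming JSON events.
event_schema = (StructType()
    .add("event_id", StringType())
    .add("amount", DoubleType()))

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "orders-events")              # placeholder topic
         .load()
         # Kafka delivers raw bytes; decode and parse the JSON payload.
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Append parsed events to a Delta table, with checkpointing for recovery.
query = (events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-chk/orders-events/")
    .start("s3://example-curated/orders_events_delta/"))
```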

Strong understanding of data warehousing concepts and dimensional modeling.

Experience with CI/CD implementation and Git-based development workflows.

Familiarity with Infrastructure as Code tools such as Terraform.

Experience with orchestration and open-source tools such as Apache Airflow and dbt is a plus.
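For the orchestration piece, here is a minimal Airflow sketch using the TaskFlow API (assuming a recent Airflow 2.x release); the DAG name, schedule, and task bodies are hypothetical.

```python
# Minimal Airflow orchestration sketch: a daily DAG with a simple
# extract -> load dependency. Task contents are placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_orders_pipeline():
    @task
    def extract() -> str:
        # Placeholder: pull raw data and return its staging location.
        return "s3://example-landing/orders/"

    @task
    def load(staging_path: str) -> None:
        # Placeholder: load the staged data into the warehouse.
        print(f"loading from {staging_path}")

    load(extract())

example_orders_pipeline()
```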

Knowledge of streaming technologies including Kafka/MSK is preferred.

Soft Skills

* Strong analytical and troubleshooting skills.
* Ability to manage priorities across multiple projects simultaneously.
* Excellent organizational and communication skills.
* Strong focus on code quality, testing, and documentation.
* Ability to work effectively in a collaborative onsite environment.
