AWS Data Engineer (Databricks & Snowflake)

Trebecon LLC
Denver, United States of America
3 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Denver, United States of America

Tech stack

Airflow
Amazon Web Services (AWS)
Apache HTTP Server
Software Quality
Continuous Integration
Data as a Service
Information Engineering
Data Warehousing
Dimensional Modeling
Amazon DynamoDB
Python
Open Source Technology
Standard SQL
Data Streaming
Snowflake
Spark
AWS Lambda
Git
Data Layers
Data Lake
Kafka
CloudWatch
Terraform
Data Pipelines
Redshift
Databricks

Requirements

* AWS (Redshift, S3, Step Functions, EventBridge, CloudWatch)
* Databricks (Spark, Delta Lake, Apache Iceberg, Unity Catalog)
* Snowflake
* SQL
* Python
* CI/CD
* Git
* Familiarity with Infrastructure as Code (Terraform or similar)
* Solid understanding of data warehousing and dimensional modeling
* Ability to write detailed and comprehensive testing documentation
* Strong focus on code quality with the ability to design and execute thorough tests
* Ability to manage work across multiple projects and good organizational skills
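The dimensional modeling requirement above refers to organizing a warehouse as fact tables joined to dimension tables (a star schema). As a minimal, illustrative sketch using SQLite in place of Redshift or Snowflake (all table and column names here are hypothetical):

```python
import sqlite3

# Hypothetical star schema: one fact table, two dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_sales (
    sales_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL
);
INSERT INTO dim_customer VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES (1, 1, 20240101, 100.0), (2, 2, 20240101, 250.0);
""")

# A typical warehouse query: aggregate measures from the fact table,
# sliced by attributes from a dimension table.
rows = cur.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # -> [('Acme', 100.0), ('Globex', 250.0)]
```

The same star-schema shape and SQL carry over directly to Redshift, Snowflake, or Databricks SQL; only the DDL dialect details differ.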

A data engineer with expertise in the AWS toolset advises on, develops, and maintains data engineering solutions in the AWS Cloud ecosystem. They design, build, and operate batch and real-time data pipelines using AWS services such as EMR, AWS Glue, the Glue Catalog, and Kinesis, and create data layers on Amazon Redshift, Aurora, and DynamoDB. They also migrate data using AWS DMS and are proficient with the various AWS data platform components, including S3, Redshift, Redshift Spectrum, AWS Glue with Spark or Python, Lambda functions with Python, the AWS Glue Catalog, and AWS Glue DataBrew.

They are experienced in developing batch and real-time data pipelines for data warehouses and data lakes using Amazon Kinesis and Amazon Managed Streaming for Apache Kafka, and in using open-source technologies such as Apache Airflow, dbt, and Spark with Python or Scala on the AWS platform. The data engineer schedules and manages data services on the AWS platform, ensuring seamless integration and operation of data engineering solutions.
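The batch pipelines described above follow an extract-transform-load shape. A minimal sketch in plain Python for illustration (a Glue or Spark job has the same structure at scale; all function names and sample data here are hypothetical):

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV records (in AWS this might read objects from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: cast types and drop malformed rows, as a Glue/Spark job would."""
    out = []
    for r in records:
        try:
            out.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # skip records that fail validation
    return out

def load(records: list[dict]) -> str:
    """Load: serialize newline-delimited JSON, e.g. a staging file for a warehouse COPY."""
    return "\n".join(json.dumps(r) for r in records)

raw = "id,amount\n1,9.99\n2,oops\n3,5.00\n"
result = load(transform(extract(raw)))
print(result)
```

In practice each stage would be a task in an Airflow DAG or a step in AWS Step Functions, with CloudWatch providing monitoring; the row with the unparseable amount is filtered out in the transform stage.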
