AWS Data Engineer (Databricks & Snowflake)
Role details
Tech stack

* AWS (Redshift, S3, Step Functions, EventBridge, CloudWatch)
* Databricks (Spark, Delta Lake, Apache Iceberg, Unity Catalog)
* Snowflake
* SQL
* Python
* CI/CD
* Git
* Familiarity with Infrastructure as Code (Terraform or similar)

Requirements

* Solid understanding of data warehousing and dimensional modeling
* Ability to write detailed and comprehensive testing documentation
* Strong focus on code quality, with the ability to design and execute thorough tests
* Ability to manage work across multiple projects and good organizational skills
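To illustrate the testing emphasis in the requirements above, here is a minimal sketch of a unit-testable transformation with a pytest-style check. All function, field, and value names are hypothetical and stand in for whatever the actual pipeline processes:

```python
# Minimal sketch of a testable transformation (hypothetical names throughout).
from datetime import date

def parse_order(raw: dict) -> dict:
    """Normalize a raw order record into the warehouse schema."""
    return {
        "order_id": int(raw["id"]),
        "order_date": date.fromisoformat(raw["created_at"][:10]),
        "amount_usd": round(float(raw["amount"]), 2),
    }

def test_parse_order():
    raw = {"id": "42", "created_at": "2024-05-01T10:00:00Z", "amount": "19.999"}
    out = parse_order(raw)
    assert out == {
        "order_id": 42,
        "order_date": date(2024, 5, 1),
        "amount_usd": 20.0,
    }
```

Keeping transformation logic in small pure functions like this makes it straightforward to cover with fast unit tests before the code ever touches a cluster.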
A data engineer with expertise in the AWS toolset advises on, develops, and maintains data engineering solutions in the AWS Cloud ecosystem. They design, build, and operate batch and real-time data pipelines using AWS services such as Amazon EMR, AWS Glue, the Glue Data Catalog, and Kinesis, and they create data layers on Amazon Redshift, Aurora, and DynamoDB. They also migrate data using AWS DMS and are proficient with the broader AWS data platform, including S3, Redshift, Redshift Spectrum, AWS Glue with Spark or Python, Lambda functions with Python, the AWS Glue Catalog, and AWS Glue DataBrew.

They are experienced in developing batch and real-time pipelines for data warehouses and data lakes using Amazon Kinesis and Managed Streaming for Apache Kafka (MSK), and with open-source technologies such as Apache Airflow, dbt, and Spark with Python or Scala on the AWS platform. The data engineer schedules and manages data services on AWS, ensuring seamless integration and operation of data engineering solutions.
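For illustration only, the following is a minimal sketch of the kind of batch pipeline described above, as it might run on a Glue or Databricks Spark runtime. The bucket paths and column names are hypothetical, and it assumes the Delta Lake libraries are available on the cluster:

```python
# Minimal PySpark batch-pipeline sketch (bucket paths and schema are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-batch").getOrCreate()

# Read raw JSON landed in S3 by an upstream process.
raw = spark.read.json("s3://example-raw-bucket/orders/2024-05-01/")

# Light cleansing: drop duplicates and derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("created_at"))
)

# Append to a Delta Lake table partitioned by date
# (assumes Delta Lake is configured on the cluster).
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("s3://example-curated-bucket/orders_delta/"))
```

Partitioning the curated table by date keeps downstream consumers such as Redshift Spectrum limited to scanning only the partitions a query actually needs.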