Data Engineer (Data Lake to AWS Migration)

Systems, Inc
Dallas, United States of America

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Dallas, United States of America

Tech stack

Java
Apache HTTP Server
Computer Programming
Continuous Integration
Data Validation
Data Transmission
Data Integrity
Hadoop
Hadoop Distributed File System
Hive
JSON
Job Scheduling
Python
SQL Databases
Usage Analysis
Parquet
Scripting (Bash/Python/Go/Ruby)
File Transfer Protocol (FTP)
Snowflake
Spark
SAP Sybase ASE
ANSI SQL
Kubernetes
Information Technology
Avro
Kafka
Code Restructuring

Job description

  1. Logic & Scheduling: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment.

  2. Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity.

  3. Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "hand-off and sign-off" conversations with data owners to ensure migrated assets meet business requirements.

  4. Consumption Pattern Migration (Code Conversion): Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Iceberg; a conversion sketch follows this list.

  5. Usage Analysis: Understanding usage patterns in order to deliver the required data products.

  6. Data Reconciliation & Quality: A rigorous approach to data validation is required. Candidates must work with reconciliation frameworks to build confidence that migrated data is functionally equivalent to the data already used within production flows; a minimal reconciliation example follows.
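
A sketch of what the consumption-pattern conversion can look like, assuming a PySpark environment with an Iceberg catalog already configured; the catalog, schema, and table names are illustrative, not taken from this posting:

    # Hypothetical conversion of a legacy Hive-style Spark write to the
    # Iceberg DataFrameWriterV2 API.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("consumption-migration").getOrCreate()

    # Legacy pattern: read a modeled dataset and persist it as a
    # Hive-managed Parquet table:
    df = spark.sql("SELECT trade_id, book, notional FROM legacy_hive.trades")
    # df.write.format("parquet").mode("overwrite").saveAsTable("legacy_hive.trades_modeled")

    # Migrated pattern: the same dataset written through Iceberg's v2 writer,
    # which adds snapshot isolation and safer schema evolution.
    df.writeTo("lake.analytics.trades_modeled").using("iceberg").createOrReplace()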
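
A minimal reconciliation sketch in the same vein, assuming both the legacy and migrated copies are reachable from one Spark session; the table names and the xxhash64-based checksum strategy are assumptions for illustration:

    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("migration-reconciliation").getOrCreate()

    def fingerprint(df: DataFrame) -> tuple:
        """Reduce a dataset to (row_count, checksum); identical content yields
        identical fingerprints regardless of row order or physical layout."""
        hashed = df.select(F.xxhash64(*df.columns).alias("h"))
        row = hashed.agg(
            F.count("h").alias("rows"),
            # Cast before summing so the checksum cannot overflow 64 bits.
            F.sum(F.col("h").cast("decimal(38,0)")).alias("checksum"),
        ).first()
        return (row["rows"], row["checksum"])

    legacy = fingerprint(spark.table("legacy_hive.trades"))        # source of truth
    migrated = fingerprint(spark.table("lake.analytics.trades"))   # migrated copy

    assert legacy == migrated, f"Reconciliation failed: {legacy} != {migrated}"

A matching fingerprint is necessary but not sufficient evidence of functional equivalence, so reconciliation frameworks typically layer per-column and per-partition checks on top of this kind of whole-table comparison.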

Requirements

  1. Education: Bachelor's or Master's degree in Computer Science, Applied Mathematics, Engineering, or a related quantitative field.
  2. Experience: 3-5 years of professional "hands-on-keyboard" coding experience in a collaborative, team-based environment, including the ability to troubleshoot SQL and basic scripting experience.
  3. Languages: Professional proficiency in Python or Java.
  4. Methodology: Deep familiarity with the full Software Development Life Cycle (SDLC) and CI/CD best practices, plus Kubernetes (K8s) deployment experience.

Technical Stack Requirements: Kafka, ANSI SQL, FTP, Apache Spark, JSON, Avro, Parquet, Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, Sybase IQ
