Data Engineer (AWS, ETL, and PySpark)

Hire IT People
Chicago, United States of America
1 month ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Chicago, United States of America

Tech stack

Amazon Web Services (AWS)
Big Data
Cloudera Impala
Computer Programming
Data Warehousing
Hadoop
Hive
Identity and Access Management
Oracle Applications
Cloudera
Data Lake
GraphQL
Functional Programming
Databricks

Requirements

  • BS degree in Computer Science or a certification in software engineering.
  • Proficient in data analysis, data engineering, data modeling, and database management.
  • Strong understanding of RDBMS, NoSQL, big data, SQL, and ETL tools.
  • Experience programming in at least one modern language such as Java, Python, or Unix shell.
  • Proficiency in REST APIs, microservices, distributed systems, and cloud (hybrid) computing.
  • Strong understanding of Agile methodologies and the ability to work in at least one common framework.
  • Strong understanding of techniques such as CI/CD, TDD, cloud development, resiliency, and security.
  • Proven experience with business analysis, design, development, testing, deployment, maintenance, and improvement.

Preferred qualifications, capabilities, and skills:

  • Experience working with data-intensive software (such as big data, data warehouses, and data lakes).
  • Experience developing on AWS services such as S3, Lambda, MSK, EC2, IAM, and related data products.
  • Experience with Databricks, Amazon RDS, Oracle, Hadoop/Cloudera, HUE, Hive, Impala.
  • Experience with GraphQL.
