AWS DATA ENGINEER

Hire IT People
Des Moines, United States of America
1 month ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Des Moines, United States of America

Tech stack

Java
API
Amazon Web Services (AWS)
Data analysis
Big Data
Databases
Data as a Service
Data Validation
Information Engineering
Data Governance
Data Integration
ETL
Data Transformation
Data Warehousing
Amazon DynamoDB
Hadoop
Python
PostgreSQL
MySQL
Oracle Applications
Performance Tuning
SQL Databases
Data Streaming
Technical Data Management Systems
Data Processing
Data Storage Technologies
Data Ingestion
Spark
AWS Lambda
Information Technology
Data Pipelines
Redshift
Programming Languages

Job description

  • The successful candidate will spend a good portion of their time transitioning already-developed AWS data pipelines and procedures built for the client.
  • The candidate is also expected to work in concert with resident Data Engineers, Data Analysts, and Report Developers to enhance, develop, and automate recurring data requests and to troubleshoot related issues.
  • This role will be primarily focused on backend development with the AWS data integration and storage tech stack (AWS Glue, AWS Lambda, Apache Spark, AWS Database Migration Service, Amazon RDS, Amazon S3, Amazon Redshift, Amazon DynamoDB).
  • The successful candidate will be required to follow standard practices for migrating changes to the test and production environments and will provide post-production support.
  • When not working on enhancement requests or problem reports, the candidate will concentrate on performance tuning.
  • The individual should work well in a team and independently as needed.
  • Design and implement scalable and efficient data pipelines and ETL processes using AWS services such as AWS Glue, AWS Lambda, and Apache Spark.
  • Develop and maintain data models, schemas, and data transformation logic to support data integration, data warehousing, and analytics needs.
  • Collaborate with stakeholders to understand business requirements and translate them into technical data solutions.
  • Implement data ingestion processes from various data sources such as databases, APIs, and streaming platforms into AWS data storage services like Amazon S3 or Amazon Redshift.
  • Optimize data pipelines for performance, scalability, and cost-efficiency, utilizing AWS services like Amazon EMR, AWS Glue, and Amazon Athena.
  • Ensure data quality, integrity, and security by implementing appropriate data governance practices, data validation rules, and access controls.
  • Monitor and troubleshoot data pipelines, identifying and resolving issues related to data processing, data consistency, and performance bottlenecks.
  • Collaborate with data scientists, analysts, and other stakeholders to support data-driven initiatives and provide them with the necessary datasets and infrastructure.
  • Stay updated with the latest AWS data engineering trends, best practices, and technologies, and proactively identify opportunities for improvement.
  • Mentor and provide guidance to junior members of the data engineering team, fostering a culture of knowledge sharing and continuous learning.

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • Minimum of 5 years of professional experience as a Data Engineer, with a focus on AWS data services and technologies.
  • Strong expertise in designing and implementing ETL processes using AWS Glue, AWS Lambda, Apache Spark, or similar technologies.
  • Proficient in programming languages such as Python, Scala, or Java, with experience in writing efficient and maintainable code for data processing and transformation.
  • Hands-on experience with AWS data storage services like Amazon S3, Amazon Redshift, or Amazon DynamoDB.
  • In-depth understanding of data modeling, data warehousing, and data integration concepts and best practices.
  • Familiarity with big data technologies such as Hadoop, Hive, or Presto is a plus.
  • Solid understanding of SQL and experience with database technologies like PostgreSQL, MySQL, or Oracle.
  • Excellent problem-solving skills, with the ability to analyze complex data requirements and design appropriate solutions.
  • Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
