Data Engineer

Fusion
Phoenix, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Junior

Job location

Phoenix, United States of America

Tech stack

Java
Agile Methodologies
Airflow
Amazon Web Services (AWS)
Azure
Big Data
Cloud Database
Computer Programming
Databases
Information Engineering
Data Files
Data Infrastructure
ETL
Data Transformation
Data Systems
Data Warehousing
Relational Databases
Hadoop
Monitoring of Systems
Python
Cloud Services
Standard SQL
SQL Databases
Data Streaming
Workflow Management Systems
Spark
Information Technology
Apache Flink
Google BigQuery
Kafka
Spark Streaming
Data Management
Data Delivery
Stream Processing
Data Pipelines
Redshift

Job description

  • Pipeline Development: Create and maintain optimal data pipeline architecture by designing, constructing, installing, and maintaining batch and real-time processing systems.
  • Data Transformation: Assemble large, complex data sets that meet functional/non-functional business requirements (ETL/ELT).
  • Infrastructure Optimization: Identify, design, and implement internal process improvements, including automating manual processes and optimizing data delivery.
  • Data Quality: Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure 'big data' technologies.
  • Collaboration: Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Requirements

We are seeking a detail-oriented and driven Junior Data Engineer to join our growing data platform team in Phoenix. In this role, you will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.

The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. You will support our software developers, database architects, and data analysts on data initiatives and will ensure optimal data delivery architecture is consistent across ongoing projects.

  • Experience: 3+ years of experience in a Data Engineering role.
  • SQL Mastery: Advanced SQL knowledge and experience with relational databases and query authoring, as well as familiarity with a variety of databases.
  • Programming: Proficiency in Python, Scala, or Java, with experience performing root cause analysis on internal and external data and processes to answer specific business questions.
  • Big Data Tools: Experience with big data tools like Hadoop, Spark, or Kafka.
  • Cloud Services: Working knowledge of cloud-based data solutions (e.g., AWS Redshift/Glue, Azure Data Factory, or Google BigQuery).
  • Data Modeling: Experience with data modeling, data warehousing, and building pipeline monitoring tools.
  • Bachelor's degree in Computer Science or another quantitative field.
  • Experience with workflow management tools like Airflow or Azkaban.
  • Familiarity with Stream Processing tools like Spark Streaming or Flink.
  • Knowledge of Agile methodologies and CI/CD pipelines.

Apply for this position