Senior Data Engineer - Atlanta, GA

Cargill
Atlanta, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Atlanta, United States of America

Tech stack

Java
Airflow
Amazon Web Services (AWS)
Data analysis
Azure
Cloud Computing
Computer Programming
Continuous Integration
Data Architecture
Information Engineering
Data Governance
Data Infrastructure
Data Transformation
Data Stores
Data Systems
Data Warehousing
Software Debugging
DevOps
Python
Performance Tuning
Scala
SQL Databases
Data Streaming
Workflow Management Systems
Parquet
Data Processing
Google Cloud Platform
Cloud Platform System
Data Ingestion
Spark
Data Lake
Apache Flink
Deployment Automation
Integration Frameworks
Kafka
Software Version Control
Data Pipelines

Job description

Cargill's size and scale allow us to make a positive impact in the world. Our purpose is to nourish the world in a safe, responsible and sustainable way. We are a family company providing food, ingredients, agricultural solutions and industrial products that are vital for living. We connect farmers with markets so they can prosper. We connect customers with ingredients so they can make meals people love. And we connect families with daily essentials - from eggs to edible oils, salt to skincare, feed to alternative fuel. Our 160,000 colleagues, operating in 70 countries, make essential products that touch billions of lives each day. Join us and reach your higher purpose at Cargill.

Job Summary

The Senior Professional, Data Engineering designs, builds and maintains complex data systems that enable data analysis and reporting. With minimal supervision, this role ensures that large sets of data are efficiently processed and made accessible for decision-making.

Key Accountabilities

  • DATA INFRASTRUCTURE: Prepares data infrastructure to support the efficient storage and retrieval of data.
  • DATA FORMATS: Evaluates and selects appropriate data formats to improve data usability and accessibility across the organization.
  • DATA & ANALYTICAL SOLUTIONS: Develops complex data products and solutions using advanced engineering and cloud-based technologies, ensuring they are designed and built to be scalable, sustainable and robust.
  • DATA PIPELINES: Develops and maintains streaming and batch data pipelines that facilitate the seamless ingestion of data from various data sources, transform the data into information, and move it into data stores such as data lakes and data warehouses.
  • DATA SYSTEMS: Reviews existing data systems and architectures to identify areas for improvement and optimization.
  • STAKEHOLDER MANAGEMENT: Collaborates with multi-functional data and advanced analytics teams to gather requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
  • DATA FRAMEWORKS: Builds complex prototypes to test new concepts and implements data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
  • AUTOMATED DEPLOYMENT PIPELINES: Develops automated deployment pipelines that improve the efficiency of code deployments with fit-for-purpose governance.
  • DATA MODELING: Performs complex data modeling in accordance with the datastore technology to ensure sustainable performance and accessibility.

Qualifications

Minimum requirement of 4 years of relevant work experience. Typically reflects 5 years or more of relevant experience.

Preferred Qualifications:

  • CLOUD ENVIRONMENTS: Experience developing data systems on major cloud platforms (AWS, GCP, Azure).
  • DATA ARCHITECTURE: Hands-on experience building modern data architectures, including data lakes, data lakehouses, and data hubs, along with related capabilities such as ingestion, governance, modeling, and observability.
  • DATA INGESTION: Demonstrated proficiency in data collection, ingestion tools (Kafka, AWS Glue), and storage formats (Iceberg, Parquet).
  • DATA STREAMING: Experience developing data pipelines with streaming architectures and tools (Kafka, Flink).
  • DATA MODELING: Expertise in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow). Deep experience with modeling concepts like SCD and schema evolution.
  • DATA TRANSFORMATION: Strong background in using Spark for data transformation, including streaming, performance tuning, and debugging with the Spark UI.
  • PROGRAMMING: Advanced programming skills in Python, Java, Scala, or similar languages. Expert-level proficiency in SQL for data manipulation and optimization.
  • DEVOPS: Demonstrated experience in DevOps practices, including code management, CI/CD, and deployment strategies.
  • DATA GOVERNANCE: Strong background in data governance principles, including data quality, privacy, and security considerations for data product development and consumption.

The business will not sponsor applicants for work visas for this position.

#LI-NS7

Equal Opportunity Employer, including Disability/Vet.
