Data Engineer I, OTS - Data ANCHOR Team

Amazon.com, Inc.
Austin, United States of America

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Junior
Compensation
$101,300 - $160,000 annually

Job location

Austin, United States of America

Tech stack

Artificial Intelligence
Amazon Web Services (AWS)
Data analysis
Big Data
Code Review
Information Systems
Computer Engineering
Data Definition Language
Information Engineering
Data Governance
Data Infrastructure
ETL
Data Security
Data Visualization
Query Languages
Hadoop
Hive
IBM Cognos Business Intelligence
Identity and Access Management
Information Management
Python
Korn Shell
MultiDimensional EXpressions
NoSQL
Oracle Applications
Role-Based Access Control
Scala
PL-SQL
SQL Databases
Tableau
Management of Software Versions
Scripting (Bash/Python/Go/Ruby)
Spark
Electronic Medical Records
Data Layers
Information Technology
Data Pipelines
Redshift

Job description

Join the OTS Data ANCHOR team to build strategic data infrastructure powering Amazon's Operations Technology ecosystem. Our team provides critical data infrastructure support for OpsTech IT, supporting Amazon's global customer commitment. You'll work at the intersection of large-scale data processing and real-world operational impact - creating intelligence that directly influences how Amazon fulfills millions of orders across fulfillment centers, Amazon Fresh, Prime Now, Lockers, Pantry, and Amazon Campus.

As a Data Engineer, you will collaborate to build and maintain scalable data pipelines and data infrastructure that power AI-driven operational insights, business insights, and Science initiatives across Amazon's global fulfillment and maintenance networks. You will build and implement ETL/ELT pipelines and collaborate with Data Scientists and BIEs to deliver data products that drive measurable business outcomes. You will contribute to AI-ready data initiatives and improve our data best practices - including data versioning, pipeline monitoring, and model retraining data support - and help execute against our engineering best practices within the team. You will support day-to-day business and internal engineering teams by building curated, analysis-ready models and datasets and enabling self-service data access through well-governed data infrastructure. This role directly enables the team's mission to implement GenAI solutions for automated reporting, diagnostics, and predictive and prescriptive analytics across worldwide operations.

  • Design, build, and maintain production-grade ETL/ELT pipelines and big data infrastructure supporting OTS operational intelligence.
  • Contribute to data governance and quality standards across analytical and ML data products.
  • Support the implementation of automated reporting, diagnostic, predictive, and prescriptive analytics.
  • Build and maintain semantic layers and dashboard data models that power worldwide operations business decisions.
  • Collaborate with Program Managers, BI teams, Data Engineers, Data Scientists, and operational stakeholders to prioritize work aligned with OTS business goals.
  • Follow and contribute to best practices for data engineering, including code reviews, testing, monitoring, and documentation.

Requirements

Currently has, or is in the process of obtaining, a Bachelor's degree or above in Computer Science, Computer Engineering, Information Management, Information Systems, or other related discipline

  • Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
  • 1+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
  • Experience with data modeling, warehousing, and building ETL pipelines
  • Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
  • Experience with one or more scripting languages (e.g., Python, KornShell)
  • Knowledge of generative AI (GenAI) tools

Preferred Qualifications

  • Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
  • Master's degree
  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
  • Knowledge of BI analytics, reporting or visualization tools like Tableau, AWS QuickSight, Cognos or other third-party tools

Benefits & conditions

The benefits that generally apply to regular, full-time employees include:

  • Medical, Dental, and Vision Coverage
  • Maternity and Parental Leave Options
  • Paid Time Off (PTO)
  • 401(k) Plan

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you!

The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits, including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance with optional Supplemental Life plans, EAP, Mental Health Support, a Medical Advice Line, Flexible Spending Accounts, and Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at https://amazon.jobs/en/benefits.

USA, TX, Austin - 101,300.00 - 160,000.00 USD annually

Apply for this position