Data Engineer - US Hybrid

Siemens AG
Alpharetta, United States of America
1 month ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$198K

Job location

Remote
Alpharetta, United States of America

Tech stack

Query Performance
API
Airflow
Amazon Web Services (AWS)
Apache HTTP Server
Azure
Bioinformatics
Software as a Service
Cloud Computing
Databases
Continuous Delivery
Continuous Integration
Data as a Service
Data Architecture
Information Engineering
Data Governance
Data Infrastructure
ETL
Data Transformation
Data Security
Data Vault Modeling
Data Warehousing
Relational Databases
Fault Tolerance
Graph Database
Python
PostgreSQL
Machine Learning
Microsoft SQL Server
SQL Azure
MongoDB
MySQL
Neo4j
NoSQL
Performance Tuning
Queueing Systems
SQL Stored Procedures
SQL Databases
Data Processing
Google Cloud Platform
Data Storage Technologies
Spark
Microsoft Fabric
Data Lake
Information Technology
Cassandra
Data Analytics
Star Schema
Real Time Data
Kafka
Data Management
Machine Learning Operations
Terraform
Industrial Software
Data Pipelines
Legacy Systems
Databricks
Programming Languages

Job description

Seeking a highly skilled and experienced Data Engineer to join our growing data team. The ideal candidate will be a technical specialist who is passionate about designing, building, and optimizing scalable, reliable, and high-performance data infrastructure. This role is crucial in architecting our next-generation data platform to unify data warehousing and data lake capabilities. You will be responsible for creating robust data pipelines, managing diverse database technologies, and ensuring high data quality for our Data Scientists, Analysts, and business stakeholders.

Data Engineering & Architecture

  • Design, implement, and optimize the overall data architecture, with a strong focus on the Lakehouse paradigm (e.g., using Databricks/Delta Lake, Microsoft Fabric, or equivalent cloud-native solutions).
  • Develop and manage data models (dimensional, relational, or NoSQL) for both transactional and analytical systems, ensuring efficiency and scalability.
  • Successfully migrate or integrate data from legacy systems and disparate sources into the modern Lakehouse environment.
  • Monitor, tune, and optimize data storage, compute costs, and query performance across the data platform.

Data Pipeline Development (ETL/ELT)

  • Design, build, and maintain robust, scalable, and fault-tolerant ETL/ELT data pipelines for batch and real-time data ingestion and transformation.
  • Integrate data from a variety of sources, including transactional databases, APIs, message queues (e.g., Kafka), and external SaaS platforms.
  • Implement data quality checks, validation rules, and data governance policies within the pipelines to ensure data reliability and compliance.
  • Use workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Glue) to automate and manage complex data workflows.

Database Management

  • Demonstrate strong working knowledge of and hands-on experience with various database management systems (DBMS).

  • Relational Databases (SQL): PostgreSQL, MySQL, SQL Server, or cloud-based relational services (e.g., AWS RDS, Azure SQL Database).

  • NoSQL Databases: Experience with one or more NoSQL types (e.g., document databases like MongoDB/CosmosDB, key-value stores, graph databases like Neo4j, or columnar databases like Cassandra).

  • Optimize database schemas and write complex, efficient SQL queries and stored procedures for data manipulation and retrieval.

Collaboration & Operations

  • Collaborate closely with Data Scientists and Data Analysts to deliver high-quality, feature-rich datasets that support advanced analytics and Machine Learning (ML) models.
  • Establish and maintain Continuous Integration/Continuous Deployment (CI/CD) practices for all data-related infrastructure and code.
  • Develop comprehensive technical documentation on data pipelines, data models, and platform architecture.
  • Ensure data security, access control, and compliance with data privacy regulations (e.g., GDPR, HIPAA).

Working at Siemens

Working at Siemens Software means flexibility - choosing between working at home and at the office is the norm here. We offer great benefits and rewards, as you'd expect from a world leader in industrial software. A collection of over 377,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! Siemens Software. Transform the Everyday. #LI-PLM #LI-HYBRID #SWSaaS

Benefits and Compensation

Siemens offers a variety of health and wellness benefits to our employees. Details regarding our benefits can be found here: The pay range for this position is $109,800 - $197,700 annually, with a target incentive of 5-8% of the base salary. The actual wage offered may be lower or higher depending on budget and candidate experience, knowledge, skills, qualifications, and premium geographic location.

Equal Employment Opportunity Statement

Siemens is an Equal Opportunity Employer encouraging inclusion in the workplace. All qualified applicants will receive consideration for employment without regard to their race, color, creed, religion, national origin, citizenship status, ancestry, sex, age, physical or mental disability unrelated to ability, marital status, family responsibilities, pregnancy, genetic information, sexual orientation, gender expression, gender identity, transgender status, sex stereotyping, order of protection status, protected veteran or military status, or an unfavorable discharge from military service, and other categories protected by federal, state, or local law.

EEO is the Law

Applicants and employees are protected from discrimination on the basis of race, color, religion, sex, national origin, or any characteristic protected by federal or other applicable law.

Reasonable Accommodations

If you require a reasonable accommodation in completing a job application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please fill out the accommodations form by clicking on this link. If you're unable to complete the form, you can reach out to our AskHR team for support at 1-866-743-6367. Please note our AskHR representatives do not have visibility of application or interview status.

Pay Transparency

Siemens follows Pay Transparency laws.

California Privacy Notice

California residents have the right to receive additional notices about their personal information.

Criminal History

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable local and state laws.

Requirements

Experience Level: 8 years of enterprise data engineering

  • Bachelor's degree in Computer Science or equivalent, at minimum.

  • 6-8 years of hands-on experience in a dedicated Data Engineering role.
  • Expert-level proficiency in SQL and at least one high-level programming language, such as Python or Scala, used for data manipulation and engineering tasks.
  • Proven experience in designing and managing data platforms using a Lakehouse architecture (e.g., Databricks/Delta Lake, Apache Hudi, Apache Iceberg, or similar cloud-native lakehouse services).
  • Solid understanding of cloud platforms (Azure, AWS, or GCP) and their relevant data services (e.g., S3/ADLS/GCS for storage, managed Spark services), preferably Azure.
  • In-depth knowledge of database fundamentals, including schema design, performance tuning, and practical experience with both Relational and NoSQL databases.
  • Familiarity with distributed processing frameworks (e.g., Apache Spark) for handling large-scale data transformation.
  • Experience implementing and maintaining automated ETL/ELT data pipelines and utilizing data orchestration tools.
  • Strong understanding of data modeling techniques (e.g., Star Schema, Data Vault).
  • Familiarity with MLOps practices.

Preferred Qualifications

  • Experience with real-time streaming technologies (e.g., Apache Kafka, Kinesis, Pub/Sub).
  • Familiarity with Infrastructure as Code (IaC) tools like Terraform.
  • Experience in MLOps and serving production-ready data to ML systems.
  • Relevant professional cloud certification (e.g., AWS Certified Data Analytics, Google Cloud Professional Data Engineer, Microsoft Certified Azure Data Engineer).
  • Experience with graph databases and familiarity with knowledge graphs and semantic processing.

NOTE: Applicants will not require employer sponsored work authorization now or in the future for employment in the USA. Applicants must be legally authorized for employment in the USA.
