Senior Data Engineer
Job description
We are looking for a hands-on Senior Data Engineer to help build and evolve Tesla's next-generation enterprise analytics platform. The platform powers business intelligence, operational intelligence, manufacturing insights, supply chain visibility, service telemetry, energy operations, and more - all while operating under strict SOX compliance and change management controls.
You will design, develop, and operate large-scale data infrastructure in a fast-paced, high-impact environment where decisions affect vehicle production, global delivery, battery lifecycle management, the Supercharger network, Full Self-Driving development, and energy grid optimization.
What You'll Do
- Architect, build, and maintain state-of-the-art Enterprise Data Warehouse / Lakehouse solutions that serve both batch and near-real-time analytics use cases
- Design and implement robust ETL / ELT pipelines using Python and Apache Airflow (or modern orchestration equivalents)
- Develop and operate real-time data streaming and processing platforms using open-source technologies such as Apache Kafka, Apache Spark Streaming / Structured Streaming, Flink, or equivalent
- Maintain platform health across core systems such as Vertica, SQL Server, Airflow, and Tableau
- Handle sensitive financial, production, and customer data while strictly adhering to SOX controls, segregation of duties, change management, and audit requirements
- Partner closely with business sponsors, product managers, manufacturing engineers, service operations, finance, and IT/security teams to gather requirements, scope projects, and deliver high-quality solutions quickly
- Communicate complex technical concepts and business impact effectively through written documentation, verbal discussions, architecture diagrams, and executive-level presentations (360-degree communication)
- Define, enforce, and continuously improve engineering standards, coding best practices, testing methodologies, CI/CD patterns, monitoring & alerting, and quality assurance processes
- Actively participate in design reviews, code walkthroughs, and pull request reviews across the team
- Stay current with evolving open-source technologies and recommend adoption when they provide meaningful differentiation or operational efficiency
- Provide global 24×7 data support on a rotating basis (on-call) and own ETL / streaming pipeline health monitoring, alerting, and incident resolution
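To give a flavor of the batch ETL work described above, here is a toy extract-transform-load sketch using only the Python standard library. It is a hypothetical illustration, not Tesla code: the source rows, table name, and filtering rule are invented for the example, and a production pipeline would run as an Airflow DAG against Vertica or SQL Server rather than in-memory SQLite.

```python
# Toy ETL sketch: extract -> transform -> load.
# All names and data here are hypothetical illustrations.
import sqlite3

def extract():
    # Hypothetical source rows: (vehicle_id, miles_driven)
    return [("V1", 120), ("V2", 300), ("V3", 85)]

def transform(rows):
    # Keep only high-usage vehicles (> 100 miles) and tag them
    return [(vid, miles, "high_usage") for vid, miles in rows if miles > 100]

def load(rows, conn):
    # Load the transformed rows into the warehouse table
    conn.execute(
        "CREATE TABLE IF NOT EXISTS usage (vehicle_id TEXT, miles INTEGER, tag TEXT)"
    )
    conn.executemany("INSERT INTO usage VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM usage").fetchone()[0])  # -> 2
```

In an orchestrated deployment, each of the three functions would typically become its own Airflow task so failures can be retried independently and lineage is visible per step.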
Requirements
- 6+ years of professional experience as a Data Engineer, Backend Engineer, or ETL Developer building large-scale data platforms
- Skilled with SQL and Python for data engineering (pandas, PySpark, SQLAlchemy, API scraping, etc.)
- Strong proficiency with database systems such as Vertica, MySQL, SQL Server, NoSQL stores, and OpenSearch
- Deep hands-on experience designing and operating Airflow DAGs in production at scale
- Production experience with at least one distributed streaming system (Kafka, Kafka Streams, Spark Streaming, Flink, Pulsar, etc.)
- Solid understanding of data modeling for analytical workloads
- Experience building and operating systems under SOX compliance or similarly regulated environments (change control, audit trails, separation of duties, etc.)
- Strong SQL skills and understanding of distributed query engines
- Experience with containerization (Docker) and orchestration (Kubernetes / ECS)
- Excellent communication skills - able to explain technical trade-offs to engineers and business value to non-technical stakeholders