Data Engineer
Job description
* Operate real-time data streaming and processing platforms using open-source technologies such as Apache Kafka, Apache Spark Streaming / Structured Streaming, Flink, or equivalent
* Maintain platform health across Vertica, SQL Server, Airflow, Tableau, etc.
* Handle sensitive financial, production, and customer data systems while strictly adhering to SOX controls, segregation of duties, change management, and audit requirements
* Partner closely with business sponsors, product managers, manufacturing engineers, service operations, finance, and IT/security teams to gather requirements, scope projects, and deliver high-quality solutions quickly
* Communicate complex technical concepts and business impact effectively through written documentation, verbal discussions, architecture diagrams, and executive-level presentations (360-degree communication)
* Define, enforce, and continuously improve engineering standards, coding best practices, testing methodologies, CI/CD patterns, monitoring & alerting, and quality assurance processes
* Actively participate in design reviews, code walkthroughs, and pull request reviews across the team
* Stay current with evolving open-source technologies and recommend adoption when they provide meaningful differentiation or operational efficiency
* Provide global 24×7 data support on a rotating basis (on-call) and own ETL / streaming pipeline health monitoring, alerting, and incident resolution

### What You'll Bring

* Extensive professional experience as a Data Engineer, Backend Engineer, or ETL Developer building large-scale data platforms
* Skilled with SQL and Python for data engineering (pandas, PySpark, SQLAlchemy, API scraping, etc.)
* Strong proficiency with database systems such as Vertica, MySQL, SQL Server, NoSQL stores, and OpenSearch is required
* Deep hands-on experience designing and operating Airflow DAGs in production at scale
* Production experience with at least one distributed streaming system (Kafka, Kafka Streams, Spark Streaming, Flink, Pulsar, etc.)
* Solid understanding of data modeling for analytical workloads
* Experience building and operating systems under SOX compliance or similarly regulated environments (change control, audit trails, separation of duties, etc.)
* Strong SQL skills and understanding of distributed query engines
* Experience with containerization (Docker) and orchestration (Kubernetes / ECS) is required
* Excellent communication skills: able to explain technical trade-offs to engineers and business value to non-technical stakeholders

Tesla is an Equal Opportunity / Affirmative Action employer committed to diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, national origin, disability, protected veteran status, gender identity or any other factor protected by applicable law.
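The ETL bullets above emphasize pipeline health and re-runnable loads under audit controls. As a rough illustration only (not part of this role's actual codebase), here is a minimal watermark-based incremental load in Python; the table and column names (`source_events`, `target_facts`, `watermark`) are hypothetical, and sqlite3 stands in for a real warehouse:

```python
import sqlite3

def incremental_load(conn):
    """Copy rows newer than the stored watermark from source to target,
    then advance the watermark. Re-running after success loads nothing,
    which makes the step safe to retry after an incident."""
    cur = conn.cursor()
    (wm,) = cur.execute("SELECT last_id FROM watermark").fetchone()
    rows = cur.execute(
        "SELECT id, amount FROM source_events WHERE id > ? ORDER BY id", (wm,)
    ).fetchall()
    cur.executemany("INSERT INTO target_facts (id, amount) VALUES (?, ?)", rows)
    if rows:
        # Advance the watermark to the last id we successfully copied.
        cur.execute("UPDATE watermark SET last_id = ?", (rows[-1][0],))
    conn.commit()
    return len(rows)

# Setup: in-memory database with hypothetical source, target, and watermark tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_events (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE target_facts  (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE watermark     (last_id INTEGER);
    INSERT INTO watermark VALUES (0);
    INSERT INTO source_events VALUES (1, 10.0), (2, 20.0);
""")
print(incremental_load(conn))  # → 2 (first run loads both rows)
print(incremental_load(conn))  # → 0 (re-run is a no-op: idempotent)
```

The same watermark idea underlies incremental Airflow DAG runs and consumer offsets in Kafka: persist how far you got, and resume from there instead of reprocessing everything.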