Data Engineer
Job description
We are seeking a hands-on Data Engineer for a development-centric role. The position involves designing, implementing, and optimizing data ingestion, transformation, and orchestration workflows using Snowflake and Databricks. You will collaborate with data scientists, platform engineers, and product teams to deliver reliable, secure, and cost-efficient data products. The role covers all phases of the software development life cycle.
- Design, implement, and optimize scalable ETL/ELT pipelines across AWS, Google Cloud Platform, and Azure (a minimal pipeline sketch follows this list).
- Analyze business and technical requirements and design solutions for business-critical components.
- Work with technical and business partners to deliver solutions.
- Manage application design, development, delivery, and post-production support.
- Enhance the existing application stack and act as an application manager for key reference data applications.
- Deliver production-grade pipelines with comprehensive monitoring, alerting, and documentation.
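
To make the pipeline work concrete, here is a minimal batch ELT sketch in PySpark of the kind this role would own. It assumes a Databricks or delta-spark-enabled environment; the bucket paths, table locations, and column names (event_id, event_ts) are hypothetical placeholders, not a prescribed design.

```python
# Minimal batch ELT sketch: raw JSON events -> curated Delta table.
# Assumes a Databricks or delta-spark-enabled environment; all paths
# and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_elt").getOrCreate()

# Extract: read raw JSON landed by an upstream ingestion job.
raw = spark.read.json("s3://example-raw-bucket/events/")

# Transform: drop malformed rows, normalize timestamps, and dedupe
# so that re-runs of the job stay idempotent.
cleaned = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: partition by event date so downstream reads can prune files.
(cleaned
    .withColumn("event_date", F.to_date("event_ts"))
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-curated-bucket/events/"))
```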
Requirements
- 5 to 8 years of experience in data engineering or software engineering, with hands-on pipeline development.
- Production experience with at least one of the following cloud platforms: AWS, Google Cloud Platform, or Azure.
- Proficiency in Python and SQL.
- Experience with Snowflake, including ELT, performance tuning, and security.
- Experience with Databricks/Spark, including Delta Lake and structured streaming (see the streaming sketch after this list).
- Experience integrating and modeling data from MongoDB.
- Understanding of orchestration tools (Airflow/ADF/Cloud Composer), CI/CD, Git, and IaC (Terraform); a DAG sketch follows this list.
- Knowledge of data modeling, distributed systems, and file formats.
- Experience with Kafka, Kinesis, or Event Hubs for streaming ingestion.
- Familiarity with dbt for transformation and testing in Snowflake or Databricks.
- Knowledge of security best practices, including encryption, KMS/Key Vault, tokenization, and network isolation.
- Experience with cost governance or FinOps for data platforms.
- Exposure to ML feature pipelines and feature stores.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
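
For the Databricks/Spark streaming requirement above, a minimal structured-streaming sketch reading a Kafka topic into a Delta table. It assumes the spark-sql-kafka connector is on the Spark classpath; the broker address, topic name, message schema, and output paths are hypothetical.

```python
# Minimal structured-streaming sketch: Kafka topic -> Delta table.
# Assumes the spark-sql-kafka connector is available; broker, topic,
# schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka_to_delta").getOrCreate()

# Expected shape of each JSON-encoded Kafka message value.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Read from Kafka and parse the binary value column as JSON.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "events")                     # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Append to Delta; the checkpoint location gives exactly-once sink semantics.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/tmp/checkpoints/events")
       .outputMode("append")
       .start("/delta/events"))
```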
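For the orchestration requirement, a minimal Airflow DAG sketch wiring a daily extract-transform-load chain with retries. It assumes Airflow 2.4+ (older versions use schedule_interval); the DAG id and task callables are hypothetical placeholders standing in for real pipeline steps.

```python
# Minimal Airflow DAG sketch: daily extract -> transform -> load with retries.
# Assumes Airflow 2.4+; the DAG id and callables are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")      # placeholder step

def transform():
    print("clean and model data")  # placeholder step

def load():
    print("write to warehouse")    # placeholder step

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain; monitoring and alerting hooks would
    # attach via task callbacks in a production DAG.
    t_extract >> t_transform >> t_load
```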