Data Engineer
Role details
Job location
Tech stack
Job description
You will be responsible for designing and building scalable data pipelines, Data Vault/dimensional models, and Snowflake/dbt workloads for cloud migration projects.
- Implement Data Vault 2.0 (Hubs, Links, Satellites) or dimensional models on Snowflake.
- Build ELT pipelines using Snowflake, dbt, and Python/PySpark.
- Develop ingestion from APIs, databases, and streams.
- Optimize Snowflake warehouses, cost, and performance.
- Collaborate with architects, analysts, and DevOps.
- Maintain documentation, lineage, and governance standards.
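To illustrate the Hub/Link modelling mentioned above: Data Vault 2.0 typically replaces sequence surrogates with deterministic hash keys computed from normalised business keys, so Hubs, Links, and Satellites can load in parallel. A minimal Python sketch (function names are illustrative, not part of any specific toolkit; MD5 is a common but not mandatory choice of hash):

```python
import hashlib


def hub_hash_key(business_key: str) -> str:
    """Deterministic hash key for a Data Vault Hub record.

    The business key is normalised (trimmed and upper-cased) before
    hashing, so ' abc' and 'ABC' resolve to the same Hub record.
    """
    normalised = business_key.strip().upper()
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()


def link_hash_key(*business_keys: str) -> str:
    """Hash key for a Link: the concatenation of its parent business keys.

    A delimiter ('||') keeps ('ab', 'c') distinct from ('a', 'bc').
    """
    joined = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()
```

In dbt/Snowflake pipelines the same normalise-then-hash logic is usually expressed in SQL (or via a macro package), but the principle is identical: the key is a pure function of the business key, so any loader can recompute it independently.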
Requirements
We have an exciting opportunity now available with one of our sector-leading consultancy clients! They are currently looking for a skilled Snowflake Data Engineer to help on their cloud migration project. The ideal candidate will have the following:
- Strong SQL, Snowflake ELT, and dbt experience.
- Python/PySpark and ETL/ELT design.
- Data Vault 2.0 or dimensional modeling.
- AWS services (S3, Glue, Lambda, Redshift) or GCP equivalents.
- Experience with CI/CD for data pipelines.

Good to have skills
Although not essential, the following skills are desired by the client:
- Kafka/Kinesis, Airflow, CodePipeline.
- BI tools (Power BI/Tableau).
- Docker/OpenShift; metadata-driven pipelines.
- 3-8+ years of data engineering experience.
- Hands-on cloud data engineering and Snowflake/dbt exposure.