Data Engineer
Job description
As a Data Engineer (Consultant) at Slalom, you will design and deliver high-quality data solutions that power AI and generate measurable business value for our clients. You will build the pipelines, platforms, and feature stores that feed modern AI and analytics workloads, using AI tooling as a first-class part of your workflow.
You bring solid hands-on experience with Snowflake and Python, a working knowledge of at least one major cloud provider (AWS or Azure), and a genuine interest in the intersection of data engineering and AI. You collaborate effectively within teams, translate client needs into technical solutions, and contribute to the continued growth of Slalom's Data & AI capability.
What you'll do
Client delivery & technical execution
- Build AI-ready data platforms. Design and implement the Snowflake data layers, feature stores, and pipelines that feed production AI, machine learning, and agentic workloads.
- Deliver on Snowflake end-to-end. Ingestion, transformation, performance tuning, security, and role design, including native integration with Snowflake Cortex for in-platform AI, LLM, and embedding workloads.
- Engineer in Python as a core craft. Use Python for data processing, orchestration, automation, and integration with AI/ML services and agentic frameworks, not just as a SQL alternative (a minimal sketch of this kind of work follows this list).
- Work across the cloud. Implement data architectures on AWS (S3, Glue, Lambda, Redshift) or Azure (Data Lake, Data Factory, Functions, Synapse), with Databricks and Microsoft Fabric experience a strong plus.
- Use AI natively in how you build. Apply AI-assisted development (code generation, test creation, documentation, pipeline design) as the default way of working. The outcome is measurably faster delivery at higher quality.
- Partner across disciplines. Collaborate with AI/ML engineers, data scientists, and DevOps teams on integrated solutions: feature pipelines, model-scoring workflows, retrieval layers for LLM applications, and analytics-ready datasets.
- Apply platform best practice. Security, performance, cost optimisation, and operational excellence across Snowflake and cloud environments.
- Translate business into technical. Work with architects, analysts, and business stakeholders to turn requirements into implementations.
- Contribute to consulting delivery. Planning, estimation, and delivery as part of a project team.
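To make the Snowflake-plus-Python expectation concrete, here is a minimal sketch of the kind of feature pipeline this role builds, assuming the snowflake-snowpark-python package. The RAW.ORDERS source, the ANALYTICS.DAILY_ORDER_FEATURES target, and the column names are hypothetical placeholders, not a prescribed Slalom pattern; connection details come from your own environment.

```python
import os

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, count, sum as sum_


def build_daily_order_features(session: Session) -> None:
    """Aggregate a hypothetical RAW.ORDERS table into an analytics-ready feature table."""
    orders = session.table("RAW.ORDERS")
    features = (
        orders.filter(col("STATUS") == "COMPLETE")
        .group_by(col("CUSTOMER_ID"), col("ORDER_DATE"))
        .agg(
            count(col("ORDER_ID")).alias("ORDER_COUNT"),
            sum_(col("ORDER_TOTAL")).alias("DAILY_SPEND"),
        )
    )
    # Materialise as a table that downstream ML and BI workloads can read.
    features.write.save_as_table("ANALYTICS.DAILY_ORDER_FEATURES", mode="overwrite")


if __name__ == "__main__":
    # Credentials are read from the environment rather than hard-coded.
    session = Session.builder.configs(
        {
            "account": os.environ["SNOWFLAKE_ACCOUNT"],
            "user": os.environ["SNOWFLAKE_USER"],
            "password": os.environ["SNOWFLAKE_PASSWORD"],
            "warehouse": os.environ["SNOWFLAKE_WAREHOUSE"],
        }
    ).create()
    build_daily_order_features(session)
```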
Client collaboration & communication
- Participate in client workshops, requirements-gathering sessions, and solution design discussions.
- Communicate technical concepts clearly to both technical and non-technical audiences, including how AI capabilities shape data design choices.
- Build positive working relationships with client stakeholders through reliable delivery and transparent communication.
Practice development & knowledge sharing
- Stay current with the Snowflake, Python, and AI engineering ecosystems, including Cortex, agentic frameworks, retrieval-augmented generation, and AI-native data patterns.
- Contribute to internal accelerators, templates, and reusable components for AI-enabled data engineering.
- Share knowledge through documentation, demos, and informal mentoring of peers, including how you're using AI tooling to work better.
- Participate actively in Slalom's learning culture and help shape our approach to agentic data workflows and AI-driven automation across the data lifecycle.
Requirements
- 3 to 5 years of experience in data engineering, with hands-on work on cloud data platforms.
- Strong practical experience with Snowflake: data modelling, ingestion, transformations, performance tuning, security, and roles.
- Working knowledge of Snowflake Cortex (or strong curiosity and an appetite to build with it) for in-platform AI, LLM, and vector workloads.
- Strong proficiency in Python and SQL for data processing, transformation, and orchestration.
- Experience with AWS or Azure data services (e.g. S3/Glue/Lambda/Redshift/Athena/EMR or Data Lake/ADF/Functions/Synapse).
- Experience designing and implementing ETL/ELT pipelines, data integration patterns, and workflow orchestration (e.g. Airflow, dbt, Step Functions, Azure Data Factory); a minimal orchestration sketch follows this list.
- Solid understanding of data modelling concepts (dimensional modelling, Data Vault, normalised schemas) and when to apply them.
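As a concrete illustration of the orchestration experience listed above, here is a minimal Apache Airflow sketch that stages data into Snowflake and then runs a dbt transformation. It assumes the apache-airflow-providers-snowflake package; the connection id, stage name, and dbt selector are hypothetical, and this is one possible pattern rather than a required one.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_order_features",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Load raw files from a (hypothetical) external stage into Snowflake.
    load_raw = SnowflakeOperator(
        task_id="load_raw_orders",
        snowflake_conn_id="snowflake_default",
        sql="COPY INTO RAW.ORDERS FROM @raw_stage/orders/ FILE_FORMAT = (TYPE = PARQUET)",
    )

    # Run the downstream ELT transformation with dbt.
    transform = BashOperator(
        task_id="dbt_transform",
        bash_command="dbt run --select daily_order_features",
    )

    load_raw >> transform
```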
AI-native mindset (core, not a nice-to-have)
- Active user of AI-assisted development tools (e.g. Snowflake Cortex Code, Claude Code, GitHub Copilot) as part of your day-to-day engineering workflow.
- Curiosity about, and ideally some hands-on exposure to, agentic AI, retrieval-augmented generation, or production ML pipelines (a minimal retrieval sketch follows this list).
- An understanding of how data engineering decisions shape what AI and ML teams can deliver, and an interest in designing for that context.
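By way of example, a retrieval layer for an LLM application can be built entirely in-platform with Cortex functions. The sketch below uses snowflake-connector-python and assumes a hypothetical DOCS.CHUNKS table with a precomputed VECTOR column named EMBEDDING; the table, column, and embedding model are illustrative choices, not a fixed design.

```python
import os

import snowflake.connector

# Embed the question with a Cortex function, then rank stored chunks by
# cosine similarity. DOCS.CHUNKS and its EMBEDDING column are assumptions.
RETRIEVAL_SQL = """
WITH q AS (
    SELECT SNOWFLAKE.CORTEX.EMBED_TEXT_768('snowflake-arctic-embed-m', %s) AS v
)
SELECT c.chunk_text,
       VECTOR_COSINE_SIMILARITY(c.embedding, q.v) AS score
FROM DOCS.CHUNKS c, q
ORDER BY score DESC
LIMIT 5
"""


def retrieve_context(question: str) -> list[str]:
    """Return the five most relevant document chunks for a question."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )
    try:
        cur = conn.cursor()
        cur.execute(RETRIEVAL_SQL, (question,))
        return [row[0] for row in cur.fetchall()]
    finally:
        conn.close()
```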
Good to have
- Experience with Databricks (on AWS or Azure): notebooks, Delta Lake, and collaborative development.
- Exposure to Microsoft Fabric.
- Awareness of data governance, data quality, and metadata management principles.
- Familiarity with CI/CD and Infrastructure as Code tools (CloudFormation, Terraform, ARM/Bicep, or similar).
- Relevant cloud certifications (AWS, Azure, or Snowflake).
How you work
- Strong problem-solving skills and the ability to work in fast-paced consulting environments.
- Solid communication and interpersonal skills, with the ability to collaborate effectively in diverse teams.
- Prior client-facing experience is preferred but not required. A strong interest in consulting and working directly with clients is important.