Data Scientist III
Job description
We are seeking a Data Scientist III who is a strong Data Science Generalist. The ideal candidate is comfortable working across GenAI, traditional machine learning, analytics, data engineering, cloud platforms, and enterprise system integrations.
In this role, you will help design, build, and deploy AI and ML solutions that support key business functions across Product, Sales, Finance, Marketing, Customer Success, and Engineering. You will contribute across the full solution lifecycle, including problem framing, data preparation, modelling, experimentation, prompt engineering, deployment, monitoring, and stakeholder communication.
This position is ideal for a versatile data scientist who enjoys solving diverse problems, working across multiple systems, and contributing to measurable business impact.
Responsibilities:
- Build GenAI applications using OpenAI APIs, embeddings, vector search, and RAG.
- Apply prompt engineering and help define evaluation approaches for GenAI outputs.
- Develop and deploy ML models (e.g., churn, propensity-to-buy, sentiment/feedback, lead scoring, customer intelligence).
- Own the full ML lifecycle: data prep, experimentation, deployment, and monitoring.
- Build and optimise feature pipelines and model scoring jobs with Python, Databricks, Spark, and Delta Lake.
- Use AWS (S3, Redshift, Lambda) for data automation and orchestration.
- Improve pipeline data quality, observability, lineage, and documentation.
- Integrate models and data with enterprise platforms (Salesforce, Oracle Fusion/Service Cloud/PeopleSoft).
- Deliver real-time and batch workflows to improve CRM, sales, service, and marketing operations.
- Partner cross-functionally to define KPIs, generate actionable insights, communicate clearly, and drive adoption via demos, documentation, and training.
Requirements:
- Strong Python programming skills.
- Experience with OpenAI APIs, LLM workflows, and prompt engineering.
- Solid machine learning fundamentals, including supervised learning, NLP, and feature engineering.
- Experience with Databricks, Spark, and Delta Lake.
- Strong SQL skills with experience working on large datasets.
- Experience with AWS, including S3 and Lambda.
- Familiarity with Redshift, Snowflake, or other cloud data warehouses.
- Experience working with behavioural or business datasets.
- Ability to work across machine learning, analytics, data engineering, and integrations.
- Ability to contribute to end-to-end solutions spanning data, models, APIs, and automation workflows.