Enterprise Data & Analytics
Job description
The Enterprise Data & Analytics team reports to the Chief Data Officer and supports decision-making across Fitch Group through insights, analytics, and data-driven solutions. The team operates globally, with colleagues in London and New York, and builds capabilities that help business units drive revenue growth, product innovation, productivity gains, and stronger business performance.
Within this global function, you will be embedded in a small, high-impact team responsible for researching, prototyping, and productionising next-generation analytics capabilities for Fitch Group. We're looking for someone who can work with a high degree of autonomy, bring structure to ambiguous problems, and deliver working prototypes with light-touch direction.
What We Offer
- High ownership from day one, with clear goals and access to experienced colleagues for targeted support and review (rather than intensive day-to-day training).
- The opportunity to contribute to high-impact analytics and reporting initiatives that support decision-making across Fitch Group.
- A supportive, collaborative environment with access to experienced colleagues across a global function.
- Exposure to a broad range of stakeholders, data sources, and business questions, with clear progression toward greater ownership.
What You'll Work On (Indicative, Not Exhaustive)
- Agentic AI Workflows - Help build and iterate on autonomous and semi-autonomous agent workflows that retrieve, reason over, and act on enterprise data (with guidance on architecture and safety).
- Semantic Layers & Ontologies - Contribute to semantic models (metrics, definitions, and metadata) that make it easier for business users to query data consistently.
- LLM-Powered Analytics - Prototype LLM-enabled features for summarisation, insight narration, and conversational analytics, and help evaluate what is production-ready.
- Rapid Prototyping - Turn ideas into working proofs of concept quickly, learning through iteration and feedback.
- Data Pipeline & Integration Design - Work with the team to connect prototypes to enterprise data sources using modern tooling, with an emphasis on quality, reliability, and security.
- Evaluation & Benchmarking - Help define tests and evaluation criteria (accuracy, latency, cost, and user impact) to support decisions on what to scale.
Requirements
This role suits an early-career technology enthusiast who can be productive quickly: someone with strong fundamentals, hands-on experience (through industry roles, internships, or substantial project work), and the ambition to take ownership in a fast-moving analytics and AI environment.
- A degree in Computer Science, Data Science, Mathematics, or a related discipline (or equivalent practical experience), plus evidence you can deliver in real-world settings (e.g., 1-3 years' experience, internships, or substantial end-to-end projects).
- Strong programming skills in Python, with the ability to write readable, testable code and ship working solutions.
- Genuine curiosity about AI, LLMs, and the evolving analytics landscape, evidenced by personal projects, writing, contributions, or experimentation.
- Familiarity with some of: SQL, REST APIs, cloud platforms (AWS preferred), and version control (Git).
- Ability to work independently: you can scope tasks, manage your time, and make pragmatic trade-offs to deliver value with minimal supervision.
- Strong communication skills, including the ability to explain complex technical concepts to non-technical stakeholders.
What Would Make You Stand Out
- Experience with LLM orchestration frameworks (e.g., LangChain, LangGraph, CrewAI, AutoGen, Semantic Kernel).
- Exposure to semantic layer technologies (e.g., dbt metrics layer, Cube, AtScale) or knowledge graph tooling.
- Familiarity with BI platforms (e.g., Qlik, Power BI, Tableau) and their extensibility models.
- Understanding of vector databases, RAG architectures, and embedding-based retrieval.
- Experience with containerisation (Docker) and/or CI/CD pipelines.
- Exposure to data modelling (including dimensional modelling) and/or metadata management.