Senior Manager and Responsible AI Solution Architect

PwC
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Charing Cross, United Kingdom

Tech stack

Artificial Intelligence
Amazon Web Services (AWS)
Azure
Databases
Continuous Integration
Data Governance
Identity and Access Management
Key Management
Systems Development Life Cycle
Data Logging
Large Language Models
Data Management
CloudWatch
API Management

Job description

PwC is rapidly expanding its market-leading Responsible AI (RAI) practice in the UK to meet fast-growing client demand across all sectors. As a Senior Manager and Responsible AI Solution Architect, you will shape and deliver PwC's Responsible AI, Ethics, Security and Trust agenda. You will lead the architecture and delivery of trusted AI solutions across GenAI and agentic platforms, working at the intersection of engineering, cyber, data, model risk and regulatory compliance. You will help clients move from principles to production by designing secure, governed and observable AI systems. This role offers the opportunity to influence how some of the UK's largest organisations adopt AI responsibly through technical design leadership, assurance-by-design, and scalable governance patterns.

What your days will look like:

As a Senior Manager and Responsible AI Solution Architect, you will lead the design and delivery of trusted, secure and compliant AI systems across GenAI and emerging agentic platforms. Working across engineering, cyber, data governance and model risk, you will translate Responsible AI principles and regulatory requirements into production-ready architectures. This is a hands-on technical leadership role shaping how major organisations build, validate and scale AI safely.

  • Architect end-to-end AI / GenAI / agentic solutions (from data and identity through model integration, orchestration, deployment, monitoring and incident response), embedding PwC's "Trust by Design" architecture patterns to ensure systems are secure, governed, transparent and resilient.
  • Develop and mature PwC's "Trust by Design" reference architectures and patterns for GenAI and agentic AI, embedding: safety and policy controls (guardrails, content safety, prompt hardening), transparency and auditability (logging, traceability, lineage), privacy and security controls (data minimisation, encryption, key management), and human-in-the-loop and escalation mechanisms.
  • Translate regulatory, policy, privacy and ethical requirements into concrete technical controls embedded across: SDLC and secure-by-design practices, data lifecycle and data governance, model lifecycle (training, fine-tuning, evaluation, release, monitoring).
  • Partner with cyber and resilience specialists to advance AI threat modelling, prompt injection and data exfiltration mitigations, adversarial testing, and model assurance approaches.
  • Define and lead AI assurance strategies spanning testing, validation, monitoring and control effectiveness across both classical ML and GenAI.

Requirements

  • Deep expertise in Responsible AI principles and operating models, including design or assessment of governance/control frameworks aligned to recognised standards (e.g., NIST AI RMF, ISO/IEC 42001).
  • Strong understanding of UK/EU AI regulatory landscape (including EU AI Act), data protection, model risk concepts, and AI ethics.
  • Proven experience as a solution architect / technical architect designing and implementing enterprise-grade AI systems.
  • Strong understanding of GenAI architectures (RAG, tool use, function calling, agents, orchestration patterns), including failure modes and risk controls.
  • Hands-on or architecture-level experience with cloud AI platforms and services, ideally across Azure (e.g., Azure AI / Azure OpenAI, Prompt Flow, AML, Purview, Sentinel), AWS (e.g., Bedrock, SageMaker, IAM/KMS, CloudWatch), and other common data platforms, API management, and identity/access patterns.
  • Familiarity with AI enabling technologies and ecosystems: vector databases, embedding pipelines, feature stores, model registries, prompt/trace observability, CI/CD for ML/LLM systems.
  • Demonstrated ability to design and lead testing and validation approaches for AI systems, including GenAI safety testing, adversarial testing/red teaming, monitoring and incident management.
  • Working knowledge of emerging methods such as fine-tuning (e.g., parameter-efficient approaches), evaluation harnesses, and measurement of risk/quality.

About the company

PwC provides services to 420 of the Fortune 500 companies. The firm was formed in 1998 through the merger of Coopers & Lybrand and Price Waterhouse.

Apply for this position