Agentic Software Engineer

EF Education First
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Charing Cross, United Kingdom

Tech stack

API
Artificial Intelligence
Amazon Web Services (AWS)
Continuous Integration
Cursor
Python
Machine Learning
Node.js
Pair Programming
Salesforce
Software Engineering
Systems Integration
TypeScript
Datadog
React
Snowflake
AWS Lambda
Gatsby
GraphQL
Data Management
Automation Anywhere

Job description

  • Raise the floor for the whole team - build reusable agent configurations, write the internal documentation that makes AI workflows reproducible, and pair with other engineers to help them move from "AI helps me type faster" to "AI does the implementation while I focus on the problem"
  • Close the gap between request and delivery - you'll work directly with product owners and stakeholders, turning ideas into working product fast enough that iteration feels like dialogue, not a ticket queue

What to show us

  • A project where you've directed agents to build something non-trivial - not a demo, something that ran against real constraints
  • Your approach to extending how long agents can work without intervention - what you've tried, what worked, what didn't
  • Why automating engineering work excites you rather than threatens you

What we're not looking for

  • Someone who wants to build ML models
  • A "prompt engineer" without real software engineering depth
  • Someone who thinks AI adoption means pair-programming with Copilot and typing faster - that's a speed boost, not a paradigm shift

Requirements

  • 5+ years of software engineering experience, but your recent work looks fundamentally different - you direct AI agents, review their output, and spend your time on specs, prompts, and validation rather than implementation
  • You can write a spec that an agent can execute without hand-holding - you know how to decompose problems, define acceptance criteria, and create the context documents that prevent agents from going off-piste
  • You build verification systems, not just features - you've thought seriously about how you know AI-generated code works if you didn't write it and didn't review it
  • Real infrastructure experience with at least some of: AWS services, CI/CD pipelines, data platforms, APIs, or enterprise SaaS integrations
  • Clear communication across technical and non-technical stakeholders - you can explain what agents are doing and why to product owners, leadership, and fellow engineers

What would make you stand out

  • You've used Claude Code, Cursor, Codex, or similar agentic coding tools in anger - not just demos
  • You've built custom skills, MCP servers, or agent tooling
  • You've worked in an environment where AI writes the majority of the code
  • You've thought about (or implemented) cost controls for token spend
  • You can point to something non-trivial built primarily by agents under your direction
  • Experience with our stack: AWS Lambda, Salesforce, Snowflake, Datadog, Gatsby, Node.js, Python, React, TypeScript, GraphQL

About the company

Hult is a global business school that teaches a Computer Science for Business degree. The engineering team doesn't sit adjacent to that mission - it's part of it. How we build software, how we adopt new tools, how we think about automation - it all feeds back into what we teach. That creates a different kind of engineering culture, and it tends to attract people who care about more than just shipping.

We give engineers real ownership and the support to make it count. You'll pick up open-ended problems, shape your own approach, and have the backing of a team that trusts you to land it. We ship fast and iterate constantly. Great products are built through rapid refinement, not lengthy planning cycles.

We leverage AI as a force multiplier. We're a pragmatic engineering team that's seen real productivity gains from AI-assisted development and wants to push further, faster, and more systematically.

What you'll do

  • Build the scaffolding that lets agents run unsupervised - writing project context files, agent skills, and prompt architectures that give AI enough context to stay on track, and designing the validation harnesses that tell you whether the output is good without you reading every line
  • Systematically remove yourself from the loop - not just from code review, but from the entire cycle: planning, implementation, testing, deployment, monitoring. Every stage where you're still the bottleneck is a stage to automate next
  • Enable parallel workstreams - the goal isn't one agent working faster, it's multiple agents working simultaneously on different problems, converging on tested, shippable output. You'll design the workflows, guardrails, and feedback loops that make this possible
  • Work across a real-world enterprise stack - CRM workflows, data pipelines, cloud infrastructure, monitoring, and marketing platforms, all of which need to keep running while you're improving how they're built

You'll need enough engineering judgement to know when an agent's output is production-ready and when it's confidently wrong.