AI-Enabled Software Engineer
Requirements
We're looking for an experienced software engineer who is eager to build smarter systems by pairing technical expertise with emerging AI tools. You don't need to be an AI researcher, but you do need hands-on experience applying generative AI tools to real-world development workflows.
You should have:
- 5+ years of professional software engineering experience
- Proficiency with at least part of our stack: Java, TypeScript + React, and MySQL
- Extensive applied experience with AI-assisted development tooling, whether that's Cursor, Windsurf, LLM APIs, codegen platforms, vector databases, agentic frameworks like LangChain, or custom-built systems
- A strong product mindset and an interest in building for real-world impact
- A bias toward experimentation, iteration, and continuous learning
- Comfort operating in ambiguity and helping define best practices in a rapidly evolving space
We're flexible on which tools you've used; we care more about your ability to learn, adapt, and creatively apply AI in practical settings than about checking a specific tech box.
People who thrive on our team also tend to be:
- Humble, open, and curious. You ask questions and seek to understand before making assumptions
- Collaborative by default. You believe the best outcomes come from shared knowledge and ownership
- Mission-driven. You care about building tech that serves the public good
- Comfortable with uncertainty. You're energized by open problems and emerging technologies
- Growth-oriented. You embrace feedback, learn from setbacks, and look for ways to get better every day
Responsibilities
This isn't just about using AI to autocomplete code; it's about designing and orchestrating systems of AI agents that can plan, write, review, test, and deploy software collaboratively. In essence, you'll play a role akin to a tech lead for a team of intelligent coding agents.
You will:
- Design multi-agent systems with coding-focused agents (e.g., code writer, reviewer, tester, deployer); see the sketch after this list
- Write the prompts, logic, and scaffolding that guide each agent's behavior
- Handle tool use, such as enabling agents to access the file system, test runners, version control, and internal APIs
- Evaluate and refine agents' output, performance, collaboration patterns, and feedback loops
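To make the orchestration concrete, here is a minimal, hypothetical sketch in TypeScript (part of our stack) of a writer/reviewer/tester loop. Nothing here is our actual codebase: callModel is a stand-in for whichever LLM API you prefer, and the APPROVED/PASS markers are just one possible hand-off convention.

```typescript
// Hypothetical sketch: a writer -> reviewer -> tester agent loop.

type AgentRole = "writer" | "reviewer" | "tester";

interface AgentResult {
  role: AgentRole;
  output: string;
  approved: boolean;
}

// Stand-in for a real LLM API call (hosted or local model, your choice).
async function callModel(systemPrompt: string, input: string): Promise<string> {
  throw new Error("Wire this up to your LLM provider of choice");
}

// Each role gets its own prompt; the orchestrator reads a simple verdict
// prefix (APPROVED / PASS) off the top of the agent's reply.
async function runAgent(role: AgentRole, task: string, context: string): Promise<AgentResult> {
  const prompts: Record<AgentRole, string> = {
    writer: "Write code for the given task. Return only code.",
    reviewer: "Review the code for correctness and style. Begin with APPROVED or REJECTED.",
    tester: "Write tests and reason through them. Begin with PASS or FAIL.",
  };
  const output = await callModel(prompts[role], `Task: ${task}\n\nContext:\n${context}`);
  return { role, output, approved: /^(APPROVED|PASS)/.test(output.trim()) };
}

// Orchestrate: draft, review, test; feed rejections back to the writer.
async function runPipeline(task: string, maxAttempts = 3): Promise<string> {
  let feedback = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const draft = await runAgent("writer", task, feedback);
    const review = await runAgent("reviewer", task, draft.output);
    if (!review.approved) {
      feedback = `Reviewer rejected the last attempt:\n${review.output}`;
      continue;
    }
    const test = await runAgent("tester", task, draft.output);
    if (test.approved) return draft.output;
    feedback = `Tests failed on the last attempt:\n${test.output}`;
  }
  throw new Error(`No approved solution after ${maxAttempts} attempts`);
}
```

A production version of this loop is where the real work lives: giving agents safe access to the file system, test runners, and version control, and measuring how well the loop actually performs.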
If you were on the team last week, you might have:
- Prototyped a new coding assistant workflow using open-source LLMs and internal knowledge bases
- Led an architecture discussion on agentic build pipelines or automated PR generation
- Collaborated with a cross-functional team to build a fast, AI-powered interface for internal tooling
- Helped define the evaluation framework for AI contributions: accuracy, speed, and impact
- Mentored a teammate on combining TypeScript and AI tools to accelerate UI prototyping
- Explored best practices for safely and securely integrating generative AI into a public sector codebase