Software Engineering

eightfold.ai
1 month ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Junior

Job location

Tech stack

JavaScript
Artificial Intelligence
Test Automation
Data Validation
Software Debugging
DevOps
Python
Salesforce
SAP Applications
Selenium
Software Engineering
Test Execution Engine
TypeScript
Data Logging
Enterprise Software Applications
Performance Testing
Large Language Models
Prompt Engineering
Kubernetes
Information Technology
Playwright
REST
Webhooks
Software Version Control
API Management
Workday
Docker
SDET
ServiceNow

Job description

Eightfold AI leverages AI to help organizations hire, retain, and grow their workforce, focusing on enterprise-grade scale, trust, and outcomes. We are advancing our quality assurance through AI-assisted and agentic approaches.

We are seeking an AI SDET Engineer to architect, build, and scale production-grade testing infrastructure powered by AI agents, prompts, and intelligent automation. This infrastructure will enable teams to validate enterprise-grade products with high speed and reliability.

This is a software engineering role centered on test automation infrastructure. It requires deep quality engineering expertise combined with software craftsmanship to design reliable, debuggable, and maintainable AI-agent-based testing systems that scale across complex enterprise environments.

What You'll Do

  1. AI-Driven Test Automation Architecture & Framework Design
  • Design and implement scalable, maintainable testing frameworks utilizing AI agents for high-fidelity, enterprise-grade testing.
  • Architect prompt-driven testing systems that abstract complex test logic into composable, reusable workflows.
  • Develop infrastructure for orchestrating AI agents, managing state, parallelizing test scenarios, and ensuring failure recovery.
  • Build observability and debugging tools (logging, tracing) for transparent AI-driven test execution and root cause analysis.
  • Establish patterns for integrating AI agents with UIs, REST APIs, webhooks, and custom tools to achieve deterministic, high-signal testing.
  2. Prompt Engineering at Scale
  • Author and evolve prompt libraries and templates that encode testing best practices for consistency and rapid iteration.
  • Develop reusable prompt modules for common enterprise scenarios: authentication, role-based access, data validation, multi-tenant workflows, and complex transactions.
  • Create evaluation harnesses to measure prompt quality: consistency, defect detection rate, and agent reliability metrics.
  • Implement version control, testing, and deployment pipelines for prompts, treating them as first-class code artifacts.
  3. Enterprise-Grade Quality Validation
  • Design and execute end-to-end test workflows for complex enterprise features, including integrations, multi-step configurations, and role-based access.
  • Build frameworks to validate the AI QA platform itself, ensuring agent behavior is reliable, human-like, and compliant.
  • Develop data-driven feedback loops to analyze test results, identify gaps, and continuously improve prompt libraries and agent capabilities.
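To make the responsibilities above concrete, here is a minimal sketch of the kind of evaluation harness described in the prompt engineering section: replay a prompt-driven agent run several times, then score reliability, consistency, and defect detection rate. All names here (`RunResult`, `evaluate_prompt`, `run_fn`) are hypothetical illustrations, not an actual Eightfold API.

```python
import statistics
from dataclasses import dataclass

@dataclass
class RunResult:
    """Outcome of one agent-driven test run for a given prompt version."""
    passed: bool        # did the agent complete the scenario?
    defects_found: int  # defects the agent reported
    defects_seeded: int # known defects planted in the scenario

def evaluate_prompt(run_fn, trials: int = 5) -> dict:
    """Score a prompt by replaying it several times.

    run_fn is a hypothetical callable wrapping an AI-agent test run;
    in practice it would drive whatever agent framework is in use.
    """
    results = [run_fn() for _ in range(trials)]
    pass_flags = [r.passed for r in results]
    detection = [
        r.defects_found / r.defects_seeded
        for r in results if r.defects_seeded
    ]
    return {
        # fraction of runs that completed: a proxy for agent reliability
        "reliability": sum(pass_flags) / trials,
        # 1.0 when every run agrees on pass/fail
        "consistency": 1.0 if len(set(pass_flags)) == 1 else 0.0,
        # mean fraction of seeded defects the agent caught
        "defect_detection_rate": statistics.mean(detection) if detection else 0.0,
    }

# Usage with a stubbed run (a real harness would invoke the agent):
scores = evaluate_prompt(lambda: RunResult(True, 2, 2), trials=3)
print(scores)  # {'reliability': 1.0, 'consistency': 1.0, 'defect_detection_rate': 1.0}
```

Scoring prompts this way lets a team treat prompt changes like code changes: a regression in any metric blocks the prompt from being promoted.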

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent professional experience).
  • 1+ years of hands-on experience designing and building test automation frameworks for enterprise applications.
  • 1+ years of experience with production automation infrastructure: system design, reliability, maintainability, and scalability.
  • Deep understanding of test design principles: boundary analysis, equivalence classes, and end-to-end workflow validation.
  • Strong fundamentals in modern browser automation: DOM interaction, selectors, state management, and handling flakiness/race conditions.
  • Proficiency in Python or TypeScript/JavaScript (preferred) within test automation ecosystems.
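The flakiness/race-condition handling mentioned above is often addressed with a retry-with-backoff wrapper around unstable UI steps. This stdlib-only sketch (the decorator name and the stubbed step are hypothetical) shows the pattern; real frameworks such as Playwright build similar auto-retrying behavior into their assertions.

```python
import functools
import time

def retry_flaky(attempts: int = 3, backoff: float = 0.5):
    """Retry a flaky test step with exponential backoff.

    Transient race conditions in browser automation (late-rendering
    DOM, stale element references) often fail once and pass on retry;
    retrying with backoff separates them from genuine failures.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            delay = backoff
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except AssertionError:
                    if attempt == attempts:
                        raise  # out of retries: surface the real failure
                    time.sleep(delay)
                    delay *= 2  # exponential backoff between retries
        return wrapper
    return decorator

# Usage: a stubbed step that fails twice, then succeeds.
calls = {"n": 0}

@retry_flaky(attempts=3, backoff=0.01)
def click_submit():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AssertionError("element not yet interactable")
    return "clicked"

print(click_submit())  # clicked
```

The key design choice is re-raising on the final attempt, so a genuine defect still fails the test with its original error rather than being silently swallowed.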

Preferred Qualifications

  • Hands-on experience with Playwright (strongly preferred), Selenium, or similar modern browser automation frameworks.
  • Experience building or architecting test automation frameworks from scratch.
  • Familiarity with LLM-driven workflows, prompt engineering, or evaluation frameworks (e.g., LangChain, Anthropic SDK).
  • Experience testing or validating complex enterprise platforms (e.g., Salesforce, ServiceNow, Workday, SAP).
  • Knowledge of CI/CD pipelines, observability tools (logging, tracing, metrics), and test result analytics.
  • Background in specialized quality domains like API testing or performance testing.
  • Experience with DevOps-adjacent technologies: infrastructure-as-code, Docker, Kubernetes.
  • Relevant enterprise platform or test automation framework certifications.
