Job description
In our DELMIA INDIA R&D organization, we are looking for a suitable candidate to join our passionate Software Quality Engineering team, deliver world-class Digital Manufacturing applications to our customers, and fulfill our brand promise. As part of our Software Quality Engineering team, you will be involved in challenging and exciting projects, supporting the team in the creation of outstanding enterprise solutions used by our customers. You will work closely with the extended DELMIA R&D team across multiple geographies, as well as our technical sales team, to better understand our customer requirements.
DELMIA delivers solutions to address the most challenging situations manufacturers experience today. We connect the virtual and real worlds to empower our customers worldwide to collaborate, model, optimize, and execute supply chains, manufacturing, logistics, and service to achieve strategic business results.
The current position will be based in Bengaluru, India.
Role & Responsibilities
- Perform quality assessment of DS DELMIA applications, validating both AI-driven and rule-based features against DS quality standards and real customer usage.
- Define and execute test strategies for AI/ML capabilities, covering:
  - Functional correctness
  - Model behavior consistency
  - Data dependency and sensitivity
  - Edge cases and failure modes
- Model Behavioral Testing: Design test suites to evaluate model performance (Precision, Recall, F1-score) specifically within applications designed for Digital Manufacturing.
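To illustrate the kind of behavioral testing this responsibility implies, here is a minimal sketch of computing Precision, Recall, and F1-score for a classification-style AI feature; the function name and sample labels are illustrative only, not part of any DS tooling.

```python
# Minimal sketch: Precision, Recall, and F1 for a binary classification-style
# AI feature (e.g. a hypothetical defect-detection capability).
def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative ground truth vs. model predictions
p, r, f = precision_recall_f1([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
print(p, r, f)  # → 0.75 0.75 0.75
```

In practice a library such as scikit-learn would compute these, but a hand-rolled version makes the acceptance thresholds in a test suite explicit.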
- Agentic Evaluation:
  - Metric-Based Validation: Define and implement unit tests for LLM outputs using metrics such as Faithfulness, Answer Relevancy, Contextual Precision, and Hallucination scores.
  - Tool-Calling Accuracy: Rigorously test the agent's ability to select and execute the correct "tools" or APIs within the application environment based on user intent.
  - Regression Testing for LLMs: Use "Golden Datasets" to ensure that updating the underlying model does not degrade the agent's reasoning or domain-specific knowledge.
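A golden-dataset regression test for tool-calling accuracy might look like the pytest-style sketch below. `run_agent` is a hypothetical stand-in for the real agent entry point, and the queries, tool names, and routing are illustrative assumptions, not actual DELMIA APIs.

```python
# Hedged sketch: golden-dataset regression test for an LLM agent's
# tool selection. All names and data below are hypothetical.
GOLDEN = [
    {"query": "What is the cycle time of line A?", "expected_tool": "get_cycle_time"},
    {"query": "Schedule maintenance for robot R2", "expected_tool": "create_work_order"},
]

def run_agent(query):
    # Placeholder: the real implementation would invoke the agent and
    # report which tool it chose. Faked here with keyword routing so the
    # sketch is runnable.
    routing = {"cycle time": "get_cycle_time", "maintenance": "create_work_order"}
    for keyword, tool in routing.items():
        if keyword in query.lower():
            return tool
    return "unknown"

def test_tool_selection_against_golden_dataset():
    for case in GOLDEN:
        assert run_agent(case["query"]) == case["expected_tool"]
```

Re-running this suite after a model upgrade surfaces regressions in tool selection before they reach customers.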
- Distinguish between model limitations, data issues, and software defects, and communicate findings clearly to software and data engineering teams.
- Leverage customer usage data, telemetry, and feedback to continuously refine AI test coverage and improve quality effectiveness.
- Statistical Evaluation: Present test results for AI features statistically, running multiple-pass evaluations to establish confidence intervals.
Cross-Functional Collaboration
- Work closely with AI/ML engineers, data stewards, and product teams to understand model intent, assumptions, and limitations.
- Contribute to defining AI quality metrics, acceptance criteria, and certification guidelines aligned with DS standards.
Requirements
- Bachelor's or Master's degree in Engineering, preferably Mechanical, Industrial, or related streams.
- Knowledge of any PLM, MES or Digital Manufacturing applications is a plus.
- Strong foundation in software testing and test automation
- Working knowledge of AI/ML concepts and Data Science basics (Pandas, NumPy, Pytest)
- Knowledge of tools like DeepEval, Ragas, Giskard, LangChain, LangGraph, Ollama, ChromaDB
- Ability to reason about risk, impact, and confidence, not just pass/fail
- Expert-level knowledge of test automation using tools like Playwright or Selenium, with programming/scripting in TypeScript/JavaScript or Python
- Excellent analytical and stakeholder communication skills