Essential Software Inc · 1 month ago
Senior QA Engineer (AI Systems & Automation)
Essential Software Inc. is a trusted partner to federal agencies, delivering secure, cloud-based platforms that support large-scale cancer data and biomedical research. As a Senior QA Engineer (AI Systems & Automation), you will lead quality strategy and test automation for critical data platforms and AI-powered experiences, ensuring reliability and safety in a federal environment.
Information Technology · Software · Technical Support
Responsibilities
Own end-to-end quality for complex web, API, data, and AI/ML-powered features
Design AI-aware test strategies and automation that leverage GenAI and agentic frameworks
Mentor QA engineers and collaborate closely with cross-functional teams and government partners
Develop and maintain test plans, test cases, traceability, and test data for product and AI features
Execute manual and automated tests for web applications, RESTful APIs, data workflows, and AI/ML features
Own automated regression suites, release readiness criteria, and provide clear go / no-go quality signals
Participate in agile ceremonies, validate end-to-end functionality, and ensure user stories (including AI features) meet acceptance criteria
Manage the full defect lifecycle, including triage, prioritization, root cause analysis, and verification of fixes
Maintain QA documentation, runbooks, and quality dashboards
Design and execute test strategies for AI/LLM-powered capabilities, including virtual agents, chatbots, copilots, and RAG-based systems
Use LLM-powered tools (e.g., ChatGPT, Claude, Copilot) to accelerate test design, data generation, exploratory testing, and script authoring
Build and refine QA-focused AI agents that can:
Scrape UI and verify DOM structures
Validate data against backend or ground-truth sources
Auto-generate and maintain test scripts
Run self-correcting / autonomous test flows
Evaluate and integrate agentic frameworks (e.g., OpenAI Assistants API, AWS Bedrock Agents, LangGraph, MCP) into QA workflows
Define and monitor AI-specific quality metrics (accuracy vs. ground truth, hallucination and error rates, safety / policy adherence)
Ensure AI and virtual agent experiences are accurate, consistent, and high quality in a federal context
Plan and execute performance, load, and scalability testing (e.g., JMeter or equivalent)
Validate data integrity and transformation quality across complex biomedical data pipelines and AI-enhanced workflows
Partner with engineers and data scientists to ensure AI/ML models and integrations are testable, observable, and measurable post-deployment
Mentor QA team members in both traditional and AI-augmented QA practices
Collaborate with development, DevOps, product, UX, and data teams to improve testability, shift-left quality, and increase automated coverage
Integrate automation into CI/CD (e.g., GitHub Actions, Jenkins, Azure DevOps, GitLab CI), monitor test health and flakiness, and address coverage gaps
Communicate quality risks, trends, and mitigation plans to technical and non-technical stakeholders, including government partners
Qualifications
Required
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
5+ years of software QA experience (manual and automation) in production environments
2+ years providing technical or process leadership (e.g., lead QA, primary product QA owner, mentor, or manager)
Strong experience with UI automation tools (Selenium WebDriver, Playwright, or Cypress)
Experience testing RESTful APIs and microservices architectures
Hands-on experience integrating automated tests into CI/CD pipelines (GitHub Actions, Jenkins, Azure DevOps, or GitLab CI)
Professional proficiency in Python or JavaScript for test automation
Hands-on use of GenAI tools (e.g., ChatGPT, Claude, Copilot) for QA tasks such as test-case generation, data creation, and exploratory testing
Understanding of AI/agentic concepts:
Tool-calling / function invocation
Multi-step / chain-of-thought workflows
Autonomous / self-healing test flows
AI-driven data comparison and validation
Experience with performance / load testing (e.g., JMeter or equivalent)
Proficiency with Jira or similar issue tracking tools
Strong written and verbal communication skills, including the ability to explain AI-related quality risks to stakeholders
Ability to prioritize, multitask, and operate effectively in complex, mission-driven environments
Preferred
AWS Cloud Practitioner certification
Experience with modern automation stacks (Playwright or Cypress) and API testing tools (Postman, REST-assured, pytest, or similar)
Experience testing AI/ML-powered features (LLM applications, RAG systems, agents, recommendation engines, or chatbots)
Experience with one or more: LangChain or LangGraph, AWS Bedrock Agents or OpenAI Assistants API, MCP (Model Context Protocol) or similar orchestration frameworks
Experience designing or testing internal QA copilots or automation bots for test authoring or execution
Familiarity with test management tools (e.g., TestRail, Zephyr)
Knowledge of accessibility standards (WCAG) and basic security testing practices
Prior QA experience in healthcare, life sciences, biomedical informatics, or other regulated data environments
ISTQB or similar certification
Benefits
Competitive benefits
Professional development opportunities
A collaborative, supportive culture
Company
Essential Software Inc
Essential Software Inc. helps customers make scientific breakthroughs faster.
H-1B Sponsorship
Essential Software Inc has a track record of offering H-1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. The information below is provided for reference (data powered by the US Department of Labor).
Trends of Total Sponsorships
2025 (2)
2023 (8)
2022 (1)
2021 (1)
2020 (5)
Funding
Current Stage: Growth Stage
(Company data provided by Crunchbase)