
Assail · 16 hours ago

Senior AI Engineer

Assail is a company focused on enhancing security through advanced AI solutions. They are seeking a Senior AI Engineer to own and evolve their proprietary LLM, manage the fine-tuning lifecycle, and design workflows for autonomous security testing.
Artificial Intelligence (AI) · Cloud Data Services · Developer APIs · Software
Hiring Manager
Alissa Valentina Knight

Responsibilities

Own and evolve our proprietary 14B+ parameter security-focused LLM, built using LoRA/QLoRA adapters on an open-source foundation model (a minimal fine-tuning sketch follows this list)
Manage the full fine-tuning lifecycle: dataset curation, training runs, adapter merging, and deployment
Expand model capabilities across mobile security testing, API penetration testing, and code deobfuscation domains
Optimize inference performance using vLLM for high-throughput, low-latency serving with multi-model routing
Design and extend our gRPC-based agent swarm architecture ("Polemos" orchestrator) for autonomous security testing
Build agents capable of controlled tool use for multi-step penetration testing workflows
Enforce strict tenant isolation, execution boundaries, and audit trails to prevent misuse or data exfiltration
Work with Kafka pub/sub for real-time agent coordination and event streaming
Architect and maintain retrieval pipelines using Qdrant for vector similarity search and Neo4j for episodic memory and relationship graphs
Design context injection strategies for grounding model outputs in OWASP standards (MASWE, MASTG, MASVS, API Top 10)
Prevent data leakage and unintended retrieval in adversarial or malformed inputs
Build and curate domain-specific training datasets (we have 340k+ deobfuscation pairs and growing)
Standardize prompt management and versioning across our inference endpoints
Enforce structured outputs using schemas and validators to ensure deterministic downstream behavior (see the schema-validation sketch after this list)
Build and maintain golden datasets for automated regression testing of model outputs (see the regression-test sketch after this list)
Design evaluations that measure:
Accuracy and recall on security findings
False positives / hallucination rates
Adversarial and misuse scenarios
Red-team model behavior against prompt injection, tool injection, and malformed inputs
Ensure AI-generated findings are evidence-backed, reproducible, and auditable
Design outputs suitable for security reports, including reproduction steps and supporting artifacts
Integrate findings with compliance frameworks (OWASP MASWE/MASTG mapping)
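
A minimal sketch of the LoRA/QLoRA fine-tuning step referenced above, assuming Hugging Face Transformers, PEFT, and BitsAndBytes (all named later in the requirements). The base model, adapter ranks, and target modules are illustrative placeholders, not Assail's actual configuration:

```python
# Illustrative QLoRA setup; model name and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the open-source foundation model in 4-bit (QLoRA-style) to cut memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder; the posting describes a 14B+ model
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters; only these small matrices are trained.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of total weights

# Training would use transformers.Trainer or trl's SFTTrainer; afterwards the
# adapter can be merged into the base weights for deployment:
# merged = model.merge_and_unload()
```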
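The structured-outputs item could be implemented with a schema validator such as Pydantic; this is one plausible shape, with hypothetical field names rather than the production schema:

```python
# Hypothetical finding schema; field names and severity scale are assumptions.
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class SecurityFinding(BaseModel):
    title: str
    severity: Literal["low", "medium", "high", "critical"]
    owasp_id: str                  # e.g. "MASWE-0001" (illustrative)
    evidence: list[str]            # reproduction steps / supporting artifacts
    confidence: float = Field(ge=0.0, le=1.0)

def parse_finding(raw_json: str) -> SecurityFinding | None:
    """Reject any model output that does not match the schema."""
    try:
        return SecurityFinding.model_validate_json(raw_json)
    except ValidationError:
        return None  # caller can retry generation or escalate for review
```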
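And a golden-dataset regression suite of the kind described might look like this pytest sketch, where run_model() is a hypothetical inference helper and the file layout and thresholds are illustrative:

```python
# Golden-dataset regression sketch; paths, labels, and thresholds are assumptions.
import json
import pytest

def run_model(prompt: str) -> str:
    """Hypothetical helper that calls the deployed model."""
    raise NotImplementedError

with open("golden/deobfuscation_cases.jsonl") as f:  # assumed dataset path
    GOLDEN = [json.loads(line) for line in f]

@pytest.mark.parametrize("case", GOLDEN, ids=lambda c: c["id"])
def test_output_matches_golden(case):
    assert case["expected_substring"] in run_model(case["prompt"])

def test_false_positive_rate():
    # Benign inputs must not produce findings (hallucination guard).
    benign = [c for c in GOLDEN if c["label"] == "benign"]
    flagged = sum(1 for c in benign
                  if run_model(c["prompt"]).strip() != "NO_FINDING")
    assert flagged / len(benign) <= 0.02  # illustrative 2% budget
```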

Qualifications

Python · PyTorch · gRPC · Apache Kafka · Hugging Face Transformers · Qdrant · Neo4j · FastAPI · SQL · Adversarial ML · Cybersecurity · Leadership · Communication

Required

6+ years professional software engineering experience
3+ years building and operating AI/ML systems in production
Demonstrated ownership of LLM-backed features from design → deployment → iteration
Python (Expert)
SQL (for analysis, evaluation datasets, and metrics)
PyTorch
Hugging Face Transformers and PEFT for LoRA/QLoRA fine-tuning
vLLM for production inference serving (see the serving sketch after this list)
Experience with BitsAndBytes quantization (INT4/INT8) for memory-efficient training
Familiarity with training optimization tools (Unsloth, DeepSpeed, or similar)
gRPC and Protobuf for service communication
Apache Kafka for event streaming and agent coordination (see the Kafka sketch after this list)
FastAPI for API development
Vector databases: Qdrant (primary) or equivalent (Milvus, Pinecone); see the retrieval sketch after this list
Graph databases: Neo4j for relationship modeling
Experience debugging embedding quality and retrieval failures
Prometheus metrics and custom tracing for model observability
Experiment tracking with Weights & Biases or equivalent
Ability to instrument and debug hallucinations, retrieval misses, and tool failures
Defending against prompt injection and tool injection
Preventing data exfiltration through RAG or tool misuse
Designing AI systems for adversarial user behavior
Understanding the implications of handling sensitive security data (PII, credentials, logs)
Multi-tenant isolation patterns in AI systems
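
For reference, a minimal vLLM batch-inference sketch; a production deployment would more likely run vLLM's OpenAI-compatible server, and the model path and parallelism here are placeholders:

```python
# Offline vLLM inference sketch; model path and settings are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="org/security-14b-merged", tensor_parallel_size=2)  # hypothetical
params = SamplingParams(temperature=0.0, max_tokens=1024)  # deterministic output

prompts = ["Map this finding to the OWASP API Security Top 10: ..."]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```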
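A sketch of Kafka-based agent coordination, assuming the confluent-kafka client; topic names, keys, and payload shapes are illustrative:

```python
# Agent event streaming sketch; topics and payloads are assumptions.
import json
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce(
    "agent.findings",                  # hypothetical topic
    key=b"tenant-123",                 # keying by tenant preserves per-tenant order
    value=json.dumps({"agent": "api-pentest-01",
                      "event": "finding_created"}).encode(),
)
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "polemos-orchestrator",    # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["agent.findings"])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```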
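And a Qdrant similarity query might look like the following, where embed() stands in for whatever embedding model is used and the collection layout is assumed:

```python
# Qdrant retrieval sketch; collection name, vector size, and embed() are assumptions.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

def embed(text: str) -> list[float]:
    """Hypothetical embedding helper (e.g. a sentence-transformers model)."""
    raise NotImplementedError

client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="owasp_mastg",  # assumed collection
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

hits = client.search(
    collection_name="owasp_mastg",
    query_vector=embed("insecure WebView configuration"),
    limit=5,
)
for hit in hits:
    print(hit.score, hit.payload.get("section"))
```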

Preferred

Prior work in cybersecurity, mobile security, or adversarial ML environments
Familiarity with OWASP standards (MASWE, MASTG, MASVS, API Security Top 10)
Experience with Android code deobfuscation (ProGuard/R8 patterns)
Background in API penetration testing methodologies
Demonstrated leadership in AI system design or ownership of critical production features

Company

Assail

Assail AI develops Ares, an agentic AI platform for autonomous offensive security testing targeting mobile apps, APIs, and web infrastructure.

Funding

Current Stage
Early Stage
Total Funding
$0.25M
Key Investors
Squared Circle Ventures
2026-01-13 · Pre Seed · $0.25M

Leadership Team

Melissa Knight
Co-Founder, Chief Revenue Officer & Customer Champion
Company data provided by Crunchbase.