AddSource · 6 hours ago
AI Security Engineer
AddSource is focused on AI Security Engineering, and they are seeking an AI Security Engineer to design, evaluate, and implement secure architectures for Large Language Model and Agentic AI ecosystems. The role involves ensuring robust data protection, model governance, and compliance within various AI platforms.
Responsibilities
Engineer secure environments for enterprise LLM platforms (ChatGPT, Claude, Gemini, Azure OpenAI)
Design zero-trust architectures for AI ecosystems, including MCP servers/clients and agentic workflows
Secure LLM model lifecycle: training, fine-tuning, evaluation, deployment, inference endpoints
Define agent-to-agent (A2A) trust boundaries, cryptographic trust chains, message integrity controls
Establish guardrails for Retrieval-Augmented Generation (RAG), tool use, plugins, function calling, enterprise embeddings, contextual memory
Implement runtime sandboxing, prompt firewalling, data path isolation, interaction filtering
Apply frameworks: NIST AI RMF, MAESTRO, OWASP Top 10 for LLM & Agentic AI, MITRE ATLAS, ISO/IEC 23894 & 42001, Google SAIF, Microsoft Responsible AI Standard
Establish model governance, evaluation criteria, audit logs, chain-of-thought protection, policy configuration
Conduct threat modeling using: LLM-specific, Agentic AI Self-Propagation & Tool Abuse, RAG Architecture Security, A2A Trust Exploitation, MCP Supply-Chain & Man-in-the-Middle models
Define adversarial defenses: prompt injection mitigation, jailbreak prevention, indirect prompt poisoning, model exfiltration protection, data poisoning countermeasures, model inversion & membership inference prevention
Design secure Azure OpenAI & Azure AI Foundry deployments: private endpoints, VNet isolation, mTLS/encryption, model filtering, enterprise data security
Secure Gemini Enterprise & Google LM Notebooks: VPC Service Controls, IAM conditional access, DLP, context filtering, confidential computing
Govern MCP tools, input/output sanitization, policy-guarded capability authorization
Define secure orchestration and oversight for multi-agent LLM systems: autonomy limits, escalation rules, tool use governance
Implement Secure MLOps: dataset lineage, provenance, quality checks, differential privacy, secure gradient computation, adversarial training, signed/documented model artifacts
Secure confidential training data, prevent leakage to public models
Enable runtime protection, anomaly detection, exploit signal monitoring
Build AI-specific incident playbooks: hallucination incidents, governance policy drift, unauthorized agent actions, emergent harmful behavior
Qualification
Required
6–10 years in cybersecurity, including 2+ years in AI/ML security or LLM platform engineering
Deep understanding of generative AI security: LLM jailbreak defense, guardrails engineering, AI alignment, content filtering, advanced prompt-level security
Knowledge of LLM tool ecosystems (functions, plugins, RAG)
Security configurations for ChatGPT Enterprise, Claude Enterprise, Gemini Enterprise, Google LM Notebooks, OpenAI on Azure, Azure AI Foundry
Zero-trust architectures, KMS/HSM/secrets management, API/function calling security, encryption controls, network/IAM/private routing, DSPM, CASB, CSPM, AIRS tools
Preferred
Python, TypeScript/Node.js, Terraform/IaC for secure AI deployments
Agentic AI frameworks: LangChain, LangGraph, OpenAI Agents, CrewAI, AutoGen. ADK
Company
AddSource
Welcome to AddSource (DBA name for VeeRteq Solutions Inc.) – Your Trusted Workforce Solution Partner! As a women-owned staffing and HR consulting firm with a robust presence in both Canada and the US, AddSource is dedicated to helping businesses thrive by connecting them with exceptional talent and providing holistic HR solutions.
Funding
Current Stage
Early StageCompany data provided by crunchbase