BNY
AI Security Architect
BNY is a leading global financial services company that manages, moves, and safeguards a significant portion of the world's investable assets. The company is seeking an AI Security Architect to lead the design, implementation, and governance of security controls for AI/ML systems across the enterprise, ensuring resilient and trustworthy AI solutions.
Financial Services
Responsibilities
Define enterprise AI security architecture: develop reference architectures, guardrails, and standards for secure data pipelines, model training/inference, and AI-integrated applications across on-prem and cloud
Secure MLOps/ML platforms: architect identity, secrets management, network segmentation, and least-privilege access for feature stores, model registries, orchestration, and deployment pipelines
Data protection by design: establish controls for sensitive data ingestion, anonymization/pseudonymization, encryption (at rest/in transit), tokenization, and lineage across AI workflows
Adversarial ML defense: design controls and tests for model poisoning, evasion, model theft/exfiltration, prompt injection, jailbreaking, data leakage, and output manipulation
AI supply chain security: govern third-party models, APIs, and datasets; enforce SBOMs for AI components; evaluate provenance, licensing, and dependency risk
Policy and governance integration: translate AI security requirements into actionable standards and control evidence; align with enterprise risk, compliance, and model governance processes
Threat modeling and security testing: lead threat modeling for AI systems; design red-teaming and secure evaluation methods for models and agents; integrate chaos/resilience testing
Secure development lifecycle: embed AI-specific security checks (static/dynamic scans, IaC policy-as-code, data quality gates, bias/robustness checks) into CI/CD and change management
Runtime protection: implement guardrails, content filters, output validation, rate limiting, anomaly detection, and monitoring for AI services and agentic workflows
Observability and incident response: define logging/telemetry (model inputs/outputs, drift, performance, safety events); integrate AI-specific playbooks into SOC operations
Zero Trust for AI: design identity-aware access, micro-segmentation, and continuous verification for data scientists, services, and agents
Privacy and ethics controls: partner with privacy and legal to operationalize consent, minimization, purpose limitation, and responsible AI guardrails, including human-in-the-loop where appropriate
Resilience and continuity: design disaster recovery, backup/restore, model reproducibility, and contingency plans for AI platforms and critical use cases
Vendor/platform assessments: evaluate cloud AI services, open-source frameworks, and commercial tools for security posture, compliance, and fit-for-purpose
Risk management: lead control testing and risk assessments for AI initiatives; document residual risks and remediation plans; support audits and regulatory queries
Reference implementations: deliver secure patterns, sample code, and automation (e.g., reusable Terraform/Policy-as-Code, secrets patterns, logging schemas) to accelerate adoption; a minimal policy-as-code sketch follows this list
Stakeholder leadership: partner with platform engineering, data science, enterprise architecture, cyber operations, and product teams to drive end-to-end secure outcomes
Coaching and enablement: build education and guidance for architects, data scientists, and engineers on secure AI practices, design patterns, and common pitfalls
Continuous improvement: track emerging threats, standards, and best practices; lead updates to architecture and controls; measure effectiveness via KPIs and control health
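As one illustrative shape for the reference implementations mentioned above, the following is a minimal policy-as-code sketch in Python that gates an AI deployment manifest on a few baseline controls (encryption at rest, approved model registry, prompt/output logging). The manifest fields, control IDs, and registry name are hypothetical placeholders, not BNY standards.

# Minimal policy-as-code sketch: validate an AI deployment manifest
# against a few baseline security controls before CI/CD promotion.
# Manifest fields, control IDs, and the registry below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Finding:
    control: str
    message: str


APPROVED_REGISTRIES = {"registry.internal.example/models"}  # hypothetical internal registry


def evaluate_manifest(manifest: dict) -> list[Finding]:
    """Return control violations; an empty list means the gate passes."""
    findings: list[Finding] = []
    if not manifest.get("encryption_at_rest", False):
        findings.append(Finding("DATA-ENC-01", "Encryption at rest is not enabled."))
    registry = manifest.get("model_registry", "")
    if registry not in APPROVED_REGISTRIES:
        findings.append(Finding("SUPPLY-01", f"Model registry '{registry}' is not approved."))
    if not manifest.get("prompt_output_logging", False):
        findings.append(Finding("OBS-01", "Prompt/output logging is not configured."))
    return findings


if __name__ == "__main__":
    sample = {
        "encryption_at_rest": True,
        "model_registry": "registry.internal.example/models",
        "prompt_output_logging": False,  # will trigger OBS-01
    }
    for f in evaluate_manifest(sample):
        print(f"[FAIL] {f.control}: {f.message}")

In practice a gate like this would typically run as a CI/CD step alongside IaC scanning, with controls sourced from the enterprise standard rather than hard-coded.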
Qualifications
Required
12+ years in cybersecurity/enterprise security architecture with 3+ years focused on AI/ML or data platform security at scale
Expertise in cloud security (AWS/Azure/GCP) including identity, secrets management, key management (KMS/HSM), network segmentation, and policy-as-code
Strong knowledge of AI/ML workflows: data ingestion/feature engineering, model training/inference, MLOps tooling (model registry, orchestrators, serving)
Practical experience with adversarial ML concepts and defenses; familiarity with model robustness, prompt injection risks, and secure evaluation methods
Proficiency in designing observability/telemetry for AI systems (e.g., logging prompts/outputs, drift/quality metrics, safety events) with SIEM/SOAR integration; a minimal telemetry sketch follows this list
Hands-on with infrastructure-as-code (Terraform/CloudFormation), CI/CD, and secure SDLC practices tailored to data/ML systems
Deep understanding of data protection (encryption, tokenization, anonymization), privacy by design, and secure data lifecycle management
Strong stakeholder management and communication skills; ability to convert complex risks into clear architecture decisions and implementation guidance
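As an illustration of the observability/telemetry design noted above, the following minimal Python sketch builds a structured, SIEM-friendly log record for one model interaction, hashing prompt and output text so raw content stays out of logs. The field names and example model ID are assumptions for illustration only, not an established schema.

# Illustrative telemetry sketch: structured log record for one AI interaction,
# suitable for forwarding to a SIEM. Field names are assumed, not a standard schema.

import hashlib
import json
import time
import uuid


def log_inference_event(prompt: str, output: str, model_id: str,
                        drift_score: float, safety_flags: list[str]) -> str:
    """Build a JSON log line; hash prompt/output so raw text stays out of logs."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "drift_score": drift_score,
        "safety_flags": safety_flags,  # e.g. ["pii_detected", "policy_block"]
    }
    line = json.dumps(record)
    print(line)  # stand-in for a log shipper / SIEM forwarder
    return line


log_inference_event("example prompt", "example output",
                    model_id="doc-summarizer-v2",  # hypothetical model name
                    drift_score=0.07, safety_flags=[])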
Preferred
Experience architecting secure AI agents and LLM applications including guardrails, content filters, and output validation; a minimal output-validation sketch follows this list
Familiarity with standards and frameworks relevant to AI and data (e.g., NIST AI RMF, cloud CIS benchmarks, OWASP for ML/LLM, privacy controls)
Background in model governance and risk management (e.g., testing for drift, bias, stability, and explainability) and integration with enterprise control frameworks
Programming/scripting proficiency (Python preferred) for reference implementations, automation, and security tooling integrations
Experience with container security, Kubernetes, service mesh, and microservices patterns in AI platforms
Prior leadership in enterprise-scale transformations, enabling secure adoption of AI across multiple business lines
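As a flavor of the guardrail and output-validation work referenced above, the following minimal Python sketch screens an LLM response before it is returned to a caller. The PII pattern and length threshold are illustrative assumptions, not a production content filter; real deployments would layer multiple detectors and policy checks.

# Minimal output-validation guardrail sketch for an LLM response.
# The pattern and threshold are illustrative assumptions, not a vetted content filter.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive US SSN pattern
MAX_OUTPUT_CHARS = 4000


def validate_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block on a PII match or oversized output."""
    reasons = []
    if SSN_PATTERN.search(text):
        reasons.append("possible SSN in output")
    if len(text) > MAX_OUTPUT_CHARS:
        reasons.append("output exceeds length limit")
    return (len(reasons) == 0, reasons)


allowed, reasons = validate_output("The customer's SSN is 123-45-6789.")
print(allowed, reasons)  # -> False ['possible SSN in output']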
Benefits
Generous paid leave, including paid volunteer time
Flexible global resources and tools for your life’s journey
Focus on your health
Foster your personal resilience
Reach your financial goals
Company
BNY
We help make money work for the world — managing it, moving it and keeping it safe.