Avalara
Principal Security Engineer
Avalara is seeking a Principal Security Engineer to serve as a technical authority for enterprise security architecture, risk, and governance, with a primary focus on the safe and responsible use of AI across the business. The role involves defining and enforcing security guardrails for AI usage, providing independent security judgment, and collaborating with various teams to ensure application and cloud security.
Artificial Intelligence (AI) · Accounting · Finance · Legal · Software · Compliance · Financial Services · Tax Preparation
Responsibilities
Design and implement AI-powered security frameworks to enable adaptive, intelligent detection, prevention, and response capabilities across applications, cloud environments, and infrastructure
Integrate machine learning and behavior analytics into threat detection pipelines to automate identification of anomalies, insider threats, and unknown attack patterns
Lead development of predictive risk scoring engines using contextual telemetry, identity signals, and threat intel to prioritize and automate responses
Architect autonomous security workflows using SOAR, LLM agents, and API integrations for a variety of use cases, particularly those considered "AI for Security orgs."
Prototype use cases for generative AI, such as automated threat summaries, vulnerability triage, security policy generation, and chatbot assistants for security engineering
Provide principal-level application and AI security guidance to non-engineering teams, including IT, HR, Legal, Finance/Accounting, and other business functions, helping them understand and manage application and AI-related risk
Partner with Avalara’s Product Security organization to adopt, support, and reinforce existing secure SDLC standards, tooling, and processes
Perform independent risk analysis and threat modeling for applications, platforms, and AI-enabled workflows that fall outside normal Engineering activities or require cross-domain review
Serve as an escalation and second-line advisory resource for high-impact application and AI security risks, providing risk-based recommendations
Advise on secure design patterns for authentication, authorization, API security, and data protection, aligning recommendations with established practices and technology choices
Support security assessments of AI-enabled product and internal features, contributing expertise in LLM threat modeling, abuse-case analysis, and emerging AI-specific risks, in coordination with Product Security and Engineering teams
Define and review cloud security reference architectures across AWS, Azure, and GCP, with an emphasis on zero-trust principles and identity-driven access controls
Partner with platform and infrastructure teams to harden preventive controls against cloud misconfiguration and drift
Evaluate cloud security tooling and platforms, including AI-assisted capabilities, to improve visibility, prioritization, and operational efficiency while maintaining auditability and control
Serve as an escalation point for complex or high-impact cloud security risks, influencing remediation strategies and risk acceptance decisions
Mentor non-engineering teams on AppSec best practices and AI safety principles
Define security metrics and dashboards to track effectiveness of AI and AppSec initiatives
Contribute to Avalara’s broader AI governance efforts, ensuring responsible and secure use of AI in both platform and enterprise environments
Qualifications
Required
Bachelor's degree in Cybersecurity, Computer Science, AI/ML, or a related technical field
10+ years of experience in security engineering, security architecture, or software engineering, with at least 5 years in Application Security
Demonstrable experience applying AI/ML in cybersecurity (preferred)
Expertise in AppSec tools (Checkmarx, Veracode, Snyk, SonarQube, etc.) and integrating them into modern CI/CD workflows
Hands-on experience building or integrating AI/ML pipelines for use in threat detection, anomaly detection, or predictive risk modeling
Strong background in secure coding, microservices architecture, and defending APIs, web apps, and serverless environments
Proficiency in Python or similar languages for scripting, data processing, and automation
Familiarity with LLMs and generative AI platforms (e.g., OpenAI, Claude, Gemini) and their security implications
Deep understanding of cloud-native technologies (Kubernetes, containers, serverless) and corresponding security controls
Ability to translate complex security and AI concepts to stakeholders across technical and non-technical roles
Preferred
Master's degree
Certified Information Systems Security Professional (CISSP)
Certified Secure Software Lifecycle Professional (CSSLP)
Certified Cloud Security Professional (CCSP)
GIAC Cloud Security Automation (GCSA)
GIAC Web Application Penetration Tester (GWAPT)
GIAC Machine Learning Engineer (GMLE) (or equivalent)
Benefits
Paid time off
Paid parental leave
Private medical, life, and disability insurance
Company
Avalara
Avalara is a cloud-based platform that provides tax compliance software and automated solutions.
H1B Sponsorship
Avalara has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. The information below is provided for reference. (Data powered by the US Department of Labor)
[Chart: Distribution of job fields receiving sponsorship; the highlighted field is similar to this job]
Trends of Total Sponsorships
2025 (26)
2024 (34)
2023 (36)
2022 (37)
2021 (39)
2020 (26)
Funding
Current Stage: Public Company
Total Funding: $841.01M
Key Investors: BlackRock, Susquehanna Growth Equity, Warburg Pincus
Funding rounds:
2025-11-11 · Private Equity · $500M
2023-01-01 · Private Equity
2022-10-19 · Post-IPO Debt · $0.04M
Company data provided by Crunchbase