
Inside Higher Ed · 2 days ago

AI Applications Engineer

Stanford University is seeking an experienced AI Applications Engineer to join their Enterprise Technology team. The role involves designing, implementing, and supporting AI solutions to enhance workflow efficiency and decision-making across university use cases.

Digital Media · Education · Higher Education · Journalism · Recruiting

Responsibilities

Translate requirements into well-engineered components (pipelines, vector stores, prompt/agent logic, evaluation hooks) and implement them in partnership with the platform/architecture team
Build and maintain LLM-based agents/services that securely call enterprise tools (ServiceNow, Salesforce, Oracle, etc.) using approved APIs and tool-calling frameworks. Create lightweight internal SDKs/utilities where needed
Configure and optimize RAG workflows (chunking, embeddings, metadata filters) and integrate with existing search/vector infrastructure, escalating architecture changes to designated architects (see the retrieval sketch after this list)
Follow and improve team standards for CI/CD, testing, prompt/model versioning, and observability. Own feature delivery through dev/test/prod, coordinating with release managers
Apply established guardrails (PII redaction, policy checks, access controls). Partner with InfoSec and architects to close gaps; document decisions and risks
Instrument services with KPIs (latency, cost, accuracy/quality) and build lightweight dashboards; deep BI/reporting work is not a primary focus
Write clear technical docs (APIs, workflows, runbooks), user stories, and acceptance criteria. Support and sometimes lead UAT/test activities
Facilitate working sessions with stakeholders; mentor junior engineers through code reviews and pair programming; provide concise updates and risk flags
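
For illustration of the RAG item above, a minimal, framework-agnostic sketch of the chunking, embedding, and metadata-filtering steps. The embed_texts function is a toy stand-in for whatever approved embedding service the team uses (Vertex AI, Bedrock, etc.), and the chunk sizes, metadata keys, and cosine ranking are assumptions for the sketch, not the team's actual stack.

```python
# Sketch only: fixed-size chunking, toy embeddings, metadata-filtered retrieval.
from dataclasses import dataclass

import numpy as np


@dataclass
class Chunk:
    text: str
    metadata: dict
    vector: np.ndarray | None = None


def chunk_document(text: str, metadata: dict, size: int = 500, overlap: int = 50) -> list[Chunk]:
    """Split a document into overlapping character chunks, carrying metadata forward."""
    chunks, step = [], size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(Chunk(text=piece, metadata=dict(metadata)))
    return chunks


def embed_texts(texts: list[str]) -> np.ndarray:
    """Toy stand-in for the approved embedding service; deterministic per input text."""
    seeds = [np.random.default_rng(abs(hash(t)) % (2 ** 32)) for t in texts]
    return np.stack([rng.standard_normal(64) for rng in seeds])


def index(chunks: list[Chunk]) -> list[Chunk]:
    """Embed and L2-normalize every chunk so dot product equals cosine similarity."""
    for chunk, vec in zip(chunks, embed_texts([c.text for c in chunks])):
        chunk.vector = vec / np.linalg.norm(vec)
    return chunks


def retrieve(query: str, chunks: list[Chunk], filters: dict, k: int = 5) -> list[Chunk]:
    """Apply metadata filters first, then rank the survivors by cosine similarity."""
    candidates = [c for c in chunks
                  if all(c.metadata.get(key) == value for key, value in filters.items())]
    if not candidates:
        return []
    query_vec = embed_texts([query])[0]
    query_vec = query_vec / np.linalg.norm(query_vec)
    return sorted(candidates, key=lambda c: float(c.vector @ query_vec), reverse=True)[:k]


# Illustrative usage with a hypothetical document and metadata filter.
docs = index(chunk_document("Password reset policy for staff accounts ...", {"system": "ServiceNow"}))
top = retrieve("how do I reset a staff password?", docs, filters={"system": "ServiceNow"})
```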

Qualifications

AI/GenAI Engineering · LLM Agent Development · MLOps Practices · Programming Expertise · Cloud AI Stacks · Data Design/Architecture · SDLC Understanding · Open Source Experience · Problem-Solving Skills · Communication Skills · Collaboration Skills · Mentorship Skills

Required

Bachelor's degree and eight years of relevant experience or a combination of education and relevant experience
Agent/Agentic Framework Experience: Built and shipped at least one production LLM agent or agentic workflow using frameworks such as LangGraph, LangChain, CrewAI/AutoGen, Google Agent Builder/Vertex AI Agents (or equivalent). Able to explain tool selection, orchestration logic, and post-deployment support (see the orchestration sketch after this list)
Proven Delivery: Implemented 3+ AI/ML projects and 2+ GenAI/LLM projects in production, with operational support (monitoring, tuning, incident response). Projects should serve sizable user populations and demonstrate measurable efficiency gains
Strong understanding of AI/ML concepts (LLMs/transformers and classical ML) and experience designing, developing, testing, and deploying AI-driven applications
Programming Expertise: Python (primary) plus experience with Node.js/Next.js/React/TypeScript and Java; demonstrated ability to quickly learn new tools/frameworks
Experience with cloud AI stacks (e.g., Google Vertex AI, AWS Bedrock, Azure OpenAI) and vector/search technologies (Pinecone, Elastic/OpenSearch, FAISS, Milvus, etc.)
Knowledge of data design/architecture, relational and NoSQL databases, and data modeling
Thorough understanding of SDLC, MLOps, and quality control practices
Ability to define/solve logical problems for highly technical applications; strong problem-solving and systematic troubleshooting skills
Excellent communication, listening, negotiation, and conflict resolution skills; ability to bridge functional and technical resources
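
A minimal, framework-agnostic sketch of the tool-selection and orchestration logic referenced above: the model proposes a tool call and the agent dispatches only to allow-listed wrappers around approved enterprise APIs. The tool names, their signatures, and the call_model stub are hypothetical placeholders, not real ServiceNow or Salesforce APIs.

```python
# Sketch only: allow-listed tool dispatch inside a bounded agent loop.
import json
from typing import Callable


def create_servicenow_ticket(summary: str, priority: str = "3") -> dict:
    """Placeholder wrapper around an approved ServiceNow API client."""
    return {"ticket_id": "INC0000001", "summary": summary, "priority": priority}


def lookup_salesforce_account(name: str) -> dict:
    """Placeholder wrapper around an approved Salesforce API client."""
    return {"account": name, "status": "active"}


# Only tools on this allow-list can ever execute, regardless of model output.
APPROVED_TOOLS: dict[str, Callable[..., dict]] = {
    "create_servicenow_ticket": create_servicenow_ticket,
    "lookup_salesforce_account": lookup_salesforce_account,
}


def call_model(messages: list[dict]) -> dict:
    """Stub for the LLM call; assumed to return either a tool request or a final answer."""
    return {"tool": "create_servicenow_ticket",
            "arguments": {"summary": messages[-1]["content"]}}


def run_agent(user_request: str, max_steps: int = 3) -> str:
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        decision = call_model(messages)
        if "tool" not in decision:                   # model produced a final answer
            return decision.get("answer", "")
        tool = APPROVED_TOOLS.get(decision["tool"])  # reject anything off the allow-list
        if tool is None:
            return f"Refused unapproved tool: {decision['tool']}"
        result = tool(**decision.get("arguments", {}))
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Step limit reached without a final answer."


print(run_agent("Laptop won't boot after OS update"))
```

The allow-list plus the bounded step count is the design point here: the model can suggest actions, but only pre-approved wrappers run, and the loop cannot spin indefinitely.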

Preferred

MLOps Tooling: MLflow, Kubeflow, Vertex Pipelines, SageMaker Pipelines; LangSmith/PromptLayer/Weights & Biases
Open Source Savvy: Experience working with, customizing, and improving open-source solutions; comfortable contributing fixes/features upstream
Rapid Tech Adoption: Demonstrated ability to pick up a new technology/framework quickly and deliver production value with it
GenAI Frameworks: LangChain, LlamaIndex, DSPy, Haystack, LangGraph, Agent Engine, Google ADK, AWS AgentCore, CrewAI/AutoGen
Security & Governance: Implementing AI guardrails, red-teaming, and policy enforcement frameworks
Enterprise Integrations: ServiceNow, Salesforce, Oracle Financials, or others
UI Development: React/Next.js/Tailwind for internal tools
Prompt engineering at scale: Structured prompts (JSON/function-calling), templates, version control; automated offline and online evals (rubrics, hallucination/bias checks, A/B tests, golden sets); see the evaluation sketch after this list
Parameter‑efficient fine‑tuning (LoRA/QLoRA/adapters), supervised instruction tuning; hosting open‑weight models (Llama/Mistral/Qwen) with vLLM/TGI/Ollama
Safety/guardrails frameworks (Guardrails.ai, NeMo Guardrails, Azure/AWS safety filters) and jailbreak/drift detection
Hybrid search & reranking (BM25+dense, Cohere/Voyage/Jina rerankers), synthetic data generation, provenance/watermarking
Telemetry & governance: prompt/model drift monitoring, policy‑as‑code, audit logging, red‑teaming playbooks
One of (or equivalent experience with): Google/AWS/Azure ML/AI certifications or strong demonstrable portfolio of production AI systems
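
A minimal sketch of the golden-set evaluation idea named above, assuming a versioned JSON-output prompt template and an exact-field-match rubric. The template text, the generate stub, and the golden cases are hypothetical; a real harness would call the deployed model and add hallucination/bias checks.

```python
# Sketch only: offline golden-set eval for a structured (JSON) prompt template.
import json

PROMPT_TEMPLATE_V2 = (
    "Extract the requester name and department from the ticket below. "
    "Respond with JSON containing exactly the keys 'name' and 'department'.\n\n{ticket}"
)

GOLDEN_SET = [
    {"ticket": "Jordan Lee (Registrar) cannot access the grading portal.",
     "expected": {"name": "Jordan Lee", "department": "Registrar"}},
    {"ticket": "VPN drops every hour for Sam Park in Facilities.",
     "expected": {"name": "Sam Park", "department": "Facilities"}},
]


def generate(prompt: str) -> str:
    """Stub for the real LLM call; returns a JSON string."""
    return json.dumps({"name": "Jordan Lee", "department": "Registrar"})


def evaluate(template: str, cases: list[dict]) -> float:
    """Fraction of golden cases where every expected field matches exactly."""
    passed = 0
    for case in cases:
        raw = generate(template.format(ticket=case["ticket"]))
        try:
            answer = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON counts as a failure
        if all(answer.get(key) == value for key, value in case["expected"].items()):
            passed += 1
    return passed / len(cases)


print(f"golden-set pass rate: {evaluate(PROMPT_TEMPLATE_V2, GOLDEN_SET):.0%}")
```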

Benefits

Career development programs
Tuition reimbursement
Superb retirement plans
Generous time-off
Family care resources
Excellent health care benefits
Free commuter programs
Ridesharing incentives
Discounts

Company

Inside Higher Ed

Inside Higher Ed is the online source for news, opinion, and jobs related to higher education.

Funding

Current Stage: Growth Stage
Total Funding: unknown
2022-01-10: Acquired
2006-08-31: Series Unknown

Leadership Team

Stephanie Shweiki
Director, Foundation Partnerships
Company data provided by crunchbase