Member of Technical Staff, Backend, LLM Applications at Inception – San Mateo, California
Inception
San Mateo, California, 94401, United States
About This Position
Inception creates the world’s fastest, most efficient AI models. Our Mercury model is the world’s fastest reasoning LLM and the first commercially available diffusion LLM, delivering 5x the speed and efficiency of today’s LLMs with best-in-class quality.
We are the AI researchers and engineers behind such breakthrough AI technologies as diffusion models, flash attention, and DPO.
The Role
We seek experienced backend engineers to own the systems that serve our diffusion LLMs in production. You'll build and operate infrastructure that handles billions of inference requests, optimizing for latency, throughput, cost, and reliability. This role sits at the intersection of ML systems and backend infrastructure.
Key Responsibilities
- Design, build, and operate scalable backend services and model serving infrastructure for our diffusion LLMs.
- Implement and manage load balancing, autoscaling, and traffic routing for model endpoints.
- Build systems for model versioning, canary deployments, and zero-downtime rollouts.
- Develop monitoring, alerting, and observability tooling to ensure SLA compliance and rapid incident response.
- Benchmark and evaluate serving frameworks and hardware configurations to inform infrastructure decisions.
Qualifications
- BS/MS/PhD in Computer Science or a related field (or equivalent experience).
- 5+ years of experience building production backend systems.
- Strong proficiency in Python, including async programming and concurrent systems.
- Solid understanding of distributed systems, networking, and load balancing at scale.
- Familiarity with Kubernetes, CI/CD pipelines, and cloud infrastructure (AWS and/or Azure).
Preferred Skills
- Experience serving LLMs or other large generative models in production at scale.
- Experience with cloud infrastructure (AWS, Azure), including GPU instance management and cost optimization.
- Experience with infrastructure as code tools (Terraform) and deployment automation.
- Experience with monitoring and observability tools (Prometheus, Grafana).
- Familiarity with model serving frameworks (vLLM, Triton Inference Server, TensorRT-LLM).
Why Join Inception
- Work with World-Class Talent: Collaborate with the inventors of diffusion models and leading AI researchers
- Shape Foundational Technology: Your decisions will influence how the next generation of AI products is built and used
- Immediate Impact: Join at the ground floor where your contributions directly shape product direction and company trajectory
Perks & Benefits
- Competitive salary and equity in a rapidly growing startup
- Flexible vacation and paid time off (PTO)
- Health, dental, and vision insurance
- Catered meals (breakfast, lunch, & dinner)
- Commuter subsidies
- A collaborative and inclusive culture
About Us
Inception creates the world’s fastest, most efficient AI models. Today’s autoregressive LLMs generate tokens sequentially, which makes them painfully slow and expensive. Inception’s diffusion-based LLMs (dLLMs) generate answers in parallel. They are 5x faster and more efficient, while delivering best-in-class quality.
Inception was co-founded by Stanford professor Stefano Ermon, who co-invented such breakthrough AI technologies as diffusion models, flash attention, and DPO; UCLA professor Aditya Grover, who co-invented node2vec, decision transformers, and d1 reasoning; and Cornell professor and Afresh co-founder Volodymyr Kuleshov, who co-invented MDLM and Block Diffusion.
We pioneered the application of diffusion to language with the world’s first (and only) commercially available dLLM, Mercury. We are currently deploying our large-scale diffusion LLMs at Fortune 500 companies. Diffusion is the technology behind today’s image and video AI, and we’re making it the standard for LLMs as well.
Our team includes engineers from Google DeepMind, Meta AI, Microsoft AI, and OpenAI. Based in Palo Alto, CA, we are backed by A-list venture capitalists, including Menlo Ventures, Mayfield, M12 (Microsoft’s venture fund), Snowflake Ventures, Databricks, and Innovation Endeavors, and by tech luminaries such as Andrew Ng, Andrej Karpathy, and Eric Schmidt.
If you are talented, innovative, and ambitious, come help us invent the future of AI.
We are an equal opportunity employer and encourage candidates of all backgrounds to apply.