Machine Learning Systems Engineer at Inception – San Mateo, California
Inception
San Mateo, California, 94401, United States
Job Function: Engineering
About This Position
Inception creates the world’s fastest, most efficient AI models. Our Mercury model is the world’s fastest reasoning LLM and the first commercially available diffusion LLM, delivering 5x greater speed and efficiency than today’s LLMs with best-in-class quality.
We are the AI researchers and engineers behind such breakthrough AI technologies as diffusion models, flash attention, and DPO.
We are looking for ML Systems Engineers who will design and implement the infrastructure that powers our ML training and inference systems. You will collaborate with ML researchers and engineers to build efficient, reliable, and scalable systems that enable the development and deployment of state-of-the-art diffusion LLMs.
Key Responsibilities:
- Build and optimize high-performance model-serving systems for low-latency diffusion-LLM inference
- Design and implement distributed training infrastructure for large-scale machine learning models
- Create monitoring and observability solutions for ML systems in production
- Optimize infrastructure costs and resource utilization across GPU clusters
- Design and implement efficient data storage and retrieval systems for ML workloads
- Collaborate with ML researchers to translate theoretical requirements into practical system designs
Qualifications:
- BS/MS/PhD in Computer Science, Engineering, or a related field (or equivalent experience)
- Knowledge of ML serving frameworks (vLLM, TensorRT, ONNX Runtime)
- Understanding of ML frameworks (PyTorch, TensorFlow) from a systems perspective
- Familiarity with high-performance computing and GPU programming (CUDA)
- Familiarity with distributed training techniques (data parallel, model parallel, pipeline parallel)
- Proficiency in Python and at least one systems programming language (C++/Rust/Go)
- Experience with containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines
Preferred Skills:
- Experience building and maintaining large-scale ML training clusters
- Experience with distributed systems and cloud computing platforms (AWS/GCP/Azure)
- Experience with ML workflow orchestration tools (Kubeflow, Airflow)
- Background in performance optimization and profiling of ML systems
- Knowledge of ML-specific infrastructure challenges (checkpointing, resource scheduling, etc.)
- Experience with MLOps practices and tooling
Why Join Inception
- Work with World-Class Talent: Collaborate with the inventors of diffusion models and leading AI researchers
- Shape Foundational Technology: Your decisions will influence how the next generation of AI products is built and used
- Immediate Impact: Join at the ground floor where your contributions directly shape product direction and company trajectory
Perks & Benefits
- Competitive salary and equity in a rapidly growing startup
- Access to the latest GPU hardware and cloud resources
- Flexible vacation and paid time off (PTO)
- Health, dental, and vision insurance
- A collaborative and inclusive culture
About Us
Inception creates the world’s fastest, most efficient AI models. Today’s autoregressive LLMs generate tokens sequentially, which makes them painfully slow and expensive. Inception’s diffusion-based LLMs (dLLMs) generate answers in parallel. They are 5x faster and more efficient, while delivering best-in-class quality.
Inception was co-founded by Stanford professor Stefano Ermon, who co-invented such breakthrough AI technologies as diffusion models, flash attention, and DPO; UCLA professor Aditya Grover, who co-invented node2vec, decision transformers, and d1 reasoning; and Cornell professor and Afresh co-founder Volodymyr Kuleshov, who co-invented MDLM and Block Diffusion.
We pioneered the application of diffusion to language, launching the world’s first commercially available dLLM, Mercury, in early 2025. We are currently deploying our large-scale diffusion LLMs at Fortune 500 companies. Diffusion is the technology behind today’s image and video AI, and we’re making it the standard for LLMs as well.
Our team includes engineers from Google DeepMind, Meta AI, Microsoft AI, and OpenAI. Based in Palo Alto, CA, we are backed by A-list venture capitalists, including Menlo Ventures, Mayfield, M12 (Microsoft’s venture fund), Snowflake Ventures, Databricks, and Innovation Endeavors, and by tech luminaries such as Andrew Ng, Andrej Karpathy, and Eric Schmidt.
If you are talented, innovative, and ambitious, come help us invent the future of AI.
We are an equal opportunity employer and encourage candidates of all backgrounds to apply.