
Software Engineering Intern, Data & Machine Learning in Glendale, California at Moon

Moon
Glendale, California, 91203, United States

Job Description

About Moon

Moon is an ambitious, independent stealth SaaS company incubated by Home Organizers, a market leader with decades of proven success in designing and delivering exceptional, innovative home organization solutions through its subsidiaries, including Closet World, Closets by Design, and Brio Water Technology. Backed by that deep industry experience and a commitment to being Home Organizers’ critical SaaS provider for its 6,000+ employees, our team is building innovative solutions to universal problems that most businesses face, yet that no single, unified tool addresses.

Our mission is to transform the entrepreneurial experience and deliver operational excellence for businesses across the world through a unified platform supercharged with proprietary AI agents. We want to unleash the creativity of billions and inspire the world to dream big and build fast. We’re a rapidly growing team of forward-thinking and, most importantly, committed builders. We are driven by the opportunity to push boundaries, reimagine the foundations of human work, and shape tools that power the next generation of “business operations.” The way the world views and does business is changing, and we are committed to leading this change responsibly.


Role Overview

Python is central to Moon’s roadmap. Our data and ML layer powers core home services workflows, surfacing operational insights for service company owners and enabling predictive features that help users make better decisions. The data is real operational data at meaningful scale; the problems are genuinely interesting, and mistakes have real downstream consequences.

This is not a data science internship where you run notebooks in isolation. You’ll ship code that connects to a real backend and reaches real users. The year-round track is intentional: meaningful data and ML work takes time to build, validate, and integrate into a production product. You’ll go deeper here than you could in 12 weeks.

About the role

• You’ll join the data and ML engineering track with a dedicated mentor who works across data engineering and applied ML: weekly 1:1s, pipeline reviews, and structured ramp milestones.

• The code quality bar is the same as for the rest of the engineering team. Mentorship is how we help you get there, not a reason to lower it.

• You’ll collaborate directly with the .NET team on data contracts between systems; the work does not exist in isolation.

• We expect you on-site 3 days a week in Glendale, with flexibility around your academic schedule. Fully remote is not offered.

• AI-assisted development is the default here, across EDA, pipeline development, debugging, and documentation. You’re expected to come in already working this way.

What you'll do

Data Engineering & Pipelines

• Build and maintain Python ETL pipelines: ingestion, transformation, validation, and reporting.

• Write data validation and quality checks; bad data in production is a customer-facing problem, not a technical inconvenience.

• Instrument and monitor data pipelines; silent failures are often worse than loud ones.

• Collaborate with the .NET team on data contracts between systems.

• Write tests for pipeline outputs and model behavior; data pipelines have bugs just like application code does, they’re just harder to find.
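To give a concrete flavor of the validation work described above, here is a minimal sketch of a pandas quality check. The schema (`job_id`, `revenue`, `scheduled_date`) is hypothetical, invented for the example, and not Moon’s actual data model.

```python
# Illustrative sketch only: the schema below is a made-up example, not
# Moon's actual data model. The idea is to surface issues explicitly
# rather than let bad rows flow silently into production.
import pandas as pd

def validate_jobs(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues instead of failing silently."""
    issues = []
    if df["job_id"].duplicated().any():
        issues.append("duplicate job_id values")
    if df["revenue"].lt(0).any():
        issues.append("negative revenue")
    if df["scheduled_date"].isna().any():
        issues.append("missing scheduled_date")
    return issues

df = pd.DataFrame({
    "job_id": [1, 2, 2],
    "revenue": [1200.0, -50.0, 980.0],
    "scheduled_date": pd.to_datetime(["2024-01-05", None, "2024-01-07"]),
})
print(validate_jobs(df))
```

A real pipeline would route these issues to monitoring or block the load, but the shape of the check is the same.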

Applied ML & AI Integration

• Prototype and develop ML features in production or active development, applied to home services operational data.

• Integrate LLM capabilities into application features using LangChain, direct API calls, or agent orchestration patterns.

• Use AI tools actively across the whole workflow: EDA, code generation, debugging, documentation, and multi-step automated pipelines. AI-assisted development is your default mode, not an occasional tool.

• Document data models and transformation logic as part of the definition of done.
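As one illustration of the LLM-integration pattern mentioned above, the sketch below keeps the model call behind an injected callable so that prompt construction and output handling can be exercised without network access or API keys. The summarization task and field names are invented for the example, not Moon’s API.

```python
# Sketch of one common LLM-integration pattern (an assumption, not Moon's
# actual design): keep the model call behind a plain callable so prompt
# construction and parsing can be tested without network access or API keys.
from typing import Callable

def summarize_jobs(rows: list[dict], llm: Callable[[str], str]) -> str:
    """Format operational rows into a prompt and delegate to the injected LLM."""
    lines = "\n".join(f"- {r['service']}: ${r['revenue']:.2f}" for r in rows)
    prompt = f"Summarize this week's jobs for the owner:\n{lines}"
    return llm(prompt).strip()

# In production, `llm` would wrap an OpenAI/Anthropic client or a LangChain
# chain; in tests it can be a deterministic stub.
def fake_llm(prompt: str) -> str:
    return f"{prompt.count('- ')} jobs summarized."

rows = [
    {"service": "closet install", "revenue": 2400.0},
    {"service": "drawer repair", "revenue": 310.5},
]
print(summarize_jobs(rows, fake_llm))  # 2 jobs summarized.
```

Keeping the call site this thin is also what makes swapping between direct API calls and agent orchestration frameworks cheap later.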

Qualifications

Required

• Solid Python: functions, classes, error handling, and code that someone else can read.

• Data manipulation with pandas, Polars, or equivalent: load a dataset, clean it, and answer questions from it without fighting the tools.

• SQL: non-trivial queries and a real understanding of what a join is doing.

• Habitual, specific AI tool usage: you’ve used LLMs to accelerate EDA, write boilerplate, or debug data issues, and you can describe exactly how. This is evaluated explicitly.

• Genuine intellectual curiosity about data: you want to know why a number looks wrong, not just make the error go away.
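As a toy example of the join reasoning the SQL requirement refers to, the snippet below uses only the standard library’s sqlite3. The table names and rows are made up; the point is that a LEFT JOIN keeps every customer and the aggregate runs over the matched job rows.

```python
# Minimal illustration of what a join is doing, using stdlib sqlite3.
# Table names and rows are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, customer_id INTEGER, revenue REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO jobs VALUES (10, 1, 500.0), (11, 1, 250.0), (12, 2, 900.0);
""")

# LEFT JOIN keeps every customer row; SUM aggregates the matched jobs,
# and COALESCE turns "no matching jobs" into 0 instead of NULL.
rows = conn.execute("""
    SELECT c.name, COALESCE(SUM(j.revenue), 0) AS total
    FROM customers c
    LEFT JOIN jobs j ON j.customer_id = c.id
    GROUP BY c.id
    ORDER BY c.id
""").fetchall()
print(rows)  # [('Ada', 750.0), ('Grace', 900.0)]
```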

Nice to Have

• ML library exposure: scikit-learn, PyTorch, or similar. You don’t need production model experience, but you should know what a train/test split is and why it matters.

• Data pipeline tooling: Airflow, Prefect, dbt, or similar.

• LangChain, OpenAI/Anthropic API integration, or agent workflow experience.

• Cloud data services on Azure, AWS, or GCP.

• FastAPI or other Python-based API experience.

• Statistics coursework: not required, but genuinely useful for the ML work.
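Why a train/test split matters can be shown with the standard library alone. The data below is synthetic: a 1-nearest-neighbour “model” that simply memorizes its training set scores perfectly on that set, and only held-out data reveals its real quality.

```python
# Why a train/test split matters, shown with only the standard library.
# The data is synthetic; a 1-nearest-neighbour "model" that memorizes its
# training set looks perfect on that set, and only held-out data is honest.
import random

random.seed(0)
points = [(float(x), 2.0 * x + random.gauss(0, 1.0)) for x in range(100)]
random.shuffle(points)
train, test = points[:80], points[80:]

def predict(x: float, memory: list) -> float:
    # pure memorization: return the label of the closest training point
    return min(memory, key=lambda p: abs(p[0] - x))[1]

def mse(dataset: list, memory: list) -> float:
    return sum((predict(x, memory) - y) ** 2 for x, y in dataset) / len(dataset)

print(mse(train, train))  # 0.0: memorized answers look perfect
print(mse(test, train))   # > 0: the held-out error is the honest one
```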

What You’ll Get

• Competitive hourly compensation, tiered by experience (undergraduate and graduate rates; details shared during the process).

• A dedicated mentor working across data engineering and applied ML, with enough runway to see features go from prototype to production over a 6–12 month engagement.

• Work that ships: features you build will go to production users during the internship.

• Real code review under the same standards applied to the full-time team, not the kind that approves everything.

• AI tooling stipend (Cursor Pro, Claude Pro, or equivalent); the AI-native expectation is real, and we remove the financial barrier to getting there.

• Priority consideration for full-time roles upon graduation.

• Access to real-world home services operational data; the problems are genuine, not synthetic.

Location & Hybrid Policy

This role is based in Glendale, CA. We expect 3 days on-site per week, with flexibility around academic schedules communicated in advance. Fully remote arrangements are not offered. Candidates who cannot commit to regular on-site presence in Glendale are not a fit for this program.

How to Apply

Send your resume. A notebook, a project, or any data work you can share is ideal; include a link and a brief note on what you built and why. No shareable work? Describe the most interesting data problem you’ve tackled: the question, your approach, and what you found. Applications are reviewed on a rolling basis. We recruit year-round for this track.

Moon is committed to building a diverse and inclusive team. We encourage applications from candidates of all backgrounds, institutions, and experience levels. We evaluate based on demonstrated ability, not credentials.


The pay range for this role is:
25–35 USD per hour (Moon HQ)


