Senior Data Engineer, Data Platform at Jobgether – United States
About This Position
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer, Data Platform in the United States.
This role offers the opportunity to shape and scale a modern data platform in a fast-paced, data-driven healthcare technology environment.
You will design and build core platform services that power large-scale data ecosystems used to improve patient and provider experiences.
The position focuses on developing robust automation, observability, and integration capabilities across complex data workflows.
You will work on high-impact systems that enhance pipeline reliability, scalability, and developer productivity.
Collaboration is central, as you will partner closely with data engineering, DevOps, and product teams.
This is a fully remote role within the contiguous United States, requiring strong ownership and a platform-first engineering mindset.
Main Responsibilities
- Design and build scalable backend services, APIs, and internal tools that automate data platform workflows such as onboarding, validation, orchestration, schema management, and data quality monitoring.
- Develop and integrate data pipelines and platform capabilities using technologies such as Airflow, Spark, dbt, Kafka, Snowflake, and related ecosystems.
- Build observability and governance solutions including dashboards, lineage tracking, job monitoring, and data quality metrics.
- Collaborate cross-functionally with engineering, DevOps, and product teams to translate requirements into scalable end-to-end solutions.
- Contribute to improving platform reliability, developer experience, and system performance across the data ecosystem.
Requirements
- 7+ years of experience in data engineering or software development, including at least 5 years building production-grade data platforms or services.
- Strong programming skills in Python and SQL, with hands-on experience in Snowflake, BigQuery, Redshift, or similar platforms.
- Deep experience with distributed systems, streaming, or large-scale data processing frameworks such as Spark, Kafka, or Iceberg/Delta/Hudi.
- Experience building data tooling for schema evolution, data contracts, and self-service developer platforms.
- Strong understanding of data orchestration tools such as Airflow, Dagster, or Prefect.
- Solid experience with CI/CD pipelines, Docker, and AWS (minimum 2 years).
- Experience working with dbt for data transformation workflows.
- Strong knowledge of metadata systems, schema governance, and data quality frameworks.
- Nice to have: experience with data observability tools, data catalogs, healthcare data standards (X12, FHIR), data migration projects, internal developer platforms, and authentication/authorization systems (OAuth2, JWT, SSO).
- Must be authorized to work in the contiguous United States as an independent contractor; visa sponsorship is not available (including H1B, OPT, CPT, or similar statuses).
What This Role Offers
- Fully remote work within the contiguous United States.
- Long-term, stable independent contractor engagement (40 hours per week).
- Structured work schedule aligned with US Eastern Time business hours.
- Opportunity to work on high-impact, large-scale healthcare data systems.
- Collaboration with highly skilled, global engineering teams.
- Exposure to modern data platform technologies and complex distributed systems.