Data Analytics 3 at TSPi
About This Position
The World at TSPi
Solving the world’s most pressing issues and improving the quality of life for people worldwide is what we do every day at TSPi. Creating a more equitable world is no small task, but we are driven by big challenges.
Only by sharing our commitment, energy, and innovation do we effect change and push the boundaries of what is possible. We welcome diverse ideas, backgrounds, and viewpoints – joining TSPi means access to exceptional thinkers at the top of their game.
To thrive at TSPi is to embrace flexibility and collaboration. Our open culture allows you to balance your work and personal life as needed to optimize personal well-being. Creating a more equitable world starts from within – we look after people around the world, and we will do the same for you.
Ready to embrace rewarding and meaningful work? Now’s your chance.
Position Summary
The Data Engineer will design, develop, and maintain scalable data pipelines and analytics workflows supporting CMS and Medicaid data processing. This role will leverage cloud-based data analytics platforms, specifically Databricks, to build reliable, efficient, and secure data solutions that support analytics, reporting, and operational needs.
Key Responsibilities
- Design and implement scalable data pipelines for ingesting, transforming, and processing healthcare data.
- Develop ETL/ELT workflows supporting CMS and Medicaid data integration.
- Build and maintain data processing workflows using Databricks and distributed computing technologies.
- Develop data transformation and analytics solutions using Python, PySpark, and SQL (see the sketch after this list).
- Ensure data quality, consistency, and governance across systems.
- Optimize performance of data pipelines and large-scale data processing workloads.
- Collaborate with analytics, engineering, and DevOps teams to support data platform operations.
- Maintain documentation for data architecture, data models, and pipeline processes.
- Support compliance with healthcare data security and privacy requirements.
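To make the day-to-day work concrete, here is a minimal sketch of the kind of batch ETL step described above, written in PySpark. The landing path, column names, and target table name are illustrative assumptions, not details of TSPi's or CMS's actual environment, and the Delta output assumes a Databricks workspace where Delta Lake is available.

```python
# Minimal sketch of a batch ingest-transform-load step in PySpark.
# All paths, columns, and table names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Ingest: read raw claims extracts from an assumed landing location.
raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/raw/medicaid_claims/")  # assumed path
)

# Transform: basic typing, validation, and deduplication.
clean = (
    raw
    .withColumn("claim_amount", F.col("claim_amount").cast("double"))
    .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
    .filter(F.col("claim_id").isNotNull() & (F.col("claim_amount") >= 0))
    .dropDuplicates(["claim_id"])
)

# Load: write a curated Delta table (Delta is the default table
# format on Databricks; the "curated" schema is assumed to exist).
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.medicaid_claims")
)
```

In practice a step like this would typically run as a scheduled Databricks job, with data-quality checks and incremental loads rather than a full overwrite.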
Required Qualifications
- Experience working with CMS systems and/or Medicaid data.
- Experience with cloud-based data analytics platforms, specifically Databricks.
- Strong programming experience in:
  - Python
  - PySpark
  - SQL
- Experience building distributed data processing pipelines.
- Experience working with large healthcare datasets.
- Strong understanding of ETL/ELT pipeline development.
Preferred Qualifications
- Experience implementing data lake or lakehouse architectures.
- Familiarity with healthcare interoperability standards.
Job Location
This position is located in the United States.