Data Engineer, Analytics Data Engineering at Jobgether – United States
About This Position
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer, Analytics Data Engineering in the United States.
This role offers the opportunity to design and build large-scale, modern analytics pipelines from the ground up, leveraging the latest Big Data technologies. You will be responsible for creating scalable data architectures that support multiple products and domains, enabling actionable insights across business and engineering teams. The position requires strong technical expertise, strategic thinking, and collaboration with cross-functional teams to deliver reliable, efficient, and high-impact data solutions. You will conceptualize, implement, and optimize data models, pipelines, and visualizations, ensuring high quality and operational excellence. The role is ideal for someone who thrives in a fast-paced, innovative environment and enjoys solving complex data challenges without being constrained by legacy systems. Success in this role will directly influence decision-making and the strategic use of data across the organization.
Responsibilities:
- Design, develop, and maintain scalable analytics pipelines using modern Big Data technologies such as Spark, Databricks, and data lakes
- Define company data assets, data models, and pipelines, ensuring high data quality and effective integration across systems
- Collaborate with engineering, product, and data science teams to translate business requirements into data solutions and actionable insights
- Conceptualize and own data architecture for multiple large-scale projects, evaluating design trade-offs for performance, scalability, and operational efficiency
- Optimize pipelines, dashboards, and frameworks to enable streamlined development of data artifacts
- Implement data quality frameworks and evaluate tools for lineage, monitoring, and operational reliability
- Participate in on-call rotations as needed to support data services and system reliability
Requirements:
- 5+ years of experience in Spark, Python, Java, C++, or Scala development
- 5+ years of SQL experience and expertise in schema design, dimensional modeling, and medallion architectures
- Experience with large-scale data processing and analytics platforms, including Databricks and data lake architectures
- Demonstrated ability to design, build, and maintain complex data processing systems
- Strong strategic product thinking and the ability to communicate insights that influence cross-functional teams
- BS degree in Computer Science, Mathematics, Physics, or equivalent technical experience
- Experience designing and implementing data integrations, pipelines, and quality monitoring frameworks
- Preferred: 7+ years of SQL and modeling experience, Airflow or similar orchestration frameworks, and data quality tools such as Monte Carlo
Benefits:
- Competitive salary of $132,600 – $201,800 USD, depending on US geographic zone
- Flexible remote work options and supportive engineering environment
- Generous paid time off including vacation and holidays
- Comprehensive medical, dental, and vision coverage
- Retirement plan options with company contribution
- Professional development and education allowances
- Opportunities to work with cutting-edge Big Data technologies in a collaborative environment