Microsoft Fabric Data Engineer at Jobgether – United States
About This Position
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Microsoft Fabric Data Engineer in the United States.
We are looking for a skilled Microsoft Fabric Data Engineer to design, build, and optimize enterprise-scale data solutions that enable data-driven decision-making. In this role, you will develop scalable data pipelines, implement Lakehouse architectures, and integrate diverse data sources across cloud platforms. You will work closely with IT, analytics, and business teams to translate complex requirements into robust data solutions. This position requires strong technical expertise, problem-solving skills, and the ability to mentor junior engineers while delivering high-quality, reliable data services. The environment is collaborative, fast-paced, and focused on modern data engineering practices, including automation, real-time processing, and advanced analytics. This is a remote role with milestone-based travel requirements.
Responsibilities:
Design, build, and maintain distributed, scalable data pipelines using Microsoft Fabric and Apache Spark to process structured and unstructured data.
Integrate data from multiple internal and external systems, ensuring consistency, reliability, and proper lineage.
Optimize ETL/ELT workloads to improve throughput, cost efficiency, and performance of large-scale analytics environments.
Implement and enforce data quality, metadata management, governance, and compliance standards.
Collaborate with data scientists, analysts, architects, and business stakeholders to deliver insights and integrate analytical models.
Document pipeline architectures, workflows, schemas, and operational processes, while troubleshooting and ensuring enterprise-grade reliability.
Explore emerging technologies to enhance data engineering practices, including Lakehouse architecture, real-time processing, and automation.
Requirements:
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
7+ years of experience in data engineering, data architecture, or large-scale data platform development.
Strong expertise in Apache Spark for batch and stream processing.
Hands-on experience with Microsoft Fabric, Data Factory, pipelines, and Lakehouse implementations.
Advanced proficiency in SQL, Python, and/or Scala.
Experience with cloud platforms such as Azure, AWS, or GCP.
Solid understanding of distributed systems, lakehouse architecture, and data modeling.
Proven ability to design and optimize complex ETL/ELT pipelines.
Strong communication, leadership, and mentoring skills.
Preferred:
Certifications in Azure Data Engineering, Apache Spark, or Microsoft Fabric.
Experience with real-time streaming technologies (Kafka, Azure Event Hubs).
DevOps practices, including CI/CD and Infrastructure as Code.
Knowledge of Power BI or Tableau.
Benefits:
Competitive salary and performance-based incentives.
Flexible remote work with milestone-based travel opportunities.
Comprehensive healthcare and retirement plans.
Opportunities for professional development and skill growth.
Collaborative and innovative technology environment.
Access to cutting-edge data engineering tools and cloud technologies.