
Senior Data Quality Engineer at Sidley Austin LLP – Chicago, Illinois

Sidley Austin LLP
Chicago, Illinois, 60603, United States
Hybrid | Industry: Legal | Job Function: Engineering


About This Position

Senior Data Quality Engineer

US-IL-Chicago

Department: Data and AI

Summary

The Senior Data Quality Engineer is a hands-on technical role responsible for designing and building robust, scalable, end-to-end testing frameworks for modern data pipelines. This role focuses on ensuring data quality across ingestion (APIs, SQL Server, flat files) and transformation using Medallion Architecture (Bronze, Silver, Gold).

The ideal candidate has strong experience in data quality engineering, Python and SQL, and building automated validation frameworks using tools such as Great Expectations. Experience leveraging AI-assisted development tools (e.g., Claude Code) and prompt engineering to accelerate test generation and standardization is highly valued.
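As context for this requirement, the sketch below shows the kind of declarative, reusable data checks that validation frameworks such as Great Expectations automate. It is framework-agnostic on purpose: the function names, the result dictionary shape, and the sample columns (`matter_id`, `hours`) are all illustrative assumptions, not any real library's API.

```python
# A minimal, framework-agnostic sketch of declarative data quality
# expectations. Every name here is illustrative, not a real API.

def expect_no_nulls(rows, column):
    """Pass if every row has a non-null value in `column`."""
    failures = [r for r in rows if r.get(column) is None]
    return {"check": f"no_nulls:{column}",
            "success": not failures,
            "failed_rows": len(failures)}

def expect_values_between(rows, column, low, high):
    """Pass if every value in `column` falls within [low, high]."""
    failures = [r for r in rows
                if r.get(column) is None or not (low <= r[column] <= high)]
    return {"check": f"between:{column}",
            "success": not failures,
            "failed_rows": len(failures)}

if __name__ == "__main__":
    batch = [
        {"matter_id": 1, "hours": 7.5},
        {"matter_id": 2, "hours": 12.0},
        {"matter_id": None, "hours": 3.0},
    ]
    for result in (expect_no_nulls(batch, "matter_id"),
                   expect_values_between(batch, "hours", 0, 24)):
        print(result)
```

In a real framework these checks would be configured as an expectation suite and run against each pipeline batch, with failures surfaced in reports rather than printed.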

This role is critical to ensuring reliable, high-quality data products that power downstream analytics in tools such as Power BI and Tableau.



Duties and Responsibilities
  • Design and implement scalable end-to-end testing frameworks for data pipelines
  • Validate ingestion from APIs, SQL Server, and flat files
  • Ensure data quality across Medallion Architecture layers (Bronze, Silver, Gold)
  • Build automated checks for schema validation, data integrity, and transformations
  • Develop reusable validation patterns using Great Expectations or similar frameworks
  • Leverage Claude Code and prompt engineering to accelerate test generation and standardization
  • Create reusable AI-driven testing assets (.md skills, templates) and workflows for AI-assisted coding and testing
  • Integrate testing into CI/CD pipelines (Azure DevOps, GitHub Actions)
  • Collaborate with data engineers, Product, and DevOps to define data quality standards and acceptance criteria
  • Monitor and improve data reliability, observability, and test coverage
  • Investigate data quality issues and drive root-cause resolution
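The schema-validation and Medallion-layer duties above can be sketched as a simple promotion gate: records moving from a Bronze (raw) layer to a Silver (cleansed) layer either pass a schema check or land in quarantine for root-cause investigation. The column names, types, and contract shape below are invented for illustration.

```python
# Hypothetical Bronze-to-Silver promotion gate. The expected schema
# and column names are illustrative assumptions only.

EXPECTED_SCHEMA = {"client_id": int, "invoice_date": str, "amount": float}

def validate_schema(record, schema=EXPECTED_SCHEMA):
    """Return a list of violations: missing columns or wrong types."""
    errors = []
    for col, typ in schema.items():
        if col not in record:
            errors.append(f"missing column: {col}")
        elif not isinstance(record[col], typ):
            errors.append(f"{col}: expected {typ.__name__}, "
                          f"got {type(record[col]).__name__}")
    return errors

def promote_to_silver(bronze_records):
    """Split a Bronze batch into valid Silver rows and quarantined rows."""
    silver, quarantine = [], []
    for rec in bronze_records:
        errors = validate_schema(rec)
        if errors:
            quarantine.append((rec, errors))
        else:
            silver.append(rec)
    return silver, quarantine
```

In practice the same gate would be expressed in PySpark or a validation framework and wired into CI/CD so a failing batch blocks promotion rather than silently corrupting the Gold layer.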

Salaries vary by location and are based on numerous factors, including, but not limited to, the relevant market, skills, experience, and education of the selected candidate. If an estimated salary range for this role is available, it will be provided in our Target Salary Range section. Our compensation package also includes bonus eligibility and a comprehensive benefits program. Benefits information can be found at Sidley.com/Benefits.



Target Salary Range
$100,000 - $150,000 if located in Illinois

Qualifications

To perform this job successfully, an individual must be able to perform the Duties and Responsibilities (Duties) above satisfactorily and meet the requirements below. The requirements listed below are representative of the minimum knowledge, skill, and/or ability required. Reasonable accommodations will be made to enable individuals with disabilities to perform the essential functions of the job. If you need such an accommodation, please email sidleytalentacquisition@sidley.com (current employees should contact Human Resources).

Education and/or Experience:

Required:

  • Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent experience)
  • A minimum of 5 years of experience in data quality engineering, data testing, or data platform engineering
  • Strong experience building automated testing frameworks for data pipelines
  • Hands-on experience with data validation tools (Great Expectations, DQX – Databricks Data Quality Framework, dbt tests)
  • Proficiency in Python and strong SQL skills
  • Experience with AI-assisted development tools (e.g., Claude Code, Copilot)
  • Strong understanding of prompt engineering and reusable AI workflows
  • Experience validating data ingestion from APIs, relational databases, and flat files
  • Deep understanding of data transformation validation and Medallion Architecture
  • Experience integrating testing into CI/CD pipelines (Azure DevOps, GitHub Actions, etc.)
  • Familiarity with data observability and monitoring practices

Preferred:

  • Experience with Azure Databricks and Apache Spark (PySpark)
  • Familiarity with Delta Lake and Unity Catalog
  • Experience building reusable AI prompt libraries or skills (.md or similar formats)
  • Hands-on experience using Claude Code for automation or framework generation
  • Knowledge of data contracts and schema evolution strategies
  • Experience with test data management and synthetic data generation
  • Familiarity with infrastructure as code (Terraform, Bicep)
  • Experience with streaming data pipelines (Kafka, Event Hubs)
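For the data-contracts and schema-evolution item above, one common policy is additive-only evolution: producers may add new columns, but existing columns must keep their declared types. A minimal sketch of that compatibility check follows; the contract format (a column-to-type-name mapping) is hypothetical.

```python
# Illustrative backward-compatibility check for additive-only schema
# evolution under a simple data contract. The contract format is an
# assumption for this sketch, not a standard.

CONTRACT_V1 = {"client_id": "int", "amount": "float"}

def is_backward_compatible(old, new):
    """True if every column in `old` survives in `new` with the same type.

    New columns in `new` are allowed (additive change); dropping or
    retyping an existing column is a breaking change.
    """
    return all(new.get(col) == typ for col, typ in old.items())
```

A check like this would typically run in CI against the producer's proposed schema before a pipeline change is merged.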

Other Skills and Abilities:

The following will also be required of the successful candidate:

  • Strong organizational skills
  • Strong attention to detail
  • Good judgment
  • Strong interpersonal communication skills
  • Strong analytical and problem-solving skills
  • Able to work harmoniously and effectively with others
  • Able to preserve confidentiality and exercise discretion
  • Able to work under pressure
  • Able to manage multiple projects with competing deadlines and priorities

Sidley Austin LLP is an Equal Opportunity Employer


