Data Engineer (Python) in Brazil at Jobgether
Job Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer (Python) in Brazil.
This role is an opportunity to join a high-impact engineering environment focused on building intelligent, data-driven systems that power modern AI-enabled user experiences. You will work at the intersection of data engineering, backend development, and scalable architecture, helping transform raw data into reliable, production-ready pipelines and services. The position involves designing and maintaining distributed data systems that support both application logic and AI-driven features.

You will collaborate closely with cross-functional, globally distributed teams in a fully remote Agile setup aligned with North American time zones. This is a hands-on engineering role where you will contribute to the evolution of data infrastructure supporting advanced digital products. It is ideal for engineers who enjoy solving complex data challenges in fast-paced, innovation-driven environments.
- Design, build, and maintain scalable data pipelines using Python and dbt to support reliable ETL/ELT workflows.
- Develop and optimize backend microservices using Java, Spring Boot, and Gradle within a distributed architecture.
- Design and implement high-performance RESTful APIs connecting frontend systems with backend data platforms.
- Manage and optimize data storage and processing across SQL and NoSQL systems, including platforms such as Snowflake or Redshift.
- Orchestrate complex data workflows using Airflow, ensuring reliability, scheduling, and dependency management.
- Support the integration of AI-driven features by evolving backend systems to accommodate LLM-based and intelligent workflows.
- Participate in system troubleshooting, performance optimization, and architectural design discussions.
- Contribute to code reviews and ensure engineering best practices across distributed teams.
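To give a flavor of the pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. The data, names, and aggregation are illustrative assumptions, not details from the role; in production, the transform step would typically live in dbt models and the whole flow would be scheduled by Airflow.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """One raw event as it might arrive from an upstream source (hypothetical shape)."""
    user_id: int
    amount: float


def extract() -> list[Event]:
    # Stand-in for reading from an API, message queue, or raw landing table.
    return [Event(1, 10.0), Event(1, 5.5), Event(2, 3.0)]


def transform(events: list[Event]) -> dict[int, float]:
    # Aggregate spend per user -- the kind of step a dbt model would express in SQL.
    totals: dict[int, float] = {}
    for event in events:
        totals[event.user_id] = totals.get(event.user_id, 0.0) + event.amount
    return totals


def load(totals: dict[int, float]) -> int:
    # Stand-in for writing to a warehouse such as Snowflake or Redshift;
    # returns the number of rows "written".
    return len(totals)


if __name__ == "__main__":
    rows_written = load(transform(extract()))
    print(rows_written)
```

In an Airflow deployment, each of these functions would become a task in a DAG, with Airflow handling scheduling, retries, and dependency ordering between the extract, load, and dbt-run steps.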
Requirements:
- 5–7 years of professional experience with Python for data engineering and Java/Spring Boot for backend services.
- Strong experience building production-grade data pipelines using dbt and Airflow.
- 3+ years of experience working with big data technologies such as Snowflake, Redshift, Spark, or Kafka.
- Solid understanding of SQL and NoSQL databases, including optimization and data modeling practices.
- Strong experience designing REST APIs and working with microservices architecture.
- Proficiency with distributed systems and backend development tools such as Gradle.
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
- Experience working in fully remote, Agile, and globally distributed teams.
- Fluent English communication skills, both written and verbal.
- Nice to have: exposure to AI/LLM integrations, cloud platforms such as AWS or GCP, and a proactive, problem-solving mindset.
Benefits:
- Fully remote position based in LATAM with alignment to Central Time (CT) working hours.
- Long-term contract opportunity within a global engineering environment.
- Exposure to cutting-edge AI-driven product development and data ecosystems.
- Opportunity to work with modern data stack technologies such as dbt, Airflow, and Snowflake.
- Collaborative, international team culture with strong engineering standards.
- Participation in impactful projects shaping AI-powered user experiences.
- Continuous learning environment with exposure to advanced backend and data architectures.