Senior Analytics Engineer at Watu – Nairobi
About This Position
Senior Analytics Engineer
Reporting to: Analytics Engineering Senior Manager
Department: Data Analytics
Location: Nairobi
Job Summary:
Our business is growing rapidly, and we are looking for a highly technical Senior Analytics Engineer to join our Data Analytics department. In this leadership-level technical role, you will be a primary architect of our data foundations, defining the best practices, pipeline architectures, and technical standards that will scale with our organization. You'll work with a premier modern stack, including Google Cloud Platform (GCP), Datastream, Microsoft Fabric, Spark, dbt, and others, to build high-quality, scalable, and trusted data solutions.
Beyond traditional analytics engineering, you will bridge the gap between data engineering, AI, and Data Science / Machine Learning. This is a unique opportunity to own the end-to-end data lifecycle: from designing robust ingestion pipelines and managing Data Warehouse administration to deploying our first generation of production-ready ML models, AI agents, and intelligent workflows.
We are looking for a proactive technical lead who can stabilize and develop core systems while simultaneously engineering automated AI and ML solutions. If you have mastery of advanced SQL and Python, and a proven track record of moving models from a notebook into a reliable production environment, you will play a defining role in shaping our technical foundations.
Key Responsibilities:
- Lead Data Architecture & Pipelines: Architect, deploy, and oversee robust end-to-end ingestion frameworks. You will ensure raw data from diverse sources is reliably integrated into our Data Warehouse using cloud-native tools (Datastream, Fabric, Spark).
- Advanced Data Transformation: Own the modeling layer by designing complex, performant transformation logic using dbt and SQL. You will establish the standards for how raw data is turned into clean, version-controlled, and analyst-ready datasets.
- MLOps & Model Productionalization: Bridge the gap between research and production by engineering the pipelines required to deploy Machine Learning models and AI agents. You will transition models from notebooks into stable, automated, and monitorable production workflows.
- Data Quality & Technical Governance: Act as the primary guardian of data integrity. You will define and enforce strict security protocols, data quality tests, and governance rules to ensure consistency and compliance across the global organization.
- Warehouse & Infrastructure Excellence: Manage the administration of the Data Warehouse (BigQuery/Fabric), optimizing for performance, cost-efficiency, and scalability. You will select and maintain the Analytics Engineering tooling that empowers the broader data team.
- Intelligent Automation: Lead the development of internal AI tools and "intelligent workflows." You will leverage our data foundations to build custom agents and automated processes that solve complex business logic challenges.
- Technical Leadership & Mentorship: Define the "gold standard" for engineering practices (CI/CD, documentation, modularity) within the department, providing guidance to other team members and assisting in the long-term evolution of our data strategy.
Requirements
Knowledge, Skills, and Experience
- Experience: At least 5 years of proven experience working in a Data Engineering or Back-end Engineering role.
- Core Languages: Advanced proficiency in SQL and strong coding skills in Python (specifically for data manipulation, automation, and API integrations).
- Data Architecture: Deep expertise in modern Data Warehouse design (BigQuery, Microsoft Fabric), dimensional modeling, and implementing scalable architectural patterns.
- Modern Data Stack: Extensive hands-on experience with the ELT/ETL lifecycle, including advanced transformation workflows using dbt (Data Build Tool) and orchestration.
- Cloud Infrastructure: Strong proficiency with Google Cloud Platform (GCP)—specifically BigQuery, Datastream, and Dataflow—and experience with Spark for large-scale data processing.
- Software Excellence: A strong proponent of engineering best practices, including Git version control, CI/CD pipelines, and writing clean, maintainable, and well-tested code.
The following technical knowledge would be a plus:
- ML & AI Engineering: Practical experience moving Machine Learning models from development to production. You should be familiar with MLOps concepts, LLM integration (RAG, prompt engineering, and AI agents), and data management for model training and inference.
- AI Engineering Concepts: Familiarity with modern AI patterns, specifically Vector Databases for retrieval, managing context (RAG), and equipping LLMs with tools to perform actions and interact with external APIs.
- Big Data: Experience working with non-relational databases or Big Data technologies.
- Data Streaming: Familiarity with data streaming analytics and real-time data processing.
- Polyglot: Knowledge of other coding languages beyond Python and SQL.
Non-Technical & Soft Skills
- Precision: Rigorous attention to detail, with a high standard for data quality and accuracy.
- Autonomy: The ability to work independently and proactively; you anticipate problems before they happen.
- Drive: You are a self-starter and target-oriented, capable of managing your own roadmap to meet delivery goals.
- Collaboration: A dedicated team player and good communicator, able to bridge the gap between technical complexity and business needs.
What We Offer
- Be part of an international, dynamic, and driven team that has set its aspirations high and works hard to achieve them
- Opportunities to learn and grow together with us
- Competitive compensation package
- Health benefits
Job Location
This job is located in Nairobi, 00200, Kenya.