About the role
Join our team to design and build scalable data infrastructure that processes vast volumes of data daily, enabling real-time analytics and ML models that directly impact business decisions across growth and product teams. The role is based at our Cape Town or Randburg offices.
What you'll do
- Design and maintain high-performance data pipelines using Python, SQL, and modern cloud technologies
- Build real-time streaming data systems
- Collaborate with ML Engineers and Analytics teams to deliver reliable, high-quality datasets
- Implement Infrastructure as Code using Terraform and CI/CD pipelines
- Optimize data costs and performance across cloud platforms
- Mentor junior engineers and contribute to technical roadmap planning
Technical requirements
- 4+ years of experience with Python and advanced SQL
- Hands-on experience with cloud platforms (GCP preferred; AWS or Azure acceptable)
- Experience with the modern data stack: dbt, Airflow, Snowflake/BigQuery
- Streaming technologies: Kafka, Apache Beam, or similar
- Infrastructure as Code: Terraform, CloudFormation
- Container orchestration with Kubernetes
- Experience with data modeling and warehouse design
Qualifications and Experience
- BSc Hons in Computer Science
- 3-5 years of development experience working with Python
- Data skills (traditional SQL and NoSQL)
- Large-scale ETL
- High-scale RESTful services
- Cloud experience (Google Cloud Platform, Azure, or AWS)
- Git
- Cloud certifications (GCP Professional Data Engineer, AWS Data Analytics)
- Experience with ML model deployment and monitoring
In line with our employment equity guidelines, preference will be given to suitable candidates from the designated group.