Empowering Africa’s tomorrow, together…one story at a time.
Absa has over 100 years of rich history and is strongly positioned as a local bank with regional and international expertise. A career with our family offers the opportunity to be part of this exciting growth journey, to reset our future and shape our destiny as a proudly African group.
My Career Development Portal: Wherever you are in your career, we are here for you. Design your future. Discover leading-edge guidance, tools and support to unlock your potential. You are Absa. You are possibility.
Job Summary
We are looking for a highly skilled Data Warehouse Developer to join our data engineering team. The successful candidate will be responsible for designing, building, and maintaining scalable data warehouse solutions and ETL pipelines across AWS environments. This role is central to enabling high-quality, reliable data for reporting, analytics, and advanced business insights.
Job Description
Design, develop, and maintain data warehouse solutions in Amazon Redshift, ensuring optimal schema design, distribution/sort keys, and performance tuning.
Build and optimize ETL workflows using AWS Glue and PySpark, integrating diverse data sources into centralized platforms (a minimal PySpark sketch follows this list).
Manage data ingestion pipelines from relational databases, APIs, flat files, and streaming systems into Hadoop and AWS ecosystems.
Leverage AWS services (S3, Lambda, Step Functions, CloudWatch, Athena) to design scalable and cost-efficient data solutions.
Implement data models (star, snowflake, and 3NF) to support analytics and BI reporting.
Ensure data quality, reliability, and governance, applying best practices for security, compliance, and metadata management.
Collaborate with analysts, BI developers, and data scientists to deliver well-structured, accessible datasets.
Document processes, data models, and workflows to ensure maintainability and knowledge sharing.
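To make the Redshift and Glue/PySpark responsibilities above concrete, here is a minimal sketch of the kind of ETL this role involves: raw order events are read from S3, shaped into a simple star-schema fact table and dimension, and written back to S3 as partitioned Parquet ready for loading into Redshift. All bucket names, paths, and columns are hypothetical, and a plain SparkSession stands in for Glue's GlueContext to keep the example self-contained.

```python
# Minimal ETL sketch, assuming hypothetical buckets, paths, and columns.
# A plain SparkSession stands in for Glue's GlueContext so the example is
# self-contained; in a real Glue job the session would come from the job context.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Read raw order events landed in an S3 staging prefix (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Dimension table: one row per customer, deduplicated on the natural key.
dim_customer = (
    raw.select("customer_id", "customer_name", "country")
       .dropDuplicates(["customer_id"])
)

# Fact table: one row per order line, with a derived date for partitioning
# and a computed line amount.
fact_orders = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .withColumn("line_amount", F.col("quantity") * F.col("unit_price"))
       .select("order_id", "customer_id", "product_id",
               "quantity", "unit_price", "line_amount", "order_date")
)

# Write partitioned Parquet to a curated zone; Redshift can ingest it with
# COPY, or Athena / Redshift Spectrum can query it in place.
fact_orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/fact_orders/"
)
dim_customer.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/dim_customer/"
)
```

Inside Redshift itself, such a fact table would typically be created with a distribution key on the main join column and a sort key on the date column (e.g. DISTKEY(customer_id) SORTKEY(order_date)), so that joins co-locate on the same slices and date-range scans stay cheap.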
Required Qualifications
Proven experience as a Data Warehouse Developer or Data Engineer.
Strong proficiency in SQL (query optimization, complex joins, window functions); a short window-function example follows this list.
Hands-on experience with Amazon Redshift (schema design, workload management, tuning).
Strong knowledge of AWS and Apache Spark for ETL development.
Understanding of data modeling principles (Kimball, Inmon).
Proficiency with AWS cloud services (S3, IAM, Lambda, CloudFormation/Terraform).
Familiarity with version control (Git) and CI/CD pipelines.
Bachelor’s degree in Computer Science, Information Systems, or a related field.
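As a hypothetical illustration of the SQL and Spark skills listed above, the sketch below ranks each customer's orders by value with a window function and keeps the highest-value one; the equivalent SQL is ROW_NUMBER() OVER (PARTITION BY customer ORDER BY amount DESC). The data and column names are invented.

```python
# Window-function sketch with hypothetical data; the equivalent SQL is
# ROW_NUMBER() OVER (PARTITION BY customer ORDER BY amount DESC).
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window_fn_sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, "alice", 120.0), (2, "alice", 80.0), (3, "bob", 200.0)],
    ["order_id", "customer", "amount"],
)

# Rank each customer's orders by value, then keep only the top-ranked row.
w = Window.partitionBy("customer").orderBy(F.desc("amount"))
top_orders = (
    orders.withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn")
)
top_orders.show()
```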
Preferred Qualifications
Experience with real-time streaming frameworks (Kafka, Kinesis); a minimal Kinesis sketch follows this list.
Knowledge of data cataloging and metadata management (Glue Data Catalog, Apache Atlas).
Understanding of data security (encryption, IAM, GDPR/POPIA compliance).
Exposure to BI/visualization tools (Tableau, Amazon QuickSight, Power BI).
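For the streaming item above, here is a minimal producer-side sketch that pushes an event into an Amazon Kinesis data stream with boto3; the stream name, region, and event shape are hypothetical placeholders, and a downstream consumer (for example a Glue streaming job) would read and land the records.

```python
# Producer-side Kinesis sketch; stream name, region, and event shape are
# hypothetical placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="af-south-1")

event = {"order_id": 42, "customer_id": "alice", "amount": 120.0}

# PartitionKey controls shard routing; keying on the customer keeps one
# customer's events ordered within a shard.
kinesis.put_record(
    StreamName="example-orders-stream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=str(event["customer_id"]),
)
```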
Soft Skills
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in agile, fast-paced environments.
Attention to detail and commitment to data accuracy.
Education
Bachelor's Degree: Information Technology
Data Warehouse Developer • Work From Home, Gauteng, South Africa