A Technology and Business Consulting Firm, founded on a combination of technology, data, financial, and actuarial science principles, is looking for a highly motivated Intermediate or Senior AWS Data Engineer with strong expertise in building scalable data pipelines on AWS. You will work with major financial institutions, designing and implementing modern cloud-based data solutions. This is a hands-on role requiring a solid foundation in Python or C#, AWS Glue (PySpark), and cloud-based ETL systems.
Responsibilities:
- Design, build, and optimize robust data pipelines and architectures on AWS.
- Lead the implementation of scalable and secure data solutions.
- Ingest data into AWS S3 and transform/load into RDS/Redshift.
- Build AWS Glue jobs using PySpark or Glue Spark.
- Use AWS Lambda (Python/C#) for event-driven data transformation.
- Collaborate on migration and deployment automation (Dev to Prod).
- Support data governance, lineage, and best practices in security.
- Deliver data insights through well-structured models and pipelines.
- Work with batch, real-time (Kafka), and streaming architectures.
- Interact with stakeholders and communicate technical concepts clearly.
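As a rough illustration of the event-driven Lambda transformation work described above (the field names and transformation logic here are assumptions for the sketch, not taken from the role description), a minimal Python handler reacting to an S3 PUT notification might look like:

```python
import json


def handler(event, context=None):
    """Minimal sketch of an event-driven transform: for each S3 event
    record, extract the bucket, key, and event name into a normalized
    dict. The input follows the standard S3 event notification shape."""
    results = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        results.append({
            "bucket": s3.get("bucket", {}).get("name"),
            "key": s3.get("object", {}).get("key"),
            "event_name": record.get("eventName"),
        })
    # Lambda-style response: status code plus a JSON-serialized body.
    return {"statusCode": 200, "body": json.dumps(results)}
```

In a real pipeline the handler would typically fetch the object via boto3 and write the transformed output to RDS or Redshift; that I/O is omitted here so the sketch stays self-contained and testable.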
Qualifications & Experience:
- Proficiency in SQL and data modelling principles.
- Deep experience with AWS services: Glue, Lambda, S3, RDS, Redshift, DynamoDB, Kinesis, SQS/SNS, IAM.
- CI/CD, DevOps, and scripting (PowerShell, Bash, Python, etc.).
- Familiarity with relational databases: PostgreSQL, MySQL, SQL Server.
- Agile/Scrum methodology and full SDLC experience.
- Kafka and real-time data ingestion experience (advantageous).
- Strong Python or C# programming, including OOP and libraries for data engineering.
The Reference Number for this position is NG60600. This is a permanent, hybrid position in Johannesburg offering a salary of R600k up to R900k per annum, negotiable based on experience. E-mail Nokuthula