Job Description
Intermediate Data Engineer, Data Processing and Archiving, SQL, Big Data Analytics, Big Data
This is a great opportunity to work with an industry-leading company in the finance sector. The role is based at our headquarters in London. The ideal candidate will have strong experience working on large-scale data processing projects within the financial services sector. You'll need to be comfortable using Python and Java, both for developing new products and for supporting existing ones through enhancements or upgrades. Experience with MongoDB is desirable but not essential. You should ideally have substantial experience building scalable solutions that leverage Apache Hadoop and Spark to manage massive volumes of data across different platforms (e.g., AWS).
Requirements
- Extensive understanding and application of Python development, including building complex solutions and applications.
- Extensive understanding and application of Python for data processing and transformation.
- Strong understanding of Microsoft SQL Server and T-SQL Development
- Strong Understanding of Code Optimization
- Understanding of Real-Time Data Processing and Streaming
- Understanding of Apache Beam
Advantageous:
- Understanding of Apache NiFi and building complex workflows.
- Understanding of open-source streaming technologies (Apache Spark Streaming, Apache Flink)
- Understanding of Apache Kafka
- Understanding of Apache Spark
- Understanding of Apache Hadoop
- Competency in Python and developing Spark models.
- Fundamental understanding of MongoDB
- Fundamental understanding of Redis
- Fundamental understanding of Databricks
- Fundamental understanding and optimisation of Linux and cloud environments.
Key Performance Areas:
- Effectively conceptualise, design and create high-quality, custom workflows and analytics solutions.
- Develop, test, and implement big data solution designs.
- Understand client requirements and establish knowledge of data for accurate design, analysis and retrieval.
- Pull data from various data sources and combine it in a datastore for analysis and retrieval.
- Collaborate with end users on standardised and best-practice approaches.
- Make suggestions and enhancements to existing solutions.
- Provide regular and timely feedback to clients on design and build status.
- Educate requestors on appropriate and desirable parameters to ensure they get the information they need.
- Ensure all tasks are updated on agile boards in a timely manner.
- Assist Project Managers and Change and Training Managers with any project- and training-related administration tasks.
- Actively upskill in relevant technologies as prescribed by team leadership.
- Integrate and execute with machine-learning models and AI in-flow.
- Document and design solutions.
Job Related Experience:
Minimum of 4 years' work experience, with exposure to data pipeline development and solutions architecture as well as project management / coordination experience.
Behavioural Competencies:
- Good Communication Skills
- Good Presentation Skills
- Good Adaptability
- Must take Initiative
- Good at Planning and Organising
- Good at Teamwork
- Good at Influencing
- Good at Problem Solving
- Must have Attention to Detail
- Must be good at Analytical Thinking
- Must have a desire for Innovation
- Must be able to Conceptualise Ideas
Qualifications
- B Degree – Computer Science / Engineering
- Hortonworks Certified