Roles and Responsibilities:
- Developing and optimising ETL (Extract, Transform, Load) processes to ingest and transform large volumes of data from multiple sources (an illustrative PySpark sketch follows this list).
- Must have experience in the investment banking, payments, and transaction banking domains.
- Developing and deploying data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka, or similar technologies.
- Proficiency in programming languages and scripting (e.g., Java, Scala, Python, SQL) for data processing and analysis.
- Experience with cloud platforms and services for Big Data (e.g., AWS, Azure, Google Cloud).
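
As a rough illustration of the ETL responsibility above, the following is a minimal PySpark sketch: extract transactions from a relational source and reference data from object storage, transform them with a join and a daily aggregation, and load the result as partitioned Parquet. The JDBC URL, bucket paths, table and column names are assumptions for illustration only, not details of the actual role.

```python
# Minimal batch ETL sketch with PySpark. Connection details, paths and
# column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: pull transactions from a relational source and reference data from CSV.
transactions = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/payments")  # assumed source
    .option("dbtable", "public.transactions")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)
accounts = spark.read.option("header", True).csv("s3://bucket/reference/accounts.csv")

# Transform: join, derive a business date, and aggregate daily volumes per account.
daily_volume = (
    transactions.join(accounts, "account_id")
    .withColumn("business_date", F.to_date("booked_at"))
    .groupBy("business_date", "account_id")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("txn_count"))
)

# Load: write a partitioned Parquet dataset for downstream analytics.
daily_volume.write.mode("overwrite").partitionBy("business_date").parquet(
    "s3://bucket/curated/daily_volume"
)
```
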
Requirements:
Primary Skills:
- Designing, building, and maintaining systems that handle large volumes of data, enabling businesses to extract valuable insights and make data-driven decisions.
- Creating scalable and efficient data pipelines, implementing data models, and integrating various data sources.
- Developing and deploying data processing applications using Big Data frameworks such as Hadoop, Spark, and Kafka (an illustrative streaming-pipeline sketch follows this list).
- Writing efficient and optimised code in programming languages such as Java, Scala, and Python to manipulate and analyse data.
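
As an illustration of the pipeline-building skills above, the sketch below uses Spark Structured Streaming to ingest events from Kafka, model the raw payload with an explicit schema, and persist it as partitioned Parquet. The broker address, topic name, schema fields and storage paths are assumptions made for the example.

```python
# Minimal streaming ingestion sketch: Kafka -> typed columns -> Parquet.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Assumed event schema for incoming payment events.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "payment-events")             # assumed topic
    .load()
)

# Model the raw Kafka value as typed columns before persisting.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", F.to_date("event_time"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://bucket/raw/payment_events")
    .option("checkpointLocation", "s3://bucket/checkpoints/payment_events")
    .partitionBy("event_date")
    .trigger(processingTime="1 minute")
    .start()
)
```
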
Secondary Skills:
- Designing, developing, and implementing scalable and efficient data processing pipelines using Big Data technologies.
- Implementing a Kafka-based pipeline to feed event-driven data into a dynamic pricing model, enabling real-time pricing adjustments based on market conditions and customer demand (a sketch of such a pipeline follows this list).
- Conducting testing and validation of data pipelines and analytical solutions to ensure accuracy, reliability, and performance (a test sketch also follows this list).
- Strong experience in Spring Boot and microservices architecture.
- Strong experience in distributed computing principles and Big Data ecosystem components (e.g., Hadoop, Spark, Hive, HBase).
- More than 8 years of work experience in the IT industry.
- More than 5 years of relevant experience.
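
The Kafka-based dynamic pricing item above might look roughly like the following sketch built with the kafka-python client: consume market events, apply a pricing rule, and publish adjusted prices for downstream services. The topic names, message fields and the toy pricing rule are assumptions, not a description of any actual pricing model.

```python
# Event-driven pricing loop sketch using kafka-python. All topics, fields
# and the pricing rule are hypothetical.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "market-events",                  # assumed input topic
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    group_id="pricing-service",
)
producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def adjust_price(base_price: float, demand_index: float) -> float:
    """Toy pricing rule: scale the base price by observed demand."""
    return round(base_price * (1.0 + 0.1 * demand_index), 2)

for event in consumer:
    payload = event.value
    new_price = adjust_price(payload["base_price"], payload["demand_index"])
    # Publish the adjusted price so downstream services can react in near real time.
    producer.send("price-updates", {"instrument": payload["instrument"], "price": new_price})
```
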
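For the testing and validation item, a pipeline transformation can be exercised with a local SparkSession and plain pytest-style assertions, as in the sketch below. The daily_volume transformation and the sample rows are hypothetical.

```python
# Pipeline validation sketch: assert that a Spark transformation aggregates
# transaction amounts per account and day as expected.
from pyspark.sql import SparkSession, functions as F


def daily_volume(transactions_df):
    """Transformation under test (hypothetical): daily totals per account."""
    return (
        transactions_df.withColumn("business_date", F.to_date("booked_at"))
        .groupBy("business_date", "account_id")
        .agg(F.sum("amount").alias("total_amount"))
    )


def test_daily_volume_sums_per_account():
    spark = SparkSession.builder.master("local[1]").appName("pipeline-tests").getOrCreate()
    rows = [
        ("A1", 100.0, "2024-01-01 09:00:00"),
        ("A1", 50.0, "2024-01-01 17:30:00"),
        ("A2", 25.0, "2024-01-01 10:00:00"),
    ]
    df = spark.createDataFrame(rows, ["account_id", "amount", "booked_at"])

    result = {r["account_id"]: r["total_amount"] for r in daily_volume(df).collect()}

    assert result == {"A1": 150.0, "A2": 25.0}
    spark.stop()
```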