
  • Posted 4 days ago

Job Description

Role Overview

We are looking for a Senior Data Engineer to design, build, and optimize data pipelines and data integration solutions across our regional data platforms. The role works closely with business, project, and technical stakeholders to turn complex data requirements into scalable engineering solutions built on modern Big Data and cloud technologies.

Key Responsibilities

Data Engineering & Pipeline Development

  • Design, develop, and maintain end-to-end data pipelines for batch and real-time processing.
  • Build and optimize ETL/ELT workflows across various data sources.
  • Perform data profiling, cleansing, transformation, mapping, and lineage documentation.

Data Quality & Optimization

  • Monitor, troubleshoot, and enhance pipeline performance, reliability, and efficiency.
  • Implement and enforce best practices for data quality, error handling, auditing, and lifecycle management.

Technical Delivery & Collaboration

  • Analyse business and technical requirements and translate them into scalable engineering solutions.
  • Work closely with infrastructure, security, and project teams across group and local markets.
  • Support testing activities: prepare test data, create test scripts, and participate in UAT cycles.

Innovation & Continuous Improvement

  • Evaluate and adopt new data engineering tools, cloud solutions, and emerging technologies (Azure, AWS, Spark, Databricks, Python, etc.).
  • Contribute to the development of internal frameworks, coding standards, and engineering best practices.

Key Performance Indicators

  • Quality and reliability of data pipelines and transformations delivered.
  • Level of technical proficiency in Big Data and cloud platforms.
  • Ability to convert business requirements into usable, scalable solutions.
  • Adoption of new tools and technologies that strengthen the overall data platform.

Who You'll Work With

  • Group Infrastructure, Security & Operations teams
  • Group and Local IT project teams
  • Business users across multiple countries
  • External vendors and service providers

Requirements

Education & Experience

  • Bachelor's degree in Computer Science, IT, Engineering, or related field.
  • 5+ years' experience in data engineering, data warehousing, or large-scale data environments.
  • 3–5 years' hands-on experience with Big Data technologies such as:
      • Azure or AWS Big Data stack
      • Hadoop, Hive, HBase, Spark, Sqoop, Kafka, Spark Streaming

Technical Skills

  • Strong ETL/ELT development experience using Python, Scala, Java, SQL, or R.
  • Solid understanding of data modelling (relational, data warehouse, data mart).
  • Experience with Azure Databricks for ETL/ELT and big data analytics is highly preferred.
  • Familiarity with data management frameworks, data lifecycle processes, and modular development principles.

More Info


Job ID: 135188085
