
DFI Retail Group

Senior Martech Data & Integrations Specialist


Job Description

About Us

DFI Retail Group is a leading pan-Asian retailer operating across four broad formats: Food (including Supermarkets, Hypermarkets, and Convenience stores), Health & Beauty, Home Furnishings, and Restaurants. The Group operates in 12 markets, running multiple formats in most of them to serve different customer segments, and trades under well-recognized brands.

About The Role

We are building a unified martech ecosystem across DFI Retail Group, with some markets migrating from existing platforms and others being designed and implemented from the ground up.

This role is responsible for designing, building, and supporting scalable data pipelines that power core martech capabilities, including the Customer Data Platform (CDP), loyalty, campaign and journey orchestration (CMS + marketing automation), and survey & feedback systems.

You will own real-time and batch data ingestion, data modelling, migration from legacy systems, and ongoing engineering-level change requests. Post go-live, the role provides L2 support for data pipelines, ensuring data quality, reliability, and performance, while partnering with the solution architect, martech operations, and the platform vendor on L3 issues and continuous improvement.

Key Responsibilities:

Event and Batch Data Ingestion

  • Design and build scalable real-time (event-based) and batch ingestion pipelines to onboard customer, transactional, behavioural, and consent data into the unified martech stack
  • Integrate data through API gateways such as Apigee and in-house Java Spring microservices running on Kubernetes-based cloud platforms
  • Engineer API, webhook, and file-based integrations with internal systems and external platforms, ensuring schema consistency, data validation, and fault tolerance
  • Implement robust ingestion patterns including idempotency, replay/backfill mechanisms, error handling, and monitoring to support migrations and ongoing change requests (see the sketch after this list).
  • Optimize data latency and freshness SLAs, balancing real-time and batch processing requirements across multiple markets and use cases.
  • Partner with solution architect and QA to validate end-to-end data flows, ensuring ingestion pipelines reliably support downstream segmentation, journeys, and activation
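
To make the idempotency and replay expectations above concrete, the following is a minimal Python sketch of a deduplicating event handler. The field names, the in-memory dedupe store, and the load_to_target sink are illustrative assumptions, not the actual platform APIs.

    import hashlib
    import json

    def load_to_target(event: dict) -> None:
        """Placeholder sink; a real pipeline would write to the CDP, a queue, or a warehouse."""
        print(f"loaded {event['event_type']} for {event['entity_id']}")

    # Hypothetical in-memory dedupe store; in practice this would be a persistent
    # key-value store or a unique constraint in the target table.
    _processed_keys = set()

    def event_key(event: dict) -> str:
        """Derive a stable idempotency key from source system, entity, and event id."""
        raw = f"{event['source']}|{event['entity_id']}|{event['event_id']}"
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    def ingest_event(event: dict) -> bool:
        """Validate and load one event; duplicates are skipped, so replays and backfills are safe."""
        required = {"source", "entity_id", "event_id", "event_type", "occurred_at"}
        missing = required - event.keys()
        if missing:
            raise ValueError(f"event rejected, missing fields: {sorted(missing)}")
        key = event_key(event)
        if key in _processed_keys:
            return False  # duplicate delivery or replayed batch; no side effects
        load_to_target(event)
        _processed_keys.add(key)
        return True

    # Replaying the same payload (e.g. during a backfill) loads it exactly once
    payload = json.loads('{"source": "pos", "entity_id": "c-1", "event_id": "e-1", '
                         '"event_type": "purchase", "occurred_at": "2024-05-01T09:30:00Z"}')
    print(ingest_event(payload), ingest_event(payload))  # True False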

Data Modelling and Storage:

  • Design and maintain scalable data models for customer profiles, identity attributes, behavioural events, transactions, loyalty constructs, and consent & preference data to support activation use cases
  • Implement efficient storage and partitioning strategies to balance performance, cost, and data retention requirements across real-time and batch workloads (illustrated in the sketch after this list)
  • Design data architectures and collaborate on data migrations in cloud environments
  • Define and enforce technical standards and best practices around data modelling, orchestration, CI/CD, observability, and performance
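
As an illustration of the kind of activation-ready event model and daily partitioning strategy described above, here is a short Python sketch. Field names such as customer_id, market, and consent_marketing are hypothetical; real schemas would be governed by the platform's data contracts.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class BehaviouralEvent:
        """Illustrative event record; not the actual production model."""
        customer_id: str
        event_type: str               # e.g. "purchase", "app_open", "email_click"
        occurred_at: datetime
        market: str                   # e.g. "HK", "SG"; a natural clustering dimension
        consent_marketing: bool = False
        properties: dict = field(default_factory=dict)

        def partition_date(self) -> str:
            """Daily partition key keeps scans cheap and retention policies simple."""
            return self.occurred_at.astimezone(timezone.utc).strftime("%Y-%m-%d")

    # Route an event to a date-partitioned, market-clustered storage location
    evt = BehaviouralEvent(
        customer_id="c-123",
        event_type="purchase",
        occurred_at=datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc),
        market="HK",
    )
    print(f"events/dt={evt.partition_date()}/market={evt.market}/")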

Data Quality, Reliability & Monitoring

  • Drive naming conventions, event standards, and API hygiene across markets
  • Establish data contracts and schema registries, implement data quality checks (validations, anomaly detection, completeness), and enforce data quality gates with alerting across ingestion and activation pipelines (a simplified example follows this list)
  • Ensure production-grade data quality, freshness, and resilience through validation rules, monitoring, alerting, reconciliation, and replay mechanisms
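
The following is a simplified Python example of the quality gates described above (completeness, validity, freshness against an SLA). Thresholds and field names are assumptions; a production version would feed an alerting system and block downstream activation rather than print.

    from datetime import datetime, timedelta, timezone

    def check_batch(rows, expected_min_rows, freshness_sla):
        """Return a list of quality-gate failures for one ingested batch."""
        failures = []

        # Completeness / anomaly check: row volume against an expected floor
        if len(rows) < expected_min_rows:
            failures.append(f"row count {len(rows)} below expected minimum {expected_min_rows}")

        # Validity: required fields present and non-empty
        for i, row in enumerate(rows):
            if not row.get("customer_id"):
                failures.append(f"row {i}: missing customer_id")

        # Freshness: the newest event must fall inside the agreed SLA window
        now = datetime.now(timezone.utc)
        newest = max((r["occurred_at"] for r in rows if "occurred_at" in r), default=None)
        if newest is None or now - newest > freshness_sla:
            failures.append(f"freshness SLA breached (newest event: {newest})")

        return failures

    failures = check_batch(
        rows=[{"customer_id": "c-1", "occurred_at": datetime.now(timezone.utc)}],
        expected_min_rows=1,
        freshness_sla=timedelta(minutes=15),
    )
    if failures:
        print("quality gate failed:", failures)  # in production: alert and halt the feed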

Change Requests & Enhancements

  • Design and deliver engineering-level post-deployment change requests, including new data sources, schema changes, performance optimisations, and new activation feeds
  • Lead data migration and onboarding efforts for new markets and legacy platform transitions, including historical loads, cutovers, and post-migration validation (see the backfill sketch after this list)
  • Provide L2 support for data pipelines and integrations, performing root cause analysis, fixes, and recovery while partnering with the platform vendor on L3 issues.
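
A minimal sketch of the backfill/replay pattern behind historical loads and L2 recovery, assuming each per-partition load is idempotent so failed dates can simply be re-run. The loader passed in here is purely illustrative.

    from datetime import date, timedelta

    def backfill(start: date, end: date, load_partition) -> dict:
        """Re-run ingestion for each daily partition in [start, end] and collect outcomes."""
        results = {}
        day = start
        while day <= end:
            try:
                results[day.isoformat()] = load_partition(day)
            except Exception as exc:  # real code would log, alert, and retry
                results[day.isoformat()] = f"failed: {exc}"
            day += timedelta(days=1)
        return results

    # Example: reconcile a legacy export for the first week of a market cutover
    summary = backfill(date(2024, 1, 1), date(2024, 1, 7),
                       load_partition=lambda d: f"loaded partition {d}")
    print(summary)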

About You

  • 8+ years in data engineering with hands-on delivery of real-time streaming and batch pipelines
  • Bachelor's degree in engineering, computer science, or a related technical field, or equivalent hands-on experience building enterprise data and integration platforms
  • Proven experience integrating via API gateways (Apigee) and microservices (Java Spring on GKE/Kubernetes).
  • Proficiency in SQL and programming languages (Java, Python), with the ability to debug APIs, services, and data pipelines
  • Solid experience in data modelling and storage design for customer-centric use cases, including profile unification, identity attributes, event schemas, and activation-ready datasets.
  • Experience supporting platform migrations and multi-market rollouts, including historical data loads, cutover strategies, and post-migration validation
  • Hands-on experience providing L2 support for data pipelines and integrations, including root cause analysis, data reprocessing, and production issue resolution.

Job ID: 143296469