
  • Posted 2 days ago

Job Description

Key Responsibilities

Design, develop, and maintain data pipelines using Azure Data Factory (ADF)

Build scalable data processing solutions using Azure Databricks (PySpark / Spark)

Perform data ingestion from multiple sources (APIs, databases, flat files, streaming sources)

Implement ETL/ELT processes to transform raw data into curated datasets

Optimize data workflows and performance for large-scale datasets

Integrate data solutions with Azure Data Lake, Synapse Analytics, or SQL Database

Ensure data quality, governance, and security best practices are followed

Collaborate with cross-functional teams including Business Analysts and Data Scientists

Monitor, troubleshoot, and resolve data pipeline issues

Requirements

Bachelor's degree in Computer Science, IT, or related field

3–8 years of experience in data engineering or related roles

Hands-on experience with:

  • Azure Data Factory (ADF)
  • Azure Databricks (PySpark / Spark SQL)
  • Azure Data Lake / Blob Storage

Strong SQL and data modeling skills

Experience in ETL/ELT pipeline development

Familiarity with CI/CD tools (Azure DevOps preferred)

Understanding of data warehousing concepts (Kimball/Inmon)

Experience with APIs and data integration

Soft Skills

Strong problem-solving and analytical thinking

Good communication skills (stakeholder engagement is key in the Malaysian market)

Ability to work independently and in a team

Adaptable in a fast-paced environment

More Info

Open to candidates from: Malaysian


Job ID: 145084763
