
Unison Group

Senior Data Engineer (Databricks and Azure)

  • Posted 2 days ago

Job Description

Overview

We are looking for a Senior Data Engineer with strong expertise in Databricks, PySpark, Delta Lake, and cloud-based data pipelines. The ideal candidate will design and build scalable ETL/ELT solutions, implement Lakehouse/Medallion architectures, and integrate data from multiple internal and external systems. This role requires strong technical leadership and hands-on architecture experience.
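The Lakehouse/Medallion architecture mentioned above layers data as bronze (raw), silver (validated), and gold (curated). As a toy illustration only, the flow can be sketched in plain Python; a real pipeline would use PySpark DataFrames and Delta tables, and all names below are hypothetical:

```python
# Toy Medallion sketch: bronze (raw) -> silver (cleaned) -> gold (curated).
# Purely illustrative; production code would run on Databricks with Delta Lake.

bronze = [
    {"ts": "2024-01-01", "amount": "10.5"},
    {"ts": "2024-01-01", "amount": "bad"},   # malformed row, dropped in silver
    {"ts": "2024-01-02", "amount": "4.0"},
]

def to_silver(rows):
    """Validate and type-cast bronze rows, dropping malformed ones."""
    out = []
    for r in rows:
        try:
            out.append({"ts": r["ts"], "amount": float(r["amount"])})
        except ValueError:
            continue
    return out

def to_gold(rows):
    """Aggregate silver rows into a daily total (the curated layer)."""
    totals = {}
    for r in rows:
        totals[r["ts"]] = totals.get(r["ts"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)   # {"2024-01-01": 10.5, "2024-01-02": 4.0}
```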

Key Responsibilities

  • Design, build, and optimize data ingestion and transformation pipelines using Databricks, PySpark, and Python
  • Implement Delta Lake and Medallion architecture for scalable enterprise data platforms
  • Develop ingestion frameworks for data from SFTP, REST APIs, SharePoint/Graph API, AWS, and Azure sources
  • Automate workflows using Databricks Workflows, ADF, Azure Functions, and CI/CD pipelines
  • Optimize Spark jobs for performance, reliability, and cost efficiency
  • Implement data validation, quality checks, and monitoring with automated alerts and retries
  • Design secure and governed datasets using Unity Catalog and cloud security best practices
  • Collaborate with analysts, business users, and cross-functional teams to deliver curated datasets for reporting and analytics
  • Provide technical leadership and guidance to junior team members
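The validation-with-alerts-and-retries responsibility above can be sketched minimally in stdlib Python; the helper and its names are hypothetical, and a real implementation would re-read the source data on each attempt and page an on-call channel instead of logging:

```python
# Minimal sketch of a quality check with automated retries and log-based alerts.
# Hypothetical helper; production pipelines would wire this to real alerting.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("quality")

def run_with_retries(check, records, retries=3, delay=0.0):
    """Run a per-record quality check, retrying on failure and alerting via log."""
    for attempt in range(1, retries + 1):
        bad = [r for r in records if not check(r)]
        if not bad:
            return True
        log.warning("attempt %d: %d records failed validation", attempt, len(bad))
        time.sleep(delay)  # in practice, re-ingest before the next attempt
    log.error("quality check failed after %d attempts", retries)
    return False

# Example check: every record must carry a non-null "id" field.
rows = [{"id": 1}, {"id": None}]
ok = run_with_retries(lambda r: r.get("id") is not None, rows, retries=2)
```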

Required Skills

  • 5-8+ years of experience in Data Engineering
  • Strong hands-on experience with Databricks, PySpark, Delta Lake, SQL, Python
  • Experience with Azure Data Lake, ADF, Azure Functions, or AWS equivalents (S3, Lambda)
  • Experience integrating data from APIs, SFTP servers, vendor data providers, and cloud storage
  • Knowledge of ETL/ELT concepts, Lakehouse/Medallion architecture, and distributed processing
  • Strong experience with Git, Azure DevOps CI/CD, and YAML pipelines
  • Ability to optimize Spark workloads (partitioning, caching, Z-ordering, performance tuning)

Good to Have

  • Exposure to Oil & Gas or trading analytics (SPARTA, KPLER, IIR, OPEC)
  • Knowledge of Power BI or data visualization concepts
  • Familiarity with Terraform, Scala, or PostgreSQL
  • Experience with SharePoint development or .NET (optional)

Job ID: 135468893