Data Engineer (Onsite/Remote/Hybrid)

Experience: 3-8 years

Job Description

We are seeking a hands-on Data Engineer to design, build, and maintain the data pipelines and models that power operational workflows and analytics. The ideal candidate will thrive working with large-scale data systems, own pipelines end to end, and ensure high performance for both OLTP operations and OLAP reporting.

Key Responsibilities

  • Design and implement data models (conceptual → logical → physical) for operational and analytics workflows.
  • Define schemas for Operational (OLTP) and Serving (OLAP) systems, maintaining naming conventions, keys, and referential integrity.
  • Optimize databases with indexes, partitioning, query tuning, and materialized views.
  • Develop and maintain ETL/ELT pipelines for structured and unstructured data sources.
  • Implement batch and near real-time ingestion workflows.
  • Manage data promotion flows across the Raw → Staging → Serving layers, including retention and reprocessing.
  • Ensure pipelines are fault-tolerant, idempotent, observable, and production-ready (see the sketch after this list).
  • Conduct data validation and quality checks (reconciliation, duplication detection, completeness).
  • Maintain data lineage from source to final outputs.
  • Produce structured operational logs and metrics for job status, throughput, lag, and failures.
  • Support compliance-driven needs, including auditability, traceability, and access logging.
  • Collaborate with solution architects, backend engineers, DevOps, security, and product teams to meet performance and compliance standards.
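
Several of these responsibilities (batch ingestion, idempotency, structured operational logs) tend to show up together in practice. The sketch below is illustrative only, not this role's actual codebase: it assumes Python with psycopg2 against PostgreSQL, and the staging.events table, column names, and connection string are hypothetical. The upsert key is what makes re-running a failed batch safe.

```python
import json
import logging
import time

import psycopg2
from psycopg2.extras import execute_values

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ingest")

# Hypothetical staging table; the unique key on event_id is what makes
# re-running the same batch idempotent.
UPSERT_SQL = """
    INSERT INTO staging.events (event_id, payload, updated_at)
    VALUES %s
    ON CONFLICT (event_id) DO UPDATE
    SET payload = EXCLUDED.payload,
        updated_at = EXCLUDED.updated_at
"""

def ingest_batch(conn, records):
    """Upsert one batch in a single transaction and emit a structured log line."""
    status, started = "failed", time.monotonic()
    rows = [(r["event_id"], json.dumps(r), r["updated_at"]) for r in records]
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            execute_values(cur, UPSERT_SQL, rows)
        status = "ok"
    finally:
        # Structured metrics for job status, throughput, and failures.
        log.info(json.dumps({
            "job": "events_ingest",
            "status": status,
            "rows": len(rows),
            "seconds": round(time.monotonic() - started, 3),
        }))

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
    ingest_batch(conn, [{"event_id": "e1", "updated_at": "2025-01-01T00:00:00Z"}])
```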

Skills & Requirements

  • 5+ years of experience in data engineering or backend data systems with production pipeline ownership.
  • Strong SQL and PostgreSQL skills (schema design + performance tuning).
  • Proven experience in data modeling (normalized OLTP + reporting models).
  • Proficient in Python (or equivalent) for pipeline development and automation.
  • Hands-on experience with object storage (MinIO/S3) and metadata/blob-pointer patterns (illustrated in the sketch after this list).
  • Understanding of partitioning, indexing, and materialized views.
  • Familiarity with monitoring, logging, and reliability practices in production.
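
One requirement above worth unpacking is the metadata/blob-pointer pattern: large payloads live in object storage, while the database row holds only a pointer (bucket + key) plus metadata such as a checksum and size. A minimal sketch under stated assumptions: boto3 against a MinIO endpoint (MinIO exposes the S3 API), with hypothetical credentials, bucket, and serving.documents table.

```python
import hashlib

import boto3
import psycopg2

# MinIO speaks the S3 API, so boto3 works with a custom endpoint.
# Endpoint, credentials, bucket, and table below are all hypothetical.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio:9000",
    aws_access_key_id="minio",
    aws_secret_access_key="minio-secret",
)

POINTER_SQL = """
    INSERT INTO serving.documents (doc_id, bucket, object_key, sha256, size_bytes)
    VALUES (%s, %s, %s, %s, %s)
    ON CONFLICT (doc_id) DO NOTHING
"""

def store_document(conn, doc_id, blob, bucket="raw-docs"):
    """Write the blob to object storage, then record only a pointer in Postgres."""
    key = f"docs/{doc_id}.bin"
    s3.put_object(Bucket=bucket, Key=key, Body=blob)
    with conn, conn.cursor() as cur:
        cur.execute(
            POINTER_SQL,
            (doc_id, bucket, key, hashlib.sha256(blob).hexdigest(), len(blob)),
        )
```

Writing the object before the pointer is the usual ordering: a failure between the two steps leaves at worst an orphaned blob that a cleanup sweep can reclaim, never a database row that points at a missing object.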

Preferred Qualifications

  • Bachelor's Degree in Computer Science, IT, Data Engineering, Software Engineering, or a related field.
  • Master's Degree in Data Engineering, Data Science, or a related discipline.
  • Familiarity with OpenSearch/Elasticsearch or search engines.
  • Exposure to AI/ML data workflows.
  • Experience in secure or regulated environments.

Education: Bachelor's / Master's / Postgraduate degree

Job ID: 146065929