Ironbook AI

Senior Data Engineer

  • Posted a day ago

Job Description

Role Summary

We are looking for a Senior Data Engineer to design, build, and optimize high-performance data pipelines and data systems. The ideal candidate will have strong experience with cloud platforms, modern ETL/ELT tools, and deep technical skills in Python, SQL, and distributed data frameworks.

Key Responsibilities

  • Design, develop, and maintain scalable and reliable ETL/ELT pipelines.
  • Build data models, data marts, and data warehouses to support analytics and reporting needs.
  • Work with large-scale structured and unstructured datasets across multiple sources.
  • Implement data quality, validation, and monitoring frameworks.
  • Optimize existing pipelines for better performance, cost efficiency, and maintainability.
  • Collaborate with Data Science, BI, Product, and Engineering teams to gather and implement data requirements.
  • Write optimized SQL queries and well-structured, maintainable Python code.
  • Support production pipelines by troubleshooting issues and performing root cause analysis.
  • Use version control, CI/CD, and automated deployment tools for data engineering tasks.
  • Ensure compliance with data security, privacy, and governance standards.

Required Skills & Experience

  • 5+ years of hands-on experience in Data Engineering.
  • Strong programming skills in Python and SQL.
  • Experience with ETL/ELT and orchestration frameworks (Airflow, dbt, Spark, AWS Glue, Google Cloud Dataflow, etc.).
  • Expertise in cloud data platforms (AWS, Azure, or GCP).
  • Strong understanding of data modelling and data warehouse/lakehouse architectures.
  • Experience working with big data tools (Spark, Kafka, Hadoop) is a plus.
  • Experience with source control (Git) and CI/CD pipelines.
  • Good problem-solving skills and ability to work in cross-functional teams.

Preferred / Nice-to-Have Skills

  • Experience with Snowflake, Redshift, BigQuery, Databricks, or Synapse.
  • Exposure to event-driven or real-time data streaming architectures.
  • Experience integrating with ML pipelines or supporting Data Science workloads.
  • Familiarity with containerization (Docker, Kubernetes).

Job ID: 134809001