About the Role:
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and infrastructure to support analytics, BI, and AI/ML initiatives.
Key Responsibilities:
- Build and maintain scalable ETL/ELT pipelines.
- Optimize database and query performance.
- Manage data integration across systems and cloud platforms.
- Automate workflows with orchestration tools such as Apache Airflow or AWS Step Functions.
- Ensure data quality, governance, and lineage.
- Support BI and AI/ML teams with clean, reliable datasets.
Requirements:
- Proficiency in Python, SQL, and data pipeline tools (e.g., Spark, AWS Glue, dbt).
- Experience with AWS services (S3, Lambda, Redshift, Glue, EMR).
- Strong knowledge of RDBMS, data warehouses, and NoSQL systems.
- Familiarity with Airflow, Terraform, Docker, or Kubernetes.
- Strong problem-solving and communication skills.
Nice-to-Have:
- AI/ML data pipeline experience.
- Exposure to real-time streaming tools (e.g., Kafka, Kinesis).