About the Client
Our client is a leading organization committed to leveraging advanced technologies to drive innovation and efficiency. They are seeking a skilled Data Engineer to join their team and play a key role in building and optimizing data solutions that support business-critical decisions.
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL workflows using AWS services (see the illustrative sketch after this list).
- Implement orchestration and automation for data workflows.
- Work with large datasets to ensure data integrity, scalability, and performance.
- Collaborate with stakeholders to understand data requirements and deliver solutions.
- Deploy changes directly to production environments, owning validation and accountability for each release.
- Support migration efforts to Databricks and optimize workflows for performance.
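
By way of illustration, here is a minimal sketch of the kind of Glue-based PySpark ETL job this role would own: it reads a table from the Glue Data Catalog, applies a simple transformation, and writes partitioned Parquet back to S3. The database, table, bucket, and column names are hypothetical, and a production job would add error handling and data-quality checks.

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw records from the Data Catalog (database/table names are hypothetical).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
).toDF()

# Example transformation: deduplicate on the business key and derive a partition column.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("created_at"))
)

# Write curated output back to S3 as date-partitioned Parquet (bucket is hypothetical).
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(curated, glue_context, "curated"),
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()
```

In practice, a job like this would be scheduled and sequenced by Step Functions or Glue workflows, which is the orchestration and automation work described above.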
Job Requirements
- Minimum of 5 years' proven experience with data lake architectures, big data technologies, and data pipeline orchestration.
- Familiarity with CI/CD practices for data engineering (see the test sketch after this list).
- AWS Certification (e.g., AWS Certified Data Analytics – Specialty or Solutions Architect) is a plus.
- Strong problem-solving skills and attention to detail.
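
To make the CI/CD expectation concrete, here is a minimal pytest-style unit test for a pipeline transformation, the kind of check that typically gates a deployment in a data-engineering CI pipeline. The transform, fixture, and column names are hypothetical.

```python
import pytest
from pyspark.sql import SparkSession


def deduplicate_orders(df):
    """Hypothetical pipeline transform under test: drop duplicate order rows."""
    return df.dropDuplicates(["order_id"])


@pytest.fixture(scope="module")
def spark():
    # Local Spark session so the test runs in CI without a cluster.
    session = (
        SparkSession.builder.master("local[2]").appName("ci-test").getOrCreate()
    )
    yield session
    session.stop()


def test_deduplicate_orders_removes_exact_duplicates(spark):
    df = spark.createDataFrame(
        [(1, "widget"), (1, "widget"), (2, "gadget")], ["order_id", "item"]
    )
    result = deduplicate_orders(df)
    assert result.count() == 2
```

Checks like this run on every commit, and the same pipeline definition is then promoted through environments, which is what CI/CD for data engineering usually means in practice.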
Key Skills
- Expert-level fluency in AWS services relevant to data engineering, including Glue, S3, Lambda, Step Functions, and related tools.
- Strong proficiency in PySpark for distributed data processing (see the sketch after this list).
- Advanced SQL skills for querying and optimizing data operations.
- Ability to deploy changes to production independently, with disciplined self-validation and quality assurance.
- Experience with Databricks is a plus.
- Ability to work independently and manage tasks in a siloed environment.
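
To make the PySpark and SQL expectations concrete, here is a minimal sketch expressing the same aggregation through both the DataFrame API and Spark SQL. The dataset, bucket path, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skills-sketch").getOrCreate()

# Hypothetical dataset: sales records stored as Parquet on S3.
sales = spark.read.parquet("s3://example-bucket/sales/")

# DataFrame API: total revenue per region.
revenue_df = (
    sales.groupBy("region")
         .agg(F.sum("amount").alias("total_revenue"))
)

# Equivalent Spark SQL against a temporary view.
sales.createOrReplaceTempView("sales")
revenue_sql = spark.sql(
    """
    SELECT region, SUM(amount) AS total_revenue
    FROM sales
    GROUP BY region
    """
)

revenue_df.show()
```

Both forms are compiled by Spark's Catalyst optimizer into equivalent execution plans, so strength in one generally transfers to the other.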