
  • Posted 6 days ago

Job Description

The Role

You will:

  • Design and develop scalable data architectures and systems that can handle large volumes of data and support real-time processing and analytics.
  • Collect, extract, transform, and load data from various sources, including databases, data warehouses, and data lakes.
  • Ensure data quality and integrity through the use of validation and cleansing processes and implement data security and privacy measures.
  • Collaborate with data scientists, analysts, and business users to understand their data needs and requirements and provide them with access to the data they need.
  • Develop and maintain ETL processes, data pipelines, and other data integration tools and technologies.
  • Perform data modeling and database design, and optimize data structures and queries for performance and scalability.
  • Monitor and maintain data infrastructure and systems to ensure they are operating at peak performance and availability.
  • Stay up to date with new and emerging technologies and techniques in the data engineering field, and recommend new tools and technologies as needed.
  • Participate in team meetings, code reviews, and other collaborative activities to ensure the successful development and implementation of data infrastructure and systems.
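
The extract-transform-load cycle described above can be sketched in a minimal, hypothetical pipeline using pandas and an in-memory SQLite database; the `orders` table, its columns, and the `run_pipeline` function are all illustrative, not taken from the posting.

```python
import sqlite3
import pandas as pd

def run_pipeline(conn: sqlite3.Connection) -> pd.DataFrame:
    # Extract: in a real system this would read from a database,
    # warehouse, or lake rather than an in-memory frame.
    raw = pd.DataFrame({
        "order_id": [1, 2, 2, 3, 4],
        "amount":   [120.0, 75.5, 75.5, None, 300.0],
        "region":   ["EU", "US", "US", "EU", "APAC"],
    })

    # Transform: validate and cleanse -- drop duplicate orders
    # and rows with missing amounts.
    clean = raw.drop_duplicates(subset="order_id").dropna(subset=["amount"])

    # Load: persist the curated table for downstream analysts.
    clean.to_sql("orders_clean", conn, index=False, if_exists="replace")
    return clean

conn = sqlite3.connect(":memory:")
result = run_pipeline(conn)
print(len(result))  # 3 rows survive validation
```

In practice each stage would be a separate task in an orchestrator such as Airflow or Prefect, so failures can be retried per step rather than rerunning the whole pipeline.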

Ideal Profile

  • Bachelor's degree in Computer Science, Electrical Engineering, or a related field (required); Master's degree in Computer Science or a related field (preferred).
  • Strong knowledge of data management and database technologies, such as SQL, NoSQL, and Spark.
  • Minimum 8 years of experience with data integration tools and technologies, such as ETL, data pipelines, and data warehousing.
  • Programming Languages: Strong proficiency in Python for data processing, scripting, and automation.
  • Database Management: Hands-on experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB).
  • Data Warehousing: Solid experience in designing and implementing data warehousing solutions, with practical experience using Snowflake.
  • Data Orchestration: Experience with data workflow orchestration tools such as Apache Airflow or Prefect.
  • Data Pipelines: Proven experience with streaming and batch data processing technologies, including Apache Kafka, Apache Spark, and Apache Flink.
  • Cloud Management: Strong experience managing and deploying data solutions on cloud platforms, preferably AWS.
  • Data Transformation: Proficient in data cleaning, modeling, and transformation using tools and libraries like Pandas, dbt, and Spark.
  • SQL Expertise: Advanced SQL skills, including extensive experience with CTEs, Window Functions, and Recursive CTEs for complex data analysis and manipulation.
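
As a small illustration of the SQL skills listed above (a CTE feeding a window function), here is a hypothetical query run against an in-memory SQLite database; the `sales` table and its values are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (rep TEXT, region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('Ana',  'EU',   500),
        ('Bo',   'EU',   300),
        ('Cara', 'APAC', 700),
        ('Dev',  'APAC', 200);
""")

query = """
WITH regional AS (               -- CTE: pre-aggregate per rep
    SELECT rep, region, SUM(amount) AS total
    FROM sales
    GROUP BY rep, region
)
SELECT rep, region, total,
       -- window function: rank reps within their region
       RANK() OVER (PARTITION BY region ORDER BY total DESC) AS rank_in_region
FROM regional
ORDER BY region, rank_in_region;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

Window functions require SQLite 3.25 or newer (bundled with recent Python releases); a recursive CTE follows the same `WITH` shape with the `RECURSIVE` keyword added.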

What's on Offer

  • Work alongside and learn from best-in-class talent
  • Excellent career development opportunities
  • Great work environment

More Info


Job ID: 134833893