
Must Have Skills:
Hands-on experience with Hadoop, PySpark, Spark SQL, Hive, and other Hadoop Big Data ecosystem tools.
Should be able to develop and tune queries and work on performance enhancement.
Solid understanding of object-oriented programming and HDFS concepts.
The candidate will be responsible for delivering code, setting up the environment and connectivity, and deploying the code to production after testing.
Good to Have:
Good DWH/Data Lake knowledge is preferable.
Conceptual and creative problem-solving skills; able to work with considerable ambiguity and to learn new, complex concepts quickly.
Experience working with teams in a complex organization involving multiple reporting lines.
The candidate should have good DevOps and Agile Development Framework knowledge.
Responsibilities / Expectations of the Role
Work as a developer on Cloudera Hadoop.
Work with Hadoop, PySpark, Spark SQL, Hive, and other Big Data ecosystem tools.
The candidate should have strong functional and technical knowledge to deliver what is required, and should be well acquainted with banking terminology.
Create PySpark jobs for data transformation and aggregation.
Experience with stream-processing systems such as Spark Streaming.
Job ID: 135103585