Job description
Must-have skills
Understand functional requirements related to data processing. Deliver scalable, high-performance, real-time data processing applications.
Develop various KPIs in Spark, MapReduce, and Hive
Good knowledge of Spark in Python/Scala/Java, and of Logstash
Migrate Pig KPI scripts to Hadoop MapReduce Java jobs
Good knowledge of HBase, Hive, Pig, Impala, and NoSQL databases
Develop scripts/code to schedule jobs and automate file management
Knowledge of Hadoop 2.0 architecture
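The KPI and Pig-to-MapReduce work above boils down to map/reduce logic. A minimal pure-Python sketch (no cluster required) of a word-count-style KPI, counting events per status code; the log format and field positions are assumptions for illustration only:

```python
from collections import defaultdict

def map_phase(lines):
    """Emit (status_code, 1) pairs, mimicking a MapReduce mapper."""
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            # Assumption: the second whitespace-separated field is a status code.
            yield parts[1], 1

def reduce_phase(pairs):
    """Sum counts per key, mimicking a MapReduce reducer."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

logs = [
    "req1 200",
    "req2 404",
    "req3 200",
]
kpi = reduce_phase(map_phase(logs))
# kpi maps each status code to its occurrence count.
```

The same shape ports directly to Spark (`map` followed by `reduceByKey`) or to a Hadoop MapReduce Java job with equivalent Mapper and Reducer classes.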
Good-to-have skills
Knowledge of Java and Kafka, or of any message broker (ActiveMQ, RabbitMQ)
Good knowledge of Linux and tools like Splunk, Tableau, and Datameer
Familiarity with open source configuration management and deployment tools such as Puppet or Chef.
Knowledge of a scripting language (Bash, Perl, Python). Experience with statistical methods, machine learning, and AI is a plus
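The scripting skills listed here are typically applied to job automation and file management. A minimal sketch, using only the Python standard library; the directory layout and the `.done` suffix are assumptions, and a scheduler such as cron or Oozie would invoke a script like this:

```python
import shutil
from pathlib import Path

def archive_done_files(inbox: Path, archive: Path) -> int:
    """Move every *.done file from inbox to archive; return how many moved.

    Hypothetical file-management automation: completed data files are
    marked with a ".done" suffix and swept into an archive directory.
    """
    archive.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in sorted(inbox.glob("*.done")):
        shutil.move(str(f), str(archive / f.name))
        moved += 1
    return moved
```

In production such a function would be wrapped in a script with logging and error handling, and registered with the scheduler of choice.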
Responsibilities
You will design and build applications for the Hadoop platform and develop innovative big data solutions
You should be able to architect highly scalable distributed systems using a range of open-source tools
You should have experience with object-oriented design, coding, and testing patterns
You will build applications on Hadoop using Hive, Java, Linux, or Python
You will ensure performance and quality in each deliverable
Commitment to collaborative problem solving and sophisticated design
You will be responsible end to end for the presentation of the full platform data
Keywords: Spark, Python, Scala, Logstash, Pig, Hive, Impala, Splunk, Hadoop
Education: BE / MCA / M.Tech or higher