
Software Engineer – MapR

  • Internship
  • Bengaluru (Bangalore Urban)
  • IT development

Job description

OVERVIEW

The Enterprise Data Warehouse Software Engineer in Big Data is a key contributor to the Data Services development work that enables Business Intelligence solutions for L Brands.

Responsibilities include delivering technical solutions to complex business requirements through innovative data architecture design and data services development. The role requires hands-on experience in data warehousing, business intelligence, data modeling, ETL, and Hadoop/MapR technologies, as well as data services development using SQL, Spark with Java, and Hive. The role interfaces and collaborates with a highly talented, energetic, and diverse BI team and with cross-functional teams (POS, finance, manufacturing, enterprise planning, HRMS, supply chain, and several other systems) to devise end-to-end solutions.

Responsibilities

·  Design and develop data flows, data models, and data warehouses / data solutions
·  Collaborate with report developers to source relevant data and build solutions that support the development of dashboards / reports
·  Provide user support, including incident management, resolution of data issues, maintenance of daily / weekly data refresh schedules, and on-call responsibilities to meet business SLAs
·  Deliver the documentation required for build and support responsibilities, including architecture and data flow diagrams

Qualifications

Education

Required: Bachelor’s Degree in Computer Science/Information Systems

Preferred: Master’s Degree in Computer Science/Information Systems

Required Skills / Qualifications

·  Minimum 3 years of IT experience working with the relevant technologies
·  Must have extensive hands-on experience designing, developing, and maintaining software solutions on a Hadoop cluster
·  Must have working experience with Spark in Java
·  Must demonstrate Hadoop best practices
·  Must have strong UNIX shell scripting experience, plus experience with Sqoop and Eclipse (or any IDE)
·  Must have experience developing Pig scripts, HiveQL, and UDFs for analyzing semi-structured, unstructured, and structured data flows
·  Must have experience with NoSQL databases such as HBase
·  Demonstrates broad knowledge of technical solutions, design patterns, and code for medium-to-complex applications deployed in Hadoop production environments
·  Working experience developing MapReduce programs in Java on a Hadoop cluster is a plus
·  Working experience with data warehousing and Business Intelligence systems is a plus
·  Participates in design reviews, code reviews, unit testing, and integration testing
·  Assumes ownership of and accountability for assigned deliverables through all phases of the development lifecycle
·  Experience with an SDLC methodology (Agile / Scrum / iterative development)
