Job Description
Amazon's eCommerce Foundation (eCF) organization is responsible for the core components that drive the Amazon website and customer experience. Serving millions of customer page views and orders per day, eCF builds for scale. As an organization within eCF, the Big Data Technologies (BDT) group is no exception. We collect petabytes of data from thousands of data sources inside and outside Amazon, including the Amazon catalog system, inventory system, customer order system, page views on the website, and Alexa systems. We also support Amazon subsidiaries such as IMDb and Audible. We provide interfaces for our internal customers to access and query the data hundreds of thousands of times per day, using Amazon Web Services (AWS) Redshift, Hive, Spark, and Oracle. We build scalable solutions that grow with the Amazon business.
BDT is growing, and the data processing landscape is shifting. Our data is consumed by thousands of teams across Amazon, including Research Scientists, Machine Learning Specialists, Business Analysts, and Data Engineers. The BDT team is building an enterprise-wide Big Data Marketplace leveraging AWS technologies. We enable teams at Amazon to produce analytical data in any storage system (S3, DynamoDB, Aurora, etc.) and process that data in any compute environment, such as EMR/Spark, Redshift, Athena, and others, via a common bus. We are developing innovative products, including the next generation of data catalog, data discovery engine, data transformation platform, and more, with a state-of-the-art user experience. The Data Management and Optimization (DMO) team is looking for top engineers to build them from the ground up.
The DMO team provides a bridge between producers and consumers of analytical data stored in a data lake. By decoupling producers from consumers, our products remove friction for data ingress and maintenance while enabling optimized data consumption and analysis:
- We enable producers to correctly and efficiently maintain and evolve warehoused data with minimal impact to downstream consumers.
- We provide a highly performant compute platform by researching and delivering optimal, specialized services that run the most common and costly producer compute tasks.
- We provide toolsets that enable customers to create and consume validated, read-optimized views of data.
- We also help reduce barriers to producing new and derived datasets.
This is a hands-on position where you will do everything from designing and building extremely scalable components and cutting-edge features to formulating strategy and direction for Big Data at Amazon. You will also mentor junior engineers and work with the most sophisticated customers in the business to help them get the best results. You need to not only be a top software developer with excellent programming skills, an understanding of big data and parallelization, and a stellar record of delivery, but also excel at leadership and customer obsession and have a real passion for massive-scale computing.
Your responsibilities will include:
- Keeping your finger on the pulse of the constantly evolving and growing Big Data field
- Translating complex functional and technical requirements into detailed architecture and design
- Delivering systems and features with top-notch quality, on time
- Staying current on technical knowledge to keep pace with rapidly changing technology, and working with the team to bring new technologies on board
Come help us build the future of Big Data!
· BS degree or higher in Computer Science (or a related program)
· Ability to code in Java, C++, C#, or Python, or experience programming in another object-oriented language
· Computer Science fundamentals
· Software/system design experience
· 3+ years of relevant work experience