Amazon

Data Engineer

  • Seattle (King)
  • Bachelor's Degree
  • IT development

Job description

The Amazon Web Services (AWS) Finance team is seeking a Data Engineer (DE) with a passion for developing data architecture and tools to support self-service data analytics and for diving deep into AWS organization performance and optimization. You will build solutions to support data needs in one of the world's largest and most complex data warehouse environments.

The successful candidate will be a self-starter who is comfortable with ambiguity, has strong attention to detail, and can work in a fast-paced environment. You will build and manage analytical resources on AWS and support analytical projects. You should have deep expertise and proven success in the design, creation, management, and business use of extremely large datasets. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions that flow data from production systems into the data warehouse and into end-user-facing applications. Above all, you should be passionate about working with huge datasets and eager to learn new solutions to answer business questions and drive change.

Responsibilities

In this role, you will have the opportunity to display your skills in the following areas:
· Design, implement, and support an analytical platform providing ad hoc access to large datasets and computing power
· Implement data structures using best practices in data modeling and ETL/ELT processes, with SQL, Oracle, Redshift, and OLAP technologies
· Manage AWS resources, including EC2, RDS, Redshift, Kinesis, and EMR
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies
· Explore and learn the latest technologies
· Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
· Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
· Build and deliver high-quality datasets to support business analysts' and customers' reporting needs
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
· Participate in strategic & tactical planning discussions, including annual budget processes

Desired profile

BASIC QUALIFICATIONS

· Bachelor's degree or higher in Computer Science, MIS, a related technical field, or equivalent work experience
· At least 4 years of relevant work experience in analytics, data engineering, business intelligence, or a related field
· At least 4 years of hands-on experience writing complex, highly optimized SQL queries across large datasets
· At least 2 years of experience with a programming language such as Python
· Demonstrable ability in data modeling, ETL development, and data warehousing, or similar skills
· Experience building/operating systems for data extraction, ingestion, and processing of large data sets involving petabytes of data
· Demonstrable advanced skills and experience using SQL with large data sets (e.g. Oracle, SQL Server, Redshift)
· Experience with AWS technologies, including Redshift, RDS, S3, and EMR
· Proven track record of communicating analytical outcomes in writing, including the ability to communicate effectively with both business and technical teams
· Exposure to and knowledge of security, encryption, and data governance
· Experience working on and delivering end-to-end projects independently
