If you are looking for an opportunity to solve deep technical problems and build innovative solutions in a fast-paced environment, working with smart, passionate colleagues, this might be the role for you.
Amazon Transportation Services (ATS) gets millions of packages to customers worldwide faster and cheaper while providing a world-class customer experience.
Our data infrastructure is entirely cloud-based, with billions of rows of data crunched daily to enable reporting and business decisions that impact billions of dollars a year.
With rapid expansion into new geographies, innovations in supply chain, delivery models, and customer experience, an increasingly complex transportation network, an ever-expanding selection of products, and a growing number of shipments worldwide, we offer a unique opportunity to be part of this growth journey and tame big data as the business scales even further.
We leverage cutting-edge big data and AWS technologies to provide high-volume, low-latency, high-availability services to our business partners.
The ideal candidate will have experience with large datasets and distributed computing technologies, relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and, above all else, is passionate about data and analytics.
They are an expert in data modeling, ETL design, and business intelligence tools, with hands-on knowledge of columnar databases such as Redshift and other related AWS technologies. They passionately partner with customers to identify strategic opportunities in the field of data engineering. They are a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoy working in a fast-paced team that continuously learns and evolves.
Bachelor's degree or higher in a quantitative/technical field (e.g., Computer Science, Statistics, Engineering)
3+ years of relevant experience in one of the following areas: data engineering, business intelligence, or business analytics
3+ years of hands-on experience writing complex, highly optimized SQL queries across large datasets
1+ years of experience with scripting languages such as Python and Bash
Experience in data modeling, ETL development, and data warehousing
Experience with Oracle, Redshift, and similar databases
Sharp analytical abilities, proven design skills, and excellent communication skills
Experience mentoring and training others on complex technical issues
3+ years of experience as a Data Engineer, BI Engineer, Business/Financial Analyst, or Systems Analyst in a company with large, complex data sources
Experience with AWS services including S3, Redshift, EMR and Apache Spark
Experience with software coding practices is a strong plus
Experience using Linux/UNIX to process large datasets