EU Supply Chain and Trans Tech is looking for an experienced Data Engineer to play a key role in maintaining the EU Supply Chain and Trans data lake and transforming it into meaningful analytics.
The ideal candidate will be passionate about leading a team that drives next-generation, extremely large, scalable, and fast distributed systems on the AWS stack (with a focus on Redshift).
The Data Engineer will help us grow our capability set, drive efficiency, and improve our overall data technology offering.
The ideal candidate is an expert in data modelling and ETL design, and partners closely with our stakeholders to identify data infrastructure opportunities that have a significant business impact.
This candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and above all else, is passionate about data and analytics.
Excellent written and verbal communication skills are required, as the candidate will work very closely with diverse teams and senior leadership.
In addition, the candidate should demonstrate thought leadership to drive business insights through machine learning and deep learning initiatives using our rich data sets.
Along with complex problems to solve, we provide a world-class work environment, the chance to work with some of the industry's most talented data engineers, and the opportunity to contribute, create history, and have fun along the way.
Design, and lead reviews of, efficient data models using industry best practices and metadata for ad hoc and pre-built reporting
Design, build, and own all components of a high-volume data warehouse, end to end.
Interface with business customers, gathering requirements and delivering complete data and reporting solutions; own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc., to drive key business decisions
Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
Own the functional and non-functional scaling of software systems in your area of ownership.
Stakeholder management and status reporting
Identifying new opportunities in partnership with business partners
Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or a related field
7-12 years of experience in the data engineering / business intelligence space
Curious, self-motivated self-starter with a can-do attitude; comfortable working in a fast-paced, dynamic environment.
Strong understanding of ETL concepts and experience building ETL pipelines over large-scale, complex datasets using traditional or MapReduce batch mechanisms.
Strong data modelling skills with solid knowledge of industry standards such as dimensional modelling and star schemas
Extreme proficiency in writing performant SQL against large data volumes
Experience designing and operating very large data warehouses
Experience with at least one scripting language (e.g., UNIX shell scripting, Python, Perl, Ruby).
Experience working on the AWS stack is a plus
Clear thinker with superb problem-solving skills; able to prioritize and stay focused on the biggest needle movers
Master’s degree in Computer Science, Information Systems, Mathematics or related discipline
Strong knowledge of one or more scripting languages (Python, Perl, Scala)
Strong analytical skills with excellent knowledge of Oracle, SQL, and PL/SQL.
Expert understanding of ETL techniques and best practices for handling extremely large volumes of data
Experience with AWS using S3, EC2, Redshift, Aurora, Lambda, QuickSight, etc.
Experience with data ingestion techniques for batch and stream processing using AWS Batch, AWS Kinesis, and AWS Data Pipeline
Experience with AWS Big Data technologies such as EMR, Glue, Athena, and Redshift Spectrum
Strong knowledge of machine learning, data mining, and predictive modeling
Ability to handle multiple, competing priorities in a fast-paced environment
Work well in teams, respecting ideas from teammates, business partners, and technical experts
Experience maintaining data warehouse systems and working with large-scale data transformations using Hadoop, Hive, Spark, or other Big Data technologies
Strong customer focus, ownership, and drive to get things done
AWS certifications or other related professional technical certifications