This position is a short-term assignment ending 31st December 2020.
EU Transportation Tech is looking for a Data Engineer to play a key role in automating financial processes and reporting for Global Transportation Services.
The ideal candidate will be passionate about working in a team that builds next-generation, extremely large, scalable, and fast distributed systems on the AWS stack (with a focus on Redshift).
The Data Engineer will help us grow our capability set, drive efficiency, and improve our overall data technology offering.
The ideal candidate has exposure to data modelling and ETL design, and can build optimized data pipelines for a big data environment.
This candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and above all else, is passionate about data and analytics.
The candidate should possess excellent communication skills, as they will work closely with diverse teams and, at times, senior leadership.
In addition, the candidate should demonstrate thought leadership in driving business insights through machine and deep learning initiatives using our extensive data sets.
Along with complex problems to solve, we provide a world-class work environment, the chance to work with some of the industry's most talented people in the data engineering space, and the opportunity to contribute, create history, and have fun.
Create and manage the finance Redshift cluster for the team's analytical needs
Implement and test data pipelines from a variety of systems (batch and real time).
Collaborate with the development team and end customers.
Advise the team on SQL optimization and performance tuning when required
Troubleshoot existing datasets, broken pipelines, data integrity issues
Ensure quality of data through tested deliverables and auditing
Understand the team, data, and software architecture; actively seek knowledge and solutions
Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or a related field
5-7 years of experience in the data engineering / business intelligence space
Experience in managing large databases
Curious, self-motivated self-starter with a can-do attitude. Comfortable working in a fast-paced, dynamic environment.
Good understanding of ETL concepts and exposure to processing large-scale, complex datasets using traditional or MapReduce batch mechanisms.
Proficient in writing performant SQL against large data volumes; able to read and act on query plans
Experience with at least one scripting language (e.g., UNIX shell scripting, Python, Perl, Ruby).
Exposure to the AWS stack / cloud computing
Clear thinker with superb problem-solving skills to prioritize and stay focused on big needle movers
Master’s degree in Computer Science, Information Systems, Mathematics or related discipline
Good data modelling skills with solid knowledge of industry standards such as dimensional modelling and star schemas
Strong knowledge of Python / Spark / PySpark
Strong analytical skills; ability to present complex datasets in visual form
Strong understanding of ETL techniques and best practices for handling extremely large volumes of data
Experience with AWS using S3, EC2, Redshift, Aurora, Lambda, QuickSight, etc.
Experience in data ingestion techniques for batch and stream processing using AWS Batch, AWS Kinesis, and AWS Data Pipeline
Experience with AWS big data technologies such as EMR, Glue, Athena, and Redshift Spectrum
Ability to handle multiple, competing priorities in a fast-paced environment
Work well in teams, respecting ideas from teammates, business partners, and technical experts
Strong ownership and drive to get things done
AWS certifications or other related professional technical certifications