Responsibilities:
- You will build excellent foundations: design, implement, and manage solutions that are fit for purpose and are reliable, performant, and secure.
- You will design, develop, deploy, and maintain ETL jobs.
- You will conceive, deploy, and maintain microservices on a modern AWS-based software stack.
- You will implement, or ensure the correct use of, monitoring, alerting, and other operational must-haves.
- You will be part of a team where Continuous Integration, Code Quality, Peer Reviews, Code Reviews, and similar practices are held to very high standards.
- You will work very closely on interesting (hard) problems with Data Scientists who will be happy to share knowledge and collaborate.
The right person will be comfortable in an “all hands on deck” environment, loves solving problems, is excited to work in high-performance teams, and can thrive in a startup culture.

Must have and nice to have skills:
- Experience building reliable data pipelines, where "reliable" can mean at-least-once or exactly-once delivery guarantees
- Experience building and/or running large-scale applications on a PaaS/SaaS cloud, especially AWS
- Knowledge of distributed systems topics, data structures, and algorithmic complexity
- Knowledge of relational and columnar databases and their respective trade-offs
- Demonstrated proficiency in at least two of: Python, Scala, Java
- Knowledge of AWS services, especially Kinesis, DynamoDB, Lambda, S3, CloudFormation, and ECS, and/or open-source equivalents such as Kafka and MongoDB
- Knowledge of micro-batching or stream-processing frameworks, for example Spark or Flink
- Effective communication skills and an interest in being part of highly autonomous, cross-functional teams
- A degree in Computer Science and/or several years' work experience as a Data Engineer or Backend Engineer in a commercial environment
- Fluent communication in English
- Ability to self-manage, gauge priorities, and meet deadlines in stressful situations
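The delivery-guarantee terms in the list above (at-least-once vs. exactly-once) can be illustrated with a minimal Python sketch: under at-least-once delivery a consumer may receive the same message more than once, and deduplicating on a message id makes processing idempotent, yielding exactly-once effects. The message format and the in-memory `seen_ids` set are hypothetical, for illustration only; a real pipeline would keep the dedup state in durable storage.

```python
def process_stream(messages):
    """Apply each message's amount once, even if the stream redelivers messages."""
    seen_ids = set()  # hypothetical in-memory dedup state; durable storage in production
    total = 0
    for msg in messages:
        if msg["id"] in seen_ids:
            continue  # duplicate redelivery under at-least-once: skip
        seen_ids.add(msg["id"])
        total += msg["amount"]
    return total

# At-least-once delivery: message 2 is redelivered after a retry.
events = [
    {"id": 1, "amount": 10},
    {"id": 2, "amount": 5},
    {"id": 2, "amount": 5},  # duplicate
    {"id": 3, "amount": 1},
]
```

With deduplication, `process_stream(events)` counts the duplicate of message 2 only once.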