Engineering - Paris - Full-time
We believe in a world where all cars are shared. Carsharing empowers people to get going in a smarter, easier way, while also having a positive impact on the environment and making cities more liveable. It’s this vision that propels us forward and inspires us to think even bigger.
Since April 2019, Drivy has been part of Getaround. Together, we're the world's leading carsharing platform with a community of more than 5 million users sharing over 11,000 connected cars across 7 countries.
Our team is collaborative, positive, curious, and engaged. We think fast, work smart, laugh often, and are looking for like-minded people to join us in our mission to disrupt car ownership and make cities better.
Who you’ll work with
You will join the engineering team (27 brilliant people!) as a Data Engineer in our growing Data Engineering squad (2 people). You’ll report to Michael and will take on many different challenges.
Our team has great impact and high leverage: it helps the entire company be more productive and make better decisions. We love to be open about the work we do. Some examples:
• Why we've chosen Snowflake ❄️ as our Data Warehouse
We also give back to the community by contributing to projects such as Redash, Apache Airflow, Telegraf, Embulk, and various Ruby projects.
What you'll work on
Our main challenge this year is to make our data stack and pipelines more reliable, moving from a model where data is used mainly for reporting and ad-hoc analysis to one where we can use that data in our product.
Here are a few more detailed examples of what you'll take on:
• Ensure high SLAs on our core tables, making sure they have excellent freshness, reliability, and documentation.
• Work closely with data analysts/data scientists and other groups to support them with various tasks and implement new ETL pipelines.
• Maintain and enhance our growing core data infrastructure and ETL framework.
• Build tools to improve company productivity with data.
• Develop processes to monitor and ensure data integrity.
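To give a flavour of the integrity-monitoring work described above, here is a minimal sketch of a table health check in Python. All names and thresholds are hypothetical, chosen for illustration only; they are not Getaround's actual code:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a data-integrity check: flag a table whose
# latest load breaches a freshness SLA, or whose row count dropped
# sharply compared with the previous run.
def check_table_health(last_loaded_at, row_count, prev_row_count,
                       max_staleness=timedelta(hours=6),
                       max_drop_ratio=0.5):
    """Return a list of human-readable issues; an empty list means healthy."""
    issues = []
    if datetime.now(timezone.utc) - last_loaded_at > max_staleness:
        issues.append("stale: last load exceeds freshness SLA")
    if prev_row_count and row_count < prev_row_count * max_drop_ratio:
        issues.append("row count dropped sharply versus previous run")
    return issues
```

In practice, checks like this would run after each pipeline execution and alert the team on failure, which is how freshness and reliability SLAs on core tables stay visible.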
We’re building a marketplace that will scale to millions of users in many different countries in the coming years. You can imagine that there will also be many challenging problems that we haven’t even thought of yet!
What you'll bring to the table
• 2+ years of experience.
• You can write complex SQL in your sleep and you're fluent in Python.
• You care about agile software processes, reliability, data quality, and responsible experimentation. You are also pragmatically lazy. If it can be automated, it will be automated.
• You're a team player with excellent communication and organisational skills, and you understand the value of collaboration.
• You take satisfaction in clearing roadblocks for the team.
• Knowledge of ETL design, implementation, and maintenance.
• Experience with an MPP or columnar database and with cloud-based infrastructure such as AWS or GCP.
• You're able to communicate in English.
Our Tech Stack
• Apache Airflow for ETL workflows
• Embulk with various Python, Ruby and Shell scripts for ETL
• Snowflake as our Data warehouse
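In this stack, pipelines are expressed as Airflow DAGs. Below is a minimal, illustrative sketch (assuming Airflow 2.x); the DAG id, schedule, commands, and file paths are made up for the example and are not the actual pipelines:

```python
# Illustrative Airflow DAG: extract with Embulk, then run a
# data-quality check. All identifiers here are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_etl",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="embulk run config/example_source.yml.liquid",
    )
    quality_check = BashOperator(
        task_id="quality_check",
        bash_command="python scripts/check_freshness.py example_table",
    )
    # Run the quality check only after the extract succeeds.
    extract >> quality_check
```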
What we offer
• A solid engineering team with a lot of experience
• The peace of mind of a large test suite and pull requests to stay sane and ship often
• Getting to learn from your peers and to share your knowledge on the blog and in our internal presentations every two weeks
• A ticket to one technical conference of your choice each year
• We extract open source projects from the codebase when appropriate
• We'll buy any non-fiction book you want to read, and you'll get access to our growing library
• Offices in Paris, San Francisco, Berlin, London, Barcelona, New York...
• Meal vouchers (tickets restaurant) and good health insurance
• "Hack days" to experiment with new technologies and ideas as well as other team events
Don't hesitate to give our Data challenge a try!