Headquarters: Toronto, Canada
We’re looking for a Data Engineer to manage our existing data workflows, develop and maintain new ETL pipelines, and conduct data-related QA. You will create and maintain production-quality in-house tools within a large shared codebase, and the data you curate will be used by thousands of university students, researchers, and marketing professionals.
The ideal candidate is a self-starter, has a high level of attention to detail, is comfortable asking questions, enjoys working with talented colleagues, and has an interest in analytics and data visualization.
This is a 100% remote, full-time salaried position; our developers can live and work anywhere.
Required Skills & Experience:
Five years of professional software development work experience
Expert relational database and data manipulation skills
Experience with data orchestration platforms (Dagster, Airflow, or Prefect)
Experience with development on large OOP software projects
Thorough understanding of API design principles
Ability to maintain our full data processing stack, primarily in Python but with legacy code in PHP
Experience with Linux servers, including Linux command line and SSH
Must write clean, production-quality, well-documented, maintainable code
Bonus Skills & Experience:
Experience with Hadoop (Hive and/or Trino)
Experience using AWS services for big data tasks