Senior dbt Analytics Engineer

Tucows

Tucows (NASDAQ:TCX, TSX:TC) is possibly the biggest Internet company you’ve never heard of. We started as a simple shareware site in 1993 and have grown into a stable of businesses: mobile, internet and domains.

We embrace a people-first philosophy that is rooted in respect, trust, and flexibility. We believe that whatever works for our employees is what works best for us. It’s also why the majority of our roles are remote-first, meaning you can work from anywhere you can connect to the internet!

Today, close to a thousand people work in over 16 countries to help us make the Internet better. If this sounds exciting to you, join the herd!

About the role:

At Tucows, data is essential to the organization. We are in the midst of building a world-class analytics, reporting and data science platform and recognize the foundational importance of solid data engineering. We treat data as a first-class citizen. Our platform technology includes tools like Stitchdata, Fivetran, Airflow, dbt (data build tool), Great Expectations (GE), Kafka, Pentaho Data Integration (PDI), Snowflake and Looker. We also use GitHub for version control.

We are seeking to hire a keen and self-motivated Data Engineer who loves the Internet, and loves learning and applying new technologies to solve complex problems. You will work closely with the analytics, data science and platform engineering teams, as well as the business stakeholders. You will enable the business to make increasingly better decisions by creating robust data pipelines and well-designed, high-quality data structures.

In this role, you can expect to:

  • Help to evolve and scale the data platform to enable the business.
  • Work closely with the analytics team and business stakeholders to understand the needs of the business, providing data processes that support business decisions.
  • Perform analytics engineering, creating large-scale batch and real-time scalable data pipelines (ETL/ELT).
  • Write complex queries on large, heterogeneous data sets, make the results easily accessible, and optimize the performance of our data platform.
  • Perform data wrangling to transform and map data from raw forms into formats more appropriate and valuable for analytics.
  • Continuously optimize testing and tooling to improve data quality.
  • Help foster a DataOps culture.
  • Employ best practices in continuous integration and delivery.

You may be a good fit for our team if:

  • You have at least 2 years of software development/data engineering experience using the Python programming language.
  • You have experience building and testing scalable and reliable data pipelines, combining and transforming data from sources to consumption, and architecting data stores.
  • You have experience with one or more data processing frameworks.
  • You have hands-on experience with dimensional modeling and are familiar with our data stack.
  • You have SQL skills and have experience with enterprise, cloud and open source RDBMSes, and also messaging and event streaming platforms.
  • You have an excellent record in process automation and building application tools, continually identifying opportunities for improvement.
  • You know, and can demonstrate, the value of agile processes, continuous integration, and continuous delivery.
  • You have excellent written and verbal communication skills.
  • You love to collaborate, are a good team player, an excellent listener, and are fun to work with.
  • You have experience with cloud data technologies.
  • You understand the importance of data governance, security and stewardship.
  • Predictive modeling and machine learning engineering experience is a strong plus.