Analytics Engineer
Recognized by Fast Company with a 2022 World Changing Ideas Award and named one of the World's Most Innovative Companies in 2020, Trove powers resale for the world's most beloved brands, extending the life of millions of products and creating more inclusive, less wasteful business models. Trove is the market leader in branded resale and trade-in for world-class brands and retailers such as Canada Goose, lululemon, Patagonia, REI, Levi’s, Arc’teryx, Allbirds, and more. Through its proprietary Recommerce Operating System, Trove is accelerating the shift toward more sustainable business models, foundational for circularity. Over the last decade, Trove has equipped leading brands with the technology and operations to create and scale branded resale programs: customer trade-in of items; single-SKU identification and condition grading; site build and maintenance; and customer data collection, analytics, and reporting. A Certified B Corporation, Trove is pioneering a new era of retail essential to a more sustainable future.
About the Analytics Engineer
In this role, you will enable Trove to use data to solve new and challenging problems in the fastest-growing segment of the retail industry. As an Analytics Engineer, you’ll join a team of Analysts and Analytics Engineers working together to build a best-in-class Analytics function. You will build user-friendly, high-performing data models; add to documented best practices; and make architectural decisions that improve critical data processing. You will improve our data platform’s reliability, resiliency, and scalability to support our data analysts and data scientists directly, and all of our data stakeholders indirectly.
Our team is empowered to solve problems and make business recommendations independently, but also collaborates to design creative solutions to tough problems, offer and receive constructive feedback, and pair on thorny technical challenges. Our current data tooling includes Redshift, Snowflake, dbt, Looker, Stitch, and Hex, as well as AWS tools like DMS and S3.
Responsibilities
- Interface with data analysts, data scientists, and all data stakeholders to understand their needs, engineer solutions and promote best practices
- Build, manage, scale, and optimize our data pipelines, automated jobs, and integrations
- Extend and optimize the current data pipeline to assemble complex data sets that address a diverse range of business and analytics requests. We use off-the-shelf solutions like Stitch where we can, but occasionally need to build custom integrations as well
- Evolve the tools, best practices, and processes that enable the data team to monitor daily execution, diagnose and log issues, and fix pipelines so that SLAs with internal and external stakeholders are met
- Vet tools and technologies to find the most viable solution for each problem at hand, and manage the selected tools
- Ensure a focus on code and data quality within the data team, and partner with analysts to fine-tune queries over large, complex data sets (see the query sketch after this list)
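As a hedged illustration of the kind of query fine-tuning described in the last responsibility (the order_status_events table and its columns are hypothetical, not Trove's actual schema), one common pattern is replacing a self-join deduplication with a window function so the warehouse scans the large table only once:

```sql
-- Hypothetical example: fetch each order's latest status event.
-- A window-function dedup scans order_status_events once, where a
-- correlated subquery or self-join would scan it repeatedly.
with ranked_events as (
    select
        order_id,
        status,
        updated_at,
        row_number() over (
            partition by order_id
            order by updated_at desc
        ) as recency_rank
    from order_status_events
)
select
    order_id,
    status     as latest_status,
    updated_at as last_updated_at
from ranked_events
where recency_rank = 1
```

On both Redshift and Snowflake, restructurings like this tend to cut scan volume and join cost for analysts' heaviest queries.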
Example Projects
- Design and implement data models using our event tracking data (from Heap) to create tables that are fast and easy for analysts and our BI tooling (Looker) to use when answering questions about the user experience (see the dbt sketch after this list)
- Manage changes driven by feature evolution in Trove's transactional software solution
- Work with Machine Learning Engineers to acquire (using public APIs, scraping) and model external data to be used in machine learning models
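To make the first example project concrete, here is a minimal dbt model sketch. The heap.pageviews source and its columns are assumptions for illustration, not Trove's actual schema; the idea is to roll raw Heap pageview events up into one row per session, so analysts and Looker query a small, pre-aggregated table instead of raw events:

```sql
-- models/marts/user_sessions.sql
-- Minimal sketch: summarize raw Heap pageviews into one row per
-- session. Source and column names are hypothetical.
{{ config(materialized='table') }}

with pageviews as (

    select
        session_id,
        user_id,
        path,
        event_time
    from {{ source('heap', 'pageviews') }}

),

sessions as (

    select
        session_id,
        min(user_id)         as user_id,
        min(event_time)      as session_started_at,
        max(event_time)      as session_ended_at,
        count(*)             as pageview_count,
        count(distinct path) as distinct_pages_viewed
    from pageviews
    group by session_id

)

select * from sessions
```

Materializing this as a table keeps Looker dashboards fast, at the cost of a scheduled dbt run to refresh it.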
Qualifications
- You are energized by the thought of working across a wide array of technologies, project types, and stakeholders
- You have a product-focused mindset. You enjoy digging deep into business requirements and architecting systems that will scale and extend to accommodate those needs
- You have strong overall programming skills and are able to write modular, maintainable code (in dbt, Python, or another scripting language)
- You have a deep understanding of SQL
- You have strong communication skills, including the ability to convey complex technical information to a non-technical audience
- Strong knowledge of and experience with dbt is a big plus
- Familiarity with Looker is also a plus
Bonus points if you have...
- Experience with Redshift cluster management and query optimization
- Experience with Snowflake
- Experience with data acquisition tools such as AWS DMS, Stitch, and Fivetran
- Experience with Terraform and Ansible
- Familiarity with a data orchestration tool such as Airflow or Prefect
- Experience working with machine learning systems
The annual compensation range for this position is $94,660-$137,413, plus a competitive bonus and equity.