Staff Analytics Engineer
Product Operations is a fast-growing, global team of data-driven influencers, setting the standard for how data and analytics are used at HubSpot and inspiring product teams to action. We draw from multiple sources of data (user event data, NPS, transactional data, etc.) to discover what is going well, where we can improve, and what to build next, partnering with Product Management, User Experience, and Engineering leadership to build better solutions for our customers.
The Staff Analytics Engineer serves as a force multiplier for the broader Prod Ops org: building and maintaining our highest-value data assets; defining and implementing the team's strategy for analytics development processes; and enabling the team to operate with a consistent level of excellence as we scale in a remote, global environment. They lead with technical expertise to drive data-informed decision making, foster a culture of insights, and build technical influence that is amplified across teams. In this role, you will work cross-functionally, developing a deep understanding of the business and analytics priorities of the Prod Ops org to shape solutions that unlock our ability to scale and drive actionable insights, from refining and extending our data models, to refactoring tables, to building new assets and training content. You will collaborate with distributed analysts across the company to drive alignment on key business logic and metric definitions, influence the evolution of our core analytics infrastructure, and contribute to HubSpot’s library of best practices.
In this role, you’ll get to:
- Collaborate with technical and non-technical stakeholders, bridging the gap between business problem and technical solution
- Own and champion cross-functional, centralized “crown jewel” data assets that answer HubSpot’s most critical operational questions, reinforcing them as the Source of Truth across the organization
- Develop scalable data models that enable performant analysis of HubSpot’s products and business
- Curate, organize and document Product Operations data and reporting environments across Looker, dbt and Amplitude
- Collaborate closely with HubSpot’s BI and Data Engineering teams to expand data access and availability
- Codify and democratize best practices for SQL query optimization and reusability, for reporting that leverages HubSpot’s data assets, and for analytics development in Snowflake SQL, dbt, LookML, and git
- Define teamwide coding standards, documentation requirements, and development processes (code reviews, testing requirements, etc.)
- Scope requirements with internal stakeholders and lead working groups to usher projects through their entire lifecycle while building clear roadmaps for cross-functional team members
Basic Qualifications:
Development
- Leading the technical execution of high-complexity business intelligence and analytics projects in code
- Deep knowledge of and experience with advanced SQL concepts, including window functions, CTEs, nested queries, and similar techniques (see the sketch after this list)
- Experience with complex datasets and computer science fundamentals, including the software development lifecycle (SDLC)
- Extensive experience independently building and optimizing highly complex data pipelines, architectures, and data sets; experience diagramming architecture and entity relationships with tools such as Lucidchart
- Deep understanding of multi-step ETL/ELT jobs and data pipelines, experience with job scheduling systems, and the ability to reverse-engineer and refactor existing technical projects
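For a concrete flavor of the advanced SQL referenced above, here is a minimal Snowflake SQL sketch combining a CTE with a window function. The table and column names are hypothetical and only illustrate the kind of query this role writes and optimizes day to day.

```sql
-- Hypothetical example: keep each account's most recent product event
-- from the last 30 days using a CTE and a window function.
with ranked_events as (
    select
        account_id,
        event_name,
        event_timestamp,
        row_number() over (
            partition by account_id
            order by event_timestamp desc
        ) as recency_rank
    from product_events  -- hypothetical source table
    where event_timestamp >= dateadd('day', -30, current_date)
)
select
    account_id,
    event_name,
    event_timestamp
from ranked_events
where recency_rank = 1;
```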
Analytics
- Strong understanding of the analyst’s workflow as it relates to both structured and unstructured datasets
- Exceptional ability to document technical designs, setting a standard within the team
- 5+ years of hands-on experience with advanced SQL (writing and optimizing), cloud data warehouses (e.g., Snowflake, Redshift, BigQuery), and relational databases
- Ability to collaborate through code-management/version-control tools like GitHub Enterprise (peer reviews, feature branches, commits, and conflict resolution)
Team Enablement
- Lead mentoring efforts with colleagues, including training and onboarding exercises (especially at scale, or in a remote-first or global environment)
- Effective communication across multiple modes (video, wikis, decks, guided exercises) and a knack for choosing the best format for the task and audience at hand
- Writing and reviewing end-user and technical documents, including requirements and design documents for existing and future data systems, as well as data standards and policies
Preferred Qualifications:
Development
- Exposure to data pipeline and workflow management tools (e.g., Airflow, Google Cloud Composer, Luigi)
- Experience with script-based analytic transformation tools (e.g., dbt, AWS Glue, Talend)
- 4+ years of experience with data quality and validation exercises in data science, business analytics, business intelligence (BI), or comparable big-data environments
- Proven expertise working with and implementing Slowly Changing Dimension (SCD) strategies (see the sketch after this list)
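As a rough illustration of the SCD strategies mentioned above, the sketch below shows a minimal Type 2 pattern in Snowflake SQL that expires changed rows and inserts new current versions. The dimension and staging tables (dim_customer, stg_customer) and the tracked attribute are hypothetical, and a production implementation would more likely use dbt snapshots or a MERGE statement.

```sql
-- Hypothetical SCD Type 2 sketch: close out changed rows, then insert new current rows.

-- 1. Expire current dimension rows whose tracked attribute changed in staging.
update dim_customer
set valid_to = current_timestamp,
    is_current = false
from stg_customer s
where dim_customer.customer_id = s.customer_id
  and dim_customer.is_current = true
  and dim_customer.customer_tier <> s.customer_tier;

-- 2. Insert a new current row for every customer without an open current row
--    (both customers whose attributes changed and brand-new ones).
insert into dim_customer (customer_id, customer_tier, valid_from, valid_to, is_current)
select
    s.customer_id,
    s.customer_tier,
    current_timestamp as valid_from,
    null              as valid_to,
    true              as is_current
from stg_customer s
left join dim_customer d
  on d.customer_id = s.customer_id
 and d.is_current = true
where d.customer_id is null;
```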
Analytics
- Experience with JSON flattening and extracting nested elements (see the sketch after this list)
- Conceptual knowledge of and expertise with Looker (LookML, Looks, and Dashboards) or an equivalent BI visualization tool (e.g., Tableau, Qlik, Power BI)
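To ground the JSON flattening bullet, below is a small Snowflake SQL sketch using LATERAL FLATTEN to unnest an array held in a VARIANT column; the table (raw_events) and JSON field names are hypothetical.

```sql
-- Hypothetical example: unnest an array of feature flags stored in a VARIANT payload.
select
    e.event_id,
    e.payload:user_id::string    as user_id,
    f.value:flag_name::string    as flag_name,
    f.value:enabled::boolean     as is_enabled
from raw_events e,
     lateral flatten(input => e.payload:feature_flags) f;
```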
Team Enablement
- Experience identifying and driving process improvements around data use to increase team effectiveness
- Best practice definition: identifying the minimal set of rules for the greatest teamwide gains in clarity, accuracy, discoverability, and reusability
- Patience, empathy, and finding success in the team’s success
Cash compensation range: $160,000-$189,600 annually