Analytics Engineer, Data Platform
Location: Columbus, OH
Posted: Today
We are building the data foundation that will power AndHealth's self-service analytics organization. As an Analytics Engineer, Data Platform, you will own a critical piece of that foundation from raw data ingestion to curated data products that analysts, clinicians, and business stakeholders can trust and use independently.
You will work closely with Data and Software Engineering on ETL pipelines, build and maintain dbt models that encode domain-specific business logic, and help stand up the semantic layer and BI tooling (we use Omni) that enables self-service across the organization. This is a full-stack data role: you are as comfortable in the weeds of SQL and dbt as you are thinking through how a metric should be defined for a care operations team.
This role sits within a small, growing Analytics Engineering team and is an opportunity to shape the platform from the ground up.
What you'll do in the role
- Design, build, and maintain dbt models that transform raw clinical, pharmacy, billing, and care operations data into clean, reliable, domain-specific data marts.
- Partner with Data and Software Engineering on ETL pipeline design, data ingestion, and raw-to-staging transformations, ensuring data arrives in a form the Analytics Engineering team can work with.
- Develop and own the semantic layer in Omni by defining governed metrics, curated datasets, and self-service data products that analysts and stakeholders can consume directly.
- Build a thorough testing suite across the data platform: schema tests, data quality checks, anomaly detection, and SLA monitoring to ensure stakeholders can trust what they see.
- Implement and maintain data governance practices including lineage documentation, cataloging, access control, and column-level documentation in dbt.
- Become a domain expert in your assigned area (pharmacy operations, billing, or care operations) by deeply understanding the business logic and translating it into accurate, scalable data models.
- Work closely with analysts to understand their data needs, accelerate their workflows, and reduce time spent on ad hoc data prep — enabling them to focus on higher-order analysis and strategy.
- Contribute to platform-level decisions: warehouse organization, modeling conventions, CI/CD for dbt, and tooling standards across the AE team.
- Proactively identify data quality issues, gaps in coverage, and opportunities to improve the reliability and usability of the data platform.
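To make the dbt side of this work concrete, here is a minimal sketch of the kind of staging model and schema tests involved. All names here (`stg_pharmacy__claims`, `claim_id`, and so on) are illustrative only, not taken from an actual AndHealth project:

```sql
-- models/staging/pharmacy/stg_pharmacy__claims.sql
-- Hypothetical staging model: light cleanup of raw pharmacy claims.
with source as (
    select * from {{ source('pharmacy', 'raw_claims') }}
),

renamed as (
    select
        claim_id,
        lower(trim(member_id))       as member_id,
        cast(fill_date as date)      as fill_date,
        cast(paid_amount as numeric) as paid_amount
    from source
)

select * from renamed
```

```yaml
# models/staging/pharmacy/_pharmacy__models.yml
# Schema tests and column-level documentation for the model above.
models:
  - name: stg_pharmacy__claims
    description: "One row per adjudicated pharmacy claim (illustrative)."
    columns:
      - name: claim_id
        description: "Primary key of the claim."
        tests:
          - unique
          - not_null
      - name: paid_amount
        tests:
          - not_null
```

Downstream marts would then select from this model via `ref('stg_pharmacy__claims')`, which is how dbt builds the dependency graph, lineage, and documentation described above.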
Education and licensure requirements
- Bachelor's degree in Computer Science, Economics, Engineering, Mathematics, or a related quantitative field, or equivalent practical experience.
Other skills or qualifications
Required
- Strong SQL proficiency: comfortable writing complex queries, CTEs, window functions, and performance-optimized transformations across large datasets.
- Hands-on experience with dbt (Core or Cloud): you understand the modeling layer, ref() dependencies, tests, macros, and how to structure a well-organized dbt project.
- Solid understanding of data warehouse concepts: dimensional modeling, mart layers, slowly changing dimensions, and how to think about the staging / intermediate / mart separation.
- Experience working with ETL/ELT pipelines and partnering with data or software engineers on data ingestion.
- Comfort with the command line: running scripts, managing files, and troubleshooting basic shell operations. You don't need to be a sysadmin, but you're not afraid of a terminal.
- Strong analytical instincts: able to interrogate data, identify anomalies, trace root causes, and communicate findings clearly to both technical and non-technical audiences.
- Comfort working in ambiguous, fast-moving environments with competing priorities.
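As a flavor of the SQL bar described above, a CTE-plus-window-function query might look like the following sketch. The table and columns (`pharmacy.claims`, `member_id`, `fill_date`) are made up for illustration:

```sql
-- Hypothetical: most recent fill per member, via a CTE and a window function.
with ranked_fills as (
    select
        member_id,
        fill_date,
        paid_amount,
        row_number() over (
            partition by member_id
            order by fill_date desc
        ) as fill_rank
    from pharmacy.claims
)

select member_id, fill_date, paid_amount
from ranked_fills
where fill_rank = 1
```

On large datasets, a query like this is also where the performance-optimization instinct matters: partition pruning and avoiding unnecessary full-table scans before the window step.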
Preferred
- Experience with a semantic layer or BI tool such as Omni, Looker, Metabase, or similar — especially defining metrics, dimensions, and governed data products.
- Familiarity with healthcare data: clinical, pharmacy, billing, or claims data from EHRs, TPAs, or pharmacy operating systems.
- Experience with data quality frameworks, testing strategies, or anomaly detection in a production data environment.
- Exposure to data governance tooling: data catalogs, lineage tracking, or column-level documentation.
- Python or another scripting language for data tasks or pipeline work.