
Sr. Manager, Analytics Engineer - Biopharma

Pfizer

Salary

The annual base salary for this position ranges from $113,900.00 to $189,700.00

Pfizer is seeking hardworking, passionate, and results-oriented individuals to join our Analytics Engineering team to build data foundations and tools that craft the future. You will design and implement scalable, extensible, and highly available data pipelines on large-volume data sets, enabling impactful insights and strategy for our products. Our culture is about getting things done iteratively and rapidly, with open feedback and debate along the way; we believe analytics is a team sport, but we strive for independent decision-making and taking sensible risks. Our team collaborates deeply with partners across analytics, data science, marketing, digital, and business teams. Our mission is to drive innovation by providing our business, marketing, and data science partners with best-in-class systems, data products, and tools to make decisions that drive business performance and patient experience. This includes using large and complex data sources, helping derive measurable insights, delivering dynamic and intuitive decision tools, and bringing our data to life via compelling visualizations.

The ideal candidate is a self-motivated teammate skilled in a broad set of data processing techniques, with the ability to adapt and learn quickly, deliver results with limited direction, and choose the best possible data processing solution for the problem at hand.

Role Responsibilities

Reporting to the head of Analytics Engineering, this person will collaborate with data scientists, reporting/data visualization specialists, and business and marketing teams, all working together to identify requirements that drive the creation of data pipelines and other data products. You will work closely with Pfizer’s Digital team to understand the architecture and internal APIs involved in upcoming and ongoing projects. We are seeking an outstanding person to play a pivotal role in helping analysts and business users make decisions using data and visualizations. You will partner with key stakeholders across the business, data science, and marketing teams as you design and build query-friendly data structures.

• Translate business requirements from business teams into data and engineering specifications

• Build scalable data sets based on specifications from the available raw data and derive business metrics/insights

• Understand and identify server APIs that need to be instrumented for data analytics, and align events for execution in already established data pipelines

• Explore and understand sophisticated data sets; identify and formulate correlational rules between heterogeneous sources for effective analytics and reporting

• Process and clean data, and validate the integrity of data used for analysis

• Develop Python and shell scripts for data ingestion and stitching from external data sources for business insights

• Partner with data scientists to build scalable modeling feature pipelines

• Work closely with analytics and data science teams to develop robust data pipelines and analytics visualizations

Basic Qualifications

• Bachelor’s degree with 7+ years of experience, OR Master’s degree with 6+ years of experience, OR PhD with 2+ years of experience

• Degree preferably in engineering, economics, statistics, computer science, or a related quantitative field

• Relevant professional experience with Big Data systems, pipelines and data processing

• Preferred experience in applied econometrics, statistics, data mining, machine learning, analytics, mathematics, operations research, industrial engineering, or a related field

• Practical, hands-on experience with technologies such as Apache Hadoop, Apache Pig, Apache Hive, Apache Sqoop, and Apache Spark

• Experience working in Dataiku is a plus but not required

• Ability to understand API specs, identify relevant API calls, extract data, and implement data pipelines and SQL-friendly data structures

• Ability to identify data validation rules and alerts based on data publishing specifications, for data integrity and anomaly detection

• Understanding of various distributed file formats, such as Apache Avro and Apache Parquet, and of common methods in data transformation

• Expertise in Python, Unix shell scripting, and dependency-driven job schedulers

• Expertise in core Java, Oracle, Teradata, and ANSI SQL

• Familiarity with rule-based tools and APIs for multi-stage data correlation on large data sets is a plus

• Demonstrates a breadth of diverse leadership experiences and capabilities, including the ability to influence and collaborate with peers, develop and coach others, and oversee and guide the work of other colleagues to achieve meaningful outcomes and create business impact