Senior Analytics Engineer

Dremio, Can be based anywhere

Salary Not Specified

  • Full time
  • Permanent
  • Remote working

Posted 9 Jan

Closing date: Not specified

Job Ref: b2533a1f27c64a7cb0e830e866c44d40

Full Job Description

Be Part of Building the Future
Dremio is the unified lakehouse platform for self-service analytics and AI, serving hundreds of global enterprises, including Maersk, Amazon, Regeneron, NetApp, and S&P Global. Customers rely on Dremio for cloud, hybrid, and on-prem lakehouses to power their data mesh, data warehouse migration, data virtualization, and unified data access use cases. Built on open source technologies, including Apache Iceberg and Apache Arrow, Dremio provides an open lakehouse architecture that enables the fastest time to insight and platform flexibility at a fraction of the cost. Learn more at www.dremio.com.

About the role
In this role, you will be responsible for the data pipelines into the Data Lake, for the availability and quality of the Data Lake's data, and for creating reports and dashboards. You will be empowered to find opportunities to capture more data in pursuit of greater observability into the behavior of our users. You will engage with everyone from the C-Suite and VPs to Account Executives and Support Engineers to understand their needs and drive the pipeline integrations and data availability to meet them.
What you'll be doing
Initially working cross-functionally with product, engineering, marketing, and pre-sales leadership, you will:

  • Engage with stakeholders to understand, document, and prioritize data-driven decision-making needs.
  • Collaborate cross-functionally to deploy the tracking and analytics tools needed to support product usage telemetry.
  • Create and maintain data pipelines for the internal Data Lake, adhering to semantic layer best practices.
  • Build and maintain data pipelines that provide enriched telemetry data for anomaly detection use cases.
  • Configure, deploy, and maintain integrations with popular tools such as GitHub and Slack.
  • Support pipeline development in response to internal data and analytics requests from the business.
  • Establish, document, and communicate dashboarding and reporting dimensions.
  • Seek opportunities for innovation to further empower our decision makers, mature our Lakehouse, and drive the business.
  • Create dashboards on product telemetry, user activity, pipeline health, documentation usage, etc.
  • QA dashboards and reports against the raw data and partner with Data Lake owners to ensure accuracy.

  • Bachelor's or Master's degree in Data Science, Computer Science, Math, or equivalent
  • At least 5 years of experience as an Analytics Engineer, Data Engineer, or in a similar role
  • Expertise in designing and implementing data pipelines using Python
  • Highly skilled in SQL, with a clear understanding of window functions, CTEs, subqueries, etc.
  • Basic familiarity with Apache Iceberg, or existing knowledge of a similar data lake table format with a strong desire to learn Iceberg
  • Experience creating reports and dashboards using Tableau or an equivalent tool
  • Experience working with AWS, GCP, or Azure cloud-based storage
  • Experience working with tools like Fivetran, Stitch, or equivalent
  • Experience with product analytics solutions like Intercom, Heap, and Google Analytics
  • Strong familiarity with dbt
  • Experience with ticketing systems (JIRA), communication tools (Slack), and repository systems (GitHub)
  • Excellent project management skills with a proven track record of cross-functional impact
  • Interest and motivation to be part of a fast-moving startup with a fun and accomplished team