Location: Calgary, AB (Hybrid/Remote)
Type: Full-time
Position Overview
MetaFactor is hiring a Calgary-based Data Analytics Engineer to design, build, and optimize scalable data pipelines and analytics solutions.
The ideal candidate has Upstream Oil & Gas and/or Power/Utility industry experience, strong experience with modern data engineering tools and cloud platforms, and a passion for delivering well-structured, documented, and automated data solutions.
Key Responsibilities
- Design, build, and maintain data pipelines using tools such as Azure Data Factory, Databricks, Airflow, or similar orchestration frameworks.
- Develop, test, and deploy dbt models for data transformation and lineage tracking.
- Integrate and optimize data from streaming sources (e.g., Event Hubs, Kafka, IoT streams) and batch systems.
- Build robust data models and performance-optimized datasets in Snowflake and Azure Synapse.
- Work closely with analysts and data scientists to enable reliable, governed access to curated datasets.
- Implement CI/CD and version control best practices for data code (Git, dev/test/prod environments).
- Ensure data quality, observability, and compliance with organizational and regulatory standards.
- Monitor and optimize compute and storage costs across platforms.
Required Skills & Experience
- 3+ years of experience in data engineering, analytics engineering, or ETL development.
- Strong proficiency in SQL and experience with Python for data manipulation and automation.
- Hands-on experience with at least two of:
  - Azure data services (Data Factory, Synapse, Event Hubs, Data Lake)
  - Databricks (Spark, Delta Live Tables)
  - Snowflake
  - Apache Airflow or similar orchestration tools
  - dbt (Data Build Tool)
- Experience working with data warehousing, data lakes, and ELT/ETL frameworks.
- Understanding of streaming vs batch architecture, and when to use each.
- Strong version control (Git) and CI/CD familiarity.
- Excellent communication and documentation skills.
Preferred Qualifications
- Certifications in one or more of the following:
  - Databricks Certified Data Engineer Associate / Professional
  - Snowflake SnowPro Core or Advanced
  - Microsoft Certified: Azure Data Engineer Associate (DP-203)
- Experience with infrastructure as code (Terraform, ARM, or Bicep) and containerized workflows (Docker, Kubernetes).
- Familiarity with data governance, metadata management, or Purview.
- Interest in data observability and pipeline monitoring (Monte Carlo, Datadog, etc.).
Due to the large volume of applications, we are asking for some additional information to help with the decision-making process: along with your resume, please provide a short video introducing yourself and explaining why you are the best fit for the role.
Please submit your resume and a link to your video presentation to the email below.