
Foresters Financial

Data Engineer

Posted 9 Days Ago
In-Office
Toronto, ON, CAN
Junior
Career Opportunity

Role Title

Data Engineer

Purpose of role

The Data Engineer serves as a technical practitioner and individual contributor responsible for designing, building, and maintaining data pipelines, data models, and infrastructure that enable reliable and efficient data delivery across the organization. They work closely with data scientists, analytics teams, business stakeholders, and platform engineers to understand data requirements and deliver high-quality, well-documented data assets. The candidate will also collaborate with IT Security and Compliance teams to ensure data solutions adhere to security standards, data governance policies, and regulatory requirements.
This technical role will contribute across the full data engineering lifecycle, including data ingestion, transformation, storage optimization, and quality assurance. The candidate will build and maintain ETL/ELT processes using modern data stack technologies, implement data modeling best practices, and develop reusable frameworks that accelerate data delivery. They will apply software engineering principles to data development, including version control, code review, unit testing, and CI/CD practices to ensure data assets are production-ready and maintainable.
This role will help advance the organization's data engineering capabilities by staying current with emerging technologies, tools, and best practices. The candidate will contribute to the evaluation and adoption of cloud-native data platforms, streaming architectures, and data lakehouse patterns. They will also support AI/ML initiatives by building and maintaining data pipelines that feed machine learning workflows, including feature engineering, data preparation for model training, and integration of model outputs into downstream data products and reporting systems.
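To make the ML-support duties above concrete, here is a minimal, illustrative sketch of the kind of feature-engineering step such pipelines perform: aggregating raw transaction records into per-customer model features. The field names (`customer_id`, `amount`) and the aggregation logic are hypothetical, not Foresters' actual data model.

```python
def engineer_features(transactions):
    """Aggregate raw transaction rows into per-customer model features.

    A toy stand-in for the feature-engineering data prep this role supports;
    field names are hypothetical, for illustration only.
    """
    features = {}
    for t in transactions:
        f = features.setdefault(t["customer_id"], {"txn_count": 0, "total_amount": 0.0})
        f["txn_count"] += 1
        f["total_amount"] += t["amount"]
    # Derive a ratio feature once the aggregates are complete
    for f in features.values():
        f["avg_amount"] = f["total_amount"] / f["txn_count"]
    return features

txns = [
    {"customer_id": "C1", "amount": 100.0},
    {"customer_id": "C1", "amount": 50.0},
    {"customer_id": "C2", "amount": 20.0},
]
print(engineer_features(txns)["C1"])
```

In a production lakehouse this aggregation would typically run as a PySpark or SQL job writing to a feature store rather than an in-memory dict.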

Job Description

Key Responsibilities

Data Engineering

  • Develop and optimize high-performance, end-to-end data transformation and reporting solutions using PySpark, Spark SQL, and T-SQL within cloud-native environments like Databricks
  • Architect and implement complex logical and physical data models that support modern patterns such as Data Lakehouse, Data Mesh, and Data Fabric
  • Construct robust ETL/ELT pipelines that facilitate seamless data ingestion and transformation from diverse sources into production-ready data assets
  • Build and maintain specialized data structures, including feature stores and automated retraining pipelines, to support the operationalization of machine learning models
  • Design test strategies, test plans, and test cases
  • Own assigned technical documentation, keeping it complete and current
  • Collaborate with PMO on project estimation and resourcing
  • Participate in project risk assessment to identify potential obstacles ahead of time
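As a rough illustration of the ETL/ELT transformation work listed above, the sketch below shows a single transform step: casting types, dropping duplicates on a business key, and rejecting malformed rows. It is plain Python for readability; in this role the same pattern would be expressed in PySpark or Spark SQL, and the field names (`policy_id`, `premium`, `effective_date`) are hypothetical.

```python
from datetime import date

def transform_policies(raw_rows):
    """Minimal ELT-style transform: cast types, deduplicate on the business
    key, and silently drop rows missing that key. Illustrative only."""
    seen, out = set(), []
    for row in raw_rows:
        key = row.get("policy_id")
        if key is None or key in seen:
            continue  # skip duplicates and rows missing the business key
        seen.add(key)
        out.append({
            "policy_id": key,
            "premium": round(float(row["premium"]), 2),          # cast + normalize precision
            "effective_date": date.fromisoformat(row["effective_date"]),  # cast to a real date
        })
    return out

rows = [
    {"policy_id": "P1", "premium": "120.504", "effective_date": "2024-01-15"},
    {"policy_id": "P1", "premium": "120.504", "effective_date": "2024-01-15"},  # duplicate
    {"policy_id": "P2", "premium": "99.9", "effective_date": "2024-03-01"},
]
print(transform_policies(rows))
```

A production version would also route rejected rows to a quarantine table rather than dropping them.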

AI-Driven ETL/ELT Automation

  • Leverage AI-accelerated development tools (e.g., Databricks Assistant, GitHub Copilot, Claude Code) to automate code generation, unit testing, and the refactoring of legacy ETL processes
  • Implement AI-powered automation for pipeline monitoring, utilizing machine learning for anomaly detection and the development of self-healing data infrastructure
  • Utilize AI tools to accelerate metadata management and data lineage mapping, ensuring that automated pipelines remain compliant with governance standards
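The anomaly-detection bullet above can be illustrated with a very simple statistical baseline: flag a pipeline run whose row count deviates from recent history by more than a threshold number of standard deviations. This z-score check is a stand-in for the ML-based monitoring the role describes; the threshold and data are invented for the example.

```python
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's pipeline row count as anomalous if it sits more than
    `z_threshold` standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

daily_counts = [10_100, 9_950, 10_220, 10_080, 9_990, 10_150]
print(is_anomalous(daily_counts, 10_050))  # prints False (a typical day)
print(is_anomalous(daily_counts, 2_300))   # prints True (suspected partial load)
```

A self-healing pipeline would react to a `True` result by, for example, retrying the ingest or alerting before downstream jobs consume the bad load.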

Data Governance

  • Operationalize data quality and retention policies directly within data pipelines to ensure the integrity and security of the data lakehouse
  • Automate the capture of technical metadata and lineage as part of the standard engineering lifecycle to satisfy regulatory and compliance requirements
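"Operationalizing data quality rules directly within pipelines," as described above, often means declaring rules once and applying them to every batch, quarantining failures with a record of which rule they violated. The sketch below shows that pattern in plain Python; the rule names and fields are illustrative, and frameworks such as Delta Live Tables expectations express the same idea declaratively.

```python
# Declarative quality rules: name -> predicate over a row (illustrative only)
RULES = {
    "premium_positive": lambda r: r["premium"] > 0,
    "currency_known": lambda r: r["currency"] in {"CAD", "USD"},
}

def apply_quality_rules(rows, rules=RULES):
    """Split rows into (passed, quarantined), tagging each quarantined row
    with the rules it violated so failures are auditable."""
    passed, quarantined = [], []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        (quarantined if failed else passed).append({**row, "failed_rules": failed})
    return passed, quarantined

rows = [
    {"premium": 120.5, "currency": "CAD"},
    {"premium": -5.0, "currency": "CAD"},
    {"premium": 80.0, "currency": "XYZ"},
]
good, bad = apply_quality_rules(rows)
print(len(good), len(bad))  # → 1 2
```

Recording the violated rule names alongside quarantined rows is what lets the same mechanism satisfy the lineage and compliance reporting mentioned in the next bullet.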

Key Qualifications

  • Education (minimum required): University graduate with a major in computer science or equivalent work experience
  • Experience (minimum required): 2+ years of experience in data engineering and operations; financial institution experience is an asset
  • Working knowledge of modern data architectures, including the practical application and construction of Data Lakehouse, Cloud Data Warehouse, Data Mesh, and Data Fabric environments
  • Hands-on experience in developing and optimizing complex ETL/ELT pipelines using PySpark, Spark SQL, and T-SQL within cloud-native environments such as Databricks, Snowflake, or Amazon Redshift
  • Technical proficiency in operationalizing machine learning workflows, including the development of feature stores, model serving layers, and automated retraining pipelines
  • Experience with Agile work planning and CI/CD platforms (e.g., Azure DevOps, GitHub Enterprise) to automate the deployment and validation of data assets
  • Expertise in leveraging AI-accelerated software development tools (e.g., GitHub Copilot, Databricks Assistant, Claude Code) to automate code generation, accelerate unit testing, and refactor legacy ETL logic
  • Experience applying AI/ML techniques to automate data engineering tasks, such as metadata extraction, schema mapping, and the creation of self-healing data infrastructure
  • Proven track record of designing and implementing end-to-end data transformation and reporting solutions, with a deep understanding of logical and physical data modeling
  • Experience with data analytics and reporting tools such as Power BI, MicroStrategy, and SAS
  • Hands-on experience implementing technical data governance, including the automated enforcement of data quality rules, metadata management, and data retention policies within the pipeline code
  • Excellent verbal and written communication skills (e.g. developing business cases and delivering presentations to senior management)
  • Strong analytical and problem-solving skills
  • Well organized and innovative, with a high level of initiative
  • Detail-oriented, able to manage several complex processes and tasks with a high level of accuracy
  • Demonstrated ability to work independently and deal with changing priorities while meeting tight deadlines
  • Strong interpersonal skills with the ability to build relationships and work in a team environment

#LI-Hybrid

Salary Range:

$75,000.00 - $95,000.00

 

The actual base salary for this position will depend on several factors, including job-related skills, experience, and education. In addition to base pay, eligible employees may participate in a discretionary variable incentive plan; payouts are subject to both individual and company performance.

Please note that this posting is intended to fill an existing vacancy; however, there may be instances where more than one vacancy is available for the same role.

Equal Opportunity Employment and Inclusion – at Foresters Financial, we are committed to sustaining an equal opportunity environment for all job applicants. We embrace Inclusion, Diversity and Equity (IDE) as a core strategic objective for building strong, innovative teams in which all our employees can show up wholly and authentically as themselves.

Foresters Financial strives to provide an accessible candidate experience for prospective employees with different abilities. If you anticipate needing any type of accommodations during the recruitment process, please email [email protected] in advance of your appointment.

Thank you for choosing Foresters. Only those candidates who will be selected for further consideration will be contacted by our Talent Acquisition Team.

Top Skills

Amazon Redshift
Azure DevOps
CI/CD
Databricks
Git
GitHub Copilot
MicroStrategy
Power BI
PySpark
SAS
Snowflake
Spark SQL
T-SQL
HQ

Foresters Financial Toronto, Ontario, CAN Office

789 Don Mills Road, Toronto, Ontario, Canada, M3C 1T9

