d-Matrix

ML Compiler Architect, Senior Principal

Hybrid
Toronto, ON
Expert/Leader

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.

We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location:

Hybrid, working onsite at our Toronto, Ontario, Canada headquarters 3-5 days per week.

Role: Software Compiler Architect – MLIR/LLVM for Cloud Inference

What You Will Do:

As a hands-on Front-End Software Compiler Architect focused on cloud-based AI inference, you will drive the design and implementation of a scalable MLIR-based compiler framework optimized for deploying large-scale NLP and transformer models in cloud environments. You will architect the end-to-end software pipeline that translates high-level AI models into efficient, low-latency executables on a distributed, multi-chiplet hardware platform featuring heterogeneous compute elements such as in-memory tensor processors and vector engines, backed by a hierarchical memory system.

Your compiler designs will enable dynamic partitioning, scheduling, and deployment of inference workloads across cloud-scale infrastructure, supporting both statically compiled and runtime-optimized execution paths. You will focus on compiler strategies that minimize inference latency, maximize throughput, and make efficient use of compute and memory resources in data center environments.
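
As a hedged illustration of what architecting such a pipeline looks like in MLIR terms, the sketch below composes passes with a PassManager. Only the canonicalizer and CSE passes are stock MLIR; the commented target-specific stages are hypothetical placeholders named after the responsibilities in this posting, not real d-Matrix APIs.

```cpp
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Support/LogicalResult.h"
#include "mlir/Transforms/Passes.h"

// Assemble and run a lowering pipeline over a module holding the imported model.
mlir::LogicalResult runInferencePipeline(mlir::ModuleOp module) {
  mlir::PassManager pm(module.getContext());

  // Generic cleanup before target-specific work (stock MLIR passes).
  pm.addPass(mlir::createCanonicalizerPass());
  pm.addPass(mlir::createCSEPass());

  // Hypothetical target-specific stages, in the order the posting describes:
  // pm.addPass(createModelPartitionPass());       // split the graph across chiplets
  // pm.addPass(createOperatorFusionPass());       // fuse producer/consumer ops
  // pm.addPass(createMemoryTilingPass());         // tile tensors into on-chip memory
  // pm.addPass(createLatencyAwareSchedulePass()); // order work for low latency

  return pm.run(module);
}
```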

You will collaborate cross-functionally with systems architects, ML framework teams, runtime developers, performance engineers, and cloud orchestration groups to ensure seamless integration and optimized inference delivery at scale.

Key Responsibilities:

• Architect the MLIR-based compiler for cloud inference workloads, focusing on efficient mapping of large-scale AI models (e.g., LLMs and other transformer architectures, ingested via front ends such as Torch-MLIR) onto distributed compute and memory hierarchies.

• Lead the development of compiler passes for model partitioning, operator fusion, tensor layout optimization, memory tiling, and latency-aware scheduling (see the fusion sketch after this list).

• Design support for hybrid offline/online compilation and deployment flows with runtime-aware mapping, allowing for adaptive resource utilization and load balancing in cloud scenarios.

• Define compiler abstractions that interoperate efficiently with runtime systems, orchestration layers, and cloud deployment frameworks.

• Drive scalability, reproducibility, and performance through well-designed IR transformations and distributed execution strategies.

• Mentor and guide a team of compiler engineers to deliver high-performance inference-optimized software stacks.
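
To make the fusion responsibility concrete, here is a minimal sketch of the kind of rewrite pattern such a pass is built from, using MLIR's standard pattern-rewriting machinery. The ops (mydialect::MatMulOp, mydialect::AddOp, mydialect::FusedMatMulAddOp) and their accessors are hypothetical stand-ins for an accelerator dialect, not an actual d-Matrix interface.

```cpp
#include "mlir/IR/PatternMatch.h"
#include "mlir/Support/LogicalResult.h"

// Fold a matmul feeding an elementwise add (a bias) into one fused op.
struct FuseMatMulAdd : mlir::OpRewritePattern<mydialect::AddOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mydialect::AddOp addOp,
                  mlir::PatternRewriter &rewriter) const override {
    // Match: the add's left operand is produced by a matmul with a single use.
    auto matmul = addOp.getLhs().getDefiningOp<mydialect::MatMulOp>();
    if (!matmul || !matmul->hasOneUse())
      return mlir::failure();

    // Rewrite: replace the matmul+add pair with one fused op that carries
    // the bias as a third operand.
    rewriter.replaceOpWithNewOp<mydialect::FusedMatMulAddOp>(
        addOp, addOp.getType(), matmul.getLhs(), matmul.getRhs(),
        addOp.getRhs());
    return mlir::success();
  }
};
```

In practice such patterns are collected into a RewritePatternSet and driven to a fixed point with applyPatternsAndFoldGreedily; layout, tiling, and scheduling passes follow the same pass and pattern structure.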

What You Will Bring:

• BS with 15+ years, MS with 12+ years, or PhD with 10+ years in Computer Science or Electrical Engineering, including 12+ years of experience in front-end compiler and systems software development with a focus on ML inference.

• Deep experience in designing or leading compiler efforts using MLIR, LLVM, Torch-MLIR, or similar frameworks.

• Strong understanding of model optimization for inference: quantization, fusion, tensor layout transformation, memory hierarchy utilization, and scheduling (see the quantization sketch after this list).

• Expertise in deploying ML models to heterogeneous compute environments, with specific attention to latency, throughput, and resource scaling in cloud systems.

• Proven track record working with AI frameworks (e.g., PyTorch, TensorFlow), ONNX, and hardware backends.

• Experience with cloud infrastructure, including resource provisioning, distributed execution, and profiling tools.
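
As a small, self-contained illustration of the quantization item above, the sketch below shows symmetric per-tensor int8 quantization in plain C++. Production inference stacks use calibrated activation ranges and per-channel scales; this only demonstrates the core arithmetic.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantizedTensor {
  std::vector<int8_t> data;
  float scale; // dequantized value ~= data[i] * scale
};

// Map [-maxAbs, maxAbs] onto the symmetric int8 range [-127, 127].
QuantizedTensor quantizeSymmetric(const std::vector<float> &x) {
  float maxAbs = 0.0f;
  for (float v : x)
    maxAbs = std::max(maxAbs, std::fabs(v));

  float scale = (maxAbs == 0.0f) ? 1.0f : maxAbs / 127.0f;

  QuantizedTensor q{{}, scale};
  q.data.reserve(x.size());
  for (float v : x)
    q.data.push_back(static_cast<int8_t>(std::lround(v / scale)));
  return q;
}
```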

Preferred Qualifications:

• Experience targeting inference accelerators (AI ASICs, FPGAs, GPUs) in cloud-scale deployments.

• Knowledge of cloud deployment orchestration (e.g., Kubernetes, containerized AI workloads).

• Strong leadership skills with experience mentoring teams and collaborating with large-scale software and hardware organizations.

• Excellent written and verbal communication; capable of presenting complex compiler architectures and trade-offs to both technical and executive stakeholders.

This role is a cornerstone of our cloud AI software strategy. You'll shape the way inference workloads are deployed, optimized, and scaled across data center infrastructure.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

Top Skills

Kubernetes
LLVM
MLIR
ONNX
PyTorch
TensorFlow
Torch-MLIR
