
Cerebras Systems

Senior Runtime Engineer

Reposted 12 Days Ago
In-Office
2 Locations
Senior level

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role

We are building the next generation of large-scale AI training systems that power pre-training, post-training, distillation, fine-tuning, and reinforcement learning workloads at unprecedented scale and efficiency.

You will design and develop high-performance distributed software that orchestrates massive compute and data pipelines across heterogeneous clusters. Your work will push the limits of concurrency, throughput, and scalability, enabling efficient execution of models with trillions of parameters. This role sits at the intersection of systems engineering and machine learning performance, demanding both architectural depth and low-level implementation skills. You will help shape how models are trained and optimized end-to-end, from data ingestion to distributed execution, across cutting-edge hardware platforms.

Responsibilities
  • Design and implement distributed runtime components to efficiently manage large-scale model training, fine-tuning, and RL workloads.
  • Develop and optimize high-performance data and communication pipelines that fully utilize CPU, memory, storage, and network resources.
  • Enable scalable execution across multiple compute nodes, ensuring high concurrency and minimal bottlenecks.
  • Collaborate closely with ML and compiler teams to integrate new model architectures, training regimes, and hardware-specific optimizations.
  • Diagnose and resolve complex performance issues across the software stack using profiling and instrumentation tools.
  • Contribute to overall system design, architecture reviews, and roadmap planning for large-scale AI workloads.
Skills & Qualifications
  • 3+ years of experience developing high-performance or distributed system software.
  • Strong programming skills in C/C++, with expertise in multi-threading, memory management, and performance optimization.
  • Experience with distributed systems, networking, or inter-process communication.
  • Solid understanding of data structures, concurrency, and system-level resource management (CPU, I/O, and memory).
  • Proven ability to debug, profile, and optimize code at every scale, from threads to clusters.
  • Bachelor’s, Master’s, or equivalent experience in Computer Science, Electrical Engineering, or related field.
Preferred Skills & Qualifications
  • Familiarity with machine learning training or inference pipelines, especially distributed training and large-model scaling.
  • Exposure to Python and PyTorch, particularly in the context of model training or performance tuning.
  • Experience with compiler internals, custom hardware interfaces, or low-level protocol design.
  • Prior work on high-performance clusters, HPC systems, or custom hardware/software co-design.
  • Deep curiosity about how to unlock new levels of performance for large-scale AI workloads.
Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Thrive in a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and join us at the forefront of groundbreaking advances in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Top Skills

C/C++
Python
PyTorch

Cerebras Systems Toronto, Ontario, CAN Office

150 King St W, Toronto, Ontario, Canada, M5H 1J9


