
Cerebras Systems

Senior ML Software Engineer - Integration & Quality

Reposted 9 Days Ago
In-Office
2 Locations
Senior level

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of capacity, transforming key workloads with ultra-high-speed inference.

Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

About the Role

We are looking for a Software Engineer to join the ML Integration and Quality team at Cerebras. This team sits at the intersection of machine learning infrastructure, distributed systems, and hardware/software co-design.

In this role, you will help integrate and validate the software stack that powers the Cerebras AI platform, ensuring large-scale ML workloads run reliably and efficiently across our systems. You will work closely with engineers across runtime, compiler, kernel, and hardware teams to debug complex issues, improve automation, and strengthen the reliability of our AI infrastructure.

This is an excellent opportunity for engineers who enjoy working across the stack, debugging complex systems, and improving the reliability of large-scale AI platforms.

Responsibilities

  • Integrate and validate software components across the Cerebras AI platform.
  • Collaborate with engineers across ML runtime, compiler, kernel, and hardware teams to ensure reliable feature integration.
  • Investigate and debug complex issues across distributed systems and large-scale ML workloads.
  • Build automation tools and infrastructure to support integration testing, system validation, and debugging workflows.
  • Develop and maintain testbeds used to validate system performance and reliability.
  • Identify system bottlenecks, failure points, and edge cases that impact ML workload performance.
  • Contribute to test plans and validation strategies for new features and platform capabilities.
  • Improve observability, diagnostics, and debugging workflows across the ML software stack.
  • Work with product and engineering teams to ensure high-quality releases of the Cerebras inference platform.

Minimum Skills & Qualifications

  • ~5 years of experience in software engineering, systems engineering, or infrastructure development.
  • Strong programming skills in Python, C++, Go, or similar languages.
  • Experience debugging complex systems or distributed software environments.
  • Familiarity with systems-level development, infrastructure tooling, or platform integration.
  • Experience building automation tools, testing frameworks, or internal developer tooling.
  • Strong problem-solving skills and the ability to investigate issues across multiple system layers.
  • Excellent communication and collaboration skills.

Preferred Skills

  • Experience working with machine learning infrastructure or ML model deployment.
  • Familiarity with LLM or multimodal model workloads.
  • Experience with distributed systems, cloud infrastructure, or large-scale compute clusters.
  • Exposure to performance debugging, profiling, or system observability tools.
  • Experience with microservices, containerized environments, or cluster orchestration.
  • Exposure to hardware accelerators, compilers, or ML frameworks.

Location
  • This role follows a hybrid schedule, requiring in-office presence 3 days per week. Please note that fully remote work is not an option.
  • Office locations: Sunnyvale, CA or Toronto, ON. 

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2026.

Apply today and join us at the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Top Skills

C++
Go
Microservices
ML Frameworks
Python

Cerebras Systems Toronto, Ontario, CAN Office

150 King St W, Toronto, Ontario, Canada, M5H 1J9
