Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About the Role
We are looking for a Software Engineer to join the ML Integration and Quality team at Cerebras. This team sits at the intersection of machine learning infrastructure, distributed systems, and hardware/software co-design.
In this role, you will help integrate and validate the software stack that powers the Cerebras AI platform, ensuring large-scale ML workloads run reliably and efficiently across our systems. You will work closely with engineers across runtime, compiler, kernel, and hardware teams to debug complex issues, improve automation, and strengthen the reliability of our AI infrastructure.
This is an excellent opportunity for engineers who enjoy working across the stack, debugging complex systems, and improving the reliability of large-scale AI platforms.
Responsibilities
- Integrate and validate software components across the Cerebras AI platform.
- Collaborate with engineers across ML runtime, compiler, kernel, and hardware teams to ensure reliable feature integration.
- Investigate and debug complex issues across distributed systems and large-scale ML workloads.
- Build automation tools and infrastructure to support integration testing, system validation, and debugging workflows.
- Develop and maintain testbeds used to validate system performance and reliability.
- Identify system bottlenecks, failure points, and edge cases that impact ML workload performance.
- Contribute to test plans and validation strategies for new features and platform capabilities.
- Improve observability, diagnostics, and debugging workflows across the ML software stack.
- Work with product and engineering teams to ensure high-quality releases of the Cerebras inference platform.
Minimum Skills & Qualifications
- ~5 years of experience in software engineering, systems engineering, or infrastructure development.
- Strong programming skills in Python, C++, Go, or similar languages.
- Experience debugging complex systems or distributed software environments.
- Familiarity with systems-level development, infrastructure tooling, or platform integration.
- Experience building automation tools, testing frameworks, or internal developer tooling.
- Strong problem-solving skills and the ability to investigate issues across multiple system layers.
- Excellent communication and collaboration skills.
Preferred Skills
- Experience working with machine learning infrastructure or ML model deployment.
- Familiarity with LLM or multimodal model workloads.
- Experience with distributed systems, cloud infrastructure, or large-scale compute clusters.
- Exposure to performance debugging, profiling, or system observability tools.
- Experience with microservices, containerized environments, or cluster orchestration.
- Exposure to hardware accelerators, compilers, or ML frameworks.
- This role follows a hybrid schedule, requiring in-office presence 3 days per week. Please note, fully remote work is not an option.
- Office locations: Sunnyvale, CA or Toronto, ON.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2026.
Apply today and be part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Cerebras Systems Toronto, Ontario, CAN Office
150 King St W, Toronto, Ontario, Canada, M5H 1J9