
Kraken Digital Asset Exchange

Senior AI Compute Infrastructure Engineer

Posted Yesterday
Remote
Hiring Remotely in Canada
Senior level
The Senior AI Compute Infrastructure Engineer will manage and optimize GPU and accelerator clusters for AI workloads, collaborate with teams, and ensure reliable performance and cost efficiency.
Building the Future of Crypto 

Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.

What makes us different?

Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.

Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.

As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.

Become a Krakenite and build the future of crypto!

Proof of work
 
The team

Kraken is building a dedicated AI Compute and Infrastructure team to power the next generation of model training, inference, evaluation, and experimentation across the exchange. This team sits within engineering leadership and owns the infrastructure layer that lets Kraken run AI workloads with control, speed, reliability, and cost discipline.

The team is responsible for GPU and accelerator infrastructure, cluster operations, scheduling, model serving, observability, capacity planning, and cost-efficient compute at scale. This is the backbone that allows Kraken to train, serve, evaluate, and iterate on AI systems in-house where it matters for privacy, latency, reliability, cost, or product differentiation.

You will join a small, senior, high-impact team working directly with AI/ML researchers, platform engineers, security teams, and product teams. The mandate is simple: make Kraken's AI ambitions real by building compute infrastructure that is fast, dependable, efficient, and production-grade.

 
The opportunity
  • Own and operate GPU and accelerator clusters used for training, inference, evaluation, and experimentation, including drivers, runtimes, kernels, device plugins, node configuration, scheduling primitives, and workload isolation.

  • Design infrastructure that enables Kraken teams to run models locally on GPUs where it is strategically and economically preferable, reducing unnecessary dependency on external providers and containing compute costs.

  • Build and improve scheduling, orchestration, placement, quota management, and utilization systems across heterogeneous accelerator environments.

  • Optimize inference pipelines for latency, throughput, reliability, memory efficiency, and cost using frameworks such as vLLM, Triton Inference Server, TensorRT, or equivalent serving stacks.

  • Partner with ML engineers and researchers to remove bottlenecks in training, evaluation, batch inference, online inference, deployment, and production debugging workflows.

  • Build observability for GPU utilization, memory pressure, queue depth, saturation, token throughput, request latency, failed workloads, capacity pressure, and spend.

  • Drive reliability, incident response, alerting, runbooks, and post-incident improvements for always-on AI compute infrastructure.

  • Evaluate and integrate new hardware, cloud instance families, specialized accelerators, runtimes, schedulers, and serving frameworks as the AI infrastructure landscape evolves.

  • Build tooling that makes GPU usage visible, accountable, and easier for internal teams to consume without needing to become infrastructure experts.

  • Contribute to long-term architecture decisions that balance performance, cost efficiency, scalability, operational simplicity, and production safety.
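To make the scheduling and quota responsibilities above concrete, here is a deliberately minimal sketch of first-fit GPU placement with a per-team quota check. The node names, accelerator labels, and quota figures are illustrative assumptions, not anything from this posting; a production scheduler (e.g. Kubernetes with device plugins) tracks far more state than this.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    gpu_type: str   # accelerator label, e.g. "A100" (illustrative)
    free_gpus: int

@dataclass
class Job:
    team: str
    gpu_type: str
    gpus: int

def place(job: Job, nodes: list, quota: dict, used: dict) -> Optional[str]:
    """First-fit placement with a per-team GPU quota check."""
    if used.get(job.team, 0) + job.gpus > quota.get(job.team, 0):
        return None  # team quota exhausted
    for node in nodes:
        if node.gpu_type == job.gpu_type and node.free_gpus >= job.gpus:
            node.free_gpus -= job.gpus
            used[job.team] = used.get(job.team, 0) + job.gpus
            return node.name
    return None  # no free capacity of the requested accelerator type
```

With two hypothetical nodes (8 free A100s, 4 free H100s) and an 8-GPU quota for a "research" team, a 2-GPU H100 job lands on the H100 node; a follow-up request that would push the team past its quota is rejected before placement is even attempted.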

 
Skills you should HODL
  • 5+ years of infrastructure engineering experience, with significant time spent on GPU compute, ML infrastructure, distributed systems, high-performance computing, or large-scale production platforms.

  • Hands-on experience operating GPU clusters or accelerator-backed infrastructure in production or production-like environments, including scheduling, orchestration, utilization monitoring, and cost optimization.

  • Strong systems engineering fundamentals across Linux, networking, storage, containers, Kubernetes, distributed runtimes, and production debugging.

  • Experience with ML serving frameworks such as vLLM, Triton Inference Server, TensorRT, TorchServe, KServe, Ray Serve, or equivalent systems.

  • Proficiency in Python for infrastructure automation, tooling, debugging, integration, and operational workflows.

  • Practical understanding of performance tradeoffs across batching, concurrency, memory usage, GPU utilization, model size, latency, throughput, availability, and cost.

  • Track record of optimizing compute costs while maintaining clear performance, reliability, and availability expectations.

  • Experience building observable systems with useful metrics, logs, traces, dashboards, alerts, and incident workflows.

  • Comfortable working in high-stakes, always-on environments where uptime, throughput, correctness, and operational discipline are critical.

  • Clear communicator who can translate infrastructure tradeoffs for researchers, product teams, platform engineers, security stakeholders, and engineering leadership.
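The batching tradeoff listed above can be illustrated with a toy model: batched inference pays a fixed launch overhead plus a per-sequence cost, so growing the batch raises per-request latency while raising aggregate throughput sub-linearly. The overhead and per-sequence numbers below are made up for illustration only.

```python
def serving_tradeoff(batch_size, overhead_ms=10.0, step_ms_per_seq=2.0):
    """Toy linear cost model of batched inference.
    Returns (batch latency in ms, throughput in requests/second)."""
    latency_ms = overhead_ms + step_ms_per_seq * batch_size
    throughput_rps = batch_size / (latency_ms / 1000.0)
    return latency_ms, throughput_rps
```

Under these assumed constants, batch size 1 gives 12 ms latency at roughly 83 req/s, while batch size 32 gives 74 ms at roughly 432 req/s: about 6x the latency buys about 5x the throughput, which is exactly the kind of tradeoff this role would reason about.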

 
Nice to haves
  • Experience at a frontier AI lab, hyperscaler, high-frequency trading firm, research platform, or high-scale ML organization.

  • Familiarity with custom silicon or specialized accelerators such as TPUs, AWS Trainium, Gaudi, or similar platforms.

  • Background in capacity planning, procurement input, reserved capacity strategy, cloud accelerator economics, or GPU fleet cost management.

  • Experience with distributed training frameworks such as DeepSpeed, Megatron-LM, FSDP, Ray, or equivalent systems.

  • Experience debugging CUDA, NCCL, kernel, driver, runtime, memory, networking, or low-level performance issues.

  • Experience with Rust, C++, Go, CUDA, or other systems languages used for performance-critical infrastructure.

  • Crypto, financial services, trading infrastructure, or security-sensitive production infrastructure experience.
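One core quantity behind the GPU fleet cost management mentioned above is the cost per useful GPU-hour: an accelerator billed hourly but sitting partly idle inflates the effective rate. The sketch below is generic arithmetic; the rate and utilization figures are purely illustrative.

```python
def effective_gpu_cost(hourly_rate_usd, utilization):
    """Cost per *utilized* GPU-hour. Idle capacity inflates the
    effective rate: $2.00/hr at 40% utilization is effectively
    $5.00 per useful GPU-hour."""
    if not 0.0 < utilization <= 1.0:
        raise ValueError("utilization must be in (0, 1]")
    return hourly_rate_usd / utilization
```

Raising utilization from 40% to 80% on the same instance halves the effective cost, which is why utilization observability and scheduling quality feed directly into compute spend.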

Unless a specific application deadline is stated in the job posting, applications are accepted on an ongoing basis.

Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.

We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.

Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out the candidates with the right abilities, knowledge, and skills considered the most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!

We may ask candidates to complete job-related skills or work-style assessments as part of our hiring process. These assessments are designed to evaluate competencies relevant to the role and are applied consistently across candidates for similar positions. Assessment results are considered alongside other relevant information, such as experience and interviews, and are not the sole basis for any employment decision.

As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status, or any other protected characteristic as outlined by federal, state, or local laws.

Stay in the know

Follow us on Twitter

Learn on the Kraken Blog

Connect on LinkedIn


Candidate Privacy Notice
