The Research Engineer will design scalable evaluation systems for multimodal generative AI models, integrating metrics and automated pipelines for model assessment and improvement.
About Luma AI
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
About the Role
Luma is pushing the boundaries of generative AI, building tools that redefine how visual content is created. We're seeking a Research Engineer to design and scale the infrastructure that powers our model evaluation efforts. This role is about building the pipelines, metrics, and automated systems that close the loop between model output, evaluation, and improvement. You'll work across research, engineering, and product teams to ensure our models are measured rigorously, consistently, and in ways that directly inform development.
Responsibilities
- Design and implement scalable pipelines for automated evaluation of generative models, with a focus on visual and multimodal outputs (image, video, text, audio).
- Develop novel metrics and evaluation models that capture qualities like fidelity, coherence, temporal consistency, and alignment with human intent.
- Integrate evaluation signals into training loops (including reinforcement learning and reward modeling) to continuously improve model performance.
- Build infrastructure for large-scale regression testing, benchmarking, and monitoring of multimodal generative models.
- Collaborate with researchers running human studies to translate human evaluation frameworks into automated or semi-automated systems.
- Partner with model researchers to identify failure cases and build targeted evaluation harnesses.
- Maintain dashboards, reporting tools, and alerting systems to surface evaluation results to stakeholders.
- Stay current with emerging evaluation techniques in generative AI, multimodal LLMs, and perceptual quality assessment.
Qualifications
- Master's or PhD in Computer Science, Machine Learning, or a related technical field (or equivalent industry experience).
- 5+ years of experience building ML evaluation systems, model pipelines, or large-scale infrastructure.
- Hands-on experience working with visual data (images and/or video), including evaluation, modeling, or data preparation.
- Proficiency in Python and ML frameworks (PyTorch, JAX, or TensorFlow).
- Familiarity with human-in-the-loop evaluation workflows and how to scale them with automation.
- Strong background in machine learning, with experience in generative models (diffusion models, LLMs, multimodal architectures).
- Strong software engineering skills (CI/CD, testing, data pipelines, distributed systems).
Nice to Have
- Experience with reinforcement learning or reward modeling.
- Prior work on perceptual metrics, multimodal evaluation benchmarks, or retrieval-based evaluation.
- Background in large-scale model training or evaluation infrastructure.
- Experience designing metrics for perceptual quality.
- Familiarity with creative media workflows (film, VFX, animation, digital art).
- Contributions to open-source evaluation libraries or benchmarks.
Top Skills
JAX
Python
PyTorch
TensorFlow