Case study

How Meraki Accelerates AI Development with ClearML: Streamlining Experiment Tracking, Workflow Automation, and Scalable MLOps

March 24, 2025

Nate Fung, ML Engineer, Cisco Meraki

Cisco Meraki is a cloud-managed IT solutions company that provides networking, security, and IoT products designed for simplicity, scalability, and centralized management. As a subsidiary of Cisco, Meraki offers a suite of hardware and software solutions, including wireless access points, switches, security appliances (firewalls), smart cameras, and endpoint management tools—all managed through the Meraki Dashboard, a cloud-based platform.

Meraki is widely used by enterprises, small businesses, schools, hospitals, and governments to simplify IT operations, enhance security, and improve network performance with zero-touch deployment and automation. This is their story of how they use ClearML:

Meraki has been using ClearML to manage experiments and automate workflows for AI projects running both on-prem and in the cloud, from computer vision research and LLMs for Natural Language Understanding to workloads that evaluate model performance and automate RAG pipelines.

We’ve found that ClearML helps us improve collaboration among teams to accelerate AI development and streamlines workflow automation for experiments and CI/CD pipelines for production.

As a result, more and more teams are developing AI features, and we are building automated, scalable ML workflows faster while making efficient use of our compute resources.

Challenges Before ClearML

AI development is expanding across many teams at Meraki. Researchers are discovering innovative solutions using AI, product teams are incorporating AI features into their products, and platform and analytics teams are using data and advanced analytics to make business decisions and improve operational efficiency.

Some teams have found it challenging to get started with their AI projects and to experiment with their data because they don't know which tools to use or how to manage them, while other teams manage their own tools and have their own processes that they want to keep improving.

Some teams are limited to running experiments on their personal computers and want easy access to larger compute resources, while others already have specialized setups of powerful on-prem machines or regularly use cloud-based compute. It was important to unify all three types of resources (personal, on-prem, and cloud) for ease of access and operational efficiency.

To bridge the gap between our teams, we believed that a centralized ML platform, with tooling that standardizes usage, reduces duplicated effort, and has a gentle learning curve, would empower them to get started more easily and continue advancing their AI projects.

We trialed Kubeflow, but its complexity and maintenance burden were challenging: it required dedicated DevOps teams to manage, and its steep learning curve made it difficult to onboard new users. So we turned to ClearML, an open source platform for building, training, and deploying AI/ML models.

With ClearML, we found that those complexities and learning curves were reduced dramatically. It seamlessly integrates with our AWS infrastructure and works right out of the box. It checked off a lot of requirements including robust experiment tracking for on-prem and cloud-based projects, dataset management, workflow automation, and CPU and GPU resource management. It’s a great ML platform solution that can support our growing number of AI projects.

Zeroing in on Specific ClearML Features

For example, being able to easily track and monitor experiments both on-prem and in the cloud was a key feature for us since some teams have specialized environments and others require the convenience of running workloads with autoscaling.

Experiment tracking provides a structured way to compare trials within an experiment and guide the project's research strategy, for example by showing which hyperparameters have the largest impact on model accuracy and how to optimize the models and experiments. With just a few lines of code we can start monitoring existing local experiments right away, and with one API call we can run an entire experiment on a remote worker with more compute and memory. ClearML also made it easier to collaborate and share metrics and artifacts from each trial, since they are all centralized in the same project folder and associated with their ClearML Task.
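That pattern can be sketched in a few lines. This is a minimal, hypothetical example, assuming a reachable ClearML server and an agent queue; the project, task, and queue names are illustrative, not Meraki's actual setup.

```python
from typing import Optional


def default_hparams() -> dict:
    # Hyperparameters we want ClearML to record; remote/UI overrides apply here.
    return {"learning_rate": 1e-4, "batch_size": 32, "epochs": 10}


def run_experiment(remote_queue: Optional[str] = None) -> None:
    # Import deferred so this sketch can be read without ClearML installed.
    from clearml import Task

    # One call registers the run: code, git state, installed packages,
    # console output, and reported scalars are all captured automatically.
    task = Task.init(project_name="NLU Research", task_name="baseline")

    # connect() logs the dict and applies any overrides set in the web UI.
    hparams = task.connect(default_hparams())

    if remote_queue:
        # The "one API call" mentioned above: re-launch this same script
        # on a remote worker pulling from `remote_queue`.
        task.execute_remotely(queue_name=remote_queue, exit_process=True)

    # Placeholder training loop, reporting one scalar per epoch.
    for epoch in range(hparams["epochs"]):
        task.get_logger().report_scalar(
            title="loss", series="train", value=1.0 / (epoch + 1), iteration=epoch
        )
```

Run locally, the same script tracks in place; passing a queue name re-enqueues it on a larger remote worker with no other code changes.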

Workflow automation improves our MLOps process and empowers our users to create scalable workflows. With a few simple CLI commands, users can run their workloads on remote workers without having to worry about setting up Docker containers or provisioning compute resources, because ClearML manages most of that for us.
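The `clearml-task` CLI wraps roughly the pattern below; this is a hypothetical programmatic equivalent, assuming a configured ClearML server and a worker listening on the `default` queue (the project, task, and script names are illustrative).

```python
def enqueue_script(project: str, name: str, script: str, queue: str = "default"):
    # Import deferred so this sketch can be read without ClearML installed.
    from clearml import Task

    # Create a draft Task from an existing script; ClearML captures the code
    # and resolves its requirements, so no per-run Docker setup is needed.
    task = Task.create(project_name=project, task_name=name, script=script)

    # Hand the Task to whichever remote worker is pulling from `queue`.
    Task.enqueue(task, queue_name=queue)
    return task


# e.g. enqueue_script("RAG Evaluation", "nightly-eval", "evaluate.py")
```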

For example, data scientists can quickly prototype their workflows and then collaborate more effectively with ML engineers when it comes time to productionize and optimize them. With the flexibility of autoscaling, pipelines, and CI/CD integrations, it's straightforward to prototype ML workflows for proofs of concept and then scale them for production workloads. Resource management helps us utilize CPU and GPU resources efficiently and lets teams develop at any time of day.

Editor’s Note:

ClearML Autoscalers spin up additional cloud compute as needed, supplementing your existing infrastructure (bare metal, Kubernetes, cloud, or hybrid) with compute from AWS, GCP, or Azure at virtually no additional overhead. When the extra compute is no longer needed, the Autoscalers spin it down to keep costs in check.

ClearML Pipelines let customers chain ClearML Tasks into automatically executed pipelines with custom logic between Tasks, and simplify moving data between Tasks within a pipeline. Pipelines are built so that different Tasks can run on different compute resources, matching compute to each Task's needs.
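A hypothetical two-step pipeline in that shape might look like the sketch below. It assumes the referenced Tasks already exist in the project and that workers listen on the named queues; all names are illustrative.

```python
def build_pipeline():
    # Import deferred so this sketch can be read without ClearML installed.
    from clearml import PipelineController

    pipe = PipelineController(
        name="train-and-eval", project="NLU Research", version="1.0"
    )
    pipe.set_default_execution_queue("default")

    # Each step clones an existing ClearML Task; `parents` defines the DAG,
    # and steps can target different queues so CPU preprocessing and GPU
    # training each run on matching hardware.
    pipe.add_step(
        name="preprocess",
        base_task_project="NLU Research",
        base_task_name="preprocess-data",
    )
    pipe.add_step(
        name="train",
        base_task_project="NLU Research",
        base_task_name="train-model",
        parents=["preprocess"],
        execution_queue="gpu",
    )
    return pipe


# build_pipeline().start() launches the controller and schedules each step
# as its queue's workers become free.
```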

The ClearML team provided great support via Slack and was helpful and responsive, and setup and integration with our own AWS infrastructure was smooth.

Here are some testimonials from our teams:

“I’ve been really impressed with ClearML so far—it’s an amazing platform with a lot of fantastic features. Here are the ones that have stood out to me:

  • Logging: The logging dashboards are incredibly helpful. They not only allow us to display the results of our experiments but also serve as a valuable reference for the product team to understand our work better.
  • Comparison Tool: The ability to compare different experiments side by side has been invaluable in analyzing our models and algorithms, especially when combined with the detailed logs page.
  • Pipelines: While we haven’t fully utilized this feature to its potential, it’s a powerful tool that we’re planning to use to automate our training and testing processes, enhancing visibility into the flow of information and organizing our toolchain, which is currently spread across different GitLab pipelines.
  • Documentation and Support: The documentation is great, and the support team has been doing an outstanding job.

Overall, it’s been great working with it.”

“ClearML has actually been a huge asset for our team. One of the big highlights is how it helped us stabilize and track our datasets over multiple versions; this ensures we know exactly which dataset version corresponds to which experiment, which made the entire workflow more reliable. We also really appreciate ClearML’s experiment tracking capabilities for model training. It’s been straightforward to compare different model versions, monitor training evolutions over time, and evaluate performance with consistent, centralized metrics. Being able to quickly see how changes in hyperparameters or data affect the outcomes has saved us a lot of time.”
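The dataset-versioning flow this testimonial describes can be sketched with ClearML's Dataset API. This is a hypothetical example, assuming a reachable ClearML server; the dataset and project names are illustrative.

```python
from typing import Optional


def publish_dataset_version(local_dir: str, parent_id: Optional[str] = None) -> str:
    # Import deferred so this sketch can be read without ClearML installed.
    from clearml import Dataset

    # Each new version is a child of the previous one, so only changed
    # files are uploaded and the full lineage is preserved.
    ds = Dataset.create(
        dataset_name="support-tickets",
        dataset_project="NLU Research",
        parent_datasets=[parent_id] if parent_id else None,
    )
    ds.add_files(path=local_dir)
    ds.upload()
    ds.finalize()
    return ds.id


# An experiment then pins an exact version, so each run records precisely
# which data it used:
#   path = Dataset.get(dataset_id=version_id).get_local_copy()
```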


Figure 1: Hardware Analytics team training models and using SHAP values for explainability in ClearML

Looking into the Future

ClearML helps us get over the hurdles of getting started with AI projects and advancing existing ones by providing comprehensive experiment tracking and workflow automation. With ClearML, we’re able to improve AI development for many teams at Meraki to carry out our AI vision more effectively.


If you’d like to see ClearML in action, please request a demo to speak to someone on our sales team.
