Fleet performance

Compare machine performance across your installed base

Most OEMs can see what happens on a single machine or customer site. But comparing performance across your fleet is much harder. StriData helps you standardize KPIs across installations, so you can see what performs best, where issues repeat, and what to improve next.

Request Quick Scan
A structured 1–2 day session. Clear next steps for your analytics setup.
Prefer to talk first? Book a 30-minute call
The problem

Why fleet comparison usually breaks down

Most OEMs already have machine data per site. The challenge is turning that into one consistent view across the installed base.

No shared KPI definition

Availability, downtime, or performance often mean different things across teams, machines, or customer sites. That makes comparison unreliable from the start.

You can’t compare across customers consistently

Even when the data exists, it is often structured differently per installation. That makes it hard to see which sites, machine types, or configurations actually perform better.

Analytics gets rebuilt site by site

Instead of one reusable model, dashboards are often created per customer or project. That slows down roll-out and prevents fleet-wide learning.

What fleet analytics requires

What fleet analytics actually requires

Fleet analytics is not just a dashboard. It starts with a shared structure that makes comparison possible across machines, customers, and sites.

1. Agree on KPI definitions

Decide what availability, downtime, utilization, or output should mean across the fleet, so you are not comparing different interpretations of the same metric.

2. Standardize the logic once

Translate machine data into one reusable KPI model instead of redefining reporting logic per customer, site, or dashboard project.

3. Apply it across installations

Use the same structure across machines and customer sites, while still allowing room for local details where they actually matter.

4. Compare and act

See where performance differs, identify recurring patterns, and use those insights to improve product decisions, service quality, and operational consistency.

In practice: fleet analytics becomes useful when every site is treated not as a separate reporting project, but as part of one shared performance model.
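To make step 2 concrete, here is a minimal sketch of what "standardize the logic once" can look like. It assumes machine events have already been normalized into a simple shared schema; the schema, names, and numbers are illustrative, not StriData's actual implementation.

```python
# One KPI definition, applied to every site.
from dataclasses import dataclass

@dataclass
class StateRecord:
    site: str
    machine: str
    state: str    # "running", "ready", "down", ...
    hours: float  # time spent in this state within the reporting period

def availability(records: list[StateRecord]) -> float:
    """Share of scheduled time spent running or ready for production."""
    scheduled = sum(r.hours for r in records)
    productive = sum(r.hours for r in records if r.state in ("running", "ready"))
    return productive / scheduled if scheduled else 0.0

# The same function is reused for every installation instead of
# re-implementing the KPI per customer dashboard.
fleet = [
    StateRecord("site_a", "m01", "running", 140.0),
    StateRecord("site_a", "m01", "down", 20.0),
    StateRecord("site_b", "m07", "running", 120.0),
    StateRecord("site_b", "m07", "ready", 10.0),
    StateRecord("site_b", "m07", "down", 30.0),
]

for site in sorted({r.site for r in fleet}):
    site_records = [r for r in fleet if r.site == site]
    print(site, f"availability = {availability(site_records):.0%}")
```

Because the KPI logic is defined once, onboarding a new installation means mapping its data into the shared schema, not starting a new reporting project.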
What this looks like

A fleet view that makes performance comparable

Once KPI logic is standardized, performance can be compared across machines, customer sites, and configurations. That gives product and operations teams one shared view of what performs best, where issues repeat, and where action is needed first.

  • Compare availability and downtime across all customer sites
  • Identify which machines or configurations perform best
  • See recurring issues across the installed base
  • Use one KPI model across the entire fleet
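As an illustration of spotting recurring issues: once downtime is categorized consistently, a straightforward aggregation across sites is enough to separate fleet-wide problems from local ones. A hedged sketch with made-up sites and cause labels:

```python
# Aggregate categorized downtime by cause across the whole fleet.
from collections import Counter

downtime_events = [
    # (site, cause, hours) -- illustrative data only
    ("site_a", "changeover", 6.0),
    ("site_a", "sensor_fault", 4.5),
    ("site_b", "sensor_fault", 7.0),
    ("site_c", "material_wait", 3.0),
    ("site_c", "sensor_fault", 5.5),
]

hours_by_cause = Counter()
sites_per_cause: dict[str, set[str]] = {}
for site, cause, hours in downtime_events:
    hours_by_cause[cause] += hours
    sites_per_cause.setdefault(cause, set()).add(site)

# A cause that appears at many sites points to a fleet-wide issue,
# not a local one.
for cause, hours in hours_by_cause.most_common():
    print(f"{cause}: {hours:.1f} h across {len(sites_per_cause[cause])} sites")
```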

What this gives you

A shared performance view your product, service, and operations teams can all work from.

1. One KPI structure across customers and sites
2. Faster visibility into underperforming machines
3. A practical basis for product and service improvement
Live Power BI example: fleet performance dashboard
KPI examples

KPIs that need to be consistent across your fleet

Fleet comparison only works when KPIs mean the same thing everywhere. These are typical examples OEMs standardize first.

Availability

The percentage of scheduled time a machine is running or ready for production.

Downtime

The total time a machine is not available, ideally categorized by cause.

Utilization

The share of available machine time that is actually used for production.

MTBF

Mean time between failures, used to identify recurring reliability issues.

Output

The number of produced units in a defined period, measured consistently across machines.

Changeover time

The time required to switch between products or runs, often relevant for benchmarking.
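Written out, most of these KPIs are simple ratios; the hard part is agreeing on the inputs. A minimal sketch, assuming hours per reporting period have already been derived from normalized machine data (all numbers illustrative):

```python
# The KPI definitions above, written once as explicit formulas.

def availability(productive_hours: float, scheduled_hours: float) -> float:
    """Running-or-ready time as a share of scheduled time."""
    return productive_hours / scheduled_hours

def utilization(production_hours: float, available_hours: float) -> float:
    """Share of available time actually used for production."""
    return production_hours / available_hours

def mtbf(operating_hours: float, failure_count: int) -> float:
    """Mean time between failures, in hours."""
    return operating_hours / failure_count

print(f"availability = {availability(152.0, 168.0):.0%}")  # 90%
print(f"utilization  = {utilization(130.0, 152.0):.0%}")   # 86%
print(f"mtbf         = {mtbf(152.0, 4):.0f} h")            # 38 h
```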

Proof in practice

What fleet-level visibility looks like in practice

1,500+
machines connected across the installed base
40+
countries with active machine data
5 min
to onboard a new customer site

TMI operates more than 1,500 machines across 40 countries. Structuring that machine data into one reusable analytics layer gives a consistent view of performance across installations, making it possible to compare machines, identify patterns, and improve both product and service decisions.

Read the full case study →
Frequently Asked Questions

Questions about fleet performance

The main challenge is usually not whether machine data exists, but how to make performance comparable across customers, sites, and machine types. These are the questions OEMs typically ask first.

What if the raw data looks different per machine or site?
That is common. The goal is not to force every machine into the same raw data format, but to map different machine structures into one shared KPI model where comparison makes sense. Raw data can differ, as long as the performance logic becomes consistent.
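As a purely illustrative sketch of that mapping idea: site-specific state codes can be translated into a shared vocabulary before any KPI is computed, so the raw formats never have to match (all codes invented for the example):

```python
# Per-site translation tables into one shared state vocabulary.
SITE_STATE_MAPS = {
    "site_a": {"RUN": "running", "STOP_PLAN": "down", "STOP_UNPLAN": "down"},
    "site_b": {"prod": "running", "idle": "ready", "fault": "down"},
}

def to_shared_state(site: str, raw_state: str) -> str:
    """Translate a site-specific state code into the shared vocabulary."""
    return SITE_STATE_MAPS[site][raw_state]

print(to_shared_state("site_a", "RUN"))    # running
print(to_shared_state("site_b", "fault"))  # down
```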
Do all customer sites need the same connectivity setup?
Not necessarily. What matters most is that the required data can be accessed and structured consistently. The analytics layer sits on top of the connectivity and data sources you already use, rather than requiring every customer to work in exactly the same way.
How do you decide which KPIs to standardize first?
Usually by starting with the decisions you want to support. From there, KPI logic is defined in a way that is practical, reusable, and realistic across machine types, customers, and service contexts. The goal is not perfect theory, but a model that supports real comparison.

Start with one KPI model for your full fleet

If you already collect machine data but still cannot compare performance across customers consistently, the next step is usually not another dashboard. It is a shared analytics structure. That is exactly what we explore in the Quick Scan.

StriData has structured analytics for 1,500+ machines across 40 countries, built on existing connectivity, without replacing infrastructure.

Martijn van Dijk

Founder & Data Engineer