11 March 2026 · 13 min read · Arviteni
Most care groups pull board reports from five different systems into a spreadsheet every month. This guide defines the KPIs that matter and shows how to build unified reporting.
Board reporting in a care group often works like this: someone spends two days at the end of every month opening five different systems, exporting data into spreadsheets, and stitching it together manually before the meeting. By the time the report lands in front of the board, the data is already out of date, and the person who built it has lost a sizeable chunk of their week to producing it.
This is not a niche problem. It is the default state for most multi-site care organisations. Care management platforms, HR systems, rostering tools, training platforms, and finance packages all hold critical data, but they do not talk to each other. The result is fragmented reporting that does not reflect the actual operational picture, and a significant amount of administrative time spent reconciling numbers instead of acting on them.
This guide sets out which KPIs actually matter for care homes, how to think about them by category, and how to move from a spreadsheet-based reporting process to one that gives you a live operational view.
Most care providers have accumulated their systems over time, often selecting the best available option for each function rather than thinking about the data layer underneath. A care home group might use one platform for care planning and incident recording, a separate HR system for staff records and contracts, a rostering tool for shift scheduling, a dedicated training platform for Care Certificate and mandatory training, and an accounting package for financial management. Each of these does its job reasonably well. None of them share data with the others.
The consequences show up in board reporting. To answer a basic question like "what was our agency spend as a percentage of total payroll last month," someone has to pull a payroll figure from the HR system, pull agency hours from the rostering tool, and calculate the ratio manually. To understand whether staffing shortfalls are linked to recruitment performance, someone has to correlate vacancy data from the ATS with sickness data from HR and agency usage from the rota. None of this happens automatically.
The other consequence is that reports become backward-looking by necessity. When building a report takes two days, you only do it once a month. By the time trends are visible in the data, they are already a month old, and the operational response has to work backwards from lagging indicators.
Understanding which metrics to track is the first step toward fixing this. The second is building the infrastructure to surface them without manual effort.
Recruitment is one of the most resource-intensive operational challenges in the care sector, and most organisations measure it poorly. The common mistake is tracking only top-level vacancy numbers, without understanding what is driving them or how efficiently the recruitment process converts applicants into starters.
Time to hire is the number of days between a vacancy opening and a new starter's first shift. For care roles, an average above 40 to 45 days is a warning sign: vacancies that stay open that long typically get covered by agency staff, which drives up cost and disrupts continuity of care. Breaking time to hire down by role type and site reveals whether slow hiring is a process problem or a supply problem.
Vacancy ageing tracks how long individual vacancies have been open, rather than just an average. A vacancy open for 90 days is a materially different problem from one open for 10 days, and aggregate averages hide the long tail. Vacancy ageing, reported site by site, shows you where intervention is needed.
Cost per hire combines recruitment advertising spend, recruiter time, and any agency fees or referral bonuses paid to fill a role. Most care organisations do not calculate this formally, which means they have no basis for evaluating whether a job board spend is delivering value or whether increasing the employee referral bonus would be more cost-effective.
Application to placement ratio measures how many applications are received for each successful starter. A low ratio indicates poor applicant quality or an ineffective attraction strategy. A very high ratio suggests interview capacity or process bottlenecks are losing good candidates who apply but never progress.
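To make these four metrics concrete, here is a minimal pandas sketch computed from a hypothetical ATS export. All column names and figures are assumptions for illustration, not fields from any particular system.

```python
from datetime import date

import pandas as pd

# Hypothetical vacancy export; column names are illustrative only.
vacancies = pd.DataFrame({
    "site": ["Oakfield", "Oakfield", "Riverside"],
    "opened": pd.to_datetime(["2026-01-05", "2026-02-20", "2025-12-01"]),
    "first_shift": pd.to_datetime(["2026-02-18", None, None]),  # NaT = still open
    "ad_spend": [350.0, 200.0, 500.0],
    "agency_fee": [0.0, 0.0, 1200.0],
    "applications": [18, 4, 9],
})

today = pd.Timestamp(date.today())
filled = vacancies.dropna(subset=["first_shift"])

# Time to hire: days from vacancy opening to the new starter's first shift.
avg_time_to_hire = (filled["first_shift"] - filled["opened"]).dt.days.mean()

# Vacancy ageing: how long each still-open vacancy has been open, by site.
open_vacs = vacancies[vacancies["first_shift"].isna()].copy()
open_vacs["days_open"] = (today - open_vacs["opened"]).dt.days

# Cost per hire: advertising plus agency fees, divided by successful starters.
cost_per_hire = (filled["ad_spend"] + filled["agency_fee"]).sum() / len(filled)

# Application to placement ratio: applications received per successful starter.
app_to_placement = vacancies["applications"].sum() / len(filled)
```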
These metrics depend on data that lives in the recruitment system, and connecting them to cost data from HR and finance is where the picture becomes genuinely useful. CareGate ATS tracks application volumes, stage progression, and time to hire for care organisations, which provides the foundation for this category of reporting.
Compliance reporting in care is non-negotiable. CQC inspection readiness depends on being able to demonstrate, at short notice, that your workforce meets mandatory training requirements. Most care providers can produce this evidence, but only by pulling a training report from their platform and cross-referencing it with an HR headcount list, a process that introduces error.
Care Certificate completion rate is the percentage of eligible staff who have completed the Care Certificate within 12 weeks of starting. This is a straightforward metric, but tracking it requires knowing who is eligible, which depends on role and start date data from HR, matched to completion data from the training platform.
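A minimal sketch of that join, assuming hypothetical HR and training exports and, for illustration, treating care assistants as the eligible group. A production version would also exclude starters still inside their 12-week window.

```python
import pandas as pd

# Hypothetical HR and training exports; field names are illustrative.
hr = pd.DataFrame({
    "staff_id": [101, 102, 103],
    "role": ["Care Assistant", "Care Assistant", "Registered Nurse"],
    "start_date": pd.to_datetime(["2025-11-03", "2026-01-12", "2025-10-01"]),
})
training = pd.DataFrame({
    "staff_id": [101],
    "care_cert_completed": pd.to_datetime(["2026-01-20"]),
})

# Eligibility lives in HR (role and start date); completions live in the
# training platform. The metric only exists once the two are matched.
eligible = hr[hr["role"] == "Care Assistant"].merge(training, on="staff_id", how="left")
deadline = eligible["start_date"] + pd.Timedelta(weeks=12)
on_time = eligible["care_cert_completed"].notna() & (eligible["care_cert_completed"] <= deadline)
completion_rate = on_time.mean() * 100  # 50.0 for this sample data
```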
Mandatory training compliance percentage is the proportion of staff whose mandatory training certificates are in date across all required modules. The useful version of this metric is not the overall percentage, but the breakdown by module, by site, and with a forward-looking view of upcoming expiries. A staff member whose moving and handling certificate expires in three weeks is a compliance risk today, not when the certificate has already lapsed.
Certificate expiry tracking is closely related but deserves its own attention. The failure mode is not usually organisations that ignore expiries: it is organisations that track expiries in a spreadsheet that is updated manually and therefore always partially out of date. Automated expiry alerts, based on live data from the training system and triggered to both the staff member and their manager, remove this failure mode.
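A small sketch of the forward-looking version, computing the in-date percentage and flagging anything lapsed or expiring inside a 30-day alert window. The window, the field names, and the data are all assumptions.

```python
from datetime import date

import pandas as pd

# Hypothetical certificate export from the training system.
certs = pd.DataFrame({
    "staff_id": [101, 102, 103],
    "module": ["Moving and Handling", "Fire Safety", "Moving and Handling"],
    "expires": pd.to_datetime(["2026-03-29", "2026-06-01", "2026-02-10"]),
})

today = pd.Timestamp(date.today())

# In-date compliance: certificates whose expiry is still in the future.
compliance_pct = (certs["expires"] > today).mean() * 100

# Forward-looking view: everything lapsed or expiring inside the alert window.
due = certs[certs["expires"] <= today + pd.Timedelta(days=30)]
for row in due.itertuples():
    status = "LAPSED" if row.expires <= today else "expires soon"
    # A live system would notify both the staff member and their manager here.
    print(f"staff {row.staff_id}: {row.module} {status} ({row.expires.date()})")
```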
Supervision and appraisal completion is less often tracked formally but increasingly visible in CQC inspections. The percentage of staff who have received their scheduled supervision and annual appraisal is a proxy for management quality and staff development, and it is a metric that should appear on operational dashboards alongside clinical compliance data.
Care homes that actively manage referrals and occupancy need visibility into the business development pipeline alongside operational data. For groups that take placements across NHS, local authority, and self-funded channels, the funder mix has a direct effect on revenue per bed and requires its own tracking.
Pipeline conversion rate measures the percentage of referrals or enquiries that convert to admissions. Tracking this over time, and breaking it down by referral source, reveals which channels are most efficient and where follow-up processes are breaking down. A hospital discharge team that sends five referrals a month but only converts at 20% is a different problem from a low-volume source converting at 60%.
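A minimal sketch of conversion by referral source, using an invented enquiry log with one row per referral:

```python
import pandas as pd

# Hypothetical enquiry log; source names and outcomes are illustrative.
referrals = pd.DataFrame({
    "source": ["Hospital discharge", "Hospital discharge", "Hospital discharge",
               "Self-funded enquiry", "Self-funded enquiry", "LA brokerage"],
    "admitted": [True, False, False, True, True, False],
})

# Conversion rate by referral source: admissions over referrals received.
by_source = referrals.groupby("source")["admitted"].agg(referrals="size", rate="mean")
by_source["rate"] = (by_source["rate"] * 100).round(1)
print(by_source)
```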
Funder mix is the breakdown of residents by funding type: NHS Continuing Healthcare, local authority, and self-funded. This matters because reimbursement rates differ significantly across these categories, and the mix directly affects revenue per bed even at constant occupancy. A shift in funder mix is an early financial indicator that warrants attention.
Revenue per bed is occupied bed revenue divided by total registered bed capacity. This is a useful normalised metric for comparing performance across sites of different sizes, and it captures both occupancy and rate effects in a single figure. Reporting it alongside cost per resident gives the full margin picture.
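Funder mix and revenue per bed fall out of the same resident-level data. A sketch, assuming a hypothetical export with one row per occupied bed; all names and fees are invented:

```python
import pandas as pd

# Hypothetical resident-level export, one row per occupied bed.
residents = pd.DataFrame({
    "site": ["Oakfield"] * 4,
    "funder": ["NHS CHC", "Local authority", "Self-funded", "Self-funded"],
    "weekly_fee": [1450.0, 980.0, 1300.0, 1350.0],
})
registered_beds = pd.Series({"Oakfield": 5})

# Funder mix: share of occupied beds by funding type.
funder_mix_pct = residents.groupby("funder").size() / len(residents) * 100

# Revenue per bed: occupied-bed revenue over total registered capacity, so a
# change in either occupancy or rates moves the figure.
revenue_per_bed = residents.groupby("site")["weekly_fee"].sum() / registered_beds
# 5080 / 5 = 1016 per registered bed per week for this sample
```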
The CareGate CRM is designed specifically for managing referrals and the sales pipeline in care organisations, tracking enquiries from first contact through to admission and supporting the reporting needed for this category.
Operational data is the broadest category and the one most commonly tracked in isolation, even though the value comes from seeing the relationships between metrics rather than from each one individually.
Bed occupancy is the most fundamental care home metric: the percentage of registered beds occupied by residents. Most care managers know their occupancy instinctively, but tracking it formally, by site and over time, shows seasonal patterns, the impact of occupancy initiatives, and the relationship between marketing activity and bed fill rates.
Staff turnover rate is typically calculated as leavers in a period divided by average headcount, expressed as a percentage. The sector average runs at 25 to 35% annually, but variation between sites in the same group can be significant and is often an early indicator of management or culture problems at specific locations. Turnover that exceeds 40% annualised creates a permanent recruitment cost overhead that is hard to escape.
Agency usage percentage is agency hours divided by total worked hours in a period. This is the metric operations directors watch most closely, because agency cost typically runs at two to three times the cost of a permanent equivalent and represents the most controllable element of the payroll cost base. The relationship between vacancy ageing, sickness rates, and agency usage, seen together, tells you whether agency dependency is a recruitment problem, an absence management problem, or both.
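The arithmetic behind occupancy, turnover, and agency usage is simple enough to show directly. A minimal sketch with invented figures for a single site over one month:

```python
# Invented monthly figures for one site; every input here is an assumption.
registered_beds, occupied_beds = 60, 54
leavers, headcount_start, headcount_end = 3, 78, 74
agency_hours, total_worked_hours = 620.0, 9_400.0

occupancy_pct = occupied_beds / registered_beds * 100          # 90.0
average_headcount = (headcount_start + headcount_end) / 2      # 76.0
monthly_turnover_pct = leavers / average_headcount * 100       # ~3.9
annualised_turnover_pct = monthly_turnover_pct * 12            # ~47.4, above the 40% line
agency_usage_pct = agency_hours / total_worked_hours * 100     # ~6.6
```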
Sickness rates are total sick days divided by available working days, expressed as a percentage. Bradford Factor scoring, which weights frequent short absences more heavily than long-term sickness, is a useful adjunct metric for identifying patterns that warrant management intervention before they become a formal process.
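The Bradford Factor is calculated as B = S² × D, where S is the number of separate absence spells and D is the total days absent over the scoring period, commonly 52 weeks. A minimal sketch showing why the weighting works:

```python
# Bradford Factor: B = S^2 * D. Squaring the spell count is what makes
# frequent short absences score far higher than one long spell.
def bradford_factor(spells: int, days_absent: int) -> int:
    return spells ** 2 * days_absent

print(bradford_factor(spells=8, days_absent=8))  # 512: eight single days
print(bradford_factor(spells=1, days_absent=8))  # 8: one eight-day spell
```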
The value of these four metrics comes from seeing them together. High sickness plus high agency usage plus long vacancy ageing at a single site is a different operational picture from the same numbers spread evenly across a group, and it points to different interventions.
Financial reporting in care typically sits closest to the board and furthest from operational data. Bringing operational and financial metrics into the same view changes the quality of board conversations.
Revenue per bed appears in both this category and the business development section, because it sits at the intersection of occupancy, funder mix, and rate negotiation. Tracking it monthly, by site, gives a clear view of where revenue performance is strong or deteriorating.
Cost per resident per week is total site operating cost divided by occupied beds and divided by weeks in the period. This normalises costs for comparison across sites with different capacities and resident dependency levels, and it is the metric most useful for understanding whether cost increases are driven by volume changes or unit cost changes.
Agency spend as a percentage of payroll connects the operational picture to the financial one. When a site's agency percentage rises, the financial impact is visible immediately in this metric, which gives the board a financially framed version of the operational indicator that care managers track. Typically anything above 8 to 10% of total payroll warrants formal review.
Debtor days is the average number of days between invoicing and payment, relevant for providers billing local authorities or NHS commissioners. Debtor days trending upward is an early cash flow indicator and sometimes a signal of dispute or query volumes that need commercial attention.
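As a sketch of how these three financial metrics compute, with invented monthly figures and illustrative field names:

```python
import pandas as pd

# Invented monthly figures for one site; all inputs are assumptions.
operating_cost = 310_000.0
occupied_beds = 54
weeks_in_period = 52 / 12            # ~4.33 weeks in an average month
agency_spend, total_payroll = 38_000.0, 205_000.0

cost_per_resident_week = operating_cost / occupied_beds / weeks_in_period  # ~1325
agency_pct_of_payroll = agency_spend / total_payroll * 100                 # ~18.5, above the 8-10% line

# Debtor days from an invoice ledger: mean days between invoicing and payment.
invoices = pd.DataFrame({
    "invoiced": pd.to_datetime(["2026-01-05", "2026-01-05", "2026-02-02"]),
    "paid": pd.to_datetime(["2026-02-20", "2026-03-01", None]),  # NaT = unpaid
})
settled = invoices.dropna(subset=["paid"])
debtor_days = (settled["paid"] - settled["invoiced"]).dt.days.mean()       # 50.5
```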
Power BI is the tool many care organisations reach for when they decide to invest in reporting. It is genuinely powerful, widely available through Microsoft 365 licences, and capable of building the kind of dashboards described above. The case for it is real.
The challenge is what it takes to get there. Power BI requires data connections to each source system, which depends on those systems having accessible APIs or connectable data exports. It requires a data model that correctly joins the entities across systems: a staff member in the HR system needs to be the same entity as a user in the training platform and a worker in the rostering tool. Building and maintaining that model requires data engineering skills. Once built, the model needs ongoing maintenance as source systems are updated, as new fields are added, and as the organisation changes.
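To make the entity-matching problem concrete, here is a minimal sketch that links three hypothetical system exports on a normalised email address. Real-world matching is rarely this clean; every name, key, and the email join itself are assumptions for illustration.

```python
import pandas as pd

# Three hypothetical exports in which the same person appears under
# different identifiers in each system.
hr = pd.DataFrame({"hr_id": [1, 2],
                   "email": ["a.khan@example.org", "j.obi@example.org"]})
training = pd.DataFrame({"user_id": ["u9", "u7"],
                         "email": ["A.Khan@example.org", "j.obi@example.org"]})
rota = pd.DataFrame({"worker_ref": ["W-12", "W-15"],
                     "email": ["a.khan@example.org", "t.cole@example.org"]})

# Normalise the join key; in practice matching usually needs more than one
# field plus manual resolution of the leftovers.
for df in (hr, training, rota):
    df["key"] = df["email"].str.strip().str.lower()

people = (hr.merge(training, on="key", how="outer", suffixes=("_hr", "_trn"))
            .merge(rota, on="key", how="outer"))
# Rows with a missing hr_id or worker_ref are the gaps the data model must resolve.
```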
Most care providers do not have a data engineer, and most managed IT partners do not have deep enough knowledge of care-specific data structures to build this model reliably. The result is often a Power BI project that runs for several months, costs considerably more than budgeted, and delivers something that either does not work as intended or degrades within six months as source systems change.
The alternative is a platform built specifically for care, where the data connections, the entity matching, and the care-specific metric definitions are already handled. You configure it for your organisation rather than building it from scratch.
The broader topic of analytics infrastructure for care providers is covered in more depth in our guide to business intelligence for care organisations.
Even with the right data and the right tools, reporting only adds value if it is connected to decisions and actions. The reporting cadence that works for most care groups has three levels.
Daily operational checks are lightweight and typically accessed by care managers and site managers directly in the platform: today's occupancy, any training expiries flagged, open shifts with no cover. These are operational tools, not reporting exercises.
Weekly management reviews cover the previous week's key metrics: sickness, agency usage, any new vacancies opened, and pipeline enquiries received. At this frequency, trends become visible quickly enough to act on.
Monthly board reporting covers the full KPI set: financial performance, occupancy, recruitment pipeline, training compliance, and staff turnover. At this level, the board should be seeing trend charts, not just current figures, and the report should be generated automatically rather than assembled manually.
The shift from manual assembly to automatic generation is the operational change that matters most. When board reporting takes minutes rather than days, it gets done more frequently, more accurately, and by people who can spend the time they recover on analysis rather than extraction.
If your current reporting process involves anyone exporting data from multiple systems and building a report by hand, that time is being spent on a task that should not exist.
CareGate Analytics is Arviteni's unified reporting layer for care organisations, built to pull data from across the CareGate platform including recruitment, CRM, and training, and to surface the KPIs that matter in a single operational view. It is designed for care providers, which means the metrics, the terminology, and the compliance definitions are already built in rather than configured from scratch.
The board report that currently takes two days should take two minutes. If you want to understand what that looks like for your organisation, the CareGate Analytics product page is the place to start.