The 5-criteria rubric
Every review on this site uses the same five criteria. Using a fixed rubric means scores are comparable across products — you can trust that an 8.5 in Usability means the same thing for Jobber as it does for ServiceTitan or FieldEdge.
1. Usability
Usability measures how quickly real people — technicians and dispatchers, not software trainers — can complete core tasks after a cold start. A dispatch board that takes a week to learn costs real money in productivity and onboarding overhead.
- Time-to-first-job: how many minutes does it take to schedule a work order from scratch?
- Click depth: how many screens does a dispatcher touch to book, assign, and close a job?
- Mobile parity: can a technician complete the same workflow from their phone as from a desktop?
- Error recovery: how gracefully does the app handle a double-booked tech or a mis-entered address?
2. Pricing
Pricing transparency is a proxy for how a vendor treats customers after the sale. Software that hides its pricing behind a "contact us" form almost always has a more complicated cost structure than software that publishes it openly.
- Is pricing publicly listed without requiring a demo call or lead form?
- What is the all-in cost at 5 users, 20 users, and 50 users?
- Are onboarding fees, training fees, or module fees disclosed upfront?
- How does the pricing model scale — per-user, per-tech, flat rate, or usage-based?
3. Feature depth
Feature depth rewards platforms that cover the full field-service workflow without requiring a separate point solution for every job stage. A platform that does everything adequately scores higher than one that does two things brilliantly and outsources the rest.
- Quoting and estimates — from customer inquiry to approved quote.
- Job scheduling and dispatch — recurring jobs, dynamic re-routing, crew management.
- Time tracking and payroll integration — GPS verification, overtime rules.
- Invoicing and payment collection — in-field card acceptance, automated follow-ups.
- Customer communication — automated appointment reminders, on-my-way SMS, review requests.
4. Support
Support quality only matters when something breaks — and in field service, software problems during a dispatch window have direct revenue consequences. We score support on accessibility (can you reach a human quickly?) and resolution quality (does the human actually resolve the issue?).
- Live chat first-response time (measured during business hours).
- Email support response time (measured over 5 business days).
- Phone support availability and wait time.
- Documentation quality — is the knowledge base current, searchable, and accurate?
- User community — is there an active community where operators help each other?
5. Integrations
No field service management (FSM) platform is an island. Every field service business already has an accounting system, a CRM, or a payment processor. Integrations score both the breadth of the ecosystem and the reliability of the connections that matter most.
- QuickBooks integration — tested hands-on for two-way sync accuracy.
- Stripe / Square / payment processing — in-field and office payment capture.
- CRM connections — Salesforce, HubSpot, or the platform's own customer records.
- GPS and routing — whether the platform connects to a real routing engine or relies on manual scheduling.
- API availability — does the platform expose a public API for custom integrations?
How we score
Each of the five criteria receives a score from 0 to 10. Each criterion also carries a weight reflecting its importance to a typical field service operation: Usability (25%), Feature depth (25%), Pricing (20%), Integrations (20%), Support (10%). The weighted average produces the "Editor's score" displayed on every review.
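The weighted average above can be sketched in a few lines. This is an illustrative sketch only — the weights come from the rubric, but the function name and the example scores are hypothetical, not taken from any real review:

```python
# Rubric weights as described above; they sum to 1.0.
WEIGHTS = {
    "usability": 0.25,
    "feature_depth": 0.25,
    "pricing": 0.20,
    "integrations": 0.20,
    "support": 0.10,
}

def editors_score(scores: dict[str, float]) -> float:
    """Weighted average of the five 0-10 criterion scores, rounded to one decimal."""
    for name, value in scores.items():
        if not 0 <= value <= 10:
            raise ValueError(f"{name} score must be 0-10, got {value}")
    total = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    return round(total, 1)

# Illustrative scores only (not a real review):
example = {
    "usability": 8.5,
    "feature_depth": 9.0,
    "pricing": 7.0,
    "integrations": 8.0,
    "support": 6.5,
}
print(editors_score(example))  # → 8.0
```

Note that because Usability and Feature depth each carry 25%, a weak showing in either moves the aggregate more than an equally weak showing in Support.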
Scores are not adjusted for company size, industry, or price point. A platform's weaknesses in any criterion drag down the aggregate score regardless of its strengths elsewhere. The goal is honest comparison — not promotional scoring that makes every product look like a winner.
The Editor's score chip shown on reviews (e.g. "8.4 / 10") is always the weighted aggregate of these five criteria. No manual adjustments, no "bonus points" for brand reputation, and no score floor.
What we don’t do
- No pay-for-placement. A vendor cannot pay to appear in a best-of list, to receive a higher score, or to have negative content removed.
- No vendor-controlled review. Vendors may correct factual errors, but they do not approve copy, preview scores, or influence the final verdict.
- No review without hands-on testing. Chip tests every platform in a live or sandbox environment before any score is published. Demo-only reviews are not published on this site.
- No undisclosed affiliate relationships. Every affiliate link is identified. If Chip earns a commission when you click through to a vendor, that is disclosed on the review and on the About Chip page.
- No anonymous authorship. Every review and comparison carries Chip's name, photo, and credentials. Readers can verify his experience and contact him directly with corrections.
How we update reviews
Each review carries two dates: Published (the original publication date) and Last reviewed (the most recent date Chip re-evaluated the product). These are different, and both matter.
The 12-month freshness rule: any review whose "Last reviewed" date is more than 12 months in the past is flagged as potentially stale in the site's build pipeline. Flagged reviews are prioritized for re-review. FSM software can change substantially in 12 months — pricing, core features, and support quality are all moving targets.
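The freshness check itself is simple date arithmetic. A minimal sketch, assuming a build step that compares each review's "Last reviewed" date against the build date (the function name and inputs are assumptions for illustration, not the site's actual pipeline code):

```python
from datetime import date

def is_stale(last_reviewed: date, today: date) -> bool:
    """Flag a review whose Last reviewed date is more than 12 months old."""
    try:
        cutoff = last_reviewed.replace(year=last_reviewed.year + 1)
    except ValueError:
        # Feb 29 has no anniversary in a non-leap year; use Feb 28.
        cutoff = last_reviewed.replace(year=last_reviewed.year + 1, day=28)
    return today > cutoff

print(is_stale(date(2023, 6, 1), date(2024, 7, 15)))   # → True (over 12 months)
print(is_stale(date(2024, 1, 10), date(2024, 7, 15)))  # → False (within 12 months)
```

A flag like this only prioritizes a review for re-reading; it does not change the published score on its own.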
Vendor-driven re-reviews: when a vendor ships a major feature update (a new dispatch board, a pricing model change, a new integration), Chip re-reviews the affected criteria rather than waiting for the 12-month cycle. Vendors may notify us of major changes at the contact page, but re-reviews are conducted independently — not in coordination with the vendor.
Correction policy: factual errors (wrong pricing, a feature that no longer exists, an incorrect spec) are corrected as soon as they are verified. Corrections are noted at the bottom of the affected review. Score changes are noted with the date and a brief explanation.
Affiliate disclosure
When you click through our links, we may earn a commission. It never changes our verdict.
fieldservicesoftware.io participates in affiliate programs with some of the software vendors reviewed on this site. This means that if you click a link to a vendor and subsequently purchase or subscribe, we may receive a commission from that vendor at no additional cost to you.
Affiliate relationships do not influence the content of any review, the criteria used to score products, or the final verdict. Products are not reviewed because they have an affiliate program, and products with affiliate programs are not reviewed more favorably than products without them. The editorial rubric is applied identically to all products regardless of whether a commercial relationship exists.
For the full disclosure, including which vendors have affiliate relationships with this site, see the FTC disclosure section on Chip's author page.
Have a correction?
If you find a factual error in a review — incorrect pricing, a feature description that no longer matches the current product, or a spec we got wrong — please let us know. Corrections are taken seriously and published promptly. You can reach Chip directly via the contact page.