CMRPstudy is a complete CMRP exam prep resource — study guide, working calculators, and glossary. Built for practitioners. Free, always.
No fluff. Built from the SMRP Body of Knowledge and real field experience, covering what actually appears on the exam.
CMRPstudy was built by a practicing Maintenance Planner & Scheduler actively pursuing CMRP certification. The content reflects real-world application in a manufacturing environment — not just what textbooks say.
Everything here is grounded in the SMRP Body of Knowledge, Nowlan & Heap's failure pattern research, Doc Palmer's planning principles, and years of hands-on reliability improvement work. No advertising. No paid placements. No course upsells.
The goal: give every maintenance professional access to the same quality of study material regardless of whether they can afford a prep course.
| Exam Fact | Detail |
|---|---|
| Questions | 110 (≈10 unscored) |
| Time Limit | 2.5 hours |
| Format | Computer-based, Prometric |
| Scoring | 200–800 scaled score |
| Passing Score | ≈520 (not a raw %) |
| Recertification | Every 3 years |
| Largest Domain | D3 Equipment (45%) |
Pillar 1 (Leadership) enables all others. Without leadership commitment and cultural alignment, no reliability initiative survives the first production crisis. Leaders who revert to measuring only output — not reliability performance — create the pressure that collapses every other pillar.
Pillar 2 (Work Management) is the delivery mechanism. The best maintenance strategies (Pillar 3) and most rigorous failure analysis (Pillar 5) produce zero value if the work management system can't deliver them to the field efficiently. An organization with great RCM output and poor scheduling has done expensive analysis for nothing.
Pillar 3 (Proactive Maintenance) generates the operating data. Condition monitoring findings feed Pillar 5 analysis. PM compliance rates feed Pillar 1 KPI dashboards. Without Pillar 3 execution, the other pillars have no field data to act on.
Pillar 4 (Skills) is the human enabler. Technically sophisticated PdM programs fail when the workforce lacks the skills to execute correctly. Misapplied vibration analysis — or an operator who can't recognize abnormal equipment sounds — is worse than no program at all.
Pillar 5 (Reliability Engineering) drives continuous improvement. Engineering analysis converts field experience into strategic decisions: which assets to prioritize, which failure modes to target, how to optimize maintenance strategy over time.
Organizations implement technical tools (Pillars 3 and 5) without the organizational foundation (Pillar 1) or work management infrastructure (Pillar 2) to sustain them. The tools sit unused because there's no cultural mandate to use them and no scheduling system to deliver the resulting work. This is the single most tested scenario on the CMRP exam.
| Stage | Characteristics | Typical MC/RAV |
|---|---|---|
| 1 — Reactive | Run-to-failure dominant. Fire-fighting culture. No PM program. Emergency work >50%. | 8–15%+ |
| 2 — Preventive | PM program exists but mostly time-based. Low schedule compliance. Planning inconsistent. | 6–9% |
| 3 — Proactive | PdM program active. Planning & scheduling mature. RCA practiced. Leading indicators tracked. | 3–6% |
| 4 — Predictive/CBM | Condition-based decisions dominant. FMEA-driven strategy. Failure modes well understood. | 2–4% |
| 5 — Optimized | AI-assisted. Real-time condition integration. Continuous improvement loops embedded. | <2% |
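The MC/RAV column above is a straightforward ratio, and computing it is often the first benchmarking step. A minimal sketch in Python (function and variable names are illustrative):

```python
def mc_rav(annual_maint_cost: float, replacement_asset_value: float) -> float:
    """Maintenance Cost as a % of Replacement Asset Value (MC/RAV)."""
    return annual_maint_cost / replacement_asset_value * 100

# Example: $2.4M annual maintenance spend on a plant with a $60M
# replacement asset value -> 4.0%, i.e. Stage 3-4 territory in the table
ratio = mc_rav(2_400_000, 60_000_000)
```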
Accountability requires: visible KPIs (posted, reviewed weekly), individual ownership (each metric has a named owner), consequence (performance affects something — recognition, discussion, planning resources), and root cause response (when KPIs miss target, the response is to analyze why, not blame).
Planners must be organizationally separate from craft supervision. A planner who also supervises technicians will always prioritize the urgent (today's work) over the important (next week's preparation). The planner works FOR the supervisors by preparing work packages — not alongside them in the field.
The planner's time horizon is work not yet started. Once a job enters execution, it belongs to the supervisor. Planners diverted to manage active jobs degrade planning quality for future work — the cost compounds every week.
For each significant asset, maintain a file containing: historical job plans, parts lists with vendor info, OEM manuals, special tools required, isolation/clearance procedures, and lessons learned from past repairs. This file transforms individual technician experience into organizational knowledge available to everyone.
Estimates should reflect what a skilled craftsperson can accomplish — not the average performer, not padded for worst-case. Accurate estimates enable accurate schedule loading and meaningful schedule compliance measurement.
Craft technicians are the source of field knowledge that improves job plans. A feedback loop must exist: technicians complete job feedback noting actual time, parts deviations, problems, and improvements. The planner incorporates this into the job plan file — individual experience becomes organizational memory.
Wrench time (hands-on productive time as % of available work time) is the fundamental measure of planning and scheduling effectiveness. Improving from the industry average of 25–35% to world-class 55–65% roughly doubles craft productivity without adding headcount.
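The arithmetic behind the "roughly doubles" claim is simple: wrench time is a ratio, and the capacity gain is the ratio of the two ratios. A minimal sketch (names are illustrative):

```python
def wrench_time_pct(hands_on_hours: float, available_hours: float) -> float:
    """Wrench time: hands-on productive time as a % of available work time."""
    return hands_on_hours / available_hours * 100

def productivity_gain(current_pct: float, target_pct: float) -> float:
    """Effective craft-capacity multiplier from improving wrench time."""
    return target_pct / current_pct

# Improving from 30% (industry average) to 60% (world class)
# doubles effective craft capacity with the same headcount
gain = productivity_gain(30, 60)  # 2.0
```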
| Timing | Activity | Owner |
|---|---|---|
| Mon–Thu | Planner prepares next week's work packages; acquires parts; confirms equipment availability with operations | Planner |
| Thursday | Weekly coordination meeting: planner, scheduler, supervisor, operations representative review following week's plan | All stakeholders |
| Friday by noon | Schedule published; work packages distributed to supervisors | Scheduler/Planner |
| Daily | Supervisor assigns work, manages execution, updates system with completions and deviations | Supervisor |
| End of week | Schedule compliance calculated, posted, and debriefed | All |
Palmer's principle is to load the schedule to 100% of available craft hours. The intuitive approach — loading 85% to "leave room for emergencies" — is wrong. The scheduler's job is to plan the best use of available time. Emergency work is the supervisor's problem to manage in real time. Pre-compensating at 85% trains the organization to expect 85% execution and guarantees the remaining 15% is wasted.
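The 100% loading principle can be sketched as a greedy fill of the weekly schedule from a priority-sorted backlog. This is an illustrative sketch, not Palmer's own algorithm; the work orders and hour estimates are hypothetical:

```python
def load_schedule(backlog, available_hours):
    """Fill the weekly schedule to 100% of available craft hours.
    `backlog` is a list of (work_order, estimated_hours),
    sorted highest priority first."""
    scheduled, remaining = [], available_hours
    for wo, hours in backlog:
        if hours <= remaining:          # take every job that still fits
            scheduled.append(wo)
            remaining -= hours
    return scheduled, remaining

jobs = [("WO-101", 16), ("WO-102", 10), ("WO-103", 8),
        ("WO-104", 6), ("WO-105", 4)]
picked, slack = load_schedule(jobs, 40)  # 40 craft hours fully loaded
```

Note there is deliberately no "emergency reserve" parameter: per Palmer, emergencies are handled by the supervisor breaking the schedule in real time, not by under-loading it up front.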
| Task Type | Use When | Examples |
|---|---|---|
| Time-based (scheduled) | Known age-related deterioration; component reliability decreases predictably with time/use (Weibull β > 3) | Oil changes, filter replacements, belt changes, seal replacements |
| On-condition (CBM/PdM) | Detectable P-F interval exists; condition monitoring can find the failure before it occurs | Vibration routes, thermography surveys, oil sampling, ultrasound routes |
| Failure-finding | Hidden function — failure is not self-announcing and only matters when the protected function is demanded | Testing relief valve setpoints, exercising emergency shutdown valves, testing standby pump auto-starts |
| Run-to-failure (RTF) | Failure consequence is acceptable AND no cost-effective proactive task exists | Light bulbs, fuses, low-value redundant non-critical components |
The P-F interval is the time between the point where a potential failure (P) first becomes detectable and the point of functional failure (F). It governs how PdM programs are designed: the monitoring interval must be shorter than the P-F interval, or a failure can progress from detectable to functional between inspections.
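A common rule of thumb sets the inspection interval at half the P-F interval, so at least two looks fit inside the detection window. A minimal sketch (names are illustrative):

```python
def max_inspection_interval(pf_interval_days: float,
                            inspections_in_window: int = 2) -> float:
    """Rule-of-thumb PdM interval: fit at least `inspections_in_window`
    inspections inside the P-F interval (commonly 2, i.e. P-F / 2)."""
    return pf_interval_days / inspections_in_window

# A bearing with a 90-day P-F interval -> vibration route every 45 days
interval = max_inspection_interval(90)  # 45.0
```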
| OEE Component | Loss | Definition |
|---|---|---|
| Availability | Breakdowns | Unplanned equipment failures causing production stoppage |
| Availability | Setup & Adjustment | Time lost during changeovers, startups, adjustments |
| Performance | Minor Stoppages / Idling | Brief stops not recorded as breakdowns |
| Performance | Reduced Speed | Equipment running below designed capacity |
| Quality | Process Defects / Scrap | Products not meeting specification during steady-state |
| Quality | Startup / Yield Losses | Defects produced during startup before stable conditions |
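OEE itself is the product of the three components. A minimal sketch (names are illustrative):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = Availability x Performance x Quality, each as a fraction 0-1."""
    return availability * performance * quality

# 90% available, 95% of rated speed, 99% first-pass quality
score = oee(0.90, 0.95, 0.99)  # ~0.846, near the commonly cited 85% benchmark
```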
Competency-based training focuses on demonstrated ability — not attendance. The test is not "did they take the class?" but "can they perform the task correctly in the field?"
| Level | Capabilities |
|---|---|
| Category I | Data collection, route-based monitoring, basic anomaly identification, escalation to higher level |
| Category II | Analysis and diagnosis, report generation, maintenance task recommendations |
| Category III | Advanced analysis, program design, calibration, training of lower levels |
| Category IV | Expert-level, method development, technical authority, standards development |
When experienced technicians retire, they take institutional knowledge with them unless it's systematically captured. Strategies: document job plans with lessons learned (Palmer Principle 5), require CMMS failure code notes, formal mentoring programs, accessible technical libraries with OEM manuals and RCA reports.
Failure Reporting, Analysis & Corrective Action System. Without FRACAS, failures are repaired but not learned from. The cycle: report the failure (with accurate failure codes), analyze it (root cause, patterns across similar assets), implement corrective action, verify the action actually prevented recurrence, and feed the lessons back into job plans and maintenance strategy.
| Category | Components |
|---|---|
| Production losses | Lost throughput × margin per unit; missed customer orders, late fees |
| Emergency maint premium | Overtime labor, expedited freight, emergency contractor rates (3–10× planned cost) |
| Quality defects | Scrap, rework, customer returns — equipment in degraded condition causes quality problems before outright failure |
| Safety incidents | Medical costs, investigation, regulatory fines, legal liability |
| Environmental incidents | Remediation, fines, permit violations |
| Customer/market impact | Lost contracts, relationship damage — often the largest long-term cost |
| KPI | Formula | Target | Type |
|---|---|---|---|
| PM Compliance | (PMs done on time / PMs scheduled) × 100 | ≥90% | Leading |
| Schedule Compliance | (Sched hrs completed / Sched hrs) × 100 | ≥90% | Leading |
| Wrench Time | Hands-on time / Total available time | 55–65% | Leading |
| Planned Work % | Planned hrs / Total work hrs | >85% | Leading |
| Emergency Work % | Emergency hrs / Total hrs | <10% | Lagging |
| MTBF | Total uptime / # failures | Trending up | Lagging |
| MTTR | Total repair time / # repairs | Trending down | Lagging |
| MC/RAV | (Annual maint cost / RAV) × 100 | 2–3% | Lagging |
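Several of the formulas in the table combine naturally: MTBF and MTTR together give inherent availability. A minimal sketch (names are illustrative):

```python
def mtbf(total_uptime_hours: float, failures: int) -> float:
    """Mean Time Between Failures."""
    return total_uptime_hours / failures

def mttr(total_repair_hours: float, repairs: int) -> float:
    """Mean Time To Repair."""
    return total_repair_hours / repairs

def inherent_availability(mtbf_h: float, mttr_h: float) -> float:
    """Inherent availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

# A pump with 8,000 uptime hours, 4 failures, 32 total repair hours:
# MTBF = 2,000 h, MTTR = 8 h, A = 2000/2008 ~ 99.6%
a = inherent_availability(mtbf(8000, 4), mttr(32, 4))
```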
| Pattern | Failure Rate | % of Equipment | Best Strategy |
|---|---|---|---|
| A — Bathtub | High → constant → increasing | 4% | CBM for useful life; time-based for wear-out |
| B — Wear-out | Steadily increasing | 2% | Time-based scheduled replacement |
| C — Gradual degradation | Slowly increasing | 5% | CBM or wide-interval time-based |
| D — Late increase | Low then increasing | 7% | Condition-based monitoring |
| E — Random | Constant throughout life | 14% | CBM; RTF if consequence allows |
| F — Infant mortality | High decreasing → low constant | 68% | Improve installation quality; CBM |
| Frequency | Primary Cause | Direction |
|---|---|---|
| 1× RPM | Rotor unbalance (dominant radial); bent shaft; also misalignment | Radial (H & V) |
| 2× RPM | Angular misalignment (especially axial); mechanical looseness; cracked shaft | Radial + Axial |
| Sub-harmonic (0.5×, 0.33×) | Fluid instability — oil whirl/whip; internal rub | Radial |
| BPFO | Outer race bearing defect; non-synchronous | Radial (load zone) |
| BPFI | Inner race defect; ±1× sidebands (inner race rotates) | Radial |
| BSF | Ball/roller defect; sub-harmonic | Radial |
| FTF | Cage defect; very low freq (0.35–0.48 × RPM) | Radial |
| Gear Mesh Freq (GMF) | # teeth × RPM; sidebands = wear/eccentricity | Radial |
| 2× Line Freq (120 Hz on 60 Hz power) | Electrical fault in motor; stator eccentricity | Radial |
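The bearing tones in the table (FTF, BPFO, BPFI, BSF) come from standard geometry formulas found in vibration references. A minimal sketch; the example bearing dimensions are hypothetical:

```python
from math import cos, radians

def bearing_frequencies(rpm, n_balls, ball_dia, pitch_dia,
                        contact_angle_deg=0.0):
    """Classic rolling-element defect frequencies in Hz.
    Dimensions must be in consistent units (e.g. inches)."""
    f = rpm / 60.0                                        # shaft speed, Hz
    r = (ball_dia / pitch_dia) * cos(radians(contact_angle_deg))
    return {
        "FTF":  (f / 2) * (1 - r),                        # cage
        "BPFO": (n_balls * f / 2) * (1 - r),              # outer race
        "BPFI": (n_balls * f / 2) * (1 + r),              # inner race
        "BSF":  (pitch_dia * f / (2 * ball_dia)) * (1 - r ** 2),  # ball spin
    }

freqs = bearing_frequencies(rpm=1780, n_balls=9,
                            ball_dia=0.3125, pitch_dia=1.516)
```

Note the FTF result lands in the 0.35–0.48 × RPM band the table quotes, and BPFI comes out above BPFO because the inner race carries the rotation.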
Emissivity is the ratio of radiation emitted by a surface vs. a perfect blackbody (ε = 1.0). Shiny metals have very low emissivity (0.03–0.15) — they primarily reflect surrounding temperatures rather than emitting their own. This causes false readings unless corrected. Apply matte paint or tape, or use emissivity-corrected settings.
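The size of the error can be illustrated with the simplified T⁴ (Stefan-Boltzmann) model of what the camera receives: emitted plus reflected radiation. Real radiometric cameras use calibrated response curves, so treat this strictly as an illustration of the principle, not a field correction method:

```python
def corrected_temperature_k(t_apparent_k: float, emissivity: float,
                            t_reflected_k: float) -> float:
    """Illustrative emissivity correction under the T^4 approximation:
    apparent^4 = eps * true^4 + (1 - eps) * reflected^4, solved for true."""
    t4 = (t_apparent_k ** 4 - (1 - emissivity) * t_reflected_k ** 4) / emissivity
    return t4 ** 0.25

# A shiny bus bar (eps ~ 0.10) reading 310 K apparent in a 295 K room
# is actually far hotter than the uncorrected reading suggests (~394 K here)
true_t = corrected_temperature_k(310.0, 0.10, 295.0)
```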
| Category | What Thermography Finds |
|---|---|
| Electrical | Loose connections (resistance heating: P = I²R), overloaded circuits, phase imbalance, failing switches/breakers, transformer problems |
| Mechanical | Bearing overheating, motor winding hotspots through vent slots, misaligned couplings, seized conveyor rollers |
| Process / Refractory | Furnace refractory degradation (hot spots on shell), heat exchanger fouling (cold spots), failed-open steam traps (hotter than cycling traps) |
| Building / Insulation | Moisture infiltration, insulation gaps, roof anomalies |
Viscosity is the single most important lubricant property. The lubricant must maintain a hydrodynamic film separating metal surfaces under all operating conditions. Too low → film collapse, metal contact. Too high → excessive heat from shear and churning.
The ISO VG number = nominal kinematic viscosity in cSt at 40°C, ±10%. Common grades: 32, 46, 68, 100, 150, 220, 320, 460, 680. Each grade is roughly 50% more viscous than the previous one.
| NLGI | Consistency | Applications |
|---|---|---|
| 000–00 | Semi-fluid | Enclosed gearboxes, centralized systems |
| 0–1 | Very soft | Centralized systems, cold climate |
| 2 | Medium (peanut butter) | ~80% of bearing applications — most common |
| 3 | Firm | High-speed bearings, vertical orientation |
| 4–6 | Hard to block | Open gears, wire ropes, extreme pressure |
Format: X/Y/Z — particle count range codes at ≥4µm, ≥6µm, ≥14µm per mL. Each number increment doubles the particle count. Typical targets: servo hydraulics 15/13/10; gearboxes 17/15/12.
Filtration Beta Ratio (βₓ): Upstream count ÷ downstream count at particle size x. β₁₀ = 200 means 99.5% efficiency at 10µm.
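Both scales are easy to compute. The beta-ratio efficiency formula is exact; the ISO 4406 range-code function below assumes the commonly published anchor that range 10 spans >5 to 10 particles/mL (each increment doubling the band), so verify against the ISO 4406 tables before relying on it:

```python
from math import ceil, log2

def beta_efficiency(beta_ratio: float) -> float:
    """Filter capture efficiency at size x: (beta - 1) / beta."""
    return (beta_ratio - 1) / beta_ratio

def iso4406_range_code(particles_per_ml: float) -> int:
    """ISO 4406 range code on the doubling scale, assuming range 10
    covers >5 to 10 particles/mL (illustrative; check the standard)."""
    return ceil(log2(particles_per_ml / 5)) + 9

eff = beta_efficiency(200)       # 0.995 -> 99.5%, matching the beta10 example
code = iso4406_range_code(100)   # 14: the 80-160 particles/mL band
```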
Misalignment is the leading cause of premature bearing and seal failure in rotating equipment.
| Speed (RPM) | Offset (mils) | Angularity (mils/inch) |
|---|---|---|
| <1,800 | ±3.0 | ±0.7 |
| 1,800–3,600 | ±2.0 | ±0.5 |
| >3,600 | ±1.0 | ±0.3 |
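The tolerance table wraps naturally into a lookup function, e.g. for an alignment-check tool. A minimal sketch mirroring the bands above:

```python
def alignment_tolerance(rpm: float):
    """Return (max offset in mils, max angularity in mils/inch)
    for a given shaft speed, per the tolerance bands above."""
    if rpm < 1800:
        return 3.0, 0.7
    if rpm <= 3600:
        return 2.0, 0.5
    return 1.0, 0.3

offset, angularity = alignment_tolerance(3550)  # (2.0, 0.5)
```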
| G Grade | Application |
|---|---|
| G 0.4 | Gyroscopes, precision spindles |
| G 2.5 | Gas/steam turbines, centrifuges |
| G 6.3 | Industrial fans, pumps, motors — most common |
| G 16 | Agricultural machinery, large crankshafts |
| G 40 | Car/truck wheels |
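A G grade converts to a permissible residual unbalance through the ISO 1940 relation U = 9549 × G × m / n. A minimal sketch; the example rotor values are hypothetical:

```python
def permissible_unbalance_gmm(g_grade: float, rotor_mass_kg: float,
                              rpm: float) -> float:
    """Permissible residual unbalance in g.mm per ISO 1940:
    U = 9549 * G * m / n, with G in mm/s, m in kg, n in RPM."""
    return 9549 * g_grade * rotor_mass_kg / rpm

# G 6.3 fan rotor, 50 kg, running at 1,480 RPM -> ~2,032 g.mm total
u = permissible_unbalance_gmm(6.3, 50, 1480)
```

Note the inverse dependence on speed: the same rotor balanced for a higher operating RPM gets a tighter (smaller) permissible unbalance.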
| Level | Definition | Example |
|---|---|---|
| Physical | The material or mechanical event that caused the failure | Rolling element bearing failed from subsurface fatigue spalling |
| Human | The human decision or omission that enabled the physical cause | Bearing over-greased — excess pressure forced grease past seals |
| Latent/Systemic | The organizational weakness that enabled the human cause | No written greasing procedure exists; no quantity/interval specification; no competency verification for lubrication tasks |
See Pillar 3 for full TPM 8-pillar detail. Domain 4 exam focus areas:
| β Value | Failure Pattern | Failure Rate | Maintenance Strategy |
|---|---|---|---|
| β < 1 | Infant mortality | Decreasing | Improve installation quality & commissioning. Time-based PM makes it WORSE. |
| β = 1 | Random (exponential) | Constant | CBM or RTF. Time-based replacement does NOT reduce failure frequency. |
| 1 < β < 3 | Early wear-out onset | Slowly increasing | CBM preferred; time-based at wide interval acceptable |
| β ≈ 3.44 | Normal distribution approx | Symmetric around mean | Time-based PM; set at 70–80% of η |
| β > 3 | Wear-out | Rapidly increasing | Time-based PM IS effective. Set interval at 70–80% of η (characteristic life). |
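The reliability function behind the table is R(t) = exp(−(t/η)^β). A minimal sketch, including the B-life (the age by which a given fraction of the population has failed); names are illustrative:

```python
from math import exp, log

def weibull_reliability(t: float, beta: float, eta: float) -> float:
    """Two-parameter Weibull reliability: R(t) = exp(-(t/eta)^beta)."""
    return exp(-((t / eta) ** beta))

def b_life(beta: float, eta: float, fraction_failed: float = 0.10) -> float:
    """Age by which `fraction_failed` of units have failed (B10 by default)."""
    return eta * (-log(1 - fraction_failed)) ** (1 / beta)

# Wear-out component (beta = 4, eta = 10,000 h): survival at the
# 70%-of-eta PM interval suggested in the table is ~79%
r = weibull_reliability(7000, beta=4, eta=10000)
```

By definition R(η) = exp(−1) ≈ 36.8% for any β, which is why PM intervals are set well short of η for wear-out modes.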
| Method | How It Works | Best For | Key Limitation |
|---|---|---|---|
| VT (Visual) | Direct or aided visual examination | Surface defects; first step in any inspection | Surface only; requires access |
| PT (Liquid Penetrant) | Dye drawn into surface-breaking cracks by capillary action | Surface-breaking defects in any non-porous material | Surface-breaking only; rough surfaces reduce sensitivity |
| MT (Magnetic Particle) | Magnetic flux leakage at defects attracts particles | Surface & near-surface defects | Ferromagnetic materials only (carbon/alloy steel) |
| UT (Ultrasonic) | Sound waves reflect from internal defects | Internal defects, wall thickness measurement | Requires couplant; operator skill critical |
| PAUT | Multi-element UT with electronic steering; produces cross-section image | Complex geometry welds, code-compliant inspection | Higher cost; specialized operator |
| RT (Radiography) | X-ray/gamma ray differential absorption | Internal defects; permanent record; complex geometry | Radiation safety required; two-sided access needed |
| ET (Eddy Current) | Induced currents; defects disturb current flow | Heat exchanger tube inspection (fast, no contact); surface cracks | Conductive materials only; limited depth |
| AE (Acoustic Emission) | Passive — detects stress waves emitted by active defect growth | Monitoring pressure vessels during pressurization; active cracking | Noise discrimination; multi-sensor required for location |
| Failure Mode | Root Cause | Detection | Prevention |
|---|---|---|---|
| Bearing failure | Over/under lubrication, contamination, VFD fluting, misalignment | Vibration, ultrasound, temperature | Lube program, ultrasound-guided greasing, shaft grounding rings |
| Stator winding failure | Thermal cycling, voltage spikes, partial discharge, over-temperature | Surge test, partial discharge, Megger/PI trending | Thermal monitoring, VFD voltage spike filters |
| Rotor bar failure | Thermal cycling fatigue, casting defects, overloading | MCSA (sidebands at (1 ± 2s) × line frequency) | Avoid frequent starts; VFD soft-starting |
| VFD bearing fluting | High-freq common-mode voltage → capacitive coupling → discharge current → raceway craters | Visual (washboard pattern), vibration, MCSA | Shaft grounding rings; insulated bearings on NDE |
| Overheating | Overloading, blocked ventilation, high ambient, frequent starts, voltage unbalance | Thermal monitoring, current monitoring, thermography | Proper motor sizing, clear ventilation, voltage balancing |
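The MCSA sideband expression is easy to evaluate once slip is known. A minimal sketch; the motor values are hypothetical:

```python
def slip(sync_rpm: float, actual_rpm: float) -> float:
    """Per-unit slip of an induction motor."""
    return (sync_rpm - actual_rpm) / sync_rpm

def rotor_bar_sidebands(line_freq_hz: float, sync_rpm: float,
                        actual_rpm: float):
    """MCSA rotor-bar sidebands at (1 +/- 2s) x line frequency."""
    s = slip(sync_rpm, actual_rpm)
    return (1 - 2 * s) * line_freq_hz, (1 + 2 * s) * line_freq_hz

# 4-pole motor on 60 Hz (sync 1,800 RPM) running at 1,764 RPM -> s = 0.02,
# so rotor-bar sidebands appear at 57.6 Hz and 62.4 Hz around line frequency
lo, hi = rotor_bar_sidebands(60, 1800, 1764)
```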
| Test | What It Measures | Pass Criteria |
|---|---|---|
| Megger (IR Test) | Winding-to-ground insulation resistance | Varies by voltage class; trending over time is more valuable than a single reading |
| Polarization Index (PI) | IR(10 min) / IR(1 min) | >2.0 = good; <1.0 = contaminated/damaged |
| Surge/Impulse Test | Turn-to-turn insulation | Detects inter-turn shorts other tests miss |
| Hi-Pot | Dielectric withstand (proof test) | Used for acceptance testing; destructive if overused |
| Phase | Timeline | Key Activities |
|---|---|---|
| Scope Development | 12–18 months prior | Inspection list, regulatory requirements, deferred maintenance, process improvements. Freeze scope 3 months before execution. |
| Engineering & Procurement | 6–12 months | Detailed work packages, long-lead materials, contractor bids |
| Detailed Planning | 3–6 months | Job plans, resource leveling, critical path, permit coordination |
| Execution | Shutdown period | Daily progress meetings, critical path monitoring, MOC for scope additions |
| Closeout | Post-shutdown | Cost vs. budget, schedule vs. plan, lessons learned |
Six calculators covering the most commonly used reliability and maintenance formulas. Every result is benchmarked against SMRP world-class targets.
Found an error? Have a recommendation? Want to suggest a practice question? Every message goes directly to Chad.
Messages go directly to [email protected] · Usually responded to within 48 hours