Methodology
Proofd scores each of your tasks across five research-backed dimensions, weighted by how much of your time each task takes. The result is a personalized risk profile — not a generic job-title prediction.
Proofd decomposes a job into its constituent tasks using O*NET occupational data, then scores each task from 0–100 across five dimensions: repeatability, data dependency, current AI capability, human interaction depth, and accountability tolerance. Tasks are weighted by time allocation to produce a composite risk score. Every report is reviewed and refined by a human career analyst before delivery.
The methodology draws on Frey & Osborne's (2017) automation probability framework, O*NET task-level occupational data, METR autonomous capability benchmarks, and the World Economic Forum's Future of Jobs Report.
Repeatability
Tasks with consistent, predictable steps are easier for AI to learn and automate.
We measure how structured and rule-based each task is. A task like "generate monthly expense reports from standardized templates" scores high; a task like "negotiate partnership terms with new vendors" scores low. Based on O*NET task descriptors and structured work activity classifications.
Data Dependency
Tasks that primarily process structured data are more automatable than those requiring tacit knowledge.
We evaluate whether the task's inputs and outputs are primarily digital and structured (spreadsheets, databases, text documents) versus requiring physical presence, sensory judgment, or tacit knowledge that isn't captured in data. Informed by Brynjolfsson & Mitchell's (2017) suitability for machine learning rubric.
Current AI Capability
We benchmark each task against what today's AI systems can actually do — not what headlines claim.
Using METR's autonomous capability benchmarks and documented enterprise AI deployments, we assess whether current AI models can perform the task at production quality. METR tracks AI's ability to work independently — with autonomous work duration doubling every 4–7 months. We map your tasks against this trajectory.
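To make the trajectory mapping concrete, here is a hedged sketch of the extrapolation implied by a fixed doubling period. The function, its name, and the input numbers are illustrative assumptions, not METR's methodology or Proofd's implementation.

```python
import math

def months_until_capable(current_hours, task_hours, doubling_months):
    """Estimate months until AI autonomous work duration reaches a task's
    length, assuming duration doubles every `doubling_months` months.
    A naive exponential extrapolation, for illustration only."""
    if task_hours <= current_hours:
        return 0.0  # already within reach
    doublings = math.log2(task_hours / current_hours)
    return doublings * doubling_months

# Hypothetical: an 8-hour task, starting from 1 hour of reliable autonomy.
print(months_until_capable(1, 8, 4))  # -> 12.0 (fast 4-month doubling)
print(months_until_capable(1, 8, 7))  # -> 21.0 (slower 7-month doubling)
```

The two calls bracket the 4–7 month doubling range cited above: the same task moves from out of reach to feasible in roughly one to two years depending on which end of the range holds.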
Human Interaction Depth
Tasks requiring trust, empathy, persuasion, or physical presence remain harder for AI to absorb.
Some tasks are technically possible for AI but impractical to automate because they depend on human relationships. Mediating team conflicts, building client trust, leading sensitive conversations — these score low on displacement risk even if AI could technically generate the words. Draws on the World Economic Forum's Future of Jobs Report categories for human-centric skills.
Accountability Tolerance
High-stakes decisions where errors carry legal, financial, or safety consequences resist full automation.
When a task involves signing off on medical diagnoses, approving loan applications, or certifying engineering specifications, organizations and regulators require a human in the loop — regardless of AI accuracy. We score how much institutional and regulatory friction protects each task from full delegation to AI. Informed by emerging AI governance frameworks and liability precedents.
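Taken together, the five dimension scores feed the time-weighted composite described in the methodology. The sketch below is illustrative only: the function name, the equal-weight average across dimensions, and the assumption that every dimension is oriented so that higher means more automatable are our simplifications, not Proofd's published model.

```python
def composite_risk(tasks):
    """Time-weighted composite of per-task risk scores.

    Each task carries five 0-100 dimension scores (repeatability, data
    dependency, AI capability, interaction depth, accountability
    tolerance), assumed here to be oriented so higher = more automatable,
    plus a weekly time allocation used as its weight.
    """
    total_hours = sum(t["hours_per_week"] for t in tasks)
    composite = 0.0
    for t in tasks:
        task_risk = sum(t["scores"]) / len(t["scores"])  # equal-weight mean
        composite += task_risk * (t["hours_per_week"] / total_hours)
    return round(composite, 1)

# Two hypothetical tasks: templated reporting vs. vendor negotiation.
tasks = [
    {"hours_per_week": 20, "scores": [90, 85, 70, 80, 75]},
    {"hours_per_week": 10, "scores": [20, 30, 25, 10, 15]},
]
print(composite_risk(tasks))  # -> 60.0
```

Because tasks are weighted by hours, the high-risk reporting work (two-thirds of the week) dominates the composite even though the negotiation task scores very low.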
The Human Element
Every Proofd report is reviewed and refined by a human career analyst before it reaches you. AI is extraordinarily good at processing occupational data, cross-referencing capability benchmarks, and identifying patterns across thousands of task descriptions. But it can miss nuance.
Your role might have a unique combination of responsibilities that doesn't map neatly to standard occupational categories. Your industry might have regulatory dynamics that change the automation calculus. Your specific company context might make certain tasks more or less vulnerable than the baseline suggests.
That's where the human analyst comes in. They review the AI-generated analysis, validate the task decomposition against your input, adjust scores where context demands it, and ensure the 24-month action plan is practical — not just statistically plausible.
The result is a report that combines the breadth and speed of AI analysis with the judgment and contextual understanding of an experienced career professional. It's not pure AI output, and it's not a $500/hour consultant guessing. It's both, working together.
Research Sources
Frey & Osborne (2017), "The Future of Employment: How Susceptible Are Jobs to Computerisation?"
The foundational study that estimated automation probabilities for 702 occupations. We use their task-level decomposition framework — not just the headline occupation-level numbers — to understand which aspects of work are automatable.
O*NET (U.S. Department of Labor)
The most comprehensive public database of occupational tasks, skills, and work activities. We map your described responsibilities to O*NET task descriptors to ensure consistent, research-grounded task decomposition across all reports.
METR (Model Evaluation & Threat Research)
METR tracks how long AI systems can work independently on real-world tasks. Their benchmarks show autonomous work duration doubling every 4–7 months — the most concrete measure of how quickly AI capability is advancing at the task level.
Future of Jobs Report, WEF (2023 and 2025 editions)
Surveys of over 800 companies across 27 industries on which skills and roles they expect to grow, decline, or transform due to AI and automation. We use their taxonomy of human-centric skills that resist automation to inform our human interaction depth scoring.