Fact Checker

Author: Harold Mansfield
Version: 1.0.0
Platform: Claude Cowork
License: MIT

Fact Checker is a systematic, evidence-grounded fact-checking skill based on professional verification protocols. It is designed for journalists, researchers, and anyone who needs to verify claims with rigor, transparency, and cited sources.

Requirements

  • This skill requires active web browsing, which means enabling the Claude in Chrome extension.
  • Go to Settings → Claude in Chrome and choose "Allow extension" from the drop-down.

Installation

  1. Go to Claude.ai (or Cowork, same settings)
  2. Click your profile/avatar → Settings
  3. Navigate to Capabilities β†’ Skills
  4. Click "Upload skill"
  5. Select the fact-checker.zip file as-is; there is no need to unzip it

Claude handles the rest. It will unpack the archive, read the frontmatter, and the skill will appear in your skills list, ready to toggle on.
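The frontmatter Claude reads is the YAML header at the top of the skill's SKILL.md. As an illustration only, it looks something like this (field values here are placeholders; the actual file inside fact-checker.zip may differ):

```yaml
---
name: fact-checker
description: Systematic, evidence-grounded fact-checking using the SIFT
  method, with a cited source for every verdict.
---
```

The name and description are what let Claude decide when the skill applies, which is why no manual configuration is needed after upload.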

One thing to confirm before testing: make sure the Google Chrome / web browsing toggle is also enabled in your Cowork settings, since the skill can't do lateral source verification without it.

Usage

You paste in a claim, a quote, a social post, or a full article. Before it does anything, the skill asks you two questions:

  1. Do you want a full breakdown of every verifiable claim or are there specific ones you want to focus on?
  2. And are there any parts you already suspect are wrong?

That scope-setting step matters. A 1,000-word article might contain 15 verifiable claims. Some of them are throwaways. Some of them are the whole point. You get to decide where to spend the time. Then it goes to work.

The skill uses a structured verification method called SIFT

Before searching for evidence on any individual claim, the skill applies a four-step framework called SIFT, developed by digital literacy researchers as a disciplined approach to evaluating online information.

👉 Stop. Is the content emotionally charged, designed to provoke outrage, fear, or urgency? That's a red flag that warrants extra scrutiny, not faster sharing.

👉 Investigate the Source. Who published this? What's their track record, their funding, their potential biases? This happens *before* reading the content deeply, because a source's credibility shapes how much corroboration each claim needs.

👉 Find Better Coverage. The skill doesn't trust a single source. It searches for what other outlets, ideally more authoritative ones, say about the same claim. If only one outlet is reporting something significant, that's noted.

👉 Trace Claims and Media. Quotes, statistics, images, and video references get traced back to their original context. A quote can be real and still be misleading if it's stripped of the context that changes its meaning.
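The four steps above can be sketched as a simple checklist per claim. This is an illustrative model only, not the skill's actual implementation; every name and the sample claim below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SiftCheck:
    """Hypothetical record of the four SIFT steps for one claim."""
    claim: str
    emotionally_charged: bool = False   # Stop
    source_notes: str = ""              # Investigate the Source
    corroborating_outlets: int = 0      # Find Better Coverage
    traced_to_origin: bool = False      # Trace Claims and Media

    def needs_extra_scrutiny(self) -> bool:
        # Emotional charge or single-outlet reporting raises the bar.
        return self.emotionally_charged or self.corroborating_outlets <= 1

# Placeholder example, not a real claim or source:
check = SiftCheck(
    claim="City X cut crime by 40% in one year",
    emotionally_charged=True,
    source_notes="Advocacy blog, no track record",
    corroborating_outlets=1,
)
print(check.needs_extra_scrutiny())  # True: charged, and only one outlet
```

The point of the structure is that the Stop and Investigate steps run before any deep reading, so a weak source automatically demands more corroboration in the later steps.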

Every claim gets a verdict:

✅ True
🟡 Mostly True: accurate but needs a correction or update
❓ Unverifiable: not enough evidence to call it either way
❌ False

Each verdict comes with the publisher name, a brief supporting quote, and a direct link. Not a vague "sources say" but actual citations you can click. Then it flags what needs to be fixed.
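A single verdict entry, then, might look like this. Everything here is a made-up placeholder (claim, publisher, quote, and URL), shown only to illustrate the format:

```
Claim:   "City X cut crime by 40% in one year."
Verdict: 🟡 Mostly True
Source:  Example Statistics Bureau ("violent crime fell 40%; property
         crime rose over the same period")
Link:    https://example.org/city-x-crime-report
Fix:     Specify that the drop applies to violent crime only.
```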

A few things it won’t do:

  • It won't fact-check opinions. "This policy is bad" isn't a verifiable claim.
  • It won't assign a False verdict just because it couldn't find something; that gets labeled Unverifiable, which is its own important finding.
  • And it's designed to apply the same standard regardless of who or what is being fact-checked.

About Me

Harold Mansfield | CSAP
AI Support Strategist
Sec+ | CySA+ | NIST 800-37

I help small business owners and teams turn AI confusion into AI solutions.

My background spans 15 years in IT support, infrastructure, cybersecurity, and systems administration for SMBs and corporate teams. That foundation shapes everything I do, from security-aware design to a practical understanding of what actually works in the real world versus what just sounds good in a demo.

I also build things. Most recently I conceived, built, and shipped Samaritan, a 12-agent autonomous OSINT investigation platform built on OpenClaw and running on ParrotOS. It includes automated case management, a curated library of nearly 2,000 intelligence sources, structured evidence pipelines, and upgrade safety automation.

When I show you how agentic AI works, I am showing you something I built myself.

That is the difference. Everything I teach comes from something I have actually built, broken, fixed, and shipped. If you are ready to stop guessing and start getting real results from AI, let's talk.

SMB Consultants

1-313-230-4489

Monday–Friday, 9 AM–5 PM EST