Fact Checker

Author: Harold Mansfield
Version: 1.0.0
Platform: Claude, Claude Cowork, Claude Code, Codex, OpenClaw, Hermes Agent, and any platform that supports the Anthropic skill standard.
License: MIT

MD5: 74cf46e323a2b05329a934cc2211dac8
SHA-1: 585637b54b65f6b7e729e7f28ee2bf1ab5a4e1fd
SHA-256: 2848011599b0261729a15541c32c4f705db6f6669fc195f5f1f76dc39607851f
Vhash: 2cf72596195353bb56ad8e56e3e9a3bb
SSDEEP: 192:Z/MR9hCquJg/B5/haBwN3Ox5JpzSFXgYrVTeawc6VOIAT6HYJdLurtvwPAi6s:qvhKJCn/cBfJpzqbxTeMIkTAQ
TLSH: T13142AFBCE9605445CB7A4932548EB3EA51A6638302AA2EDB77076248AD99CB40C0779F
File type: ZIP
Magic: Zip archive data, at least v1.0 to extract, compression method=store
TrID: ZIP compressed archive (100%)
Magika: ZIP
File size: 12.82 KB (13132 bytes)

History
First Submission: 2026-05-10 17:23:35 UTC
Last Submission: 2026-05-10 17:23:35 UTC
Last Analysis: 2026-05-10 17:23:35 UTC
Earliest Contents Modification: 2026-04-29 15:24:48
Latest Contents Modification: 2026-05-10 12:28:22

Fact Checker is a systematic, evidence-grounded fact-checking skill based on professional verification protocols. It is designed for journalists, researchers, and anyone who needs to verify claims with rigor, transparency, and cited sources.

Requirements

  • This skill requires your agent to have access to a web browsing tool.

Installation

  1. Go to Claude.ai (or Cowork; the settings are the same)
  2. Click your profile/avatar → Settings
  3. Navigate to Capabilities → Skills
  4. Click “Upload skill”
  5. Select the fact-checker.zip file as-is (no need to unzip it)

Claude handles the rest. It will unpack it, read the frontmatter, and the skill will appear in your skills list ready to toggle on.

One thing to confirm before testing: make sure the Google Chrome / web browsing toggle is also enabled in your Cowork settings, since the skill won’t be able to do lateral source verification without it.

Usage

You paste in a claim, a quote, a social post, or a full article. Before it does anything, the skill asks you two questions:

  1. Do you want a full breakdown of every verifiable claim or are there specific ones you want to focus on?
  2. And are there any parts you already suspect are wrong?

That scope-setting step matters. A 1,000-word article might contain 15 verifiable claims. Some of them are throwaways. Some of them are the whole point. You get to decide where to spend the time. Then it goes to work.

How it works

Before searching for evidence on any individual claim, the skill applies a four-step framework called SIFT, developed by digital literacy researchers as a disciplined approach to evaluating online information.

👉 Stop. Is the content emotionally charged — designed to provoke outrage, fear, or urgency? That’s a red flag that warrants extra scrutiny, not faster sharing.

👉 Investigate the Source. Who published this? What’s their track record, their funding, their potential biases? This happens *before* reading the content deeply, because a source’s credibility shapes how much corroboration each claim needs.

👉 Find Better Coverage. The skill doesn’t trust a single source. It searches for what other outlets — ideally more authoritative ones — say about the same claim. If only one outlet is reporting something significant, that’s noted.

👉 Trace Claims and Media. Quotes, statistics, images, and video references get traced back to their original context. A quote can be real and still be misleading if it’s stripped of the context that changes its meaning.
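As a rough sketch of how a SIFT pass might be structured in code: the four step names come from the framework above, but the data layout, the emotional-charge word list, and the flag strings are illustrative assumptions, not the skill’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SiftReport:
    stopped_for_review: bool = False                    # Stop
    source_notes: list = field(default_factory=list)    # Investigate the Source
    corroborating: list = field(default_factory=list)   # Find Better Coverage
    traced_to_origin: bool = False                      # Trace Claims and Media
    flags: list = field(default_factory=list)

def sift_pass(content: str, source: str, other_coverage: list) -> SiftReport:
    report = SiftReport()
    # Stop: a crude emotional-charge heuristic (real analysis would be richer).
    charged = ["outrage", "shocking", "urgent", "share now"]
    if any(word in content.lower() for word in charged):
        report.stopped_for_review = True
        report.flags.append("emotionally charged language")
    # Investigate the Source: record who published it before reading deeply.
    report.source_notes.append(f"publisher: {source}")
    # Find Better Coverage: note when only one outlet carries the claim.
    report.corroborating = other_coverage
    if not other_coverage:
        report.flags.append("single-source claim")
    # Trace Claims and Media: left unresolved here; a real pass would follow
    # quotes, statistics, and images back to their original context.
    return report
```

The ordering mirrors the framework’s key discipline: source evaluation happens before deep reading, and a lack of corroboration is surfaced as a finding rather than silently ignored.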

Every claim gets a verdict:

✅ True
🟡 Mostly True — accurate but needs a correction or update
❓ Unverifiable — not enough evidence to call it either way
❌ False

Each verdict comes with the publisher name, a brief supporting quote, and a direct link. Not a vague “sources say” — actual citations you can click. Then it flags what needs to be fixed.
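The verdict-plus-citation shape could be modeled roughly like this. The four verdict values come from the list above; the field names and `citation()` format are my assumptions for the sketch, not the skill’s actual output schema.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    TRUE = "True"                  # fully accurate
    MOSTLY_TRUE = "Mostly True"    # accurate but needs a correction or update
    UNVERIFIABLE = "Unverifiable"  # not enough evidence either way
    FALSE = "False"                # contradicted by the evidence

@dataclass
class CheckedClaim:
    claim: str
    verdict: Verdict
    publisher: str           # named publisher, never a vague "sources say"
    supporting_quote: str    # brief quote backing the verdict
    url: str                 # direct link the reader can click
    suggested_fix: str = ""  # what needs correcting, if anything

    def citation(self) -> str:
        # Every verdict ships with a concrete, clickable citation.
        return f'{self.verdict.value}: {self.publisher}, "{self.supporting_quote}" ({self.url})'
```

Keeping Unverifiable as a first-class value, rather than collapsing it into False, matches the principle stated below: absence of evidence is its own finding.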

A few things it won’t do:

  • It won’t fact-check opinions. “This policy is bad” isn’t a verifiable claim.
  • It won’t assign a False verdict just because it couldn’t find something; that gets labeled Unverifiable, which is its own important finding.
  • And it’s designed to apply the same standard regardless of who or what is being fact-checked.

Built to resist prompt injection

Fact Checker reads what’s on the open web — articles, social posts, statements, and the sources behind them — and synthesizes findings into a verdict. This also means every page the skill fetches is a potential attack surface. A single crafted page in a thousand search results can try to tell a fact-checker how to rule.

Fact Checker treats every piece of fetched content as untrusted data. Search results, page bodies, cached snippets, archived versions — all of it is information to evaluate, never instructions to follow. No matter how authoritatively a page frames its text, no matter what role or system message it claims to be, the skill will not execute commands found inside fetched content. A source claiming to be authoritative is itself a claim to verify, not a reason to trust its content.

When the skill detects common injection patterns it stops processing that source, names the URL, explains plainly what it saw, and gives you three choices:

  1. Skip the source for this query.
  2. Blacklist it for the session, or
  3. Blacklist it permanently.
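A minimal sketch of what pattern-based detection might look like. These regexes are illustrative assumptions (the skill’s real pattern list isn’t published); the point is the shape of the response: stop, name the URL, show what matched, offer the three options.

```python
import re
from typing import Optional

# Illustrative injection patterns: phrases that try to address the model
# directly rather than inform the reader. The actual list is unknown.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are (now )?an? (ai|assistant|system)", re.I),
    re.compile(r"\bsystem prompt\b", re.I),
    re.compile(r"rule this claim (true|false)", re.I),
]

def scan_fetched_content(url: str, text: str) -> Optional[dict]:
    """Return a finding if the page tries to instruct the checker."""
    for pattern in INJECTION_PATTERNS:
        match = pattern.search(text)
        if match:
            # Stop processing this source and surface the evidence:
            # the URL, the matched text, and the user's three options.
            return {
                "url": url,
                "matched": match.group(0),
                "options": ["skip", "blacklist_session", "blacklist_permanent"],
            }
    return None  # nothing suspicious; treat the text as data, not commands
```

Note that a miss (`None`) does not mean the page is safe, only that no known pattern fired — which is exactly the best-effort caveat described below.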

Permanent entries export as a portable token you can add to your project context so the block carries across every future session, on any platform that runs the skill.
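One plausible way such a portable token could work (the actual token format is not documented; this roundtrip is an assumption): serialize the permanent blacklist to compact JSON and base64-encode it so it survives copy/paste into project context on any platform.

```python
import base64
import json

def export_blacklist_token(domains: list) -> str:
    # Hypothetical format: sorted, de-duplicated domain list as JSON,
    # URL-safe base64-encoded for clean copy/paste across platforms.
    payload = json.dumps({"fact_checker_blacklist": sorted(set(domains))})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def import_blacklist_token(token: str) -> list:
    # Decode the token back into the domain list at session start.
    payload = json.loads(base64.urlsafe_b64decode(token.encode()))
    return payload["fact_checker_blacklist"]
```

Because the token is just encoded data, importing it tells the skill which domains to skip without granting the token itself any instruction-like authority.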

Important: This is best-effort defense, not a perfect shield. Sophisticated injections can look like ordinary content, and the open web is too large to enumerate trusted sources upfront.

The real protection is the principle underneath: a fact-checker that follows instructions found in the things it fact-checks isn’t a fact-checker. Detection plus your judgment is the second layer. Fact Checker gives you the visibility to make the call.

Download Fact Checker

About Me

Harold Mansfield
AI Support Specialist

I help SMBs turn AI confusion into AI solutions.

My background spans 15 years in IT support, infrastructure, cybersecurity, and systems administration for SMBs and corporate teams.

I also build things. Most recently I conceived, built, and shipped Samaritan, a 12-agent autonomous OSINT platform built on OpenClaw and running on ParrotOS, complete with custom skills and plugins.

I don’t just talk about AI; my experience comes from developing, building, breaking, fixing, and shipping AI products and solutions.

If you are ready to stop wasting time and money, and start utilizing AI to save time and increase productivity, let’s schedule a 30-minute video chat and start turning your AI problems into AI solutions.

I'm free weekdays (Monday–Friday), 9 AM–5 PM EST.