Research Integrity Screening MCP Server

Pricing

from $350.00 / 1,000 full integrity reports

Academic integrity MCP server wrapping 7 actors: researcher screening, paper mill detection, citation anomaly analysis, journal quality assessment, and grant-publication auditing. Produces a 0-100 Integrity Score. Pay-per-event pricing.


Rating: 0.0 (0 reviews)

Developer: ryan clinton (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified a day ago

Academic fraud detection and publication quality intelligence for research integrity officers, funding agencies, and journal editors. This MCP server orchestrates 7 data sources across OpenAlex, ORCID, PubMed, Semantic Scholar, Crossref, CORE, and NIH Grants to screen researchers, detect paper mill indicators, assess journal quality, identify citation manipulation via Benford's law analysis, and audit grant-research linkages. Produces Researcher Integrity Scores (0-100) with CLEAR/MINOR/INVESTIGATION/HIGH_RISK verdicts.

What data can you access?

| Data Point | Source |
| --- | --- |
| Academic publications, citations, and open access status | OpenAlex |
| Researcher profiles, affiliations, and employment history | ORCID |
| Biomedical literature and MeSH indexing | PubMed |
| AI-powered citation analysis and paper embeddings | Semantic Scholar |
| Publication metadata and DOI resolution | Crossref |
| Open access full-text repositories | CORE |
| Federal research grant awards and funding data | NIH Grants |

MCP Tools

| Tool | Price | Description |
| --- | --- | --- |
| screen_researcher_integrity | $1.50 | Full integrity screening: retractions, citation anomalies (Benford's law), publication velocity, ORCID verification |
| check_publication_flags | $1.50 | Paper mill detection: template patterns, journal concentration, suspicious citation rings |
| assess_journal_quality | $1.50 | Journal quality assessment: citation impact, open access ratio, source diversity, predatory indicators |
| detect_citation_anomalies | $1.50 | Benford's law analysis of citation distributions to detect statistical manipulation |
| audit_grant_research_link | $1.50 | Audit NIH grant-to-publication linkage: paper-to-grant ratio, funding risk assessment |
| compare_institutional_integrity | $1.50 | Side-by-side integrity and quality comparison between two institutions |
| generate_integrity_report | $3.00 | Comprehensive report across all 7 sources with 4 scoring models and verdict |

Data Sources

  • OpenAlex -- Academic publication metadata, citation counts, institutional affiliations, and open access status
  • ORCID -- Researcher profiles with verified employment history, affiliations, and publication lists
  • PubMed -- Biomedical literature with MeSH terms, abstracts, and journal indexing
  • Semantic Scholar -- AI-powered citation analysis, paper influence scores, and semantic similarity
  • Crossref -- DOI metadata, publication dates, journal information, and reference lists
  • CORE -- Open access full-text repository with document similarity analysis
  • NIH Research Grants -- Federal grant awards, principal investigators, institutions, and funding amounts

How the scoring works

The MCP produces four scoring dimensions that combine into a Researcher Integrity Score (0-100):

Researcher Integrity Score combines retraction rate, citation anomalies detected via Benford's law, publication velocity red flags (unusually high output), and co-author network analysis. ORCID verification status provides additional confidence.

Paper Mill Indicator detects patterns consistent with paper mill output: template document structures, suspicious citation rings (groups of papers that exclusively cite each other), concentrated journal submissions, and author ordering anomalies.

Journal Quality Assessment evaluates predatory journal risk from citation patterns, editorial practice indicators, open access ratio, and source diversity. Low-quality journal concentration in a researcher's output raises integrity flags.

Funding Risk Linkage maps NIH grants to publications to assess whether grant-funded research resulted in flagged publications. A high ratio of flagged papers to grant dollars indicates funding agency exposure.

| Score Range | Verdict | Interpretation |
| --- | --- | --- |
| 0-25 | CLEAR | No integrity flags detected |
| 26-50 | MINOR | Minor anomalies, low concern |
| 51-75 | INVESTIGATION | Significant anomalies warrant review |
| 76-100 | HIGH_RISK | Multiple integrity flags, urgent review needed |
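The score-to-verdict mapping in the table is plain threshold banding. A minimal sketch (the `verdict` function name is illustrative, not part of the server's API):

```python
def verdict(score: int) -> str:
    """Map a 0-100 Researcher Integrity Score to its verdict band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 25:
        return "CLEAR"
    if score <= 50:
        return "MINOR"
    if score <= 75:
        return "INVESTIGATION"
    return "HIGH_RISK"
```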

How to connect this MCP server

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "research-integrity-screening": {
      "url": "https://research-integrity-screening-mcp.apify.actor/mcp"
    }
  }
}

Programmatic (HTTP)

curl -X POST https://research-integrity-screening-mcp.apify.actor/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"screen_researcher_integrity","arguments":{"researcher":"John Smith Harvard"}},"id":1}'

This MCP also works with Cursor, Windsurf, Cline, and any other MCP-compatible client.

Use cases for research integrity intelligence

Pre-Award Grant Screening

Screen principal investigator publication records before awarding federal research grants. Identify researchers with citation anomalies or paper mill indicators that could indicate integrity risk.

Journal Submission Screening

Evaluate submitted manuscripts for paper mill indicators including template structures, suspicious citation patterns, and author affiliation anomalies before peer review.

Institutional Research Integrity Auditing

Benchmark institutional research integrity across departments by comparing publication quality, citation patterns, and funding linkages. Identify systemic integrity issues at the organizational level.

Faculty Hiring Due Diligence

Screen prospective faculty candidates for publication integrity flags. Verify ORCID profiles, assess citation distribution health, and check for retraction history.

Funding Agency Portfolio Review

Audit the relationship between grant funding and publication quality across a portfolio. Identify grants associated with flagged publications for further investigation.

Cross-Institutional Comparison

Compare research integrity metrics between two institutions to inform collaboration decisions, hiring benchmarks, or accreditation reviews.

How much does it cost?

This MCP uses pay-per-event pricing. You are only charged when a tool is called.

The Apify Free plan includes $5 of monthly platform credits, which covers one full integrity report (or three individual $1.50 tool calls).

| Example Use | Approximate Cost |
| --- | --- |
| Screen a researcher's integrity | $1.50 |
| Detect citation anomalies for a lab | $1.50 |
| Full integrity report (all 7 sources) | $3.00 |
| Screen 10 grant applicants | ~$15.00 |
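Batch costs follow directly from the per-tool prices in the tools table above. A minimal estimator sketch (the `estimate_cost` helper is illustrative; actual billing is handled by Apify's pay-per-event system):

```python
# Per-call prices (USD) from the MCP Tools pricing table
TOOL_PRICES = {
    "screen_researcher_integrity": 1.50,
    "check_publication_flags": 1.50,
    "assess_journal_quality": 1.50,
    "detect_citation_anomalies": 1.50,
    "audit_grant_research_link": 1.50,
    "compare_institutional_integrity": 1.50,
    "generate_integrity_report": 3.00,
}

def estimate_cost(calls: dict) -> float:
    """Estimate batch cost, e.g. {"screen_researcher_integrity": 10} -> 15.0."""
    return sum(TOOL_PRICES[tool] * count for tool, count in calls.items())
```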

How it works

  1. You provide a researcher name, institution, or paper topic
  2. The MCP runs up to 7 Apify actors in parallel querying OpenAlex, ORCID, PubMed, Semantic Scholar, Crossref, CORE, and NIH Grants
  3. Scoring algorithms analyze the combined data -- Benford's law is applied to citation distributions, paper mill template detection runs against publication metadata, journal quality is assessed from impact metrics, and grant-publication linkages are audited
  4. Structured JSON is returned with the Integrity Score, verdict, per-model scores, and flagged items

The Benford's law analysis compares the leading digit distribution of citation counts against the expected logarithmic distribution. Significant deviation indicates potential citation manipulation.
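The leading-digit comparison can be sketched as a chi-square goodness-of-fit test against the Benford distribution P(d) = log10(1 + 1/d). This is a standalone illustration of the technique, not the server's internal code; the 15.51 threshold is the standard 0.05 critical value for 8 degrees of freedom.

```python
import math
from collections import Counter

# Expected leading-digit frequencies under Benford's law
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(n: int) -> int:
    return int(str(abs(int(n)))[0])

def benford_chi_square(citation_counts) -> float:
    """Chi-square statistic of observed leading digits vs. Benford's law."""
    counts = [c for c in citation_counts if c >= 1]  # zero has no leading digit
    observed = Counter(leading_digit(c) for c in counts)
    n = len(counts)
    return sum(
        (observed.get(d, 0) - n * p) ** 2 / (n * p)
        for d, p in BENFORD.items()
    )

def deviates_from_benford(citation_counts, critical: float = 15.51) -> bool:
    """True if the deviation is significant (df=8, alpha=0.05)."""
    return benford_chi_square(citation_counts) > critical
```

A uniform leading-digit distribution (each digit equally likely, as crude citation padding might produce) fails this test badly, while naturally occurring citation counts pass it.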

FAQ

Q: Can this definitively identify fraud? A: No. The system identifies statistical anomalies and red flags that warrant investigation. It is a screening tool, not a forensic fraud detection system, and abnormal patterns can have legitimate explanations.

Q: How does Benford's law apply to citations? A: Citation counts in large datasets naturally follow Benford's law (the first digit is 1 about 30% of the time). Significant deviation from this distribution can indicate artificial citation inflation or manipulation.

Q: What is a paper mill? A: Paper mills are organizations that produce fabricated or plagiarized manuscripts for sale. Their output often shows template structures, predictable citation rings, and concentrated submissions to specific journals.

Q: Does this screen for plagiarism? A: Not directly. The system detects structural patterns consistent with paper mills and citation manipulation. For plagiarism detection, dedicated text-comparison tools are more appropriate.

Q: Is it legal to use this? A: All data sources are publicly available academic databases. See Apify's guide on web scraping legality.

Q: Can I combine this with other MCPs? A: Yes. Use alongside the Higher Education Risk MCP for institutional due diligence or the Academic Commercialization Pipeline MCP for technology transfer intelligence.

| MCP Server | Description |
| --- | --- |
| ryanclinton/higher-education-risk-mcp | Academic institution due diligence and risk assessment |
| ryanclinton/academic-commercialization-pipeline-mcp | Research-to-product technology scouting |
| ryanclinton/academic-institution-talent-mcp | University research talent intelligence |

Integrations

This MCP server is built on the Apify platform and supports:

  • Apify API for programmatic access and batch screening workflows
  • Scheduled runs via Apify Scheduler for recurring integrity monitoring
  • Webhooks for triggering alerts when new integrity flags are detected
  • Integration with 200+ Apify actors for extending research data coverage