# Research Integrity Screening MCP Server

Pricing: from $350.00 / 1,000 full integrity reports
Academic integrity MCP wrapping 7 actors. Researcher screening, paper mill detection, citation anomalies, journal quality, grant-publication audit. Integrity Score 0-100. Pay-per-event.
Developer: ryan clinton
Academic fraud detection and publication quality intelligence for research integrity officers, funding agencies, and journal editors. This MCP server orchestrates seven data sources (OpenAlex, ORCID, PubMed, Semantic Scholar, Crossref, CORE, and NIH Grants) to screen researchers, detect paper mill indicators, assess journal quality, identify citation manipulation via Benford's law analysis, and audit grant-research linkages. It produces Researcher Integrity Scores (0-100) with CLEAR/MINOR/INVESTIGATION/HIGH_RISK verdicts.
## What data can you access?
| Data Point | Source |
|---|---|
| Academic publications, citations, and open access status | OpenAlex |
| Researcher profiles, affiliations, and employment history | ORCID |
| Biomedical literature and MeSH indexing | PubMed |
| AI-powered citation analysis and paper embeddings | Semantic Scholar |
| Publication metadata and DOI resolution | Crossref |
| Open access full-text repositories | CORE |
| Federal research grant awards and funding data | NIH Grants |
## MCP Tools

| Tool | Price | Description |
|---|---|---|
| screen_researcher_integrity | $1.50 | Full integrity screening: retractions, citation anomalies (Benford's law), publication velocity, ORCID verification |
| check_publication_flags | $1.50 | Paper mill detection: template patterns, journal concentration, suspicious citation rings |
| assess_journal_quality | $1.50 | Journal quality assessment: citation impact, open access ratio, source diversity, predatory indicators |
| detect_citation_anomalies | $1.50 | Benford's law analysis of citation distributions to detect statistical manipulation |
| audit_grant_research_link | $1.50 | Audit NIH grant-to-publication linkage: paper-to-grant ratio, funding risk assessment |
| compare_institutional_integrity | $1.50 | Side-by-side integrity and quality comparison between two institutions |
| generate_integrity_report | $3.00 | Comprehensive report across all 7 sources with 4 scoring models and verdict |
## Data Sources
- OpenAlex -- Academic publication metadata, citation counts, institutional affiliations, and open access status
- ORCID -- Researcher profiles with verified employment history, affiliations, and publication lists
- PubMed -- Biomedical literature with MeSH terms, abstracts, and journal indexing
- Semantic Scholar -- AI-powered citation analysis, paper influence scores, and semantic similarity
- Crossref -- DOI metadata, publication dates, journal information, and reference lists
- CORE -- Open access full-text repository with document similarity analysis
- NIH Research Grants -- Federal grant awards, principal investigators, institutions, and funding amounts
## How the scoring works
The MCP produces four scoring dimensions that combine into a Researcher Integrity Score (0-100):
**Researcher Integrity Score** combines retraction rate, citation anomalies detected via Benford's law, publication velocity red flags (unusually high output), and co-author network analysis. ORCID verification status provides additional confidence.

**Paper Mill Indicator** detects patterns consistent with paper mill output: template document structures, suspicious citation rings (groups of papers that exclusively cite each other), concentrated journal submissions, and author ordering anomalies.

**Journal Quality Assessment** evaluates predatory journal risk from citation patterns, editorial practice indicators, open access ratio, and source diversity. Low-quality journal concentration in a researcher's output raises integrity flags.

**Funding Risk Linkage** maps NIH grants to publications to assess whether grant-funded research resulted in flagged publications. A high ratio of flagged papers to grant dollars indicates funding agency exposure.
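For illustration, the four dimensions could be combined as a weighted average into the 0-100 composite. The weights below are placeholder assumptions for the sketch; the MCP's actual weighting is not documented here.

```python
def composite_integrity_score(researcher, paper_mill, journal_quality, funding_risk,
                              weights=(0.4, 0.25, 0.2, 0.15)):
    """Illustrative weighted combination of the four scoring dimensions
    (each scored 0-100). The weights are placeholder values, not the
    MCP's published weighting."""
    dims = (researcher, paper_mill, journal_quality, funding_risk)
    if any(not 0 <= d <= 100 for d in dims):
        raise ValueError("each dimension must be in 0-100")
    return round(sum(w * d for w, d in zip(weights, dims)), 1)
```

A uniform profile maps to itself (e.g. all dimensions at 50 yield a composite of 50.0), so the interesting cases are mixed profiles where one dimension dominates.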
| Score Range | Verdict | Interpretation |
|---|---|---|
| 0-25 | CLEAR | No integrity flags detected |
| 26-50 | MINOR | Minor anomalies, low concern |
| 51-75 | INVESTIGATION | Significant anomalies warrant review |
| 76-100 | HIGH_RISK | Multiple integrity flags, urgent review needed |
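The score-to-verdict bands in the table above can be expressed as a small lookup function; a minimal sketch:

```python
def verdict(score):
    """Map a 0-100 Researcher Integrity Score to its verdict band,
    per the published score-range table."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0-100")
    if score <= 25:
        return "CLEAR"
    if score <= 50:
        return "MINOR"
    if score <= 75:
        return "INVESTIGATION"
    return "HIGH_RISK"
```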
## How to connect this MCP server

### Claude Desktop

Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "research-integrity-screening": {
      "url": "https://research-integrity-screening-mcp.apify.actor/mcp"
    }
  }
}
```

### Programmatic (HTTP)

```bash
curl -X POST https://research-integrity-screening-mcp.apify.actor/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"screen_researcher_integrity","arguments":{"researcher":"John Smith Harvard"}},"id":1}'
```

This MCP also works with Cursor, Windsurf, Cline, and any other MCP-compatible client.
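For clients without an MCP library, the JSON-RPC payload can also be built by hand. This sketch constructs the same `tools/call` request as the curl example; the actual HTTP send is left as a comment (it assumes the third-party `requests` library, which is not required to build the payload).

```python
MCP_URL = "https://research-integrity-screening-mcp.apify.actor/mcp"

def build_tool_call(tool, arguments, request_id=1):
    """Construct a JSON-RPC 2.0 tools/call payload for the MCP endpoint."""
    return {
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
        "id": request_id,
    }

payload = build_tool_call("screen_researcher_integrity",
                          {"researcher": "John Smith Harvard"})
# To send (hypothetical, using the requests library):
# requests.post(MCP_URL, json=payload,
#               headers={"Authorization": "Bearer YOUR_APIFY_TOKEN"})
```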
## Use cases for research integrity intelligence

### Pre-Award Grant Screening
Screen principal investigator publication records before awarding federal research grants. Identify researchers whose citation anomalies or paper mill indicators suggest integrity risk.

### Journal Submission Screening
Evaluate submitted manuscripts for paper mill indicators, including template structures, suspicious citation patterns, and author affiliation anomalies, before peer review.

### Institutional Research Integrity Auditing
Benchmark institutional research integrity across departments by comparing publication quality, citation patterns, and funding linkages. Identify systemic integrity issues at the organizational level.

### Faculty Hiring Due Diligence
Screen prospective faculty candidates for publication integrity flags. Verify ORCID profiles, assess citation distribution health, and check for retraction history.

### Funding Agency Portfolio Review
Audit the relationship between grant funding and publication quality across a portfolio. Identify grants associated with flagged publications for further investigation.

### Cross-Institutional Comparison
Compare research integrity metrics between two institutions to inform collaboration decisions, hiring benchmarks, or accreditation reviews.
## How much does it cost?
This MCP uses pay-per-event pricing. You are only charged when a tool is called.
The Apify Free plan includes $5 of monthly platform credits, which covers three individual tool calls or one full integrity report.
| Example Use | Approximate Cost |
|---|---|
| Screen a researcher's integrity | $1.50 |
| Detect citation anomalies for a lab | $1.50 |
| Full integrity report (all 7 sources) | $3.00 |
| Screen 10 grant applicants | ~$15.00 |
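With per-call prices from the tool table, cost estimation is simple arithmetic; a small helper (prices copied from the table above):

```python
# USD per call, from the MCP Tools pricing table
TOOL_PRICES = {
    "screen_researcher_integrity": 1.50,
    "check_publication_flags": 1.50,
    "assess_journal_quality": 1.50,
    "detect_citation_anomalies": 1.50,
    "audit_grant_research_link": 1.50,
    "compare_institutional_integrity": 1.50,
    "generate_integrity_report": 3.00,
}

def estimate_cost(calls):
    """Estimate total cost in USD for a dict of {tool_name: call_count}."""
    return sum(TOOL_PRICES[tool] * n for tool, n in calls.items())
```

For example, screening 10 grant applicants (`{"screen_researcher_integrity": 10}`) comes to $15.00, matching the table above.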
## How it works

1. You provide a researcher name, institution, or paper topic.
2. The MCP runs up to 7 Apify actors in parallel, querying OpenAlex, ORCID, PubMed, Semantic Scholar, Crossref, CORE, and NIH Grants.
3. Scoring algorithms analyze the combined data: Benford's law is applied to citation distributions, paper mill template detection runs against publication metadata, journal quality is assessed from impact metrics, and grant-publication linkages are audited.
4. Structured JSON is returned with the Integrity Score, verdict, per-model scores, and flagged items.
The Benford's law analysis compares the leading digit distribution of citation counts against the expected logarithmic distribution. Significant deviation indicates potential citation manipulation.
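The leading-digit comparison can be sketched in a few lines of Python. This is an illustrative implementation, not the MCP's internal code: it tallies first significant digits of citation counts and computes a chi-square statistic against Benford's expected distribution (with 8 degrees of freedom, values above roughly 15.5 are significant at the 5% level).

```python
import math
from collections import Counter

def leading_digit(n):
    """First significant digit of a positive integer."""
    while n >= 10:
        n //= 10
    return n

def benford_chi_square(citation_counts):
    """Chi-square statistic comparing the leading-digit distribution of
    positive citation counts against Benford's law, P(d) = log10(1 + 1/d)."""
    counts = [c for c in citation_counts if c > 0]
    observed = Counter(leading_digit(c) for c in counts)
    n = len(counts)
    chi_sq = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi_sq += (observed.get(d, 0) - expected) ** 2 / expected
    return chi_sq
```

A sample whose leading digits closely follow Benford's proportions yields a statistic near zero, while heavily manipulated counts (say, many values inflated into the 800-900 range) push it far above the critical value.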
## FAQ

**Q: Can this definitively identify fraud?**
A: No. The system identifies statistical anomalies and red flags that warrant investigation. It is a screening tool, not a forensic fraud detection system. Abnormal patterns can have legitimate explanations.

**Q: How does Benford's law apply to citations?**
A: Citation counts in large datasets naturally follow Benford's law (the first digit is 1 about 30% of the time). Significant deviation from this distribution can indicate artificial citation inflation or manipulation.

**Q: What is a paper mill?**
A: Paper mills are organizations that produce fabricated or plagiarized manuscripts for sale. Their output often shows template structures, predictable citation rings, and concentrated submissions to specific journals.

**Q: Does this screen for plagiarism?**
A: Not directly. The system detects structural patterns consistent with paper mills and citation manipulation. For plagiarism detection, dedicated text-comparison tools are more appropriate.

**Q: Is it legal to use this?**
A: All data sources are publicly available academic databases. See Apify's guide on web scraping legality.

**Q: Can I combine this with other MCPs?**
A: Yes. Use it alongside the Higher Education Risk MCP for institutional due diligence or the Academic Commercialization Pipeline MCP for technology transfer intelligence.
## Related MCP servers
| MCP Server | Description |
|---|---|
| ryanclinton/higher-education-risk-mcp | Academic institution due diligence and risk assessment |
| ryanclinton/academic-commercialization-pipeline-mcp | Research-to-product technology scouting |
| ryanclinton/academic-institution-talent-mcp | University research talent intelligence |
## Integrations
This MCP server is built on the Apify platform and supports:
- Apify API for programmatic access and batch screening workflows
- Scheduled runs via Apify Scheduler for recurring integrity monitoring
- Webhooks for triggering alerts when new integrity flags are detected
- Integration with 200+ Apify actors for extending research data coverage