Document type: Critical reconstruction and synthesis | Generated: March 2026
Abstract. This document reconstructs and analyzes arguments presented across three blog posts by chorasimilarity (2014, 2016) concerning peer review, open peer review, and open science. The core claim is that peer review constitutes social validation within a publishing workflow, not the independent validation required by the scientific method. Independent validation requires that readers have access to data, procedures, and code to reproduce or test claims. The author distinguishes open peer review (transparent interaction between author and reviewer) from open science (author provides materials enabling reader-led validation), arguing the latter is more feasible for individual researchers to implement. We situate these arguments within recent developments in AI-assisted peer review, noting that current tools primarily support administrative or linguistic tasks rather than scientific validation. The analysis concludes that proposals for open peer review as a service remain theoretically coherent but face unresolved challenges in governance, incentive design, and alignment with the epistemic requirements of the scientific method.
The 2014 post "Open peer review as a service" proposes that peer review be separated from content distribution and offered as a modular, transparent service [1].
The post critiques Gold Open Access pricing, stating that publishers "sell this [peer review] for way too much money" relative to marginal coordination costs [1].
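To make the proposal concrete, here is a minimal sketch, in Python, of what a review service decoupled from hosting might look like: manuscripts live elsewhere, reviews are attributed and publicly readable. The interface names (`ReviewService`, `submit`, `reviews_for`) and the example URI are illustrative assumptions; the post proposes the separation, not any particular implementation.

```python
"""Sketch of peer review as a modular service, decoupled from content
distribution. All names here are hypothetical illustrations."""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class Review:
    reviewer_id: str       # identified reviewer: reviews are attributable
    manuscript_uri: str    # the manuscript is hosted elsewhere (repository, preprint server)
    report: str
    submitted_at: str


@dataclass
class ReviewService:
    """A review provider that evaluates documents it does not host."""
    reviews: List[Review] = field(default_factory=list)

    def submit(self, reviewer_id: str, manuscript_uri: str, report: str) -> Review:
        review = Review(
            reviewer_id=reviewer_id,
            manuscript_uri=manuscript_uri,
            report=report,
            submitted_at=datetime.now(timezone.utc).isoformat(),
        )
        self.reviews.append(review)
        return review

    def reviews_for(self, manuscript_uri: str) -> List[Review]:
        """Anyone can read the full review history for a manuscript."""
        return [r for r in self.reviews if r.manuscript_uri == manuscript_uri]


# Usage with a placeholder manuscript URI.
service = ReviewService()
service.submit("reviewer-001", "https://example.org/manuscript-001", "Methods sound; data checks out.")
print(service.reviews_for("https://example.org/manuscript-001"))
```

The point of the sketch is the separation of concerns: the service charges only for coordinating evaluation, which is the marginal cost the post contrasts with Gold Open Access pricing.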
The February 2016 post "Peer review is not independent validation" makes a precise epistemic distinction [3]:
"Peer review is not independent validation. Which means that a peer reviewed article is not necessarily one which passes the scientific method filter."
The argument proceeds as follows:
1. Peer review is social validation embedded in a publishing workflow: reviewers judge a manuscript on its text and their own expertise.
2. The scientific method requires independent validation of claims.
3. Independent validation requires that readers can access the data, procedures, and code behind a claim.
4. Therefore a peer-reviewed article has not necessarily passed the scientific method's filter.
The post cites medical journal initiatives requiring data sharing as a step toward validation, noting that "the reader is king and the author should provide everything to the reader, for the reader to be able to independently validate the work" [3].
The subsequent post "Open peer review is something others should do, Open science is something you could do" draws a practical distinction [4]: open peer review depends on action by others (reviewers, platforms, publishers), whereas open science is something an individual author can practice unilaterally.
The author argues that open science is more immediately feasible: "It is much simpler to do Open science than to invent a way to convince people to review your legacy articles. It is enough to make open your data, your programs, etc." [4].
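As an illustration of what "make open your data, your programs" can mean in practice, the following sketch walks a project directory and emits a checksum manifest that a reader could use to verify they obtained the same materials the author published. The directory layout and the `MANIFEST.json` name are assumptions for the example, not prescriptions from the post.

```python
"""Minimal sketch: publish a verifiable manifest of a project's files,
so a reader can confirm the shared materials are intact and complete."""

import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(project_root: str) -> dict:
    """Map every file under the project root to its checksum."""
    root = Path(project_root)
    return {
        "files": {
            str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*"))
            if p.is_file()
        }
    }


if __name__ == "__main__":
    manifest = build_manifest(".")
    Path("MANIFEST.json").write_text(json.dumps(manifest, indent=2))
```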
A comment on the post notes that in fields like mathematical physics, where "data" may consist of proofs rather than datasets, replication still requires substantial effort. The author replies that evolution, not obligation, should drive adoption [4].
The chorasimilarity posts propose a functional taxonomy of evaluation practices in scholarly communication:
| Practice | Primary actors | Epistemic basis | Alignment with scientific method |
|---|---|---|---|
| Traditional peer review | Author, anonymous reviewers, editor | Social judgment based on manuscript text and reviewer expertise | Low: validation depends on authority, not reproducibility |
| Open peer review | Author, identified reviewers, public readers | Transparent social judgment; enables scrutiny of review process | Moderate: increases accountability but does not enable independent validation |
| Open science / validation | Author (provides materials), any reader (tests claims) | Reproducibility, reanalysis, independent testing using shared materials | High: enables the independent validation required by the scientific method |
This taxonomy clarifies that transparency of process (open peer review) and availability of materials for independent testing (open science) address different epistemic requirements. The chorasimilarity argument holds that only the latter satisfies the scientific method's demand for independent validation.
Recent literature and infrastructure reports indicate that AI tools are increasingly used in peer review workflows, primarily for administrative or linguistic support:
A 2025 survey by Frontiers reported that over 50% of researchers have used AI tools while peer reviewing manuscripts [6]. An ICLR 2025 experiment found that targeted LLM feedback led reviewers to produce more detailed, substantively revised reports [7].
Current AI tools exhibit limitations relevant to the chorasimilarity framework: most operate on manuscript text alone, their training and decision processes are opaque, and they are not integrated with open science infrastructure such as data and code repositories.
Notably, few AI initiatives explicitly support open science practices (e.g., automated checks of code reproducibility, data format validation, or protocol adherence). The poldrack/ai-peer-review GitHub repository generates meta-reviews from multiple LLMs but operates on PDF text without requiring data or code access [8].
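A minimal sketch of the kind of check the text finds missing: re-run an author-declared analysis script and compare an output file's checksum against the value the author recorded at submission time. The script path, output path, and recorded hash below are all hypothetical placeholders.

```python
"""Sketch of an automated reproducibility check: re-execute the declared
analysis and verify that its outputs match the author's recorded hashes."""

import hashlib
import subprocess
import sys
from pathlib import Path

EXPECTED = {
    # Recorded by the author at submission time (placeholder value).
    "results/summary.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def rerun_and_verify(analysis_script: str = "code/analysis.py") -> bool:
    # Re-execute the author's declared entry point.
    subprocess.run([sys.executable, analysis_script], check=True)
    all_ok = True
    for rel_path, expected_hash in EXPECTED.items():
        actual = hashlib.sha256(Path(rel_path).read_bytes()).hexdigest()
        ok = actual == expected_hash
        all_ok = all_ok and ok
        print(f"{rel_path}: {'OK' if ok else 'MISMATCH'}")
    return all_ok


if __name__ == "__main__":
    sys.exit(0 if rerun_and_verify() else 1)
```

A check of this shape operates on the author's materials rather than the manuscript text, which is precisely the shift the chorasimilarity framework calls for.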
The chorasimilarity arguments are internally consistent and anticipate several developments in scholarly communication, notably the decoupling of dissemination from evaluation on digital infrastructure and the persistence of reproducibility concerns despite wider adoption of open review practices.
Key unresolved issues include the governance of review services, incentive alignment for reviewers, and whether procedural transparency alone can satisfy the scientific method's requirement for independent validation.
The chorasimilarity framework posits that independent validation—enabled by open access to materials—is necessary for scientific claims to satisfy the scientific method. By this criterion, traditional peer review ranks low, open peer review moderate, and open science high, as summarized in the taxonomy above.
This hierarchy implies that efforts to reform peer review should be evaluated not only by procedural transparency but by whether they increase the feasibility of independent validation by readers.
The chorasimilarity posts (2014–2016) propose that peer review should be reconceptualized as a transparent, modular service and that independent validation—enabled by open science practices—is epistemically distinct from and superior to social validation via peer review. These arguments remain relevant: digital infrastructure continues to enable decoupling of dissemination from evaluation, and reproducibility concerns persist despite increased adoption of open peer review.
Recent AI-assisted peer review tools demonstrate technical progress in supporting reviewer workflows but do not address the core epistemic requirement of independent validation. Most operate on manuscript text alone, lack transparency in training and decision processes, and are not integrated with open science infrastructure.
Future work should prioritize: (1) empirical evaluation of whether open peer review platforms increase the feasibility of independent validation; (2) development of AI tools that operate on shared data, code, and protocols rather than manuscript text alone; and (3) governance models that balance transparency, accountability, and equitable participation. The goal is not to endorse a single solution but to enable implementations that advance the scientific method's requirement for independent validation.
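As a sketch of direction (2), the following illustrates feeding shared materials, rather than manuscript text alone, to a reviewing model. The model call is deliberately left abstract (any callable from prompt to text), since no particular API is implied here; the file patterns and size limit are assumptions for the example.

```python
"""Sketch: build a review context from a shared repository's code and
data files, then pass it to an arbitrary reviewing model."""

from pathlib import Path
from typing import Callable


def build_review_context(repo_root: str, max_chars_per_file: int = 4000) -> str:
    """Concatenate truncated contents of code, docs, and data files."""
    root = Path(repo_root)
    parts = []
    for pattern in ("*.py", "*.md", "*.csv"):
        for path in sorted(root.rglob(pattern)):
            text = path.read_text(errors="replace")[:max_chars_per_file]
            parts.append(f"--- {path.relative_to(root)} ---\n{text}")
    return "\n\n".join(parts)


def review_with_materials(repo_root: str, ask_model: Callable[[str], str]) -> str:
    """Ask a model whether the claims are independently validatable
    from the shared materials, not merely plausible from the prose."""
    context = build_review_context(repo_root)
    prompt = (
        "Assess whether the claims in this project can be independently "
        "validated from the materials below.\n\n" + context
    )
    return ask_model(prompt)
```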
[1] chorasimilarity. (2014, February 20). Open peer review as a service. chorasimilarity. https://chorasimilarity.wordpress.com/2014/02/20/open-peer-review-as-a-service/
[2] chorasimilarity. (2013, March 24). Peer-review, what is it for? chorasimilarity. https://chorasimilarity.wordpress.com/2013/03/24/peer-review-what-is-it-for/
[3] chorasimilarity. (2016, February 6). Peer review is not independent validation. chorasimilarity. https://chorasimilarity.wordpress.com/2016/02/06/peer-review-is-not-independent-validation/
[4] chorasimilarity. (2016, February 11). Open peer review is something others should do, Open science is something you could do. chorasimilarity. https://chorasimilarity.wordpress.com/2016/02/11/open-peer-review-is-something-others-should-do-open-science-is-something-you-could-do/
[5] Amnet. (2025, September 9). Peer Review Week 2025: Rethinking Peer Review in the AI Era. https://amnet.com/peer-review-week-2025-rethinking-peer-review-in-the-ai-era/
[6] Nature. (2026). More than half of researchers now use AI for peer review. Nature, 649, 273–274. https://www.nature.com/articles/d41586-025-04066-5
[7] Wei, Q., Holt, S., Yang, J., Wulfmeier, M., & van der Schaar, M. (2025). The AI Imperative: Scaling High-Quality Peer Review in Machine Learning. arXiv preprint arXiv:2506.08134. https://arxiv.org/html/2506.08134v3
[8] Poldrack, R. (2025). ai-peer-review: A tool for AI-assisted meta-review of scientific papers. GitHub repository. https://github.com/poldrack/ai-peer-review
Methodological note. This document reconstructs arguments from three chorasimilarity blog posts and integrates corroborating or contrasting evidence from peer-reviewed literature, preprints, and infrastructure reports. All claims are attributed to verifiable sources; interpretive conclusions are explicitly framed as analytical judgments. No original empirical data are presented. All hyperlinks were verified at time of composition (March 2026).