Defining Open Peer Review: Part One - Competing Definitions
ABSTRACT: At present there is neither a standardized definition of “open peer review” (OPR) nor an agreed schema of its features and implementations, which is highly problematic for discussion of its potential benefits and drawbacks. This new series of blog posts reports on work to resolve these difficulties by analysing the literature for available definitions of “open peer review” and “open review”. In all, 122 definitions have been collected and codified against a range of independent OPR traits, in order to build a coherent typology of the many different adaptations of traditional peer review that have come to be signified by the term OPR, and hence to provide a unified definition.

The data for these definitions is available here. Readers are encouraged to review the data itself: perhaps there are definitions we've missed, or definitions you think have been coded wrongly? If so, please let us know by commenting directly in the spreadsheet or using the blog comments below!
 

Introduction

"Open review and open peer review are new terms for evolving phenomena. They don't have precise or technical definitions. No matter how they're defined, there's a large area of overlap between them. If there's ever a difference, some kinds of open review accept evaluative comments from any readers, even anonymous readers, while other kinds try to limit evaluative comments to those from "peers" with expertise or credentials in the relevant field. But neither kind of review has a special name, and I think each could fairly be called "open review" or "open peer review"."

Peter Suber, email correspondence, 2007, cited in P2P Foundation (2016)[i]
As with other areas of “open science” (Pontika et al., 2015), “open peer review” (OPR) is a hot topic, with a rapidly growing literature. Yet, as has been consistently noted (Ford, 2013; Hames, 2014; Ware, 2011), OPR has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions. While some use the term to refer to peer review in which the identities of author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others still it signifies both of these conditions together, or systems where not only “invited experts” are able to comment, or various combinations of these and other novel methods. The major previous attempt to systematically resolve these elements into a unified definition (Ford, 2013), as we shall see, ultimately confounds rather than resolves these issues.

In short, things have not improved much since Suber made his astute assessment, and this continuing imprecision grows more problematic with time. As Mark Ware notes, “it is not always clear in debates over the merits of open peer review exactly what is being referred to” (Ware, 2011). Differing flavours of OPR involve independent factors (open identities, open reports, open participation, etc.) which have no necessary connection to each other and which bring very different benefits and drawbacks. Evaluating the efficacy of these differing variables, and hence comparing differing systems, is therefore difficult, and discussions are potentially side-tracked as claims are made for the efficacy or otherwise of “open peer review” in general, despite critique usually being focussed on one element or distinct configuration of OPR. It can even be argued that this inability to define terms is to blame for the fact that, as Nikolaus Kriegeskorte has pointed out, “we have yet to develop a coherent shared vision for “open evaluation” (OE), and an OE movement comparable to the OA movement” (Kriegeskorte, 2012, p. 176).

To resolve this, OpenAIRE has undertaken a systematic review of definitions of “open peer review” or “open review”, creating a corpus of (currently) more than 120 definitions. These definitions have been systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence to provide the precise technical definition currently lacking. This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Based on this work, we propose a pragmatic definition of OPR as an umbrella term for various novel flavours of peer review that, in differing ways, all seek to loosen classical peer review’s bonds of control.

Background

Peer review is the formal quality assurance mechanism whereby scholarly manuscripts (e.g., journal articles, books, grant applications and conference papers) are subjected to the scrutiny of others, whose feedback and judgements are then used to improve works and make final decisions regarding selection (for publication, grant allocation or speaking time). This system is perhaps more recent than we might expect (Spier, 2002, for example, advises that its main formal elements have only been in general use in scientific publishing since the mid-twentieth century). Nonetheless, the main elements of “classical” or “traditional” peer review are remarkably prevalent across research domains and differing research outputs. These features include: the following of formal procedures, high selectivity, the involvement of 2-3 selected “peers” or “experts”, concealment of reviewer names from authors (and sometimes vice versa) as standard, a lack of direct interaction amongst reviewers and between reviewers and authors, and a process that ends when a gatekeeper (editor, funder, conference organiser) makes a formal acceptance decision.

Researchers agree that peer review per se is necessary, but most find the current model sub-optimal. Ware’s 2008 survey, for example, found that an overwhelming majority (85%) agreed that “peer review greatly helps scientific communication” and that even more (around 90%) said their own last published paper had been improved by peer review. Yet although almost two thirds (64%) declared themselves satisfied with the current system of peer review, less than a third (32%) believed that this system is the best possible (Ware, 2008). A recent follow-up study by the same author reported a slight increase in the desire for improvements in peer review (Ware, 2016).

Widespread belief that the current model is sub-optimal can be attributed to the various ways in which traditional peer review has been criticized. Peer review has been variously accused of:
  • Unreliability and inconsistency: Because it relies upon the vagaries of human judgement, the objectivity, reliability, and consistency of peer review are open to question. Studies show that reviewers’ judgements tend to agree only very weakly (Kravitz et al., 2010; Mahoney, 1977), at levels only slightly better than chance (Herron, 2012; Smith, 2006). Decisions on rejection or acceptance are similarly inconsistent: Peters and Ceci’s classic (1982) study, for example, found that eight of twelve papers were rejected for methodological flaws when resubmitted to the very journals in which they had already been published. This inconsistency is mirrored in peer review’s inability to prevent errors and fraud from entering the scientific literature. Reviewers often fail to detect major methodological failings (Schroter et al., 2004), with eminent journals (whose higher rejection rates might suggest more stringent peer review processes) seeming to perform no better than others (Fang et al., 2012). Indeed, Fang and Casadevall (2011) found that the frequency of retraction is strongly correlated with the journal impact factor. Whatever the cause, recent sharp rises in the number of retracted scientific publications (Steen et al., 2013) testify that peer review sometimes fails in its role as the gatekeeper of science, allowing errors to enter the literature. Peer review’s other role, of filtering the best work into the best journals, also seems to fail: many articles in top journals remain poorly cited, while many of the most highly-cited articles in their fields are published in lower-tier journals (Jubb, 2016).
  • Delay and expense: The period from submission to publication at many journals can often exceed one year, with much of this time taken up by peer review. This delay slows the availability of results for further research and professional exploitation. The work undertaken in this period is also expensive, with the global cost of reviewers’ time estimated at £1.9bn in 2008 (Research Information Network, 2008), a figure which does not take into account the coordinating costs of publishers or the time of authors in revising and resubmitting manuscripts (Jubb, 2016). These costs are greatly exacerbated by the current system in which peer review is managed by each journal, such that the same manuscript may be peer reviewed many times over as it is successively rejected and resubmitted until it finds acceptance.
  • Unaccountability and risks of subversion: The “black-box” nature of traditional peer review gives reviewers, editors and even authors a lot of power to potentially subvert the process. Lack of transparency means that editors can unilaterally reject submissions or shape review outcomes by selecting reviewers based on their known preference for or aversion to certain theories and methods (Travis and Collins, 1991). Reviewers, shielded by anonymity, may act unethically in their own interests by concealing conflicts of interest. Smith (2006), an experienced editor, for example, reports reviewers stealing ideas and passing them off as their own or blocking or delaying publication of competitors’ ideas through harsh reviews. Equally, they may simply favour their friends and target their enemies. Authors, meanwhile, can manipulate the system by writing reviews of their own work via fake or stolen identities (Kaplan, 2015).
  • Social and publication biases: Although often idealized as impartial, objective assessors, in reality studies suggest that peer reviewers may be subject to social biases on the grounds of gender (Budden et al., 2008; Lloyd, 1990; Tregenza, 2002), nationality (Daniel, 1994; Ernst and Kienbacher, 1991; Link, 1998), institutional affiliation (Dall’Aglio, 2006; Gillespie et al., 1985; Peters and Ceci, 1982), language (Cronin, 2009; Ross et al., 2006; Tregenza, 2002) and discipline (Travis and Collins, 1991). Other studies suggest so-called “publication bias”, where prejudices against specific categories of works shape what is published. First is a preference for complexity over simplicity in methodology (even where inappropriate, c.f. Travis and Collins, 1991) and language (even where content remained identical, Armstrong, 1980, 1997). Next, “confirmatory bias” is theorized to lead to conservatism, biasing reviewers against innovative methods or results that run contrary to dominant theoretical perspectives (Chubin and Hackett, 1990; García et al., 2016; Mahoney, 1977). Finally, factors like the pursuit of “impact” and “excellence” (Moore et al., 2016) mean that editors and reviewers seem primed to prefer positive results over negative or neutral ones (Bardy, 1998; Dickersin et al., 1992; Fanelli, 2010; Ioannidis, 1998), and to disfavour replication studies (Campanario, 1998; Kerr et al., 1977).
  • Lack of incentives: Traditional peer review provides little in the way of incentives for reviewers, whose work is almost exclusively unpaid and whose anonymous contributions cannot be recognised and hence rewarded (Armstrong, 1997; Ware, 2008).
  • Wastefulness: Reviewer comments often add context or point to areas for future work, reviewer disagreements can expose areas of tension in a theory or argument, and the behind-the-scenes discussions of reviewers and authors can guide younger researchers in learning review processes. Readers may find such information helpful, and yet at present this potentially valuable additional information is wasted.
In response, a wide variety of changes to peer review have been suggested (see Walker and Rocha da Silva, 2015, for an excellent overview). Amongst these innovations, many have been labelled “open peer review” at one time or another.

Competing definitions

The diversity of these definitions of "open peer review" can be appreciated in just two examples (the first of which is, as far as I know, the first recorded use of the phrase “open peer review”):
“[A]n open reviewing system would be preferable. It would be more equitable and more efficient. Knowing that they would have to defend their views before their peers should provide referees with the motivation to do a good job. Also, as a side benefit, referees would be recognized for the work they had done (at least for those papers that were published). Open peer review would also improve communication. Referees and authors could discuss difficult issues to find ways to improve a paper, rather than dismissing it. Frequently, the review itself provides useful information. Should not these contributions be shared? Interested readers should have access to the reviews of the published papers.” (Armstrong, 1982a, p. 198)

"[O]pen review makes submissions OA [open access], before or after some prepublication review, and invites community comments. Some open-review journals will use those comments to decide whether to accept the article for formal publication, and others will already have accepted the article and use the community comments to complement or carry forward the quality evaluation started by the journal. " (Suber, 2012, p. 104)
Within just these two examples, there are already a multitude of factors at play, including the removal of anonymity, the publishing of review reports, interaction between participants, crowdsourcing of reviews, and making manuscripts public pre-review, amongst others. But these are each distinct factors, each presenting a separate strategy for openness and targeting different problems. For example, disclosure of identities usually aims at increasing accountability and minimizing bias, c.f., “referees should be more highly motivated to do a competent and fair review if they may have to defend their views to the authors and if they will be identified with the published papers” (Armstrong, 1982b). Publication of reports, on the other hand, also tackles problems of incentive (reviewers can get credit for their work) and wastefulness (reports can be consulted by readers). Moreover, these factors need not be linked, which is to say that they can be employed separately: identities can be disclosed without reports being published, and reports published with reviewer names withheld, for example.
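This independence can be made concrete with a minimal sketch, shown below. It is purely illustrative: the class and field names are our own, and no cited system is being modelled.

```python
from dataclasses import dataclass

@dataclass
class ReviewConfiguration:
    """Two of the factors discussed above, modelled as independent switches."""
    open_identities: bool = False  # reviewer and author names disclosed to each other
    open_reports: bool = False     # review reports published alongside the article

# Identities disclosed, but reports kept private:
signed_private = ReviewConfiguration(open_identities=True)

# Reports published, but reviewer names withheld:
anonymous_published = ReviewConfiguration(open_reports=True)

# Two independent binary factors already permit 2**2 = 4 distinct systems;
# each additional factor doubles the number of possible configurations.
```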

This diversity has led many authors to acknowledge the essential ambiguity of the term “open peer review” (Hames, 2014; Sandewall, 2012; Ware, 2011). The major attempt thus far to bring coherence to this confusing landscape of competing and overlapping definitions is Emily Ford’s (2013) paper “Defining and Characterizing Open Peer Review: A Review of the Literature”. Ford examined thirty-five articles to produce a schema of eight “common characteristics” of OPR: signed review, disclosed review, editor-mediated review, transparent review, crowdsourced review, prepublication review, synchronous review, and post-publication review. Unfortunately, however, Ford’s paper fails to offer a definitive definition of OPR: despite distinguishing eight “common characteristics”, she nevertheless tries to reduce OPR to merely one, open identities: “Despite the differing definitions and implementations of open peer review discussed in the literature, its general treatment suggests that the process incorporates disclosure of authors’ and reviewers’ identities at some point during an article’s review and publication” (Ford, 2013, p. 314). Summing up her argument elsewhere, she says: “my previous definition … broadly understands OPR as any scholarly review mechanism providing disclosure of author and referee identities to one another”. But the other elements of Ford’s schema do not reduce to this one factor, and many definitions do not include open identities at all. Hence, although Ford claims to have identified several features of OPR, she in fact asserts that there is only one defining factor (open identities), which leaves us where we started.

Ford’s schema is problematic in other ways too: it lists “editor-mediated review” and “pre-publication review” as distinguishing characteristics, despite these being common traits of traditional peer review; it includes questionable elements such as the purely “theoretical” “synchronous review”; and some of its characteristics are not “base elements” but complexes of other traits – for example, the definition of “transparent review” incorporates other characteristics such as open identities (which Ford terms “signed review”) and open reports (“disclosed review”).

Methodology

To resolve this ambiguity, OpenAIRE performed a review of the literature for articles discussing “open review” or “open peer review”, extracting a corpus of 122 definitions of OPR. We first searched Web of Science for “TOPIC: ("open review" OR "open peer review")”, with no limitation on date of publication, yielding a total of 137 results (searched on 12-7-2016). These records were then individually examined for relevance, and a total of 57 were excluded: 21 results (all BioMed Central publications) had been through an open peer review process (which was mentioned in the abstract) but did not themselves touch on the subject of open peer review; 12 results used the phrase “open review” to refer to a literature review with a flexible methodology, rather than in any connection with peer review; 12 results concerned the review of objects classed as out of scope, that is, things other than scientific research objects such as articles, books, conference submissions and data (for example, guidelines for clinical or therapeutic techniques, standardized terminologies, patent applications, and court judgements); 7 results were not in the English language; and 5 results were duplicate entries in WoS. This left a total of 80 relevant articles which mentioned either “open peer review” or “open review”. This set was then enriched with a further 42 definitions from sources found by searching for the same terms in other academic databases (e.g., Google Scholar, JSTOR, disciplinary databases), searching Google (for blog articles) and Google Books (for books), and by following citations in relevant bibliographies and literature review articles.
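The screening arithmetic above can be double-checked in a few lines of code. The snippet below is a sketch for verification only (the category labels are our shorthand, not terms from the study), not part of any actual screening pipeline:

```python
# Sanity-check the screening counts reported in the text.
wos_results = 137

exclusions = {
    "underwent OPR but off-topic (BioMed Central)": 21,
    "'open review' meaning flexible literature review": 12,
    "out-of-scope review objects": 12,
    "not in English": 7,
    "duplicate WoS entries": 5,
}

excluded = sum(exclusions.values())
assert excluded == 57

relevant = wos_results - excluded
assert relevant == 80  # relevant WoS articles

additional = 42  # definitions from Google Scholar, JSTOR, Google Books, etc.
assert relevant + additional == 122  # total corpus size reported above
```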

Each source was then individually examined for its definition of open peer review. Where no explicit definition (e.g., “OPR is …”) was given, implicit definitions were gathered from contextual statements (e.g., “reviewers can notify the editors if they want to opt-out of the open review system and stay anonymous” (Janowicz and Hitzler, 2012) is taken to endorse a definition of OPR as incorporating open identities). In a few cases, sources defined OPR in relation to the systems of specific publishers (e.g., F1000Research, BioMed Central and Nature), and so were taken to implicitly endorse those systems as definitive of OPR.

The extracted definitions were examined and classified against an iteratively constructed taxonomy of OPR traits. Per Nickerson et al. (2012), development of this taxonomy began by identifying the appropriate meta-characteristic – in this case, distinct individual innovations to the traditional peer review system. An iterative approach then followed, in which dimensions given in the literature were applied to the corpus of definitions and gaps or overlaps in the OPR taxonomy were identified. On this basis, new traits or distinctions were introduced until, finally, a schema of seven OPR traits was produced (a sketch of how definitions might be coded against these traits follows the list):
  • Open identities: Authors and reviewers are aware of each other's identity.
  • Open reports: Review reports are published alongside the relevant article.
  • Open participation: The wider community is able to contribute to the review process.
  • Open pre-review manuscripts: Manuscripts are made immediately available (e.g., via preprint servers like arXiv) in advance of any formal peer review procedures.
  • Open final-version commenting: Review or commenting on final “version of record” publications.
  • Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.
  • Open platforms: Review is de-coupled from publishing in that it is facilitated by a different organizational entity than the venue of publication.
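To illustrate how each definition in the corpus can be coded against this schema, the seven traits can be treated as independently combinable flags. The sketch below assumes that framing; the example coding (definition_x) is hypothetical, not an entry from the actual dataset:

```python
from enum import Flag, auto

class OPRTrait(Flag):
    """The seven OPR traits from the schema above, as combinable flags."""
    OPEN_IDENTITIES = auto()
    OPEN_REPORTS = auto()
    OPEN_PARTICIPATION = auto()
    OPEN_PRE_REVIEW_MANUSCRIPTS = auto()
    OPEN_FINAL_VERSION_COMMENTING = auto()
    OPEN_INTERACTION = auto()
    OPEN_PLATFORMS = auto()

# Hypothetical coding: a definition describing signed, published reports
# with author-reviewer discussion would combine three of the seven traits.
definition_x = (OPRTrait.OPEN_IDENTITIES
                | OPRTrait.OPEN_REPORTS
                | OPRTrait.OPEN_INTERACTION)

print(OPRTrait.OPEN_REPORTS in definition_x)        # True
print(OPRTrait.OPEN_PARTICIPATION in definition_x)  # False
```

Representing the traits this way makes the key analytical point explicit: any subset of the seven traits can in principle co-occur, so a single label like “open peer review” could denote up to 2^7 - 1 = 127 distinct configurations.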
In Part Two of this blog series, we will describe each of these OPR traits in detail, along with their advantages and disadvantages, to build a picture of the basic building blocks of open peer review.

References

Armstrong, J.S., 1997. Peer review for journals: Evidence on quality control, fairness, and innovation. Sci. Eng. Ethics 3, 63–84. doi:10.1007/s11948-997-0017-3

Armstrong, J.S., 1982a. Barriers to scientific contributions: The author’s formula. Behav. Brain Sci. 5, 197–199. doi:10.1017/S0140525X00011201

Armstrong, J.S., 1982b. The Ombudsman: Is Review By Peers As Fair As It Appears? Interfaces 12, 62–74. doi:10.1287/inte.12.5.62

Bardy, A.H., 1998. Bias in reporting clinical trials. Br. J. Clin. Pharmacol. 46, 147–150. doi:10.1046/j.1365-2125.1998.00759.x

Budden, A.E., Tregenza, T., Aarssen, L.W., Koricheva, J., Leimu, R., Lortie, C.J., 2008. Double-blind review favours increased representation of female authors. Trends Ecol. Evol. 23, 4–6. doi:10.1016/j.tree.2007.07.008

Campanario, J.M., 1998. Peer Review for Journals as it Stands Today—Part 1. Sci. Commun. 19, 181–211. doi:10.1177/1075547098019003002

Chubin, D.E., Hackett, E.J., 1990. Peerless Science: Peer Review and U. S. Science Policy. SUNY Press.

Cronin, B., 2009. Vernacular and vehicular language. J. Am. Soc. Inf. Sci. Technol. 60, 433–433. doi:10.1002/asi.21010

Dall’Aglio, P., 2006. Peer review and journal models. arXiv:physics/0608307.

Daniel, H.D., 1994. Guardians of Science: Fairness and Reliability of Peer Review. VCH, New York.

Dickersin, K., Min, Y.-I., Meinert, C.L., 1992. Factors Influencing Publication of Research Results: Follow-up of Applications Submitted to Two Institutional Review Boards. JAMA 267, 374. doi:10.1001/jama.1992.03480030052036

Ernst, E., Kienbacher, T., 1991. Chauvinism. Nature 352, 560–560. doi:10.1038/352560b0

Fanelli, D., 2010. Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLOS ONE 5, e10271. doi:10.1371/journal.pone.0010271

Fang, F.C., Casadevall, A., 2011. Retracted Science and the Retraction Index. Infect. Immun. 79, 3855–3859. doi:10.1128/IAI.05661-11

Fang, F.C., Steen, R.G., Casadevall, A., 2012. Misconduct accounts for the majority of retracted scientific publications. Proc. Natl. Acad. Sci. 109, 17028–17033. doi:10.1073/pnas.1212247109

Ford, E., 2013. Defining and Characterizing Open Peer Review: A Review of the Literature. J. Sch. Publ. 44, 311–326. doi:10.3138/jsp.44-4-001

García, J.A., Rodriguez-Sánchez, R., Fdez-Valdivia, J., 2016. Authors and reviewers who suffer from confirmatory bias. Scientometrics 109, 1377–1395. doi:10.1007/s11192-016-2079-y

Gillespie, G.W., Chubin, D.E., Kurzon, G.M., 1985. Experience with NIH Peer Review: Researchers’ Cynicism and Desire for Change. Sci. Technol. Hum. Values 10, 44–54. doi:10.1177/016224398501000306

Hames, I., 2014. The changing face of peer review. Sci. Ed. 1, 9–12. doi:10.6087/kcse.2014.1.9

Herron, D.M., 2012. Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surg. Endosc. 26, 2275–2280. doi:10.1007/s00464-012-2171-1

Ioannidis, J.P.A., 1998. Effect of the Statistical Significance of Results on the Time to Completion and Publication of Randomized Efficacy Trials. JAMA 279, 281. doi:10.1001/jama.279.4.281

Janowicz, K., Hitzler, P., 2012. Open and transparent: the review process of the Semantic Web journal. Learn. Publ. 25, 48–55.

Jubb, M., 2016. Peer review: The current landscape and future trends. Learn. Publ. 29, 13–21. doi:10.1002/leap.1008

Kaplan, S., 2015. Major publisher retracts 64 scientific papers in fake peer review outbreak. Wash. Post.

Kerr, S., Tolliver, J., Petree, D., 1977. Manuscript Characteristics Which Influence Acceptance for Management and Social Science Journals. Acad. Manage. J. 20, 132–141. doi:10.2307/255467

Kravitz, R.L., Franks, P., Feldman, M.D., Gerrity, M., Byrne, C., Tierney, W.M., 2010. Editorial Peer Reviewers’ Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care? PLoS ONE 5. doi:10.1371/journal.pone.0010072

Kriegeskorte, N. (Ed.), 2012. Beyond open access: visions for open evaluation of scientific papers by post-publication peer review, Frontiers Research Topics. Frontiers Media SA.

Link, A.M., 1998. US and Non-US Submissions: An Analysis of Reviewer Bias. JAMA 280, 246. doi:10.1001/jama.280.3.246

Lloyd, M.E., 1990. Gender factors in reviewer recommendations for manuscript publication. J. Appl. Behav. Anal. 23, 539–543. doi:10.1901/jaba.1990.23-539

Mahoney, M.J., 1977. Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cogn. Ther. Res. 1, 161–175. doi:10.1007/BF01173636

Moore, S., Neylon, C., Eve, M.P., O’Donnell, D., Pattinson, D., 2016. Excellence R Us: University Research and the Fetishisation of Excellence.

Nickerson, R.C., Varshney, U., Muntermann, J., 2012. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 22, 336–359. doi:10.1057/ejis.2012.26

P2P Foundation, 2016. Open Peer Review [WWW Document]. P2P Found. Wiki. URL https://wiki.p2pfoundation.net/Open_Peer_Review

Peters, D.P., Ceci, S.J., 1982. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behav. Brain Sci. 5, 187–195. doi:10.1017/S0140525X00011183

Pontika, N., Knoth, P., Cancellieri, M., Pearce, S., 2015. Fostering Open Science to Research Using a Taxonomy and an eLearning Portal, in: Proceedings of the 15th International Conference on Knowledge Technologies and Data-Driven Business, I-KNOW ’15. ACM, New York, NY, USA, p. 11:1–11:8. doi:10.1145/2809563.2809571

Research Information Network, 2008. Activities, costs and funding flows in the scholarly communications system in the UK: Report commissioned by the Research Information Network (RIN).

Ross, J.S., Gross, C.P., Desai, M.M., Hong, Y., Grant, A.O., Daniels, S.R., Hachinski, V.C., Gibbons, R.J., Gardner, T.J., Krumholz, H.M., 2006. Effect of Blinded Peer Review on Abstract Acceptance. JAMA 295, 1675. doi:10.1001/jama.295.14.1675

Sandewall, E., 2012. Maintaining Live Discussion in Two-Stage Open Peer Review. Front. Comput. Neurosci. 6. doi:10.3389/fncom.2012.00009

Schroter, S., Black, N., Evans, S., Carpenter, J., Godlee, F., Smith, R., 2004. Effects of training on quality of peer review: randomised controlled trial. BMJ 328, 673. doi:10.1136/bmj.38023.700775.AE

Smith, R., 2006. Peer review: a flawed process at the heart of science and journals. J. R. Soc. Med. 99, 178–182. doi:10.1258/jrsm.99.4.178

Spier, R., 2002. The history of the peer-review process. Trends Biotechnol. 20, 357–358. doi:10.1016/S0167-7799(02)01985-6

Steen, R.G., Casadevall, A., Fang, F.C., 2013. Why Has the Number of Scientific Retractions Increased? PLOS ONE 8, e68397. doi:10.1371/journal.pone.0068397

Suber, P., 2012. Open Access. MIT Press, Cambridge, MA.

Travis, G.D.L., Collins, H.M., 1991. New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System. Sci. Technol. Hum. Values 16, 322–341. doi:10.1177/016224399101600303

Tregenza, T., 2002. Gender bias in the refereeing process? Trends Ecol. Evol. 17, 349–350. doi:10.1016/S0169-5347(02)02545-4

Walker, R., Rocha da Silva, P., 2015. Emerging trends in peer review - a survey. Front. Neurosci. 9. doi:10.3389/fnins.2015.00169

Ware, M., 2016. Peer Review Survey 2015.

Ware, M., 2011. Peer review: recent experience and future directions. New Rev. Inf. Netw. 16, 23–53.

Ware, M., 2008. Peer review: benefits, perceptions and alternatives. Publ. Res. Consort. 4.

 

Notes

[i] The provenance of this quote is uncertain, even to Suber himself, who recently advised (personal correspondence, 19.8.2016): “I might have said it in an email (as noted). But I can't confirm that, since all my emails from before 2009 are on an old computer in a different city. It sounds like something I could have said in 2007. If you want to use it and attribute it to me, please feel free to note my own uncertainty!”

 
 