Gathering again at the biennial event in Geneva, a diverse and, yes, innovative programme was presented. Note that CERN has been dropped from the title: the event is now known as the 'Geneva Workshop on Innovation in Scholarly Communication'. All slides and video of the presentations are available online.
- Make things efficient for the researcher, but also explore their ongoing responsibilities in an open science arena
- OA altruism isn’t necessarily going to win hearts and minds with scientists
- Good to see an increase in work on researcher practices presented
- Open Science Infrastructures and services should focus on the reuse of data, not necessarily documentation or repeatability
- Day One - Open Access to what, exactly?
The first day's excellent opening speech by Michael Nielsen took us beyond traditional forms of publishing, looking instead at the opportunities science has to create new forms beyond the journal article: media forms that take the reader through material in an informative way and give us new ways of interacting. These can amplify our intelligence, and could include executable, live versions of a model. This evolution towards 'wilder' forms is important, begging the question: will OA policies be crafted to ensure they do not inhibit the development of these practices?
Barriers and Impact were the focus of the next strand:
Erin McKiernan looked at the impact of OA on a researcher's career: we need to change the way we evaluate researchers. At the same time, librarians need to provide researchers with as much information, and as many options, as possible.
The OA Button has had significant impact: not just opening our eyes to how the publishing system works, but also showing how knowledge of these paywalled publications can inform us about author behaviour.
Daniel Mietchen of Wikimedia: one of the main messages was that, as a community, we need to extract useful content out of repositories and merge it with the contents of Wikimedia projects. Repositories, however, need to provide standardized output, such as XML, to make their content machine-readable.
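Mietchen's point about standardized, machine-readable repository output can be illustrated with a short sketch. Assuming a repository exposes Dublin Core records over OAI-PMH (the sample record, author and DOI below are invented for illustration, not taken from any real repository):

```python
# Minimal sketch: extracting metadata fields from a repository's
# OAI-PMH output (Dublin Core XML). The record below is illustrative.
import xml.etree.ElementTree as ET

SAMPLE_RECORD = """
<record xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
  <oai_dc:dc>
    <dc:title>An Example Preprint</dc:title>
    <dc:creator>Doe, Jane</dc:creator>
    <dc:identifier>https://doi.org/10.1234/example</dc:identifier>
  </oai_dc:dc>
</record>
"""

DC_NS = "http://purl.org/dc/elements/1.1/"

def extract_dc(xml_text):
    """Return a dict of selected Dublin Core fields from one record."""
    root = ET.fromstring(xml_text)
    return {
        field: [el.text for el in root.iter(f"{{{DC_NS}}}{field}")]
        for field in ("title", "creator", "identifier")
    }

print(extract_dc(SAMPLE_RECORD))
```

With output this regular, merging repository content into downstream projects becomes a matter of mapping fields rather than scraping pages.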
- Day Two - Infrastructures to support OA
The US CHORUS and SHARE initiatives started the next day. CHORUS's Howard Ratner takes a pragmatic approach to their solutions (unlike some 'dreamers' in the room!).
According to CHORUS, this is what librarians want from any OA infrastructure service: article information via metadata, DOIs registered on acceptance, and machine access to content. There wasn't much detail on tackling research data, but it was good to see digital preservation on the agenda: CHORUS uses LOCKSS and Portico.
Thorny audience question: will this publisher-driven initiative raise the issue of text and data mining (TDM) with publishers? Answer: we need to put pressure on publishers.
SHARE’s emphasis is on providing information via APIs: not just on publications, but on a range of research resources. Forty-two providers have contributed one million records so far. They also outlined the challenges the community faces in extracting information from repositories:
- Metadata is inconsistent!
- Providers are unsure of their legal rights! i.e. ‘not sure if I can share metadata with you’
- Solutions require improvements to both policy and technical infrastructure
And like OpenAIRE, they too face challenges in streamlining content: duplication, normalisation, inference and curation.
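The duplication and normalisation problem that SHARE and OpenAIRE face can be sketched minimally: the same paper arrives from two providers with slightly different metadata. The records, provider names and naive title-matching rule below are hypothetical; real aggregators use far more sophisticated matching:

```python
# Minimal sketch of aggregator deduplication: normalise titles,
# then merge records that collapse to the same key.
import re

def normalise_title(title):
    """Lowercase, strip punctuation and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def deduplicate(records):
    """Keep one record per normalised title, merging provider lists."""
    merged = {}
    for rec in records:
        key = normalise_title(rec["title"])
        if key in merged:
            merged[key]["providers"].extend(rec["providers"])
        else:
            merged[key] = {"title": rec["title"],
                           "providers": list(rec["providers"])}
    return list(merged.values())

records = [
    {"title": "Open Access and Impact", "providers": ["repo-a"]},
    {"title": "open access and impact.", "providers": ["repo-b"]},
]
print(deduplicate(records))  # one merged record listing both providers
```

Even this toy version shows why inconsistent metadata is so costly: every variation a provider introduces needs another normalisation rule downstream.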
Jeroen Bosman and Bianca Kramer from Utrecht presented the rising sun of research tools (see the photo below for the overview, or their poster here). This was a survey that sought to identify goals, needs and frustrations. While the traditional research life-cycle still exists, this could change in the future with the advent of the nanopublication. There is a clear shift towards OA, diminishing the importance of traditional journals, and a shift from journals to individual publishing units.
Three issues that all tools should address:
Open science and how to conduct it, e.g. providing open lab books, not just OA to publications.
Tools that encompass technical changes and the speed of publication.
Tools that ensure research is sound, e.g. reproducibility, fraud avoidance and quality checks.
Thorny audience question: but as a community, how can we control these tools and ensure they aren't bought up by commercial companies?
In the panel discussion, good old sustainability came up: membership models were seen as the most solid solution.
Alternative idea: build a community-based workflow such as Wikipedia.
After lunch, there was an excellent, eye-opening session on quality assurance and peer review – about which Tony will be blogging in more detail in a separate post.
The ebb and flow of OA and Research Data Management continued in many breakout groups. I attended the Data Management session, which was kick-started by an excellent collection of resources and tools (https://mensuel.framapad.org/p/RDM_Services), which I recommend as an overview of services to pick from. Feel free to add!
- Day Three -
The first plenary took as its theme the institution as publisher. Catriona MacCallum from PLOS discussed the need for transparency, or 'intelligent openness', in scholarly communication to better enable us to evaluate its effectiveness in areas like pricing, review, collaboration, evaluation and assessment. Then Rupert Gatti discussed the need for research centres to assume more active responsibility for the dissemination of their research. Finally, Victoria Tsoukala gave an excellent presentation on university-based and library-led OA publishing initiatives, including at her own institution, EKT. Sweeping changes face institutional publishing, particularly on the OA front.
The final plenary, on digital curation and preservation, began with a discussion from the University of Geneva on the difficulties of implementing systems to preserve large and complex scientific objects. Next, Andreas Rauber from the Technical University of Vienna shocked everyone by discussing how findings derived from data can differ depending on the hardware and software used, and hence the need to preserve that information as well to ensure reproducibility. See his impressive list of publications here: https://www.sba-research.org/team/key-researcher/andreas-rauber/. Finally, CERN presented their Invenio software (on which Zenodo runs) and their Open Data portal (http://opendata.cern.ch/), which among other things promises access to raw data from the Large Hadron Collider!
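Rauber's observation implies that a dataset should travel with a record of the environment that produced its results. A minimal sketch, using only Python's standard library; the fields captured here are an illustrative minimum, not any standard preservation schema:

```python
# Minimal sketch: snapshot the software/hardware environment alongside
# results, so an analysis can later be re-run and compared like-for-like.
import json
import platform
import sys

def environment_snapshot():
    """Collect basic environment facts that can affect computed results."""
    return {
        "python_version": sys.version.split()[0],
        "implementation": platform.python_implementation(),
        "machine": platform.machine(),
        "system": platform.system(),
    }

# Store this JSON next to the data and the analysis outputs.
print(json.dumps(environment_snapshot(), indent=2))
```

A fuller record would also pin library versions, compiler flags and numeric settings, but even this much makes "it gave a different answer on my machine" a diagnosable problem rather than a mystery.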