Aggregation and content provision workflows

Index and stats update:

Next update scheduled to start on: 2019-07-03

Available on the portal | Start date | Notes
Available soon | 2019-06-19 | Quality issues have been addressed and content is being regenerated.
N/A | 2019-06-03 | Records from … re-harvested. De-duplication and inference algorithms are running (info added on 2019-06-06). Content could not be published because of some quality issues.
N/A | 2019-05-23 | Content not published due to a loss of metadata records from …
2019-05-14 | 2019-05-06 | Statistics have not been updated. Upgraded Solr server in use since May 22nd 2019.
2019-04-08 | 2019-04-03 | Statistics have not been updated.
2019-03-28 | 2019-03-11 | Statistics have not been updated.
2019-02-28 | 2019-02-20 |
2019-02-11 | 2019-01-28 | New funder available: Academy of Finland (AKA). Publishing has been delayed to fix a temporary loss of links to SNSF projects.
2019-01-04 | 2018-12-27 |
2018-12-13 | 2018-12-04 | All types of research products have been de-duplicated.
2018-11-19 | 2018-11-12 | Content from Portuguese repositories re-aggregated. Inference and de-duplication algorithms have been re-run to solve the issues about lost links. As a consequence of the new algorithm run and of the increase of available full-texts, we note a general increase of links to projects of all funders.
N/A | 2018-10-30 | Generated content cannot be published: we noticed a loss of records from Portuguese repositories (they are no longer exposing the openaire OAI set) and a loss of links to projects involving more than 200 repositories. The technical team is analysing the information space and investigating the issues.
2018-10-16 | 2018-10-10 | Updated mapping for new research object types.
2018-10-10 | 2018-10-01 |
2018-09-10 | 2018-08-28 | The harmonisation of SNSF publication metadata is still ongoing.
2018-08-01 | 2018-07-27 | We noticed a decrease of SNSF publications due to a change in the resource types in the records collected from the SNSF P3 publication database. This will be fixed in the next update.
2018-07-10 | 2018-06-27 | An updated version of the OpenAIRE mining algorithms processed more than 400K additional full-texts from Springer Open Access.
2018-06-08 | 2018-06-05 |
2018-05-28 | 2018-05-22 |
2018-05-15 | 2018-05-08 | More than 200K additional full-texts processed by the OpenAIRE mining algorithms.
2018-04-16 | 2018-04-10 |
2018-03-30 | 2018-03-26 | New research community: Research Data Alliance.
2018-03-20 | 2018-03-13 | Content update delayed because of technical issues.
2018-02-20 | 2018-02-16 |
2018-02-09 | 2018-02-05 |
2018-01-30 | 2018-01-17 | Updated version of the mining algorithms; updated data model (see details at …).
2017-12-28 | 2017-12-22 |
2017-12-15 | 2017-12-11 | FCT naming not fixed yet.
2017-11-27 | 2017-11-19 | The Portuguese funder FCT appears twice, once with a wrong name.
2017-11-13 | 2017-11-03 | Added projects of the funders RCUK and Turkey.


OpenAIRE makes openly accessible a rich Information Space Graph (ISG) where products of the research life-cycle (e.g. scientific literature, research data, projects, software) are semantically linked to each other. The ISG is constructed via a set of autonomic, orchestrated workflows operating in a regimen of continuous data integration. [1]


What does OpenAIRE collect?

The OpenAIRE technical infrastructure collects information about objects of the research life-cycle compliant to the OpenAIRE acquisition policy [5] from different types of data sources [2]:
  1. Scientific literature metadata and full-texts from institutional and thematic repositories, Open Access journals and publishers;
  2. Dataset metadata from data repositories and data journals;
  3. Scientific literature, data and software metadata from Zenodo;
  4. Metadata about data sources, organizations, projects, and funding programs from entity registries, i.e. authoritative sources such as CORDA and other funder databases for projects, OpenDOAR for publication repositories, re3data for data repositories, DOAJ for Open Access journals;
  5. Coming soon: metadata of open source research software from software repositories (currently available only on …);
  6. Coming soon: metadata about other types of research products, such as workflows, protocols, methods and research packages (currently available only on …);
  7. Coming soon: metadata about scientific literature, datasets, persons, organisations, projects, funding, equipment and services collected from CRIS (Current Research Information Systems).
Relationships between objects are collected from the data sources, but also automatically detected by inference algorithms [3] and added by authenticated users, who can insert links between literature, datasets, software and projects via the “claiming” procedure available from the OpenAIRE web portal [4].

What kind of data sources are in OpenAIRE?

Objects and relationships in the OpenAIRE ISG are extracted from information packages, i.e. metadata records, collected from data sources of the following kinds:
  • Institutional or thematic repositories: Information systems where scientists upload the bibliographic metadata and full-texts of their articles, due to obligations from their organization or due to community practices (e.g. ArXiv, Europe PMC);
  • Open Access publishers and journals: Information systems of Open Access publishers or of their journals, which offer bibliographic metadata and PDFs of their published articles;
  • Data archives: Information systems where scientists deposit descriptive metadata and files about their research data (also known as scientific data, datasets, etc.);
  • Hybrid repositories/archives: Information systems where scientists deposit metadata and files of any kind of scientific product, including scientific literature, research data and research software (e.g. Zenodo);
  • Aggregator services: Information systems that collect descriptive metadata about publications or datasets from multiple sources in order to enable cross-data-source discovery of given research products. Examples are DataCite, BASE, DOAJ;
  • Entity registries: Information systems created with the intent of maintaining authoritative registries of given entities in scholarly communication, such as OpenDOAR for institutional repositories, re3data for data repositories, CORDA and other funder databases for projects and funding information;
  • CRIS (coming soon): Information systems adopted by research and academic organizations to keep track of their research administration records and related results; examples of CRIS content are articles or datasets funded by projects, their principal investigators, facilities acquired thanks to funding, etc.;
  • Information spaces: Services that maintain an information space of (possibly interlinked) scholarly communication objects. Examples are CrossRef, ScholeXplorer and OpenAIRE itself.

How does OpenAIRE collect metadata records?

As of October 2018, OpenAIRE aggregates more than 27 million metadata records from more than 13,000 data sources.

OpenAIRE features three workflows for metadata aggregation:
  1. for the aggregation from data sources whose content is known to comply with the OpenAIRE content acquisition policy (*),
  2. for the aggregation of content that is not known to be eligible according to the policy,
  3. for the aggregation of information packages from entity registries.

(*) Please note that a new, more open content acquisition policy [5] was defined and published by the OpenAIRE consortium in October 2018. It is currently being implemented, and this documentation will soon reflect the change.

Workflow for OpenAIRE compliant data sources

This workflow is for data sources that comply with the OpenAIRE guidelines and thus it is executed for the majority of data sources.

The workflow consists of two phases: collection and transformation.

The collection phase collects information packages in the form of XML metadata records from an OAI-PMH endpoint of the data source (as the OpenAIRE guidelines mandate) and stores them in a metadata store.

The transformation phase transforms the collected records according to the OpenAIRE internal data model and stores them in another metadata store, ready to be read for populating the OpenAIRE ISG.
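As a concrete illustration of the collection phase, the sketch below parses one page of an OAI-PMH ListRecords response. The namespaces are the standard OAI-PMH and Dublin Core ones, but the sample record and the fields extracted are invented for illustration; this is not OpenAIRE's actual harvesting code.

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def parse_list_records(xml_text):
    """Parse one page of an OAI-PMH ListRecords response.

    Returns the list of records (OAI identifier, datestamp, Dublin Core
    titles) plus the resumption token; a non-empty token means the
    harvester must request the next page from the endpoint.
    """
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.iter(OAI + "record"):
        header = rec.find(OAI + "header")
        records.append({
            "identifier": header.findtext(OAI + "identifier"),
            "datestamp": header.findtext(OAI + "datestamp"),
            "titles": [t.text for t in rec.iter(DC + "title")],
        })
    token = root.findtext(OAI + "ListRecords/" + OAI + "resumptionToken")
    return records, token

# A miniature, hand-written response used only for illustration.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header>
        <identifier>oai:example.org:1</identifier>
        <datestamp>2018-10-01</datestamp>
      </header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An example article</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
    <resumptionToken>page-2</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

records, token = parse_list_records(SAMPLE)
```

A real harvester would loop, re-requesting with `resumptionToken=page-2` until the endpoint returns an empty token, and store each collected record unchanged in the metadata store before the transformation phase runs.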

Workflow for data sources with unknown compliance

This workflow applies to data sources that are registered in OpenAIRE but are not known to be OpenAIRE compliant. This is the typical case for aggregators of data repositories (e.g. DataCite).

According to the content acquisition policies, OpenAIRE can include a dataset in the ISG only if it is linked to an object (a project or a publication) already in the ISG.

Therefore, OpenAIRE collects all metadata records and transforms them according to the internal OpenAIRE data model. Inference algorithms then process the records and mark those that satisfy the content acquisition policy, making them eligible to enter the ISG.
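The marking step can be sketched as follows. The record shape and the identifier scheme are invented for illustration; the real eligibility rules are those of the content acquisition policy [5].

```python
def mark_eligible(records, isg_ids):
    """Mark each collected record as eligible for the ISG if at least one
    of its related identifiers (a project or a publication) is already
    present in the graph, as the content acquisition policy requires."""
    for rec in records:
        rec["eligible"] = any(rid in isg_ids for rid in rec.get("related_ids", []))
    return records

# Invented identifiers: the graph already knows one project and one publication.
isg_ids = {"project:EC/H2020/777541", "publication:10.1000/example"}
datasets = mark_eligible(
    [
        {"id": "dataset:1", "related_ids": ["project:EC/H2020/777541"]},
        {"id": "dataset:2", "related_ids": ["project:unknown"]},
        {"id": "dataset:3"},  # no links at all
    ],
    isg_ids,
)
```

Only `dataset:1` would be marked eligible here: its related project is already in the graph, while the other two records link to nothing the ISG knows about.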

Workflow for entity registries

This workflow applies to data sources offering authoritative lists of entities.

The workflow consists of two phases: collection and transformation.

The collection phase collects information packages in the form of files in some machine-readable format (e.g. XML, JSON, CSV) via one of the supported exchange protocols (OAI-PMH, SFTP, FTP(S), HTTP, REST).

The transformation phase transforms the packages according to the OpenAIRE internal data model and stores them into a metadata store ready to be read for populating the OpenAIRE ISG.
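For a registry, the transformation phase amounts to a field mapping from the registry's own schema onto the internal model. The sketch below maps a funder-registry project entry; both the input and the output field names are illustrative, not the real OpenAIRE data model.

```python
def transform_project_entry(entry):
    """Map one raw funder-registry entry (a JSON-like dict) onto a uniform
    internal representation; missing optional fields get empty defaults."""
    return {
        "objectType": "project",
        "code": entry["id"],
        "title": entry.get("name", ""),
        "funder": entry.get("funderShortName", ""),
        "startDate": entry.get("start"),  # None when the registry omits it
    }

# Illustrative input, using the grant number of the OpenAIRE-Advance project.
project = transform_project_entry(
    {"id": "777541", "name": "OpenAIRE-Advance", "funderShortName": "EC"}
)
```

In practice one such mapping function exists per registry format, and the resulting uniform records are what the metadata store holds for the ISG population step.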

For additional details about the aggregation workflows, please refer to [7].

What does OpenAIRE do to enrich the collected metadata records?

Once the ISG is populated, OpenAIRE performs de-duplication of organizations and publications [8] and runs inference algorithms [3] to enrich the graph with additional information extracted from the publications' full-texts, namely:
  • subjects
  • links to datasets
  • links to projects
  • links to research communities
  • links to publications
  • links to software
  • links to biological entities (e.g. PDB)
  • Citations
All other information (e.g. access rights, titles, authors, URLs to web resources) is collected from the data sources. Whenever the de-duplication algorithm finds duplicates of the same publication, all information from all of the duplicates is kept. OpenAIRE keeps track of the provenance of each piece of information, i.e. whether it was inferred by the mining algorithms, claimed by authenticated portal users, or present in the metadata record collected from a data source.
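A much simplified sketch of this merge-with-provenance behaviour follows. The record shapes are invented; the actual de-duplication strategy is described in [8].

```python
def merge_duplicates(duplicates):
    """Merge duplicate records of the same publication: keep the union of
    all values seen for each field, and remember the provenance of the
    record each value came from ('collected', 'inferred' or 'claimed')."""
    merged = {}
    for rec in duplicates:
        provenance = rec.get("provenance", "collected")
        for field, values in rec.items():
            if field == "provenance":
                continue
            if not isinstance(values, list):
                values = [values]
            bucket = merged.setdefault(field, [])
            for value in values:
                # Keep each distinct value once, with the provenance of the
                # first record that contributed it.
                if all(value != kept["value"] for kept in bucket):
                    bucket.append({"value": value, "provenance": provenance})
    return merged

# Two invented duplicates: one collected from a repository, one enriched
# with an inferred subject by the mining algorithms.
merged = merge_duplicates([
    {"title": "On Graphs", "pid": "10.1000/example", "provenance": "collected"},
    {"title": "On Graphs", "subjects": ["graph theory"], "provenance": "inferred"},
])
```

The merged representative keeps the single title once, the collected identifier, and the inferred subject, each value still carrying the provenance it arrived with.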

How is the enriched OpenAIRE graph published?

The deduplicated and enriched ISG is materialized by the data publishing workflow into four ISG projections:
  1. a full-text index to support search and browse queries from the OpenAIRE portal and to expose subsets of the ISG on the OpenAIRE search API [9],
  2. an E-R database and a dedicated key-value cache for statistics,
  3. a NoSQL document storage in order to support OAI-PMH bulk export of subsets of the ISG in XML format [9],
  4. a triple store in order to expose the ISG as LOD via a SPARQL endpoint (currently in beta) [10]
Every time the data publishing workflow executes, four new ISG projections are generated and placed in a “pre-public” status before becoming accessible to the general public.
The switch from pre-public to public, whereby the currently accessible ISG projections and statistics are dismissed and the new versions take their place, is still manual for safety reasons.
Pre-public ISG projections are subject to a set of semi-automatic quality-control checks [11].
These checks determine whether the switch to public can be performed or whether regressions in the overall data quality need to be addressed first.
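One such check could, for instance, compare entity counts between the public and the pre-public projections and flag any count that drops too sharply. The thresholds and numbers below are invented; the actual monitoring system used by OpenAIRE is DataQ [11].

```python
def find_regressions(public_counts, prepublic_counts, max_drop=0.05):
    """Report entity types whose count in the pre-public projection dropped
    by more than max_drop (a fraction) with respect to the public one; any
    reported regression would block the manual switch to public."""
    regressions = {}
    for entity, old in public_counts.items():
        new = prepublic_counts.get(entity, 0)
        if old > 0 and (old - new) / old > max_drop:
            regressions[entity] = (old, new)
    return regressions

# Invented example: datasets dropped by 20%, publications grew, and the
# 1% dip in projects stays within the tolerated threshold.
regressions = find_regressions(
    {"publications": 1000, "datasets": 500, "projects": 100},
    {"publications": 1020, "datasets": 400, "projects": 99},
)
```

Here only the dataset count would be reported, which matches the kind of incident logged in the table above (e.g. the 2018-10-30 loss of records from Portuguese repositories).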

How often is the OpenAIRE graph published?

The ISG is published about once every two weeks unless critical quality issues arise in the quality check phase.

Whenever minor issues occur, the ISG is published anyway and details about the issues are:
  • tracked via the private ticketing system of the OpenAIRE technical team;
  • notified to the affected data source, if the issue depends on the originally collected content;
  • briefly described in the table above, which keeps track of the index and statistics updates.


[1] Manghi P. et al. (2014) "The D-NET software toolkit: A framework for the realization, maintenance, and operation of aggregative infrastructures", Program, Vol. 48, Issue 4, pp. 322-354.

[2] Check the data provider page for the complete list of sources.

[3] Bolikowski L. (2015) Text mining services in OpenAIRE:

[4] OpenAIRE claiming functionality:

[5] The OpenAIRE content acquisition policy:

[6] Check which funders are affiliated with OpenAIRE:

[7] Atzori, Claudio, Bardi, Alessia, Manghi, Paolo, & Mannocci, Andrea. (2017). The OpenAIRE workflows for data management. Zenodo.

[8] Manghi P. (2015) On de-duplication in the OpenAIRE infrastructure:

[9] OpenAIRE API documentation:

[10] OpenAIRE Linked Open Data:

[11]  Mannocci, A., & Manghi, P. (2016, September). DataQ: A Data Flow Quality Monitoring System for Aggregative Data Infrastructures. In International Conference on Theory and Practice of Digital Libraries (pp. 357-369). Springer International Publishing.

Tags: content providers

OpenAIRE-Advance receives funding from the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement No. 777541.


Unless otherwise indicated, all materials created by OpenAIRE are licensed under the CC ATTRIBUTION 4.0 INTERNATIONAL LICENSE.