Peters - GEODEEPDIVE: AUTOMATING THE LOCATION AND EXTRACTION OF DATA AND INFORMATION FROM DIGITAL PUBLICATIONS
Authors: Peters, Shanan E.
Resource type: Composite Resource
Created: Dec 06, 2018 at 6:40 p.m.
Last updated: Dec 06, 2018 at 6:41 p.m. by Leslie Hsu
PETERS, Shanan E.(1), ROSS, Ian(2), CZAPLEWSKI, John(3) and LIVNY, Miron(2), (1)Department of Geoscience, University of Wisconsin–Madison, 1215 W. Dayton St., Madison, WI 53706, (2)Computer Sciences, University of Wisconsin–Madison, Madison, WI 53706, (3)Department of Geoscience, University of Wisconsin–Madison, 1215 W. Dayton St., Madison, WI 53706
Modern scientific databases simplify access to data and information, but a large body of knowledge remains within the published literature and is therefore difficult to access and leverage at scale in scientific workflows. Recent advances in machine reading and learning approaches to converting unstructured text, tables, and figures into structured knowledge bases are promising, but these software tools cannot be deployed for scientific research purposes without access to both new and old publications and to computing resources. Automation of such approaches is also necessary in order to keep pace with the ever-growing scientific literature. GeoDeepDive bridges the gap between scientists who need to locate and extract information from large numbers of publications and the millions of documents distributed by many different publishers every year. As of August 2018, GeoDeepDive (GDD) had ingested over 7.4 million full-text documents from multiple commercial, professional society, and open-access publishers. In accordance with GDD-negotiated publisher agreements, original documents and citation metadata are stored locally and prepared for common data mining activities by running software tools that parse and annotate their contents linguistically (natural language processing) and visually (optical character recognition). Vocabularies of terms in domain-specific databases can be labeled throughout the full text of documents, with results exposed to users via an API. New vocabularies and new versions of parsing and annotation tools can be deployed rapidly across all original documents using the distributed computing capacity provided by HTCondor. Downloading, storing, and pre-processing original PDF content from distributed publishers, and making these data products available to user applications, provides new mechanisms for discovering and using information in publications, augmenting existing databases with new information, and reducing time-to-science.
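As an illustration of the API access described above, the sketch below queries GeoDeepDive's public `snippets` route, which returns sentence-level matches for a term across the ingested corpus. The exact response field names (`success`, `data`, `_gddid`, `highlight`) are taken from the API's published examples and should be treated as assumptions here, not guarantees; this is a minimal sketch, not an official client.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Base URL of the public GeoDeepDive API (assumed; the service is
# also known as xDD).
API_BASE = "https://geodeepdive.org/api"

def snippets_url(term):
    """Build a query URL for the `snippets` route, which returns
    short text fragments mentioning `term` in ingested documents."""
    return f"{API_BASE}/snippets?{urlencode({'term': term})}"

def extract_highlights(response):
    """Pull (document id, highlight) pairs out of a decoded JSON
    response. Field names reflect the API's example envelope
    (results under success -> data) and are assumptions."""
    hits = response.get("success", {}).get("data", [])
    return [(doc.get("_gddid"), h)
            for doc in hits
            for h in doc.get("highlight", [])]

def fetch_snippets(term):
    """Fetch and decode snippets for a term (requires network access)."""
    with urlopen(snippets_url(term)) as resp:
        return extract_highlights(json.load(resp))
```

A typical use would be `fetch_snippets("stromatolite")`, which would return a list of `(document id, sentence fragment)` pairs that a downstream application could filter, geolocate, or link against an existing domain database.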
How to cite
This resource is shared under the Creative Commons Attribution CC BY license: http://creativecommons.org/licenses/by/4.0/
Peters, Shanan E., Department of Geoscience, University of Wisconsin