--- http_interactions: - request: method: get uri: https://rogue-scholar.org/api/blogs/tyfqw20 body: encoding: UTF-8 string: '' headers: Connection: - close Host: - rogue-scholar.org User-Agent: - http.rb/5.1.1 response: status: code: 200 message: OK headers: Age: - '0' Cache-Control: - public, max-age=0, must-revalidate Content-Length: - '162607' Content-Type: - application/json; charset=utf-8 Date: - Thu, 15 Jun 2023 20:39:01 GMT Etag: - '"6w7me0q1i23h72"' Server: - Vercel Strict-Transport-Security: - max-age=63072000 X-Matched-Path: - "/api/blogs/[slug]" X-Vercel-Cache: - MISS X-Vercel-Id: - fra1::iad1::r8zhw-1686861540247-e74748738960 Connection: - close body: encoding: UTF-8 string: '{"id":"tyfqw20","title":"iPhylo","description":"Rants, raves (and occasionally considered opinions) on phyloinformatics, taxonomy, and biodiversity informatics. For more ranty and less considered opinions, see my Twitter feed.
ISSN 2051-8188. Written content on this site is licensed under a Creative Commons Attribution 4.0 International license.","language":"en","favicon":null,"feed_url":"https://iphylo.blogspot.com/feeds/posts/default?alt=rss","feed_format":"application/rss+xml","home_page_url":"https://iphylo.blogspot.com/","indexed_at":"2023-02-06","modified_at":"2023-05-31T17:26:00+00:00","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Blogger","category":"Natural Sciences","backlog":true,"prefix":"10.59350","items":[{"id":"https://doi.org/10.59350/btdk4-42879","uuid":"3e1278f6-e7c0-43e1-bb54-6829e1344c0d","url":"https://iphylo.blogspot.com/2022/09/the-ideal-taxonomic-journal.html","title":"The ideal taxonomic journal","summary":"This is just some random notes on an “ideal” taxonomic journal, inspired in part by some recent discussions on “turbo-taxonomy” (e.g., https://doi.org/10.3897/zookeys.1087.76720 and https://doi.org/10.1186/1742-9994-10-15), and also examples such as the Australian Journal of Taxonomy https://doi.org/10.54102/ajt.qxi3r which seems well-intentioned but limited. XML One approach is to have highly structured text that embeds detailed markup, and ideally a tool that generates markup in XML. This is...","date_published":"2022-09-29T14:00:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

This is just some random notes on an “ideal” taxonomic journal, inspired in part by some recent discussions on “turbo-taxonomy” (e.g., https://doi.org/10.3897/zookeys.1087.76720 and https://doi.org/10.1186/1742-9994-10-15), and also examples such as the Australian Journal of Taxonomy https://doi.org/10.54102/ajt.qxi3r which seems well-intentioned but limited.

\n

XML

\n

One approach is to have highly structured text that embeds detailed markup, and ideally a tool that generates markup in XML. This is the approach taken by Pensoft. There is an inevitable trade-off between the burden on authors of marking up text versus making the paper machine readable. In some ways this seems misplaced effort given that there is little evidence that publications by themselves have much value (see The Business of Extracting Knowledge from Academic Publications). “Value” in this case means as a source of data or factual statements that we can compute over. Human-readable text is not a good way to convey this sort of information.

\n

It’s also interesting that many editing tools are going in the opposite direction, for example there are minimalist tools using Markdown where the goal is to get out of the author’s way, rather than impose a way of writing. Text is written by humans for humans, so the tools should be human-friendly.

\n

The idea of publishing using XML is attractive in that it gives you XML that can be archived by, say, PubMed Central, but other than that the value seems limited. A cursory glance at download stats for journals that provide PDF and XML downloads, such as PLoS One and ZooKeys, PDF is by far the more popular format. So arguably there is little value in providing XML. Those who have tried to use JATS-XML as an authoring tool have not had a happy time: How we tried to JATS XML. However, there are various tools to help with the process, such as docxToJats,
\ntexture, and jats-xml-to-pdf if this is the route one wants to take.

\n

Automating writing manuscripts

\n

The dream, of course, is to have a tool where you store all your taxonomic data (literature, specimens, characters, images, sequences, media files, etc.) and at the click of a button generate a paper. Certainly some of this can be automated, much nomenclatural and specimen information could be converted to human-readable text. Ideally this computer-generated text would not be edited (otherwise it could get out of sync with the underlying data). The text should be transcluded. As an aside, one way to do this would be to include things such as lists of material examined as images rather than text while the manuscript is being edited. In the same way that you (probably) wouldn’t edit a photograph within your text editor, you shouldn’t be editing data. When the manuscript is published the data-generated portions can then be output as text.

\n

Of course all of this assumes that we have taxonomic data in a database (or some other storage format, including plain text and Mark-down, e.g. Obsidian, markdown, and taxonomic trees) that can generate outputs in the various formats that we need.

\n

Archiving data and images

\n

One of the really nice things that Plazi do is have a pipeline that sends taxonomic descriptions and images to Zenodo, and similar data to GBIF. Any taxonomic journal should be able to do this. Indeed, arguably each taxonomic treatment within the paper should be linked to the Zenodo DOI at the time of publication. Indeed, we could imagine ultimately having treatments as transclusions within the larger manuscript. Alternatively we could store the treatments as parts of the larger article (rather like chapters in a book), each with a CrossRef DOI. I’m still sceptical about whether these treatments are as important as we make out, see Does anyone cite taxonomic treatments?. But having machine-readable taxonomic data archived and accessible is a good thing. Uploading the same data to GBIF makes much of that data immediately accessible. Now that GBIF offers hosted portals there is the possibility of having custom interfaces to data from a particular journal.

\n

Name and identifier registration

\n

We would also want automatic registration of new taxonomic names, for which there are pipelines (see “A common registration-to-publication automated pipeline for nomenclatural acts for higher plants (International Plant Names Index, IPNI), fungi (Index Fungorum, MycoBank) and animals (ZooBank)” https://doi.org/10.3897/zookeys.550.9551). These pipelines do not seem to be documented in much detail, and the data formats differ across registration agencies (e.g., IPNI and ZooBank). For example, ZooBank seems to require TaxPub XML.

\n

Registration of names and identifiers, especially across multiple registration agencies (ZooBank, CrossRef, DataCite, etc.) requires some coordination, especially when one registration agency requires identifiers from another.

\n

Summary

\n

If data is key, then the taxonomic paper itself becomes something of a wrapper around that data. It still serves the function of being human-readable, providing broader context for the work, and as an archive that conforms to currently accepted ways to publish taxonomic names. But in some ways it is the last interesting part of the process.

\n
\n

Written with StackEdit.

\n
","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/37y2z-gre70","uuid":"f3629c86-06e0-42c0-844a-266b03a91ef1","url":"https://iphylo.blogspot.com/2023/05/ten-years-and-million-links.html","title":"Ten years and a million links","summary":"As trailed on a Twitter thread last week I’ve been working on a manuscript describing the efforts to map taxonomic names to their original descriptions in the taxonomic literature. Putting together a manuscript on linking taxonomic names to the primary literature, basically “um, what, exactly, have you been doing all these years?”. TL;DR Across fungi, plants, and animals approx 1.3 million names have been linked to a persistent identifier for a publication.— Roderic Page (@rdmpage) May 25,...","date_published":"2023-05-31T17:26:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

As trailed on a Twitter thread last week I’ve been working on a manuscript describing the efforts to map taxonomic names to their original descriptions in the taxonomic literature.

\n

Putting together a manuscript on linking taxonomic names to the primary literature, basically “um, what, exactly, have you been doing all these years?”. TL;DR Across fungi, plants, and animals approx 1.3 million names have been linked to a persistent identifier for a publication.

— Roderic Page (@rdmpage) May 25, 2023
\n

The preprint is on bioRxiv doi:10.1101/2023.05.29.542697

\n
\n

A major gap in the biodiversity knowledge graph is a connection between taxonomic names and the taxonomic literature. While both names and publications often have persistent identifiers (PIDs), such as Life Science Identifiers (LSIDs) or Digital Object Identifiers (DOIs), LSIDs for names are rarely linked to DOIs for publications. This article describes efforts to make those connections across three large taxonomic databases: Index Fungorum, International Plant Names Index (IPNI), and the Index of Organism Names (ION). Over a million names have been matched to DOIs or other persistent identifiers for taxonomic publications. This represents approximately 36% of names for which publication data is available. The mappings between LSIDs and publication PIDs are made available through ChecklistBank. Applications of this mapping are discussed, including a web app to locate the citation of a taxonomic name, and a knowledge graph that uses data on researcher’s ORCID ids to connect taxonomic names and publications to authors of those names.

\n
\n

Much of the work has been linking taxa to names, which still has huge gaps. There are also interesting differences in coverage between plants, animals, and fungi (see preprint for details).

\n\n
\"\"
\"\"
\"\"
\n\n\nThere is also a simple app to demonstrate these links, see https://species-cite.herokuapp.com.\n\n\n\n
\n

Written with StackEdit.

\n
","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/92rdb-5fe58","uuid":"d33d4f49-b281-4997-9eb9-dbad1e52d9bd","url":"https://iphylo.blogspot.com/2022/09/local-global-identifiers-for.html","title":"Local global identifiers for decentralised wikis","summary":"I''ve been thinking a bit about how one could use a Markdown wiki-like tool such as Obsidian to work with taxonomic data (see earlier posts Obsidian, markdown, and taxonomic trees and Personal knowledge graphs: Obsidian, Roam, Wikidata, and Xanadu). One \"gotcha\" would be how to name pages. If we treat the database as entirely local, then the page names don''t matter, but what if we envisage sharing the database, or merging it with others (for example, if we divided a taxon up into chunks, and...","date_published":"2022-09-08T16:09:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

I''ve been thinking a bit about how one could use a Markdown wiki-like tool such as Obsidian to work with taxonomic data (see earlier posts Obsidian, markdown, and taxonomic trees and Personal knowledge graphs: Obsidian, Roam, Wikidata, and Xanadu).

\n\n

One \"gotcha\" would be how to name pages. If we treat the database as entirely local, then the page names don''t matter, but what if we envisage sharing the database, or merging it with others (for example, if we divided a taxon up into chunks, and different people worked on those different chunks)?

\n\n

This is the attraction of globally unique identifiers. You and I can independently work on the same thing, such as data linked to scientific paper, safe in the knowledge that if we both use the DOI for that paper we can easily combine what we''ve done. But global identifiers can also be a pain, especially if we need to use a service to look them up (\"is there a DOI for this paper?\", \"what is the LSID for this taxonomic name?\").

\n\n

Life would be easier if we could generate identifiers \"locally\", but had some assurance that they would be globally unique, and that anyone else generating an identifier for the same thing would arrive at the same identifier (this eliminates things such as UUIDs which are intentionally designed to prvent people genrrating the same identifier). One approach is \"content addressing\" (see, e.g. Principles of Content Addressing - dead link but in the Wayabck Machine, see also btrask/stronglink). For example, we can generate a cryptographic hash of a file (such as a PDF) and use that as the identifier.

\n\n

Now the problem is that we have globally unique, but ugly and unfriendly identifiers (such as \"6c98136eba9084ea9a5fc0b7693fed8648014505\"). What we need are nice, easy to use identifiers we can use as page names. Wikispecies serves as a possible role model, where taxon names serve as page names, as do simplified citations (e.g., authors and years). This model runs into the problem that taxon names aren''t unique, nor are author + year combinations. In Wikispecies this is resolved by having a centralised database where it''s first come, first served. If there is a name clash you have to create a new name for your page. This works, but what if you have multiple databases un by different people? How do we ensure the identifiers are the same?

\n\n

Then I remembered Roger Hyam''s flight of fantasy over a decade ago: SpeciesIndex.org – an impractical, practical solution. He proposed the following rules to generate a unique URI for a taxonomic name:\n\n

\n

\n\n

Roger is basically arging that while names aren''t unique (i.e., we have homonyms such as Abronia) they are pretty close to being so, and with a few tweaks we can come up with a unique representation. Another way to think about this if we had a database of all taxonomics, we could construct a trie and for each name find the shortest set of name parts (genus, species, etc), year, and code that gave us a unique string for that name. In many cases the species name may be all we need, in other cases we may need to add year and/or nomenclatural code to arrive at a unique string. \n\n

\n\n

What about bibliographic references? Well many of us will have databases (e.g., Endnote, Mendeley, Zotero, etc.) which generate \"cite keys\". These are typically short, memorable identifiers for a reference that are unique within that database. There is an interesting discussion on the JabRef forum regarding a \"Universal Citekey Generator\", and source code is available cparnot/universal-citekey-js. I''ve yet to explore this in detail, but it looks a promising way to generate unique identifiers from basic metadata (echos of more elaborate schemes such as SICIs). For example,\n\n

Senna AR, Guedes UN, Andrade LF, Pereira-Filho GH. 2021. A new species of amphipod Pariphinotus Kunkel, 1910 (Amphipoda: Phliantidae) from Southwestern Atlantic. Zool Stud 60:57. doi:10.6620/ZS.2021.60-57.
\n\nbecomes \"Senna:2021ck\". So if two people have the same, core, metadata for a paper they can generate the same key.

\n\n

Hence it seems with a few conventions (and maybe some simple tools to support them) we could have decentralised wiki-like tools that used the same identifiers for the same things, and yet those identfiiers were short and human-friendly.

","tags":["citekey","identfiiers","markdown","obsidian","Roger Hyam"],"language":"en","references":null},{"id":"https://doi.org/10.59350/j77nc-e8x98","uuid":"c6b101f4-bfbc-4d01-921d-805c43c85757","url":"https://iphylo.blogspot.com/2022/08/linking-taxonomic-names-to-literature.html","title":"Linking taxonomic names to the literature","summary":"Just some thoughts as I work through some datasets linking taxonomic names to the literature. In the diagram above I''ve tried to capture the different situatios I encounter. Much of the work I''ve done on this has focussed on case 1 in the diagram: I want to link a taxonomic name to an identifier for the work in which that name was published. In practise this means linking names to DOIs. This has the advantage of linking to a citable indentifier, raising questions such as whether citations...","date_published":"2022-08-22T17:19:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"Just some thoughts as I work through some datasets linking taxonomic names to the literature.\n\n
\"\"
\n\n

In the diagram above I''ve tried to capture the different situatios I encounter. Much of the work I''ve done on this has focussed on case 1 in the diagram: I want to link a taxonomic name to an identifier for the work in which that name was published. In practise this means linking names to DOIs. This has the advantage of linking to a citable indentifier, raising questions such as whether citations of taxonmic papers by taxonomic databases could become part of a taxonomist''s Google Scholar profile.

\n\n

In many taxonomic databases full work-level citations are not the norm, instead taxonomists cite one or more pages within a work that are relevant to a taxonomic name. These \"microcitations\" (what the U.S. legal profession refer to as \"point citations\" or \"pincites\", see What are pincites, pinpoints, or jump legal references?) require some work to map to the work itself (which is typically the thing that has a citatble identifier such as a DOI).

\n\n

Microcitations (case 2 in the diagram above) can be quite complex. Some might simply mention a single page, but others might list a series of (not necessarily contiguous) pages, as well as figures, plates etc. Converting these to citable identifiers can be tricky, especially as in most cases we don''t have page-level identifiers. The Biodiversity Heritage Library (BHL) does have URLs for each scanned page, and we have a standard for referring to pages in a PDF (page=<pageNum>, see RFC 8118). But how do we refer to a set of pages? Do we pick the first page? Do we try and represent a set of pages, and if so, how?

\n\n

Another issue with page-level identifiers is that not everything on a given page may be relevant to the taxonomic name. In case 2 above I''ve shaded in the parts of the pages and figure that refer to the taxonomic name. An example where this can be problematic is the recent test case I created for BHL where a page image was included for the taxonomic name Aphrophora impressa. The image includes the species description and a illustration, as well as text that relates to other species.

\n\n
\"\"
\n\n

Given that not everything on a page need be relevant, we could extract just the relevant blocks of text and illustrations (e.g., paragraphs of text, panels within a figure, etc.) and treat that set of elements as the thing to cite. This is, of course, what Plazi are doing. The set of extracted blocks is glued together as a \"treatment\", assigned an identifier (often a DOI), and treated as a citable unit. It would be interesting to see to what extent these treatments are actually cited, for example, do subsequent revisions that cite work that include treatments cite those treatments, or just the work itself? Put another way, are we creating \"threads\" between taxonomic revisions?

\n\n

One reason for these notes is that I''m exploring uploading taxonomic name - literature links to ChecklistBank and case 1 above is easy, as is case 3 (if we have treatment-level identifiers). But case 2 is problematic because we are linking to a set of things that may not have an identifier, which means a decision has to be made about which page to link to, and how to refer to that page.

","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/w18j9-v7j10","uuid":"d811172e-7798-403c-a83d-3d5317a9657e","url":"https://iphylo.blogspot.com/2022/08/papers-citing-data-that-cite-papers.html","title":"Papers citing data that cite papers: CrossRef, DataCite, and the Catalogue of Life","summary":"Quick notes to self following on from a conversation about linking taxonomic names to the literature. Is there a way to turn those links into countable citations (even if just one per database) for Google Scholar?— Wayne Maddison (@WayneMaddison) August 3, 2022 There are different sorts of citation: Paper cites another paper Paper cites a dataset Dataset cites a paper Citation type (1) is largely a solved problem (although there are issues of the ownership and use of this...","date_published":"2022-08-03T11:33:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"Quick notes to self following on from a conversation about linking taxonomic names to the literature.\n\n

Is there a way to turn those links into countable citations (even if just one per database) for Google Scholar?

— Wayne Maddison (@WayneMaddison) August 3, 2022
\n\nThere are different sorts of citation:\n\n
    \n
  1. Paper cites another paper
  2. \n
  3. Paper cites a dataset
  4. \n
  5. Dataset cites a paper
  6. \n
\n\nCitation type (1) is largely a solved problem (although there are issues of the ownership and use of this data, see e.g. Zootaxa has no impact factor.\n\nCitation type (2) is becoming more widespread (but not perfect as GBIF''s #citethedoi campaign demonstrates. But the idea is well accepted and there are guides to how to do it, e.g.:\n\n
\nCousijn, H., Kenall, A., Ganley, E. et al. A data citation roadmap for scientific publishers. Sci Data 5, 180259 (2018). https://doi.org/10.1038/sdata.2018.259\n
\n\nHowever, things do get problematic because most (but not all) DOIs for publications are managed by CrossRef, which has an extensive citation database linking papers to other paopers. Most datasets have DataCite DOIs, and DataCite manages its own citations links, but as far as I''m aware these two systems don''t really taklk to each other.\n\nCitation type (3) is the case where a database is largely based on the literature, which applies to taxonomy. Taxonomic databases are essentially collections of literature that have opinions on taxa, and the database may simply compile those (e.g., a nomenclator), or come to some view on the applicability of each name. In an ideal would, each reference included in a taxonomic database would gain a citation, which would help better reflect the value of that work (a long standing bone of contention for taxonomists).\n\nIt would be interesting to explore these issues further. CrossRef and DataCite do share Event Data (see also DataCite Event Data). Can this track citations of papers by a dataset?\n \n \nMy take on Wayne''s question:\n\n
\n Is there a way to turn those links into countable citations (even if just one per database) for Google Scholar?\n
\n\nis that he''s is after type 3 citations, which I don''t think we have a way to handle just yet (but I''d need to look at Event Data a bit more). Google Scholar is a black box, and the academic coimmunity''s reliance on it for metrics is troubling. But it would be interetsing to try and figure out if there is a way to get Google Scholar to index the citations of taxonomic papers by databases. For instance, the Catalogue of Life has an ISSN 2405-884X so it can be treated as a publication. At the moment its web pages have lots of identifiers for people managing data and their organisations (lots of ORCIDs and RORs, and DOIs for individual datasets (e.g., checklistbank.org) but precious little in the way of DOIs for publications (or, indeed, ORCIDs for taxonomists). What would it take for taxonomic publications in the Catalogue of Life to be treated as first class citations?","tags":["Catalogue of Life","citation","CrossRef","DataCite","DOI"],"language":"en","references":null},{"id":"https://doi.org/10.59350/ws094-1w310","uuid":"6bed78ec-0029-4096-b1c3-48a55a9fdb3b","url":"https://iphylo.blogspot.com/2023/04/chatgpt-of-course.html","title":"ChatGPT, of course","summary":"I haven’t blogged for a while, work and other reasons have meant I’ve not had much time to think, and mostly I blog to help me think. ChatGPT is obviously a big thing at the moment, and once we get past the moral panic (“students can pass exams using AI!”) there are a lot of interesting possibilities to explore. Inspired by essays such as How Q&A systems based on large language models (eg GPT4) will change things if they become the dominant search paradigm — 9 implications for libraries...","date_published":"2023-04-03T12:52:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

I haven’t blogged for a while, work and other reasons have meant I’ve not had much time to think, and mostly I blog to help me think.

\n

ChatGPT is obviously a big thing at the moment, and once we get past the moral panic (“students can pass exams using AI!”) there are a lot of interesting possibilities to explore. Inspired by essays such as How Q&A systems based on large language models (eg GPT4) will change things if they become the dominant search paradigm — 9 implications for libraries and Cheating is All You Need, as well as [Paul Graham GPT](https://paul-graham-gpt.vercel.app) I thought I’d try a few things and see where this goes.

\n

ChatGPT can do some surprising things.

\n

Parse bibliographic data

\n

I spend a LOT of time working with bibliographic data, trying to parse it into structured data. ChatGPT can do this:

\n\n
\"\"
\n\n

Note that it does more than simply parse the strings, it expands journal abbreviations such as “J. Malay Brch. R. Asiat. Soc.” to the full name “Journal of the Malayan Branch of the Royal Asiatic Society”. So we can get clean, parsed data in a range of formats.

\n

Parse specimens

\n

Based on the success with parsing bibliographic strings I wondered how well it could handle citation software specimens (“material examined”). Elsewhere I’ve been critical of Plazi’s ability to do this, see Problems with Plazi parsing: how reliable are automated methods for extracting specimens from the literature?.

\n

For example, given this specimen record on p. 130 of doi:10.5852/ejt.2021.775.1553

\n
\n

LAOS • Kammoune Province, Bunghona Market, 7 km Nof Xe Bangfai River;
\n17.13674° N, 104.98591° E; E. Jeratthitikul, K. Wisittikoson, A. Fanka, N. Wutthituntisil and P. Prasankok leg.; sold by local people;
\nMUMNH-UNI2831.

\n
\n

ChatGPT extracted a plausible Darwin Core record:

\n\n
\"\"
\n\n

I’ve been experimenting with parsing specimen records using the same machine learning approach for bibliographic data (e.g., Citation parsing tool released), perhaps it’s time to rethink that idea.

\n

Correct OCR errors

\n

OCR errors are a well known limitation of the Biodiversity Heritage Library (BHL), see for example OCR Improvements: An Early Analysis. Given that ChatGPT is built on top of a large language model it seems obvious to see how it handles OCR errors. I tried the following snippet of text from BHL page 4312227:

\n
\n

i BREVIORA No. 185

\n

Jiiaii Jr., 16, who always join him and Hedriek in th(> search for
\nfrogs. Iloraeio Mayorga, research assistant to tlie author, has also
\nbeen (wtreniely useful, not only in field work, but also in the
\ntedious curatorial and laboratory work that usually fol- lows. Dr.
\nJohn Randall kindly took tlie photouraplis of E. hedricki, and the
\nGaliiiaues family of San Juan made their El Verde house available as
\na (MMiter of activities for the colleetiuff party. To all these
\njx’opic, the autlioi- feels (iee])ly indebted.

\n

Elet’tiierodactylus iiKnRrcKi s]). n.

\n

Type. Museum of Coniparative Zoology Xo. ;}()!)()8, c^ from El Verde,
\nwest flank of El Yuiuinc Puerto Hico. I.IOO ft. Coll. J. A. Kivero,
\n11 Aug-. I!)(i2.

\n
\n

ChatGPT returns this:

\n
\n

i BREVIORA No. 185

\n

Juan Jr., 16, who always joins him and Hedrick in the search for
\nfrogs. Horacio Mayorga, research assistant to the author, has also
\nbeen extremely useful, not only in field work, but also in the tedious
\ncuratorial and laboratory work that usually follows. Dr. John Randall
\nkindly took the photographs of E. hedricki, and the Galinanes family
\nof San Juan made their El Verde house available as a center of
\nactivities for the collecting party. To all these people, the author
\nfeels deeply indebted.

\n

Eleutherodactylus hedricki sp. n.

\n

Type. Museum of Comparative Zoology No. 30908, c? from El Verde, west
\nflank of El Yunque, Puerto Rico. 1100 ft. Coll. J. A. Rivero, 11 Aug.
\n1962.

\n
\n\n
\"\"
\n\n

Comparing this to the scanned image ChatGPT it does pretty well, for example the gobbledegook “Elet’tiierodactylus iiKnRrcKi” is correctly translated as “Eleutherodactylus hedricki”. Running all of BHL through ChatGPT probably isn’t feasible, but one could imagine targeted cleaning of key papers.

\n

Summary

\n

These small experiments are fairly trivial, but they are the sort of tedious tasks that would otherwise require significant programming (or other resources) to solve. But ChatGPT can do rather more, as I hope to discuss in the next post.

\n
\n

Written with StackEdit.

\n
","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/7esgr-61v1","uuid":"96fa91d5-459c-482f-aa38-dda6e0a30e20","url":"https://iphylo.blogspot.com/2022/01/large-graph-viewer-experiments.html","title":"Large graph viewer experiments","summary":"I keep returning to the problem of viewing large graphs and trees, which means my hard drive has accumulated lots of failed prototypes. Inspired by some recent discussions on comparing taxonomic classifications I decided to package one of these (wildly incomplete) prototypes up so that I can document the idea and put the code somewhere safe. Very cool, thanks for sharing this-- the tree diff is similar to what J Rees has been cooking up lately with his 'cl diff' tool. I'll tag...","date_published":"2022-01-02T11:25:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

I keep returning to the problem of viewing large graphs and trees, which means my hard drive has accumulated lots of failed prototypes. Inspired by some recent discussions on comparing taxonomic classifications I decided to package one of these (wildly incomplete) prototypes up so that I can document the idea and put the code somewhere safe.

\n\n

Very cool, thanks for sharing this-- the tree diff is similar to what J Rees has been cooking up lately with his 'cl diff' tool. I'll tag @beckettws in here too so he can see potential crossover. The goal is autogenerate diffs like this as 1st step to mapping taxo name-to concept

— Nate Upham (@n8_upham) December 28, 2021
\n\n

Google Maps-like viewer

\n\n
\"\"
\n\n

I''ve created a simple viewer that uses a tiled map viewer (like Google Maps) to display a large graph. The idea is to draw the entire graph scaled to a 256 x 256 pixel tile. The graph is stored in a database that supports geospatial queries, which means the queries to retrieve the individual tiles need to display the graph at different levels of resolution are simply bounding box queries to a database. I realise that this description is cryptic at best. The GitHub repository https://github.com/rdmpage/gml-viewer has more details and the code itself. There''s a lot to do, especially adding support for labels(!) which presents some interesting challenges (levels of detail and generalization). The code doesn''t do any layout of the graph itself, instead I''ve used the yEd tool to compute the x,y coordinates of the graph.

\n\n

Since this exercise was inspired by a discussion of the ASM Mammal Diversity Database, the graph I''ve used for the demonstration above is the ASM classification of extant mammals. I guess I need to solve the labelling issue fairly quickly!

","tags":["Google Maps","graph","Mammal Species of the World","mammals","taxonomy"],"language":"en","references":null},{"id":"https://doi.org/10.59350/m48f7-c2128","uuid":"8aea47e4-f227-45f4-b37b-0454a8a7a3ff","url":"https://iphylo.blogspot.com/2023/04/chatgpt-semantic-search-and-knowledge.html","title":"ChatGPT, semantic search, and knowledge graphs","summary":"One thing about ChatGPT is it has opened my eyes to some concepts I was dimly aware of but am only now beginning to fully appreciate. ChatGPT enables you ask it questions, but the answers depend on what ChatGPT “knows”. As several people have noted, what would be even better is to be able to run ChatGPT on your own content. Indeed, ChatGPT itself now supports this using plugins. Paul Graham GPT However, it’s still useful to see how to add ChatGPT functionality to your own content from...","date_published":"2023-04-03T15:30:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

One thing about ChatGPT is it has opened my eyes to some concepts I was dimly aware of but am only now beginning to fully appreciate. ChatGPT enables you ask it questions, but the answers depend on what ChatGPT “knows”. As several people have noted, what would be even better is to be able to run ChatGPT on your own content. Indeed, ChatGPT itself now supports this using plugins.

\n

Paul Graham GPT

\n

However, it’s still useful to see how to add ChatGPT functionality to your own content from scratch. A nice example of this is Paul Graham GPT by Mckay Wrigley. Mckay Wrigley took essays by Paul Graham (a well known venture capitalist) and built a question and answer tool very like ChatGPT.

\n\n

Because you can send a block of text to ChatGPT (as part of the prompt) you can get ChatGPT to summarise or transform that information, or answer questions based on that information. But there is a limit to how much information you can pack into a prompt. You can’t put all of Paul Graham’s essays into a prompt for example. So a solution is to do some preprocessing. For example, given a question such as “How do I start a startup?” we could first find the essays that are most relevant to this question, then use them to create a prompt for ChatGPT. A quick and dirty way to do this is simply do a text search over the essays and take the top hits. But we aren’t searching for words, we are searching for answers to a question. The essay with the best answer might not include the phrase “How do I start a startup?”.

\n

Semantic search

\n

Enter Semantic search. The key concept behind semantic search is that we are looking for documents with similar meaning, not just similarity of text. One approach to this is to represent documents by “embeddings”, that is, a vector of numbers that encapsulate features of the document. Documents with similar vectors are potentially related. In semantic search we take the query (e.g., “How do I start a startup?”), compute its embedding, then search among the documents for those with similar embeddings.

\n

To create Paul Graham GPT Mckay Wrigley did the following. First he sent each essay to the OpenAI API underlying ChatGPT, and in return he got the embedding for that essay (a vector of 1536 numbers). Each embedding was stored in a database (Mckay uses Postgres with pgvector). When a user enters a query such as “How do I start a startup?” that query is also sent to the OpenAI API to retrieve its embedding vector. Then we query the database of embeddings for Paul Graham’s essays and take the top five hits. These hits are, one hopes, the most likely to contain relevant answers. The original question and the most similar essays are then bundled up and sent to ChatGPT which then synthesises an answer. See his GitHub repo for more details. Note that we are still using ChatGPT, but on a set of documents it doesn’t already have.

\n

Knowledge graphs

\n

I’m a fan of knowledge graphs, but they are not terribly easy to use. For example, I built a knowledge graph of Australian animals Ozymandias that contains a wealth of information on taxa, publications, and people, wrapped up in a web site. If you want to learn more you need to figure out how to write queries in SPARQL, which is not fun. Maybe we could use ChatGPT to write the SPARQL queries for us, but it would be much more fun to be simply ask natural language queries (e.g., “who are the experts on Australian ants?”). I made some naïve notes on these ideas Possible project: natural language queries, or answering “how many species are there?” and Ozymandias meets Wikipedia, with notes on natural language generation.

\n

Of course, this is a well known problem. Tools such as RDF2vec can take RDF from a knowledge graph and create embeddings which could in tern be used to support semantic search. But it seems to me that we could simply this process a bit by making use of ChatGPT.

\n

Firstly we would generate natural language statements from the knowledge graph (e.g., “species x belongs to genus y and was described in z”, “this paper on ants was authored by x”, etc.) that cover the basic questions we expect people to ask. We then get embeddings for these (e.g., using OpenAI). We then have an interface where people can ask a question (“is species x a valid species?”, “who has published on ants”, etc.), we get the embedding for that question, retrieve natural language statements that the closest in embedding “space”, package everything up and ask ChatGPT to summarise the answer.

\n

The trick, of course, is to figure out how t generate natural language statements from the knowledge graph (which amounts to deciding what paths to traverse in the knowledge graph, and how to write those paths is something approximating English). We also want to know something about the sorts of questions people are likely to ask so that we have a reasonable chance of having the answers (for example, are people going to ask about individual species, or questions about summary statistics such as numbers of species in a genus, etc.).

\n

What makes this attractive is that it seems a straightforward way to go from a largely academic exercise (build a knowledge graph) to something potentially useful (a question and answer machine). Imagine if something like the defunct BBC wildlife site (see Blue Planet II, the BBC, and the Semantic Web: a tale of lessons forgotten and opportunities lost) revived here had a question and answer interface where we could ask questions rather than passively browse.

\n

Summary

\n

I have so much more to learn, and need to think about ways to incorporate semantic search and ChatGPT-like tools into knowledge graphs.

\n
\n

Written with StackEdit.

\n
","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/rfxj3-x6739","uuid":"6a4d5c44-f4a9-4d40-a32c-a4d5e512c55a","url":"https://iphylo.blogspot.com/2022/05/thoughts-on-treebase-dying.html","title":"Thoughts on TreeBASE dying(?)","summary":"@rvosa is Naturalis no longer hosting Treebase? https://t.co/MBRgcxaBmR— Hilmar Lapp (@hlapp) May 10, 2022 So it looks like TreeBASE is in trouble, it''s legacy Java code a victim of security issues. Perhaps this is a chance to rethink TreeBASE, assuming that a repository of published phylogenies is still considered a worthwhile thing to have (and I think that question is open). Here''s what I think could be done. The data (individual studies with trees and data) are packaged into...","date_published":"2022-05-11T16:53:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

@rvosa is Naturalis no longer hosting Treebase? https://t.co/MBRgcxaBmR

— Hilmar Lapp (@hlapp) May 10, 2022
\n\n

So it looks like TreeBASE is in trouble, it''s legacy Java code a victim of security issues. Perhaps this is a chance to rethink TreeBASE, assuming that a repository of published phylogenies is still considered a worthwhile thing to have (and I think that question is open).

\n\n

Here''s what I think could be done.

\n\n
    \n
  1. \nThe data (individual studies with trees and data) are packaged into whatever format is easiest (NEXUS, XML, JSON) and uploaded to a repository such as Zenodo for long term storage. They get DOIs for citability. This becomes the default storage for TreeBASE.\n
  2. \n
  3. \nThe data is transformed into JSON and indexed using Elasticsearch. A simple web interface is placed on top so that people can easily find trees (never a strong point of the original TreeBASE). Trees are displayed natively on the web using SVG. The number one goal is for people to be able to find trees, view them, and download them.\n
  4. \n
  5. \nTo add data to TreeBASE the easiest way would be for people to upload them direct to Zenodo and tag them \"treebase\". A bot then grabs a feed of these datasets and adds them to the search engine in (1) above. As time allows, add an interface where people upload data directly, it gets curated, then deposited in Zenodo. This presupposes that there are people available to do curation. Maybe have \"stars\" for the level of curation so that users know whether anyone has checked the data.\n
  6. \n
\n\n

There''s lots of details to tweak, for example how many of the existing URLs for studies are preserved (some URL mapping), and what about the API? And I''m unclear about the relationship with Dryad.

\n\n

My sense is that the TreeBASE code is very much of its time (10-15 years ago), a monolithic block of code with SQL, Java, etc. If one was starting from scratch today I don''t think this would be the obvious solution. Things have trended towards being simpler, with lots of building blocks now available in the cloud. Need a search engine? Just spin up a container in the cloud and you have one. More and more functionality can be devolved elsewhere.

\n\n

Another other issue is how to support TreeBASE. It has essentially been a volunteer effort to date, with little or no funding. One reason I think having Zenodo as a storage engine is that it takes care of long term sustainability of the data.

\n\n

I realise that this is all wild arm waving, but maybe now is the time to reinvent TreeBASE?

\n\n

Updates

\n\n

It''s been a while since I''ve paid a lot of attention to phylogenetic databases, and it shows. There is a file-based storage system for phylogenies phylesystem (see \"Phylesystem: a git-based data store for community-curated phylogenetic estimates\" https://doi.org/10.1093/bioinformatics/btv276) that is sort of what I had in mind, although long term persistence is based on GitHub rather than a repository such as Zenodo. Phylesystem uses a truly horrible-looking JSON transformation of NeXML (NeXML itself is ugly), and TreeBASE also supports NeXML, so some form of NeXML or a JSON transformation seems the obvious storage format. It will probably need some cleaning and simplification if it is to be indexed easily. Looking back over the long history of TreeBASE and phylogenetic databases I''m struck by how much complexity has been introduced over time. I think the tech has gotten in the way sometimes (which might just be another way of saying that I''m not smart enough to make sense of it all.

\n\n

So we could imagine a search engine that covers both TreeBASE and Open Tree of Life studies.

\n\n

Basic metadata-based searches would be straightforward, and we could have a user interface that highlights the trees (I think TreeBASE''s biggest search rival is a Google image search). The harder problem is searching by tree structure, for which there is an interesting literature without any decent implementations that I''m aware of (as I said, I''ve been out of this field a while).

\n\n

So my instinct is we could go a long way with simply indexing JSON (CouchDB or Elasticsearch), then need to think a bit more cleverly about higher taxon and tree based searching. I''ve always thought that one killer query would be not so much \"show me all the trees for my taxon\" but \"show me a synthesis of the trees for my taxon\". Imagine a supertree of recent studies that we could use as a summary of our current knowledge, or a visualisation that summarises where there are conflicts among the trees.

\n\n

Relevant code and sites

\n\n","tags":["phylogeny","TreeBASE"],"language":"en","references":null},{"id":"https://doi.org/10.59350/jzvs4-r9559","uuid":"23fa1dd8-5c6b-4aa9-9cad-c6f6b14ae9e0","url":"https://iphylo.blogspot.com/2021/08/json-ld-in-wild-examples-of-how.html","title":"JSON-LD in the wild: examples of how structured data is represented on the web","summary":"I''ve created a GitHub repository so that I can keep track of the examples of JSON-LD that I''ve seen being actively used, for example embedded in web sites, or accessed using an API. The repository is https://github.com/rdmpage/wild-json-ld. The list is by no means exhaustive, I hope to add more examples as I come across them. One reason for doing this is to learn what others are doing. For example, after looking at SciGraph''s JSON-LD I now see how an ordered list can be modelled in RDF in...","date_published":"2021-08-27T13:20:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

I''ve created a GitHub repository so that I can keep track of the examples of JSON-LD that I''ve seen being actively used, for example embedded in web sites, or accessed using an API. The repository is https://github.com/rdmpage/wild-json-ld. The list is by no means exhaustive, I hope to add more examples as I come across them.

\n\n

One reason for doing this is to learn what others are doing. For example, after looking at SciGraph''s JSON-LD I now see how an ordered list can be modelled in RDF in such a way that the list of authors in a JSON-LD document for, say a scientific paper, is correct. By default RDF has no notion of ordered lists, so if you do a SPARQL query to get the authors of a paper, the order of the authors returned in the query will be arbitrary. There are various ways to try and tackle this. In my Ozymandias knowledge graph I used \"roles\" to represent order (see Figure 2 in the Ozymandias paper). I then used properties of the role to order the list of authors.

\n\n

Another approach is to use rdf:lists (see RDF lists and SPARQL and Is it possible to get the position of an element in an RDF Collection in SPARQL? for an introduction to lists). SciGraph uses this approach. The value for schema:author is not an author, but a blank node (bnode), and this bnode has two predicates, rdf:first and rdf:rest. One points to an author, the other points to another bnode. This pattern repeats until we encounter a value of rdf:nil for rdf:rest.

\n\n
\"\"
\n\n

This introduces some complexity, but the benefit is that the JSON-LD version of the RDF will have the authors in the correct order, and hence any client that is using JSON will be able to treat the array of authors as ordered. Without some means of ordering the client could not make this assumption, hence the first author in the list might not actually be the first author of the paper.

","tags":["JSON-LD","RDF"],"language":"en","references":null},{"id":"https://doi.org/10.59350/zc4qc-77616","uuid":"30c78d9d-2e50-49db-9f4f-b3baa060387b","url":"https://iphylo.blogspot.com/2022/09/does-anyone-cite-taxonomic-treatments.html","title":"Does anyone cite taxonomic treatments?","summary":"Taxonomic treatments have come up in various discussions I''m involved in, and I''m curious as to whether they are actually being used, in particular, whether they are actually being cited. Consider the following quote: The taxa are described in taxonomic treatments, well defined sections of scientific publications (Catapano 2019). They include a nomenclatural section and one or more sections including descriptions, material citations referring to studied specimens, or notes ecology and...","date_published":"2022-09-01T16:49:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"
\"\"
\nTaxonomic treatments have come up in various discussions I''m involved in, and I''m curious as to whether they are actually being used, in particular, whether they are actually being cited. Consider the following quote:\n\n
\nThe taxa are described in taxonomic treatments, well defined sections of scientific publications (Catapano 2019). They include a nomenclatural section and one or more sections including descriptions, material citations referring to studied specimens, or notes ecology and behavior. In case the treatment does not describe a new discovered taxon, previous treatments are cited in the form of treatment citations. This citation can refer to a previous treatment and add additional data, or it can be a statement synonymizing the taxon with another taxon. This allows building a citation network, and ultimately is a constituent part of the catalogue of life. - Taxonomic Treatments as Open FAIR Digital Objects https://doi.org/10.3897/rio.8.e93709\n
\n\n

\n \"Traditional\" academic citation is from article to article. For example, consider these two papers:\n\n

\nLi Y, Li S, Lin Y (2021) Taxonomic study on fourteen symphytognathid species from Asia (Araneae, Symphytognathidae). ZooKeys 1072: 1-47. https://doi.org/10.3897/zookeys.1072.67935\n
\n\n
\nMiller J, Griswold C, Yin C (2009) The symphytognathoid spiders of the Gaoligongshan, Yunnan, China (Araneae: Araneoidea): Systematics and diversity of micro-orbweavers. ZooKeys 11: 9-195. https://doi.org/10.3897/zookeys.11.160\n
\n

\n\n

Li et al. 2021 cites Miller et al. 2009 (although Pensoft seems to have broken the citation such that it does appear correctly either on their web page or in CrossRef).

\n\n

So, we have this link: [article]10.3897/zookeys.1072.67935 --cites--> [article]10.3897/zookeys.11.160. One article cites another.

\n\n

In their 2021 paper Li et al. discuss Patu jidanweishi Miller, Griswold & Yin, 2009:\n\n

\"\"
\n\n

There is a treatment for the original description of Patu jidanweishi at https://doi.org/10.5281/zenodo.3792232, which was created by Plazi with a time stamp \"2020-05-06T04:59:53.278684+00:00\". The original publication date was 2009, the treatments are being added retrospectively.

\n\n

In an ideal world my expectation would be that Li et al. 2021 would have cited the treatment, instead of just providing the text string \"Patu jidanweishi Miller, Griswold & Yin, 2009: 64, figs 65A–E, 66A, B, 67A–D, 68A–F, 69A–F, 70A–F and 71A–F (♂♀).\" Isn''t the expectation under the treatment model that we would have seen this relationship:

\n\n

[article]10.3897/zookeys.1072.67935 --cites--> [treatment]https://doi.org/10.5281/zenodo.3792232

\n\n

Furthermore, if it is the case that \"[i]n case the treatment does not describe a new discovered taxon, previous treatments are cited in the form of treatment citations\" then we should also see a citation between treatments, in other words Li et al.''s 2021 treatment of Patu jidanweishi (which doesn''t seem to have a DOI but is available on Plazi'' web site as https://tb.plazi.org/GgServer/html/1CD9FEC313A35240938EC58ABB858E74) should also cite the original treatment? It doesn''t - but it does cite the Miller et al. paper.

\n\n

So in this example we don''t see articles citing treatments, nor do we see treatments citing treatments. Playing Devil''s advocate, why then do we have treatments? Does''t the lack of citations suggest that - despite some taxonomists saying this is the unit that matters - they actually don''t. If we pay attention to what people do rather than what they say they do, they cite articles.

\n\n

Now, there are all sorts of reasons why we don''t see [article] -> [treatment] citations, or [treatment] -> [treatment] citations. Treatments are being added after the fact by Plazi, not by the authors of the original work. And in many cases the treatments that could be cited haven''t appeared until after that potentially citing work was published. In the example above the Miller et al. paper dates from 2009, but the treatment extracted only went online in 2020. And while there is a long standing culture of citing publications (ideally using DOIs) there isn''t an equivalent culture of citing treatments (beyond the simple text strings).

\n\n

Obviously this is but one example. I''d need to do some exploration of the citation graph to get a better sense of citations patterns, perhaps using CrossRef''s event data. But my sense is that taxonomists don''t cite treatments.

\n\n

I''m guessing Plazi would respond by saying treatments are cited, for example (indirectly) in GBIF downloads. This is true, although arguably people aren''t citing the treatment, they''re citing specimen data in those treatments, and that specimen data could be extracted at the level of articles rather than treatments. In other words, it''s not the treatments themselves that people are citing.

\n\n

To be clear, I think there is value in being able to identify those \"well defined sections\" of a publication that deal with a given taxon (i.e., treatments), but it''s not clear to me that these are actually the citable units people might hope them to be. Likewise, journals such as ZooKeys have DOIs for individual figures. Does anyone actually cite those?

","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/en7e9-5s882","uuid":"20b9d31e-513f-496b-b399-4215306e1588","url":"https://iphylo.blogspot.com/2022/04/obsidian-markdown-and-taxonomic-trees.html","title":"Obsidian, markdown, and taxonomic trees","summary":"Returning to the subject of personal knowledge graphs Kyle Scheer has an interesting repository of Markdown files that describe academic disciplines at https://github.com/kyletscheer/academic-disciplines (see his blog post for more background). If you add these files to Obsidian you get a nice visualisation of a taxonomy of academic disciplines. The applications of this to biological taxonomy seem obvious, especially as a tool like Obsidian enables all sorts of interesting links to be added...","date_published":"2022-04-07T21:07:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

Returning to the subject of personal knowledge graphs Kyle Scheer has an interesting repository of Markdown files that describe academic disciplines at https://github.com/kyletscheer/academic-disciplines (see his blog post for more background).

\n\n

If you add these files to Obsidian you get a nice visualisation of a taxonomy of academic disciplines. The applications of this to biological taxonomy seem obvious, especially as a tool like Obsidian enables all sorts of interesting links to be added (e.g., we could add links to the taxonomic research behind each node in the taxonomic tree, the people doing that research, etc. - although that would mean we''d no longer have a simple tree).

\n\n

The more I look at these sort of simple Markdown-based tools the more I wonder whether we could make more use of them to create simple but persistent databases. Text files seem the most stable, long-lived digital format around, maybe this would be a way to minimise the inevitable obsolescence of database and server software. Time for some experiments I feel... can we take a taxonomic group, such as mammals, and create a richly connected database purely in Markdown?

\n\n
","tags":["markdown","obsidian"],"language":"en","references":null},{"id":"https://doi.org/10.59350/m7gb7-d7c49","uuid":"7d814863-43b5-4faf-a475-da8de5efd3ef","url":"https://iphylo.blogspot.com/2022/02/duplicate-dois-again.html","title":"Duplicate DOIs (again)","summary":"This blog post provides some background to a recent tweet where I expressed my frustration about the duplication of DOIs for the same article. I''m going to document the details here. The DOI that alerted me to this problem is https://doi.org/10.2307/2436688 which is for the article Snyder, W. C., & Hansen, H. N. (1940). THE SPECIES CONCEPT IN FUSARIUM. American Journal of Botany, 27(2), 64–67. This article is hosted by JSTOR at https://www.jstor.org/stable/2436688 which displays the DOI...","date_published":"2022-02-08T15:06:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

This blog post provides some background to a recent tweet where I expressed my frustration about the duplication of DOIs for the same article. I''m going to document the details here.

\n\n

The DOI that alerted me to this problem is https://doi.org/10.2307/2436688 which is for the article

\n\n
\nSnyder, W. C., & Hansen, H. N. (1940). THE SPECIES CONCEPT IN FUSARIUM. American Journal of Botany, 27(2), 64–67.\n
\n\n

This article is hosted by JSTOR at https://www.jstor.org/stable/2436688 which displays the DOI https://doi.org/10.2307/2436688 .

\n\n

This same article is also hosted by Wiley at https://bsapubs.onlinelibrary.wiley.com/doi/abs/10.1002/j.1537-2197.1940.tb14217.x with the DOI https://doi.org/10.1002/j.1537-2197.1940.tb14217.x.

\n\n

Expected behaviour

\n\n

What should happen is if Wiley is going to be the publisher of this content (taking over from JSTOR), the DOI 10.2307/2436688 should be redirected to the Wiley page, and the Wiley page displays this DOI (i.e., 10.2307/2436688). If I want to get metadata for this DOI, I should be able to use CrossRef''s API to retrieve that metadata, e.g. https://api.crossref.org/v1/works/10.2307/2436688 should return metadata for the article.

\n\n

What actually happens

\n\n

Wiley display the same article on their web site with the DOI 10.1002/j.1537-2197.1940.tb14217.x. They have minted a new DOI for the same article! The original JSTOR DOI now resolves to the Wiley page (you can see this using the Handle Resolver), which is what is supposed to happen. However, Wiley should have reused the original DOI rather than mint their own.

\n\n

Furthermore, while the original DOI still resolves in a web browser, I can''t retrieve metadata about that DOI from CrossRef, so any attempt to build upon that DOI fails. However, I can retrieve metadata for the Wiley DOI, i.e. https://api.crossref.org/v1/works/10.1002/j.1537-2197.1940.tb14217.x works, but https://api.crossref.org/v1/works/10.2307/2436688 doesn''t.

\n\n

Why does this matter?

\n\n

For anyone using DOIs as stable links to the literature the persistence of DOIs is something you should be able to rely upon, both for people clicking on links in web browsers and developers getting metadata from those DOIs. The whole rationale of the DOI system is a single, globally unique identifier for each article, and that these DOIs persist even when the publisher of the content changes. If this property doesn''t hold, then why would a developer such as myself invest effort in linking using DOIs?

\n\n

Just for the record, I think CrossRef is great and is a hugely important part of the scholarly landscape. There are lots of things that I do that would be nearly impossible without CrossRef and its tools. But cases like this where we get massive duplication of DOIs when a publishers takes over an existing journal fundamentally breaks the underlying model of stable, persistent identifiers.

","tags":["CrossRef","DOI","duplicates"],"language":"en","references":null},{"id":"https://doi.org/10.59350/d3dc0-7an69","uuid":"545c177f-cea5-4b79-b554-3ccae9c789d7","url":"https://iphylo.blogspot.com/2021/10/reflections-on-macroscope-tool-for-21st.html","title":"Reflections on \"The Macroscope\" - a tool for the 21st Century?","summary":"This is a guest post by Tony Rees. It would be difficult to encounter a scientist, or anyone interested in science, who is not familiar with the microscope, a tool for making objects visible that are otherwise too small to be properly seen by the unaided eye, or to reveal otherwise invisible fine detail in larger objects. A select few with a particular interest in microscopy may also have encountered the Wild-Leica \"Macroscope\", a specialised type of benchtop microscope optimised for...","date_published":"2021-10-07T12:38:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

\"YtNkVT2U\" This is a guest post by Tony Rees.

\n\n

It would be difficult to encounter a scientist, or anyone interested in science, who is not familiar with the microscope, a tool for making objects visible that are otherwise too small to be properly seen by the unaided eye, or to reveal otherwise invisible fine detail in larger objects. A select few with a particular interest in microscopy may also have encountered the Wild-Leica \"Macroscope\", a specialised type of benchtop microscope optimised for low-power macro-photography. However in this overview I discuss the \"Macroscope\" in a different sense, which is that of the antithesis to the microscope: namely a method for visualizing subjects too large to be encompassed by a single field of vision, such as the Earth or some subset of its phenomena (the biosphere, for example), or conceptually, the universe.

\n\n

\"\"
My introduction to the term was via addresses given by Jesse Ausubel in the formative years of the 2001-2010 Census of Marine Life, for which he was a key proponent. In Ausubel''s view, the Census would perform the function of a macroscope, permitting a view of everything that lives in the global ocean (or at least, that subset which could realistically be sampled in the time frame available) as opposed to more limited subsets available via previous data collection efforts. My view (which could, of course, be wrong) was that his thinking had been informed by a work entitled \"Le macroscope, vers une vision globale\" published in 1975 by the French thinker Joël de Rosnay, who had expressed such a concept as being globally applicable in many fields, including the physical and natural worlds but also extending to human society, the growth of cities, and more. Yet again, some ecologists may also have encountered the term, sometimes in the guise of \"Odum''s macroscope\", as an approach for obtaining \"big picture\" analyses of macroecological processes suitable for mathematical modelling, typically by elimination of fine detail so that only the larger patterns remain, as initially advocated by Howard T. Odum in his 1971 book \"Environment, Power, and Society\".

\n\n

From the standpoint of the 21st century, it seems that we are closer to achieving a \"macroscope\" (or possibly, multiple such tools) than ever before, based on the availability of existing and continuing new data streams, improved technology for data assembly and storage, and advanced ways to query and combine these large streams of data to produce new visualizations, data products, and analytical findings. I devote the remainder of this article to examples where either particular workers have employed \"macroscope\" terminology to describe their activities, or where potentially equivalent actions are taking place without the explicit \"macroscope\" association, but are equally worthy of consideration. To save space here, references cited here (most or all) can be found via a Wikipedia article entitled \"Macroscope (science concept)\" that I authored on the subject around a year ago, and have continued to add to on occasion as new thoughts or information come to hand (see edit history for the article).

\n\n

First, one can ask, what constitutes a macroscope, in the present context? In the Wikipedia article I point to a book \"Big Data - Related Technologies, Challenges and Future Prospects\" by Chen et al. (2014) (doi:10.1007/978-3-319-06245-7), in which the \"value chain of big data\" is characterised as divisible into four phases, namely data generation, data acquisition (aka data assembly), data storage, and data analysis. To my mind, data generation (which others may term acquisition, differently from the usage by Chen et al.) is obviously the first step, but does not in itself constitute the macroscope, except in rare cases - such as Landsat imagery, perhaps - where on its own, a single co-ordinated data stream is sufficient to meet the need for a particular type of \"global view\". A variant of this might be a coordinated data collection program - such as that of the ten year Census of Marine Life - which might produce the data required for the desired global view; but again, in reality, such data are collected in a series of discrete chunks, in many and often disparate data formats, and must be \"wrangled\" into a more coherent whole before any meaningful \"macroscope\" functionality becomes available.

\n\n

Here we come to what, in my view, constitutes the heart of the \"macroscope\": an intelligently organized (i.e. indexable and searchable), coherent data store or repository (where \"data\" may include imagery and other non numeric data forms, but much else besides). Taking the Census of Marine Life example, the data repository for that project''s data (plus other available sources as inputs) is the Ocean Biodiversity Information System or OBIS (previously the Ocean Biogeographic Information System), which according to this view forms the \"macroscope\" for which the Census data is a feed. (For non habitat-specific biodiversity data, GBIF is an equivalent, and more extensive, operation). Other planetary scale \"macroscopes\", by this definition (which may or may not have an explicit geographic, i.e. spatial, component) would include inventories of biological taxa such as the Catalogue of Life and so on, all the way back to the pioneering compendia published by Linnaeus in the eighteenth century; while for cartography and topographic imagery, the current \"blockbuster\" of Google Earth and its predecessors also come well into public consciousness.

\n\n

In the view of some workers and/or operations, both of these phases are precursors to the real \"work\" of the macroscope which is to reveal previously unseen portions of the \"big picture\" by means either of the availability of large, synoptic datasets, or fusion between different data streams to produce novel insights. Companies such as IBM and Microsoft have used phraseology such as:

\n\n
By 2022 we will use machine-learning algorithms and software to help us organize information about the physical world, helping bring the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a \"macroscope\" – but unlike the microscope to see the very small, or the telescope that can see far away, it is a system of software and algorithms to bring all of Earth''s complex data together to analyze it by space and time for meaning.\" (IBM)
\n\n
As the Earth becomes increasingly instrumented with low-cost, high-bandwidth sensors, we will gain a better understanding of our environment via a virtual, distributed whole-Earth \"macroscope\"... Massive-scale data analytics will enable real-time tracking of disease and targeted responses to potential pandemics. Our virtual \"macroscope\" can now be used on ourselves, as well as on our planet.\" (Microsoft) (references available via the Wikipedia article cited above).
\n\n

Whether or not the analytical capabilities described here are viewed as being an integral part of the \"macroscope\" concept, or are maybe an add-on, is ultimately a question of semantics and perhaps, personal opinion. Continuing the Census of Marine Life/OBIS example, OBIS offers some (arguably rather basic) visualization and summary tools, but also makes its data available for download to users wishing to analyse it further according to their own particular interests; using OBIS data in this manner, Mark Costello et al. in 2017 were able to demarcate a finite number of data-supported marine biogeographic realms for the first time (Costello et al. 2017: Nature Communications. 8: 1057. doi:10.1038/s41467-017-01121-2), a project which I was able to assist in a small way in an advisory capacity. In a case such as this, perhaps the final function of the macroscope, namely data visualization and analysis, was outsourced to the authors'' own research institution. Similarly at an earlier phase, \"data aggregation\" can also be virtual rather than actual, i.e. avoiding using a single physical system to hold all the data, enabled by open web mapping standards WMS (web map service) and WFS (web feature service) to access a set of distributed data stores, e.g. as implemented on the portal for the Australian Ocean Data Network.

\n\n

So, as we pass through the third decade of the twenty first century, what developments await us in the \"macroscope\" area\"? In the biodiversity space, one can reasonably presume that the existing \"macroscopic\" data assembly projects such as OBIS and GBIF will continue, and hopefully slowly fill current gaps in their coverage - although in the marine area, strategic new data collection exercises may be required (Census 2020, or 2025, anyone?), while (again hopefully), the Catalogue of Life will continue its progress towards a \"complete\" species inventory for the biosphere. The Landsat project, with imagery dating back to 1972, continues with the launch of its latest satellite Landsat 9 just this year (21 September 2021) with a planned mission duration for the next 5 years, so the \"macroscope\" functionality of that project seems set to continue for the medium term at least. Meanwhile the ongoing development of sensor networks, both on land and in the ocean, offers an exciting new method of \"instrumenting the earth\" to obtain much more real time data than has ever been available in the past, offering scope for many more, use case-specific \"macroscopes\" to be constructed that can fuse (e.g.) satellite imagery with much more that is happening at a local level.

\n\n

So, the \"macroscope\" concept appears to be alive and well, even though the nomenclature can change from time to time (IBM''s \"Macroscope\", foreshadowed in 2017, became the \"IBM Pairs Geoscope\" on implementation, and is now simply the \"Geospatial Analytics component within the IBM Environmental Intelligence Suite\" according to available IBM publicity materials). In reality this illustrates a new dichotomy: even if \"everyone\" in principle has access to huge quantities of publicly available data, maybe only a few well funded entities now have the computational ability to make sense of it, and can charge clients a good fee for their services...

\n\n

I present this account partly to give a brief picture of \"macroscope\" concepts today and in the past, for those who may be interested, and partly to present a few personal views which would be out of scope in a \"neutral point of view\" article such as is required on Wikipedia; also to see if readers of this blog would like to contribute further to discussion of any of the concepts traversed herein.

","tags":["guest post","macroscope"],"language":"en","references":null},{"id":"https://doi.org/10.59350/2b1j9-qmw12","uuid":"37538c38-66e6-4ac4-ab5c-679684622ade","url":"https://iphylo.blogspot.com/2022/05/round-trip-from-identifiers-to.html","title":"Round trip from identifiers to citations and back again","summary":"Note to self (basically rewriting last year''s Finding citations of specimens). Bibliographic data supports going from identifier to citation string and back again, so we can do a \"round trip.\" 1. Given a DOI we can get structured data with a simple HTTP fetch, then use a tool such as citation.js to convert that data into a human-readable string in a variety of formats. Identifier ⟶ Structured data ⟶ Human readable string 10.7717/peerj-cs.214 HTTP with...","date_published":"2022-05-27T16:34:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

Note to self (basically rewriting last year''s Finding citations of specimens).

\n\n

Bibliographic data supports going from identifier to citation string and back again, so we can do a \"round trip.\"

\n\n

1.

\n\n

Given a DOI we can get structured data with a simple HTTP fetch, then use a tool such as citation.js to convert that data into a human-readable string in a variety of formats.

\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nIdentifier\n\n⟶\n\nStructured data\n\n⟶\n\nHuman readable string\n
\n10.7717/peerj-cs.214\n\nHTTP with content-negotiation\n\nCSL-JSON\n\nCSL templates\n\nWillighagen, L. G. (2019). Citation.js: a format-independent, modular bibliography tool for the browser and command line. PeerJ Computer Science, 5, e214. https://doi.org/10.7717/peerj-cs.214\n
\n\n

2.

\n\n

Going in the reverse direction (string to identifier) is a little more challenging. In the \"old days\" a typical strategy was to attempt to parse the citation string into structured data (see AnyStyle for a nice example of this), then we could extract a truple of (journal, volume, starting page) and use that to query CrossRef to find if there was an article with that tuple, which gave us the DOI.

\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nIdentifier\n\n⟵\n\nStructured data\n\n⟵\n\nHuman readable string\n
\n10.7717/peerj-cs.214\n\nOpenURL query\n\njournal, volume, start page\n\nCitation parser \n\nWillighagen, L. G. (2019). Citation.js: a format-independent, modular bibliography tool for the browser and command line. PeerJ Computer Science, 5, e214. https://doi.org/10.7717/peerj-cs.214\n
\n\n

3.

\n\n

Another strategy is to take all the citations strings for each DOI, index those in a search engine, then just use a simple search to find the best match to your citation string, and hence the DOI. This is what https://search.crossref.org does.

\n\n\n\n\n\n\n\n\n\n\n\n\n
\nIdentifier\n\n⟵\n\nHuman readable string\n
\n10.7717/peerj-cs.214\n\nsearch\n\nWillighagen, L. G. (2019). Citation.js: a format-independent, modular bibliography tool for the browser and command line. PeerJ Computer Science, 5, e214. https://doi.org/10.7717/peerj-cs.214\n
\n\n

At the moment my work on material citations (i.e., lists of specimens in taxonomic papers) is focussing on 1 (generating citations from specimen data in GBIF) and 2 (parsing citations into structured data).

","tags":["citation","GBIF","material examined","specimen codes"],"language":"en","references":null},{"id":"https://doi.org/10.59350/3s376-6bm21","uuid":"62e7b438-67a3-44ac-a66d-3f5c278c949e","url":"https://iphylo.blogspot.com/2022/02/deduplicating-bibliographic-data.html","title":"Deduplicating bibliographic data","summary":"There are several instances where I have a collection of references that I want to deduplicate and merge. For example, in Zootaxa has no impact factor I describe a dataset of the literature cited by articles in the journal Zootaxa. This data is available on Figshare (https://doi.org/10.6084/m9.figshare.c.5054372.v4), as is the equivalent dataset for Phytotaxa (https://doi.org/10.6084/m9.figshare.c.5525901.v1). Given that the same articles may be cited many times, these datasets have lots of...","date_published":"2022-02-03T15:09:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

There are several instances where I have a collection of references that I want to deduplicate and merge. For example, in Zootaxa has no impact factor I describe a dataset of the literature cited by articles in the journal Zootaxa. This data is available on Figshare (https://doi.org/10.6084/m9.figshare.c.5054372.v4), as is the equivalent dataset for Phytotaxa (https://doi.org/10.6084/m9.figshare.c.5525901.v1). Given that the same articles may be cited many times, these datasets have lots of duplicates. Similarly, articles in Wikispecies often have extensive lists of references cited, and the same reference may appear on multiple pages (for an initial attempt to extract these references see https://doi.org/10.5281/zenodo.5801661 and https://github.com/rdmpage/wikispecies-parser).

\n\n

There are several reasons I want to merge these references. If I want to build a citation graph for Zootaxa or Phytotaxa I need to merge references that are the same so that I can accurate count citations. I am also interested in harvesting the metadata to help find those articles in the Biodiversity Heritage Library (BHL), and the literature cited section of scientific articles is a potential goldmine of bibliographic metadata, as is Wikispecies.

\n\n

After various experiments and false starts I''ve created a repository https://github.com/rdmpage/bib-dedup to host a series of PHP scripts to deduplicate bibliographics data. I''ve settled on using CSL-JSON as the format for bibliographic data. Because deduplication relies on comparing pairs of references, the standard format for most of the scripts is a JSON array containing a pair of CSL-JSON objects to compare. Below are the steps the code takes.

\n\n

Generating pairs to compare

\n\n

The first step is to take a list of references and generate the pairs that will be compared. I started with this approach as I wanted to explore machine learning and wanted a simple format for training data, such as an array of two CSL-JSON objects and an integer flag representing whether the two references were the same of different.

\n\n

There are various ways to generate CSL-JSON for a reference. I use a tool I wrote (see Citation parsing tool released) that has a simple API where you parse one or more references and it returns that reference as structured data in CSL-JSON.

\n\n

Attempting to do all possible pairwise comparisons rapidly gets impractical as the number of references increases, so we need some way to restrict the number of comparisons we make. One approach I''ve explored is the “sorted neighbourhood method” where we sort the references 9for example by their title) then move a sliding window down the list of references, comparing all references within that window. This greatly reduces the number of pairwise comparisons. So the first step is to sort the references, then run a sliding window over them, output all the pairs in each window (ignoring in pairwise comparisons already made in a previous window). Other methods of \"blocking\" could also be used, such as only including references in a particular year, or a particular journal.

\n\n

So, the output of this step is a set of JSON arrays, each with a pair of references in CSL-JSON format. Each array is stored on a single line in the same file in line-delimited JSON (JSONL).

\n\n

Comparing pairs

\n\n

The next step is to compare each pair of references and decide whether they are a match or not. Initially I explored a machine learning approach used in the following paper:

\n\n
\nWilson DR. 2011. Beyond probabilistic record linkage: Using neural networks and complex features to improve genealogical record linkage. In: The 2011 International Joint Conference on Neural Networks. 9–14. DOI: 10.1109/IJCNN.2011.6033192\n
\n\n

Initial experiments using https://github.com/jtet/Perceptron were promising and I want to play with this further, but I deciding to skip this for now and just use simple string comparison. So for each CSL-JSON object I generate a citation string in the same format using CiteProc, then compute the Levenshtein distance between the two strings. By normalising this distance by the length of the two strings being compared I can use an arbitrary threshold to decide if the references are the same or not.

\n\n

Clustering

\n\n

For this step we read the JSONL file produced above and record whether the two references are a match or not. Assuming each reference has a unique identifier (needs only be unique within the file) then we can use those identifier to record the clusters each reference belongs to. I do this using a Disjoint-set data structure. For each reference start with a graph where each node represents a reference, and each node has a pointer to a parent node. Initially the reference is its own parent. A simple implementation is to have an array index by reference identifiers and where the value of each cell in the array is the node''s parent.

\n\n

As we discover pairs we update the parents of the nodes to reflect this, such that once all the comparisons are done we have a one or more sets of clusters corresponding to the references that we think are the same. Another way to think of this is that we are getting the components of a graph where each node is a reference and pair of references that match are connected by an edge.

\n\n

In the code I''m using I write this graph in Trivial Graph Format (TGF) which can be visualised using a tools such as yEd.

\n\n

Merging

\n\n

Now that we have a graph representing the sets of references that we think are the same we need to merge them. This is where things get interesting as the references are similar (by definition) but may differ in some details. The paper below describes a simple Bayesian approach for merging records:

\n\n
\nCouncill IG, Li H, Zhuang Z, Debnath S, Bolelli L, Lee WC, Sivasubramaniam A, Giles CL. 2006. Learning Metadata from the Evidence in an On-line Citation Matching Scheme. In: Proceedings of the 6th ACM/IEEE-CS Joint Conference on Digital Libraries. JCDL ’06. New York, NY, USA: ACM, 276–285. DOI: 10.1145/1141753.1141817.\n
\n\n

So the next step is to read the graph with the clusters, generate the sets of bibliographic references that correspond to each cluster, then use the method described in Councill et al. to produce a single bibliographic record for that cluster. These records could then be used to, say locate the corresponding article in BHL, or populate Wikidata with missing references.

\n\n

Obviously there is always the potential for errors, such as trying to merge references that are not the same. As a quick and dirty check I flag as dubious any cluster where the page numbers vary among members of the cluster. More sophisticated checks are possible, especially if I go down the ML route (i.e., I would have evidence for the probability that the same reference can disagree on some aspects of metadata).

\n\n

Summary

\n\n

At this stage the code is working well enough for me to play with and explore some example datasets. The focus is on structured bibliographic metadata, but I may simplify things and have a version that handles simple string matching, for example to cluster together different abbreviations of the same journal name.

","tags":["data cleaning","deduplication","Phytotaxa","Wikispecies","Zootaxa"],"language":"en","references":null},{"id":"https://doi.org/10.59350/ndtkv-6ve80","uuid":"e8e95aaf-bacb-4b5a-bf91-54e903526ab2","url":"https://iphylo.blogspot.com/2021/11/revisiting-rss-to-monitor-latests.html","title":"Revisiting RSS to monitor the latest taxonomic research","summary":"Over a decade ago RSS (RDF Site Summary or Really Simple Syndication) was attracting a lot of interest as a way to integrate data across various websites. Many science publishers would provide a list of their latest articles in XML in one of three flavours of RSS (RDF, RSS, Atom). This led to tools such as uBioRSS [1] and my own e-Biosphere Challenge: visualising biodiversity digitisation in real time. It was a time of enthusiasm for aggregating lots of data, such as the ill-fated PLoS...","date_published":"2021-11-23T20:53:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"
\"\"
\n

Over a decade ago RSS (RDF Site Summary or Really Simple Syndication) was attracting a lot of interest as a way to integrate data across various websites. Many science publishers would provide a list of their latest articles in XML in one of three flavours of RSS (RDF, RSS, Atom). This led to tools such as uBioRSS [1] and my own e-Biosphere Challenge: visualising biodiversity digitisation in real time. It was a time of enthusiasm for aggregating lots of data, such as the ill-fated PLoS Biodiversity Hub [2].

\n\n

Since I seem to be condemned to revisit old ideas rather than come up with anything new, I''ve been looking at providing a tool like the now defunct uBioRSS. The idea is to harvest RSS feeds from journals (with an emphasis on taxonomic and systematic journals), aggregate the results, and make them browsable by taxon and geography. Here''s a sneak peak:

\n\n
\"\"
\n\n

What seems like a straightforward task quickly became a bit of a challenge. Not all journals have RSS feeds (they seem to have become less widely supported over time) so I need to think of alternative ways to get lists of recent articles. These lists also need to be processed in various ways. There are three versions of RSS, each with their own idiosyncracies, so I need to standardise things like dates. I also want to augment them with things like DOIs (often missing from RSS feeds) and thumbnails for the articles (often available on publisher websites but not the feeds). Then I need to index the content by taxon and geography. For taxa I use a version of Patrick Leary''s \"taxonfinder\" (see https://right-frill.glitch.me) to find names, then the Global Names Index to assign names found to the GBIF taxonomic hierarchy.

\n\n

Indexing by geography proved harder. Typically geoparsing involves taking a body of text and doing the following:\n

\n\n

An example of such a parser is the Edinburgh Geoparser. Typically geoparsing software can be large and tricky to install, especially if you are looking to make your installation publicly accessible. Geoparsing services seem to have a short half-life (e.g., Geoparser.io), perhaps because they are so useful they quickly get swamped by users.

\n\n

Bearing this in mind, the approach I’ve taken here is to create a very simple geoparser that is focussed on fairly large areas, especially those relevant to biodiversity, and is aimed at geoparsing text such as abstracts of scientific papers. I''ve created a small database of places by harvesting data from Wikidata, then I use the \"flash text\" algorithm [3] to find geographic places. This approach uses a trie to store the place names. All I do is walk through the text seeing whether the current word matches a place name (or the start of one) in the trie, then moving on. This is very quick and seems to work quite well.

\n\n

Given that I need to aggregate data from a lot of sources, apply various transformations to that data, then merge it, there are a lot of moving parts. I started playing with a \"NoCode\" platform for creating workflows, in this case n8n (in many ways reminiscent of the now defunct Yahoo Pipes). This was quite fun for a while, but after lots of experimentation I moved back to writing code to aggregate the data into a CouchDB database. CouchDB is one of the NoSQL databases that I really like as it has a great interface, and makes queries very easy to do once you get your head around how it works.

\n\n

So the end result of this is \"BioRSS\" https://biorss.herokuapp.com. The interface comprises a stream of articles listed from newest to oldest, with a treemap and a geographic map on the left. You can use these to filter the articles by taxonomic group and/or country. For example the screen shot is showing arthropods from China (in this case from a month or two ago in the journal ZooKeys). As much fun as the interface has been to construct, in many ways I don''t really want to spend time making an interface. For each combination of taxon and country I provide a RSS feed so if you have a favour feed reader you can grab the feed and view it there. As BioRSS updates the data your feed reader should automatically update the feed. This means that you can have a feed that monitors, say, new papers on spiders in China.

\n\n

In the spirit of \"release early and release often\" this is an early version of this app. I need to add a lot more feeds, back date them to bring in older content, and I also want to make use of aggregators such as PubMed, CrossRef, and Google Scholar. The existence of these tools is, I suspect, one reason why RSS feeds are less common than they used to be.

\n\n

So, if this sounds useful please take it for a spin at https://biorss.herokuapp.com. Feedback is welcome, especially suggestions for journals to harvest and add to the news feed. Ultimately I''d like to have sufficient coverage of the taxonomic literature so that BioRSS becomes a place where we can go to find the latest papers on any taxon of interest.

\n\n

References

\n\n
\n1. Patrick R. Leary, David P. Remsen, Catherine N. Norton, David J. Patterson, Indra Neil Sarkar, uBioRSS: Tracking taxonomic literature using RSS, Bioinformatics, Volume 23, Issue 11, June 2007, Pages 1434–1436, https://doi.org/10.1093/bioinformatics/btm109\n
\n\n
2. Mindell, D. P., Fisher, B. L., Roopnarine, P., Eisen, J., Mace, G. M., Page, R. D. M., & Pyle, R. L. (2011). Aggregating, Tagging and Integrating Biodiversity Research. PLoS ONE, 6(8), e19491. doi:10.1371/journal.pone.0019491\n
\n\n
3. Singh, V. (2017). Replace or Retrieve Keywords In Documents at Scale. CoRR, abs/1711.00046. http://arxiv.org/abs/1711.00046\n\n
","tags":["geocoding","NoCode","RSS"],"language":"en","references":[{"doi":"https://doi.org/10.1093/bioinformatics/btm109","key":"ref1"},{"doi":"https://doi.org/10.1371/journal.pone.0019491","key":"ref2"},{"key":"ref3","url":"http://arxiv.org/abs/1711.00046"}]},{"id":"https://doi.org/10.59350/gf1dw-n1v47","uuid":"a41163e0-9c9a-41e0-a141-f772663f2f32","url":"https://iphylo.blogspot.com/2023/03/dugald-stuart-page-1936-2022.html","title":"Dugald Stuart Page 1936-2022","summary":"My dad died last weekend. Below is a notice in today''s New Zealand Herald. I''m in New Zealand for his funeral. Don''t really have the words for this right now.","date_published":"2023-03-14T03:00:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"
\"\"
\n\nMy dad died last weekend. Below is a notice in today''s New Zealand Herald. I''m in New Zealand for his funeral. Don''t really have the words for this right now.\n\n
\"\"
","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/c79vq-7rr11","uuid":"3cb94422-5506-4e24-a41c-a250bb521ee0","url":"https://iphylo.blogspot.com/2021/12/graphql-for-wikidata-wikicite.html","title":"GraphQL for WikiData (WikiCite)","summary":"I''ve released a very crude GraphQL endpoint for WikiData. More precisely, the endpoint is for a subset of the entities that are of interest to WikiCite, such as scholarly articles, people, and journals. There is a crude demo at https://wikicite-graphql.herokuapp.com. The endpoint itself is at https://wikicite-graphql.herokuapp.com/gql.php. There are various ways to interact with the endpoint, personally I like the Altair GraphQL Client by Samuel Imolorhe. As I''ve mentioned earlier it''s taken...","date_published":"2021-12-20T13:16:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"
\"\"

I''ve released a very crude GraphQL endpoint for WikiData. More precisely, the endpoint is for a subset of the entities that are of interest to WikiCite, such as scholarly articles, people, and journals. There is a crude demo at https://wikicite-graphql.herokuapp.com. The endpoint itself is at https://wikicite-graphql.herokuapp.com/gql.php. There are various ways to interact with the endpoint, personally I like the Altair GraphQL Client by Samuel Imolorhe.

\n\n

As I''ve mentioned earlier it''s taken me a while to see the point of GraphQL. But it is clear it is gaining traction in the biodiversity world (see for example the GBIF Hosted Portals) so it''s worth exploring. My take on GraphQL is that it is a way to create a self-describing API that someone developing a web site can use without them having to bury themselves in the gory details of how data is internally modelled. For example, WikiData''s query interface uses SPARQL, a powerful language that has a steep learning curve (in part because of the administrative overhead brought by RDF namespaces, etc.). In my previous SPARQL-based projects such as Ozymandias and ALEC I have either returned SPARQL results directly (Ozymandias) or formatted SPARQL results as schema.org DataFeeds (equivalent to RSS feeds) (ALEC). Both approaches work, but they are project-specific and if anyone else tried to build based on these projects they might struggle for figure out what was going on. I certainly struggle, and I wrote them!

\n\n

So it seems worthwhile to explore this approach a little further and see if I can develop a GraphQL interface that can be used to build the sort of rich apps that I want to see. The demo I''ve created uses SPARQL under the hood to provide responses to the GraphQL queries. So in this sense it''s not replacing SPARQL, it''s simply providing a (hopefully) simpler overlay on top of SPARQL so that we can retrieve the data we want without having to learn the intricacies of SPARQL, nor how Wikidata models publications and people.

","tags":["GraphQL","SPARQL","WikiCite","Wikidata"],"language":"en","references":null},{"id":"https://doi.org/10.59350/ymc6x-rx659","uuid":"0807f515-f31d-4e2c-9e6f-78c3a9668b9d","url":"https://iphylo.blogspot.com/2022/09/dna-barcoding-as-intergenerational.html","title":"DNA barcoding as intergenerational transfer of taxonomic knowledge","summary":"I tweeted about this but want to bookmark it for later as well. The paper “A molecular-based identification resource for the arthropods of Finland” doi:10.1111/1755-0998.13510 contains the following: …the annotated barcode records assembled by FinBOL participants represent a tremendous intergenerational transfer of taxonomic knowledge … the time contributed by current taxonomists in identifying and contributing voucher specimens represents a great gift to future generations who will benefit...","date_published":"2022-09-14T10:12:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

I tweeted about this but want to bookmark it for later as well. The paper “A molecular-based identification resource for the arthropods of Finland” doi:10.1111/1755-0998.13510 contains the following:

\n
\n

…the annotated barcode records assembled by FinBOL participants represent a tremendous intergenerational transfer of taxonomic knowledge … the time contributed by current taxonomists in identifying and contributing voucher specimens represents a great gift to future generations who will benefit from their expertise when they are no longer able to process new material.

\n
\n

I think this is a very clever way to characterise the project. In an age of machine learning this may be commonest way to share knowledge , namely as expert-labelled training data used to build tools for others. Of course, this means the expertise itself may be lost, which has implications for updating the models if the data isn’t complete. But it speaks to Charles Godfrey’s theme of “Taxonomy as information science”.

\n

Note that the knowledge is also transformed in the sense that the underlying expertise of interpreting morphology, ecology, behaviour, genomics, and the past literature is not what is being passed on. Instead it is probabilities that a DNA sequence belongs to a particular taxon.

\n

This feels is different to, say iNaturalist, where there is a machine learning model to identify images. In that case, the model is built on something the community itself has created, and continues to create. Yes, the underlying idea is that same: “experts” have labelled the data, a model is trained, the model is used. But the benefits of the iNaturalist model are immediately applicable to the people whose data built the model. In the case of barcoding, because the technology itself is still not in the hands of many (relative to, say, digital imaging), the benefits are perhaps less tangible. Obviously researchers working with environmental DNA will find it very useful, but broader impact may await the arrival of citizen science DNA barcoding.

\n

The other consideration is whether the barcoding helps taxonomists. Is it to be used to help prioritise future work (“we are getting lots of unknown sequences in these taxa, lets do some taxonomy there”), or is it simply capturing the knowledge of a generation that won’t be replaced:

\n
\n

The need to capture such knowledge is essential because there are, for example, no young Finnish taxonomists who can critically identify species in many key groups of ar- thropods (e.g., aphids, chewing lice, chalcid wasps, gall midges, most mite lineages).

\n
\n

The cycle of collect data, test and refine model, collect more data, rinse and repeat that happens with iNaturalist creates a feedback loop. It’s not clear that a similar cycle exists for DNA barcoding.

\n
\n

Written with StackEdit.

\n
","tags":[],"language":"en","references":null},{"id":"https://doi.org/10.59350/enxas-arj18","uuid":"ab5a6e04-d55e-4901-8269-9eea65ce7178","url":"https://iphylo.blogspot.com/2022/08/can-we-use-citation-graph-to-measure.html","title":"Can we use the citation graph to measure the quality of a taxonomic database?","summary":"More arm-waving notes on taxonomic databases. I''ve started to add data to ChecklistBank and this has got me thinking about the issue of data quality. When you add data to ChecklistBank you are asked to give a measure of confidence based on the Catalogue of Life Checklist Confidence system of one - five stars: ★ - ★★★★★. I''m scepetical about the notion of confidence or \"trust\" when it is reduced to a star system (see also Can you trust EOL?). I could literally pick any number of stars, there''s...","date_published":"2022-08-24T14:33:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

More arm-waving notes on taxonomic databases. I''ve started to add data to ChecklistBank and this has got me thinking about the issue of data quality. When you add data to ChecklistBank you are asked to give a measure of confidence based on the Catalogue of Life Checklist Confidence system of one - five stars: ★ - ★★★★★. I''m scepetical about the notion of confidence or \"trust\" when it is reduced to a star system (see also Can you trust EOL?). I could literally pick any number of stars, there''s no way to measure what number of stars is appropriate. This feeds into my biggest reservation about the Catalogue of Life, it''s almost entirely authority based, not evidence based. That is, rather than give us evidence for why a particular taxon is valid, we are (mostly) just given a list of taxa are asked to accept those as gospel, based on assertions by one or more authorities. I''m not necessarly doubting the knowledge of those making these lists, it''s just that I think we need to do better than \"these are the accepted taxa because I say so\" implict in the Catalogue of Life.\n

\n\n

So, is there any way we could objectively measure the quality of a particular taxonomic checklist? Since I have a long standing interest in link the primary taxonomic litertaure to names in databases (since that''s where the evidence is), I keep wondering whether measures based on that literture could be developed. \n

\n

\nI recently revisited the fascinating (and quite old) literature on rates of synonymy:\n

\n
\nGaston Kevin J. and Mound Laurence A. 1993 Taxonomy, hypothesis testing and the biodiversity crisisProc. R. Soc. Lond. B.251139–142\nhttp://doi.org/10.1098/rspb.1993.0020\n
\n \n
\n Andrew R. Solow, Laurence A. Mound, Kevin J. Gaston, Estimating the Rate of Synonymy, Systematic Biology, Volume 44, Issue 1, March 1995, Pages 93–96, https://doi.org/10.1093/sysbio/44.1.93\n
\n\n

\n\n

\nA key point these papers make is that the observed rate of synonymy is quite high (that is, many \"new species\" end up being merged with already known species), and that because it can take time to discover that a species is a synonym the actual rate may be even higher. In other words, in diagrams like the one reproduced below, the reason the proportion of synonyms declines the nearer we get to the present day (this paper came out in 1995) is not because are are creating fewer synonyms but because we''ve not yet had time to do the work to uncover the remaining synonyms.\n

\n \n
\"\"
\n\n

Put another way, these papers are arguing that real work of taxonomy is revision, not species discovery, especially since it''s not uncommon for > 50% of species in a taxon to end up being synonymised. Indeed, if a taxoomic group has few synonyms then these authors would argue that''s a sign of neglect. More revisionary work would likely uncover additional synonyms. So, what we need is a way to measure the amount of research on a taxonomic group. It occurs to me that we could use the citation graph as a way to tackle this. Lets imagine we have a set of taxa (say a family) and we have all the papers that described new species or undertook revisions (or both). The extensiveness of that work could be measured by the citation graph. For example, build the citation graph for those papers. How many original species decsriptions are not cited? Those species have been potentially neglected. How many large-scale revisions have there been (as measured by the numbers of taxonomic papers those revisions cite)? There are some interesting approaches to quantifying this, such as using hubs and authorities.

\n \n \n

I''m aware that taxonomists have not had the happiest relationship with citations:\n \n

\nPinto ÂP, Mejdalani G, Mounce R, Silveira LF, Marinoni L, Rafael JA. Are publications on zoological taxonomy under attack? R Soc Open Sci. 2021 Feb 10;8(2):201617. doi: 10.1098/rsos.201617. PMID: 33972859; PMCID: PMC8074659.\n
\n\nStill, I think there is an intriguing possibility here. For this approach to work, we need to have linked taxonomic names to publications, and have citation data for those publications. This is happening on various platforms. Wikidata, for example, is becoming a repository of the taxonomic literature, some of it with citation links.\n\n
\nPage RDM. 2022. Wikidata and the bibliography of life. PeerJ 10:e13712 https://doi.org/10.7717/peerj.13712\n
\n\nTime for some experiments.\n

","tags":["Bibliography of Life","citation","synonymy","taxonomic databases"],"language":"en","references":null},{"id":"https://doi.org/10.59350/cbzgz-p8428","uuid":"a93134aa-8b33-4dc7-8cd4-76cdf64732f4","url":"https://iphylo.blogspot.com/2023/04/library-interfaces-knowledge-graphs-and.html","title":"Library interfaces, knowledge graphs, and Miller columns","summary":"Some quick notes on interface ideas for digital libraries and/or knowledge graphs. Recently there’s been something of an explosion in bibliographic tools to explore the literature. Examples include: Elicit which uses AI to search for and summarise papers _scite which uses AI to do sentiment analysis on citations (does paper A cite paper B favourably or not?) ResearchRabbit which uses lists, networks, and timelines to discover related research Scispace which navigates connections between...","date_published":"2023-04-25T13:01:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

Some quick notes on interface ideas for digital libraries and/or knowledge graphs.

\n

Recently there’s been something of an explosion in bibliographic tools to explore the literature. Examples include:

\n\n

As an aside, I think these (and similar tools) are a great example of how bibliographic data such as abstracts, the citation graph and - to a lesser extent - full text - have become commodities. That is, what was once proprietary information is now free to anyone, which in turns means a whole ecosystem of new tools can emerge. If I was clever I’d be building a Wardley map to explore this. Note that a decade or so ago reference managers like Zotero were made possible by publishers exposing basic bibliographic data on their articles. As we move to open citations we are seeing the next generation of tools.

\n

Back to my main topic. As usual, rather than focus on what these tools do I’m more interested in how they look. I have history here, when the iPad came out I was intrigued by the possibilities it offered for displaying academic articles, as discussed here, here, here, here, and here. ResearchRabbit looks like this:

\n
\n

Scispace’s “trace” view looks like this:

\n
\n

What is interesting about both is that they display content from left to right in vertical columns, rather than the more common horizontal rows. This sort of display is sometimes called Miller columns or a cascading list.

\n\n
\"\"
\n\n

By Gürkan Sengün (talk) - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=594715

\n

I’ve always found displaying a knowledge graph to be a challenge, as discussed elsewhere on this blog and in my paper on Ozymandias. Miller columns enable one to drill down in increasing depth, but it doesn’t need to be a tree, it can be a path within a network. What I like about ResearchRabbit and the original Scispace interface is that they present the current item together with a list of possible connections (e.g., authors, citations) that you can drill down on. Clicking on these will result in a new column being appended to the right, with a view (typically a list) of the next candidates to visit. In graph terms, these are adjacent nodes to the original item. The clickable badges on each item can be thought of as sets of edges that have the same label (e.g., “authored by”, “cites”, “funded”, “is about”, etc.). Each of these nodes itself becomes a starting point for further exploration. Note that the original starting point isn’t privileged, other than being the starting point. That is, each time we drill down we are seeing the same type of information displayed in the same way. Note also that the navigation can be though of as a card for a node, with buttons grouping the adjacent nodes. When we click on an individual button, it expands into a list in the next column. This can be thought of as a preview for each adjacent node. Clicking on an element in the list generates a new card (we are viewing a single node) and we get another set of buttons corresponding to the adjacent nodes.

\n

One important behaviour in a Miller column interface is that the current path can be pruned at any point. If we go back (i.e., scroll to the left) and click on another tab on an item, everything downstream of that item (i.e., to the right) gets deleted and replaced by a new set of nodes. This could make retrieving a particular history of browsing a bit tricky, but encourages exploration. Both Scispace and ResearchRabbit have the ability to add items to a collection, so you can keep track of things you discover.

\n

Lots of food for thought, I’m assuming that there is some user interface/experience research on Miller columns. One thing to remember is that Miller columns are most often associated with trees, but in this case we are exploring a network. That means that potentially there is no limit to the number of columns being generated as we wander through the graph. It will be interesting to think about what the average depth is likely to be, in other words, how deep down the rabbit hole will be go?

\n\n

Update

\n

Should add link to David Regev''s explorations of Flow Browser.\n\n

\n

Written with StackEdit.

\n
","tags":["cards","flow","Knowledge Graph","Miller column","RabbitResearch"],"language":"en","references":null},{"id":"https://doi.org/10.59350/t6fb9-4fn44","uuid":"8bc3fea6-cb86-4344-8dad-f312fbf58041","url":"https://iphylo.blogspot.com/2021/12/the-business-of-extracting-knowledge.html","title":"The Business of Extracting Knowledge from Academic Publications","summary":"Markus Strasser (@mkstra write a fascinating article entitled \"The Business of Extracting Knowledge from Academic Publications\". I spent months working on domain-specific search engines and knowledge discovery apps for biomedicine and eventually figured that synthesizing "insights" or building knowledge graphs by machine-reading the academic literature (papers) is *barely useful* :https://t.co/eciOg30Odc— Markus Strasser (@mkstra) December 7, 2021 His TL;DR: TL;DR: I...","date_published":"2021-12-11T00:01:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

Markus Strasser (@mkstra write a fascinating article entitled \"The Business of Extracting Knowledge from Academic Publications\".

\n\n

I spent months working on domain-specific search engines and knowledge discovery apps for biomedicine and eventually figured that synthesizing "insights" or building knowledge graphs by machine-reading the academic literature (papers) is *barely useful* :https://t.co/eciOg30Odc

— Markus Strasser (@mkstra) December 7, 2021
\n\n

His TL;DR:

\n\n

\nTL;DR: I worked on biomedical literature search, discovery and recommender web applications for many months and concluded that extracting, structuring or synthesizing \"insights\" from academic publications (papers) or building knowledge bases from a domain corpus of literature has negligible value in industry.

\n\n

Close to nothing of what makes science actually work is published as text on the web.\n

\n\n

After recounting the many problems of knowledge extraction - including a swipe at nanopubs which \"are ... dead in my view (without admitting it)\" - he concludes:

\n\n

\nI’ve been flirting with this entire cluster of ideas including open source web annotation, semantic search and semantic web, public knowledge graphs, nano-publications, knowledge maps, interoperable protocols and structured data, serendipitous discovery apps, knowledge organization, communal sense making and academic literature/publishing toolchains for a few years on and off ... nothing of it will go anywhere.

\n\n

Don’t take that as a challenge. Take it as a red flag and run. Run towards better problems.\n

\n\n

Well worth a read, and much food for thought.

","tags":["ai","business model","text mining"],"language":"en","references":null},{"id":"https://doi.org/10.59350/463yw-pbj26","uuid":"dc829ab3-f0f1-40a4-b16d-a36dc0e34166","url":"https://iphylo.blogspot.com/2022/12/david-remsen.html","title":"David Remsen","summary":"I heard yesterday from Martin Kalfatovic (BHL) that David Remsen has died. Very sad news. It''s starting to feel like iPhylo might end up being a list of obituaries of people working on biodiversity informatics (e.g., Scott Federhen). I spent several happy visits at MBL at Woods Hole talking to Dave at the height of the uBio project, which really kickstarted large scale indexing of taxonomic names, and the use of taxonomic name finding tools to index the literature. His work on uBio with David...","date_published":"2022-12-16T17:54:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"

I heard yesterday from Martin Kalfatovic (BHL) that David Remsen has died. Very sad news. It''s starting to feel like iPhylo might end up being a list of obituaries of people working on biodiversity informatics (e.g., Scott Federhen).

\n\n

I spent several happy visits at MBL at Woods Hole talking to Dave at the height of the uBio project, which really kickstarted large scale indexing of taxonomic names, and the use of taxonomic name finding tools to index the literature. His work on uBio with David (\"Paddy\") Patterson led to the Encyclopedia of Life (EOL).

\n\n

A number of the things I''m currently working on are things Dave started. For example, I recently uploaded a version of his dataset for Nomenclator Zoologicus[1] to ChecklistBank where I''m working on augmenting that original dataset by adding links to the taxonomic literature. My BioRSS project is essentially an attempt to revive uBioRSS[2] (see Revisiting RSS to monitor the latest taxonomic research).

\n\n

I have fond memories of those visits to Woods Hole. A very sad day indeed.

\n\n

Update: The David Remsen Memorial Fund has been set up on GoFundMe.

\n\n

1. Remsen, D. P., Norton, C., & Patterson, D. J. (2006). Taxonomic Informatics Tools for the Electronic Nomenclator Zoologicus. The Biological Bulletin, 210(1), 18–24. https://doi.org/10.2307/4134533

\n\n

2. Patrick R. Leary, David P. Remsen, Catherine N. Norton, David J. Patterson, Indra Neil Sarkar, uBioRSS: Tracking taxonomic literature using RSS, Bioinformatics, Volume 23, Issue 11, June 2007, Pages 1434–1436, https://doi.org/10.1093/bioinformatics/btm109

","tags":["David Remsen","obituary","uBio"],"language":"en","references":null},{"id":"https://doi.org/10.59350/pmhat-5ky65","uuid":"5891c709-d139-440f-bacb-06244424587a","url":"https://iphylo.blogspot.com/2021/10/problems-with-plazi-parsing-how.html","title":"Problems with Plazi parsing: how reliable are automated methods for extracting specimens from the literature?","summary":"The Plazi project has become one of the major contributors to GBIF with some 36,000 datasets yielding some 500,000 occurrences (see Plazi''s GBIF page for details). These occurrences are extracted from taxonomic publication using automated methods. New data is published almost daily (see latest treatments). The map below shows the geographic distribution of material citations provided to GBIF by Plazi, which gives you a sense of the size of the dataset. By any metric Plazi represents a...","date_published":"2021-10-25T11:10:00Z","date_modified":null,"authors":[{"url":null,"name":"noreply@blogger.com (Roderic Page)"}],"image":null,"content_html":"
\"\"

The Plazi project has become one of the major contributors to GBIF with some 36,000 datasets yielding some 500,000 occurrences (see Plazi''s GBIF page for details). These occurrences are extracted from taxonomic publication using automated methods. New data is published almost daily (see latest treatments). The map below shows the geographic distribution of material citations provided to GBIF by Plazi, which gives you a sense of the size of the dataset.

\n\n
\"\"
\n\n

By any metric Plazi represents a considerable achievement. But often when I browse individual records on Plazi I find records that seem clearly incorrect. Text mining the literature is a challenging problem, but at the moment Plazi seems something of a \"black box\". PDFs go in, the content is mined, and data comes up to be displayed on the Plazi web site and uploaded to GBIF. Nowhere does there seem to be an evaluation of how accurate this text mining actually is. Anecdotally it seems to work well in some cases, but in others it produces what can only be described as bogus records.

\n\n

Finding errors

\n\n

A treatment in Plazi is a block of text (and sometimes illustrations) that refers to a single taxon. Often that text will include a description of the taxon, and list one or more specimens that have been examined. These lists of specimens (\"material citations\") are one of the key bits of information that Plaza extracts from a treatment as these citations get fed into GBIF as occurrences.

\n\n

To help explore treatments I''ve constructed a simple web site that takes the Plazi identifier for a treatment and displays that treatment with the material citations highlighted. For example, for the Plazi treatment 03B5A943FFBB6F02FE27EC94FABEEAE7 you can view the marked up version at https://plazi-tester.herokuapp.com/?uri=622F7788-F0A4-449D-814A-5B49CD20B228. Below is an example of a material citation with its component parts tagged:

\n\n
\"\"
\n\n

This is an example where Plazi has successfully parsed the specimen. But I keep coming across cases where specimens have not been parsed correctly, resulting in issues such as single specimens being split into multiple records (e.g., https://plazi-tester.herokuapp.com/?uri=5244B05EFFC8E20F7BC32056C178F496), geographical coordinates being misinterpreted (e.g., https://plazi-tester.herokuapp.com/?uri=0D228E6AFFC2FFEFFF4DE8118C4EE6B9), or collector''s initials being confused with codes for natural history collections (e.g., https://plazi-tester.herokuapp.com/?uri=252C87918B362C05FF20F8C5BFCB3D4E).

\n\n

Parsing specimens is a hard problem so it''s not unexpected to find errors. But they do seem common enough to be easily found, which raises the question of just what percentage of these material citations are correct? How much of the data Plazi feeds to GBIF is correct? How would we know?

\n\n

Systemic problems

\n\n

Some of the errors I''ve found concern the interpretation of the parsed data. For example, it is striking that despite including marine taxa no Plazi record has a value for depth below sea level (see GBIF search on depth range 0-9999 for Plazi). But many records do have an elevation, including records from marine environments. Any record that has a depth value is interpreted by Plazi as being elevation, so we have aerial crustacea and fish.

\n\n

Map of Plazi records with depth 0-9999m

\n
\"\"
\n\n

Map of Plazi records with elevation 0-9999m

\n
\"\"
\n\n

Anecdotally I''ve also noticed that Plazi seems to do well on zoological data, especially journals like Zootaxa, but it often struggles with botanical specimens. Botanists tend to cite specimens rather differently to zoologists (botanists emphasise collector numbers rather than specimen codes). Hence data quality in Plazi is likely to taxonomic biased.

\n\n

Plazi is using GitHub to track issues with treatments so feedback on erroneous records is possible, but this seems inadequate to the task. There are tens of thousands of data sets, with more being released daily, and hundreds of thousands of occurrences, and relying on GitHub issues devolves the responsibility for error checking onto the data users. I don''t have a measure of how many records in Plazi have problems, but because I suspect it is a significant fraction because for any given day''s output I can typically find errors.

\n\n

What to do?

\n\n

Faced with a process that generates noisy data there are several of things we could do:

\n\n
    \n
  1. Have tools to detect and flag errors made in generating the data.
  2. \n
  3. Have the data generator give estimates the confidence of its results.
  4. \n
  5. Improve the data generator.
  6. \n
\n\n

I think a comparison with the problem of parsing bibliographic references might be instructive here. There is a long history of people developing tools to parse references (I''ve even had a go). State-of-the art tools such as AnyStyle feature machine learning, and are tested against human curated datasets of tagged bibliographic records. This means we can evaluate the performance of a method (how well does it retrieve the same results as human experts?) and also improve the method by expanding the corpus of training data. Some of these tools can provide a measures of how confident they are when classifying a string as, say, a person''s name, which means we could flag potential issues for anyone wanting to use that record.

\n\n

We don''t have equivalent tools for parsing specimens in the literature, and hence have no easy way to quantify how good existing methods are, nor do we have a public corpus of material citations that we can use as training data. I blogged about this a few months ago and was considering using Plazi as a source of marked up specimen data to use for training. However based on what I''ve looked at so far Plazi''s data would need to be carefully scrutinised before it could be used as training data.

\n\n

Going forward, I think it would be desirable to have a set of records that can be used to benchmark specimen parsers, and ideally have the parsers themselves available as web services so that anyone can evaluate them. Even better would be a way to contribute to the training data so that these tools improve over time.

\n\n

Plazi''s data extraction tools are mostly desktop-based, that is, you need to download software to use their methods. However, there are experimental web services available as well. I''ve created a simple wrapper around the material citation parser, you can try it at https://plazi-tester.herokuapp.com/parser.php. It takes a single material citation and returns a version with elements such as specimen code and collector name tagged in different colours.

\n\n

Summary

\n\n

Text mining the taxonomic literature is clearly a gold mine of data, but at the same time it is potentially fraught as we try and extract structured data from semi-structured text. Plazi has demonstrated that it is possible to extract a lot of data from the literature, but at the same time the quality of that data seems highly variable. Even minor issues in parsing text can have big implications for data quality (e.g., marine organisms apparently living above sea level). Historically in biodiversity informatics we have favoured data quantity over data quality. Quantity has an obvious metric, and has milestones we can celebrate (e.g., one billion specimens). There aren''t really any equivalent metrics for data quality.

\n\n

Adding new types of data can sometimes initially result in a new set of quality issues (e.g., GBIF metagenomics and metacrap) that take time to resolve. In the case of Plazi, I think it would be worthwhile to quantify just how many records have errors, and develop benchmarks that we can use to test methods for extracting specimen data from text. If we don''t do this then there will remain uncertainty as to how much trust we can place in data mined from the taxonomic literature.

\n\n

Update

\n\nPlazi has responded, see Liberating material citations as a first step to more better data. My reading of their repsonse is that it essentially just reiterates Plazi''s approach and doesn''t tackle the underlying issue: their method for extracting material citations is error prone, and many of those errors end up in GBIF.","tags":["data quality","parsing","Plazi","specimen","text mining"],"language":"en","references":null}]}' recorded_at: Thu, 15 Jun 2023 20:39:01 GMT recorded_with: VCR 6.1.0