--- http_interactions: - request: method: get uri: https://rogue-scholar.org/api/posts/not_indexed/2023-01-01 body: encoding: UTF-8 string: '' headers: Connection: - close Host: - rogue-scholar.org User-Agent: - http.rb/5.1.1 response: status: code: 200 message: OK headers: Age: - '0' Cache-Control: - public, max-age=0, must-revalidate Content-Length: - '130605' Content-Type: - application/json; charset=utf-8 Date: - Sun, 18 Jun 2023 15:23:50 GMT Etag: - '"c0lgvxsmyg2sk1"' Server: - Vercel Strict-Transport-Security: - max-age=63072000 X-Matched-Path: - "/api/posts/not_indexed/[slug]" X-Vercel-Cache: - MISS X-Vercel-Id: - fra1::iad1::fhsk7-1687101829861-776a41c57e87 Connection: - close body: encoding: UTF-8 string: "[{\"id\":\"https://doi.org/10.59350/b5fwt-6b577\",\"uuid\":\"ca2a7df4-f3b9-487c-82e9-27f54de75ea8\",\"url\":\"https://svpow.com/2023/06/09/three-presentations-today-at-mte14-and-the-papers-that-go-with-them/\",\"title\":\"Three presentations today at MTE14, and the papers that go with them\",\"summary\":\"BIG day today. The 14th Symposium on Mesozoic Terrestrial Ecosystems and Biota (MTE14) is taking place in Salt Lake City this week. Normally I’d be there in a heartbeat, but my son is graduating from high school next week and I’m far too busy to get away. Still, I’m an author on one poster and two talks that are running today, along with the three associated short papers that are published in the conference volume in The Anatomical Record. I will be blogging about these things, and shortly,...\",\"date_published\":\"2023-06-09T13:02:26Z\",\"date_modified\":\"2023-06-09T13:02:26Z\",\"date_indexed\":\"1970-01-01T00:00:00+00:00\",\"authors\":[{\"url\":null,\"name\":\"Matt Wedel\"}],\"image\":null,\"content_html\":\"\\n
\\nBIG day today. The 14th Symposium on Mesozoic Terrestrial Ecosystems and Biota (MTE14) is taking place in Salt Lake City this week. Normally I’d be there in a heartbeat, but my son is graduating from high school next week and I’m far too busy to get away. Still, I’m an author on one poster and two talks that are running today, along with the three associated short papers that are published in the conference volume in The Anatomical Record.
\\nI will be blogging about these things, and shortly, but for now here are Wedel-related presentations and links to the papers, in chronological order. (The whole conference volume is available here, I just extracted the papers I’m on as separate PDFs to post in the links below.)
\\n1. Wedel and Atterholt on expanded neurocentral joints in sauropods — Jessie is presenting our poster, which should be up for most of the day. Citation and link to paper:
\\n\\n2. Curtice et al. on the first material of Haplocanthosaurus from Dry Mesa — I believe Brian Curtice and Colin Boisvert are tag-teaming this talk at 2:00 pm MDT.
\\n\\n3. Weil et al. on Morrison microvertebrates from the Oklahoma panhandle — Anne Weil is giving this talk at 2:15 pm MDT.
\\n\\nStand by for more info on all this stuff. And if you’re attending MTE14, go catch these presentations and say hi to all these excellent human beings!
\\n\",\"tags\":[\"#MTE14\",\"conferences\",\"navel blogging\",\"timely\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"dkvra02\",\"blog\":{\"id\":\"dkvra02\",\"title\":\"Sauropod Vertebra Picture of the Week\",\"description\":\"SV-POW! ... All sauropod vertebrae, except when we're talking about Open Access\",\"language\":\"en\",\"favicon\":null,\"feed_url\":\"https://svpow.com/feed/atom/\",\"home_page_url\":\"https://svpow.com\",\"user_id\":\"8498eaf6-8c58-4b58-bc15-27eda292b1aa\",\"created_at\":\"2023-05-31T14:28:02+00:00\",\"indexed_at\":\"2023-02-01\",\"feed_format\":\"application/atom+xml\",\"license\":\"https://creativecommons.org/licenses/by/4.0/legalcode\",\"generator\":\"WordPress (.com)\",\"category\":\"Natural Sciences\",\"prefix\":\"10.59350\",\"modified_at\":\"2023-06-09T19:54:23+00:00\",\"version\":\"https://jsonfeed.org/version/1.1\",\"backlog\":true,\"authors\":null}},{\"id\":\"https://doi.org/10.59350/gybpr-7mq58\",\"uuid\":\"a3e4b240-26dd-486a-913a-325f697c6aef\",\"url\":\"https://svpow.com/2023/04/12/more-on-the-disturbing-plausibility-of-chatgpt/\",\"title\":\"More on the disturbing plausibility of ChatGPT\",\"summary\":\"Prompted by a post on Mastodon (which, like all Mastodon posts, I can no longer find), I asked ChatGPT to tell me about my own papers. The response started out well but quickly got much worse. I will indent my comments on its response. Q. What are some articles written by Michael P. Taylor? A. Michael P. Taylor is a vertebrate paleontologist and open science advocate. 
He has written numerous articles and papers on topics such as the taxonomy and systematics of sauropod dinosaurs, open access...\",\"date_published\":\"2023-04-12T22:40:38Z\",\"date_modified\":\"2023-04-12T22:40:38Z\",\"date_indexed\":\"1970-01-01T00:00:00+00:00\",\"authors\":[{\"url\":null,\"name\":\"Mike Taylor\"}],\"image\":null,\"content_html\":\"Prompted by a post on Mastodon (which, like all Mastodon posts, I can no longer find), I asked ChatGPT to tell me about my own papers. The response started out well but quickly got much worse. I will indent my comments on its response.
\\nQ. What are some articles written by Michael P. Taylor?
\\nA. Michael P. Taylor is a vertebrate paleontologist and open science advocate. He has written numerous articles and papers on topics such as the taxonomy and systematics of sauropod dinosaurs, open access publishing, and scholarly communication.
\\n\\nSo far, so good. In general, LLMs seem to be good at synthesizing overviews. It’s specific facts that trip them up.
Here are a few examples of his articles:
\\nTaylor, M. P. (2009). A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914) (Dinosauria, Sauropoda). Journal of Vertebrate Paleontology, 29(3), 787-806.
\\n\\nThe first one it gives is pretty good. The only mistake is that it’s oddly appended “(Dinosauria, Sauropoda)” to the end of the article title. The journal, volume, issue and page-numbers are all correct.
Taylor, M. P. (2011). Sauropod dinosaur research: a historical review. In A. W. A. Kellner, & D. A. S. Silva (Eds.), Vertebrate paleontology and sedimentology of the Jurassic-Cretaceous boundary in the northern and central parts of the Lusitanian Basin, Portugal (pp. 11-43). Sociedade de Geologia de Portugal.
\\n\\nI did write a book-chapter titled “Sauropod dinosaur research: a historical review”. But it came out in 2010, not 2011. And it ran from pages 361–386, not 11-43. And it was in Dinosaurs and Other Extinct Saurians: a Historical Perspective, not Vertebrate paleontology and sedimentology of the Jurassic-Cretaceous boundary in the northern and central parts of the Lusitanian Basin, Portugal (which doesn’t exist). Which was edited by Richard T. J. Moody, Eric Buffetaut, Darren Naish and David M. Martill, not A. W. A. Kellner & D. A. S. Silva. And published by the Geological Society of London, not Sociedade de Geologia de Portugal.
Taylor, M. P., & Wedel, M. J. (2018). Why sauropods had long necks; and why giraffes have short necks. PeerJ Preprints, 6, e27537v1.
\\n\\nMatt and I did co-write a paper with a similar title in PeerJ. But it’s Why sauropods had long necks; and why giraffes have short necks, not the other way around. And it came out in 2013, not 2018. And it was in PeerJ proper, not PeerJ Preprints.
Taylor, M. P. (2020). The open-access wars: How to free science from academic paywalls. Wired.
\\n\\nI never wrote any such paper. There is an article out there whose HTML title is the one given here, though the visible title is completely different: “The war to free science”. It’s in Vox, not Wired; it came out in 2019, not 2020. And it’s by Brian Resnick and Julia Belluz.
Taylor, M. P., & Knope, M. L. (2021). The future of scholarly publishing is in our hands. The Conversation.
\\n\\nAs far as I recall, I never wrote anything for The Conversation. There seems to be no article named “The future of scholarly publishing is in our hands”. And I have never met or corresponded with, far less co-authored with, anyone with the surname Knope.
All in all, a pretty horrible performance — but, and this is the scary part, one that even someone pretty familiar with my work could very easily have swallowed.
\\nI made a very similar point a few months ago in These new “artificial intelligence” programs don’t know what they’re talking about. If the responses were obviously wrong, there wouldn’t be too much of a problem here. But this is close enough to fool most people most of the time.
\\nAs more and more of what we read is likely to be written by LLMs in the coming years, I am concerned that we’re going to be dealing with an awful lot of this kind of thing. The ability to pan gold out of the stream of lies is going to become really important.
\\n\",\"tags\":[\"Just Plain Wrong\",\"rants\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"dkvra02\",\"blog\":{\"id\":\"dkvra02\",\"title\":\"Sauropod Vertebra Picture of the Week\",\"description\":\"SV-POW! ... All sauropod vertebrae, except when we're talking about Open Access\",\"language\":\"en\",\"favicon\":null,\"feed_url\":\"https://svpow.com/feed/atom/\",\"home_page_url\":\"https://svpow.com\",\"user_id\":\"8498eaf6-8c58-4b58-bc15-27eda292b1aa\",\"created_at\":\"2023-05-31T14:28:02+00:00\",\"indexed_at\":\"2023-02-01\",\"feed_format\":\"application/atom+xml\",\"license\":\"https://creativecommons.org/licenses/by/4.0/legalcode\",\"generator\":\"WordPress (.com)\",\"category\":\"Natural Sciences\",\"prefix\":\"10.59350\",\"modified_at\":\"2023-06-09T19:54:23+00:00\",\"version\":\"https://jsonfeed.org/version/1.1\",\"backlog\":true,\"authors\":null}},{\"id\":\"https://doi.org/10.59350/4d6s7-hve18\",\"uuid\":\"9ee85ccd-6e21-4e78-96dd-5123fc62afbc\",\"url\":\"https://tarleb.com/posts/quarto-sitemap/index.html\",\"title\":\"Generating a sitemap with Quarto\",\"summary\":\"Sitemaps are an easy way to list all pages that a search engine should crawl and index. Quarto supports the standard, XML based Sitemap Protocol, although that fact is a bit hidden in the docs. Quarto will automatically produce a sitemap if the website’s URL is given as site-url property: website: title: tarleb site-url: 'https://tarleb.com' \ site-path: '/' The URL will contain a colon :, which is why the YAML value must be put in quotes. You’ll find the sitemap.xml file in your _site...\",\"date_published\":\"2022-08-24T00:00:00Z\",\"date_modified\":null,\"date_indexed\":\"1970-01-01T00:00:00+00:00\",\"authors\":[{\"url\":null,\"name\":\"Albert Krewinkel\"}],\"image\":null,\"content_html\":\" \\n\\n\\n\\nSitemaps are an easy way to list all pages that a search engine should crawl and index. 
Quarto supports the standard, XML-based Sitemap Protocol, although that fact is a bit hidden in the docs.
\\nQuarto will automatically produce a sitemap if the website’s URL is given as site-url property:
website:\\n  title: tarleb\\n  site-url: 'https://tarleb.com'\\n  site-path: '/'
The URL will contain a colon :, which is why the YAML value must be put in quotes. You’ll find the sitemap.xml file in your _site folder after re-rendering your pages with quarto render.
GigaScience Press is pleased to announce GigaByte journal is now indexed in the PubMed and PubMed Central (PMC) databases run by the US National Library of Medicine. PubMed is a searchable database of biomedical citations and abstracts, and PMC is a free digital archive of the full text of Open Access articles. While both increase the discoverability of our published papers, our now automatic inclusion in PMC is also necessary for many researchers to meet their research funders' policies and mandates on Open Access. As Open Access advocates, we are glad that, now this step has been added to our publication workflow, our authors do not need to worry about depositing their publications themselves. We use CLOCKSS for the long-term digital preservation of our scholarly content, and archiving in PMC also provides an additional level of protection and survival for preserved digital content.
\\n\\n\\n\\nPubMed and PMC join Google Scholar, CNKI, JGate and the Directory of Open Access Journals (DOAJ) as indexes to discover and browse our content. Our partners at River Valley Technologies have worked hard to produce the highest quality XML for our human and machine readers (see our text and data mining portal), and this has made the process of indexing easier. PMC, DOAJ and JGate use converted versions of our XML to mirror our Open Access content. The DOAJ has awarded us their seal for journals that demonstrate best practice in open access publishing, and CrossRef has given us a near-100% score in their Participation Reports for the richness of our metadata (only being let down on the funding categories, as our cost-effective APCs and waivers mean we publish quite a lot of papers without external funding).
\\n\\nNot only is our XML rich with this metadata, but we’ve been using it to capture multilingual versions of our content, and this also displays in the PMC versions of our papers (see our recent English/Spanish/Ukrainian paper in GigaByte and PMC), further increasing the accessibility and global reach of our papers.
\\n\\nNow that we have sufficient quantity and quality of publishing data for many other indexes, we are continuing the process of submitting to these databases, so watch this space for updates on how that process goes.
\\nThe post GigaByte now indexed in PubMed and PubMed Central (PMC) appeared first on GigaBlog.
\\n\",\"tags\":[\"Open Access\",\"GigaByte\",\"Indexing\",\"open access\",\"pubmed\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"3ffcd46\",\"blog\":{\"id\":\"3ffcd46\",\"title\":\"GigaBlog\",\"description\":\"Data driven blogging from the GigaScience editors\",\"language\":\"en\",\"favicon\":null,\"feed_url\":\"http://gigasciencejournal.com/blog/feed/atom/\",\"home_page_url\":\"http://gigasciencejournal.com/blog\",\"user_id\":\"8498eaf6-8c58-4b58-bc15-27eda292b1aa\",\"created_at\":\"2023-05-31T13:42:00+00:00\",\"indexed_at\":\"2023-02-05\",\"feed_format\":\"application/atom+xml\",\"license\":\"https://creativecommons.org/licenses/by/4.0/legalcode\",\"generator\":\"WordPress 6.2.2\",\"category\":\"Natural Sciences\",\"prefix\":\"10.59350\",\"modified_at\":\"2023-05-24T20:01:37+00:00\",\"version\":\"https://jsonfeed.org/version/1.1\",\"backlog\":true,\"authors\":null}},{\"id\":\"https://doi.org/10.53731/ejtg5cm-mtt292s\",\"uuid\":\"4eb9ac30-9edc-4830-b2ad-25183fb1548a\",\"url\":\"https://syldavia-gazette.org/data-sharing-challenger-expedition-research-software/\",\"title\":\"Data Sharing, Challenger Expedition, Research Software, Adornment, Prostate Cancer\",\"summary\":\"Ten (not so) simple rules for clinical trial data-sharingThe authors propose 10 (not so) simple rules for researchers wishing to share data, aligned on the clinical trial lifecycle.The Challenger Expedition - 1872Report of a BBC Radio episode about the 1872 four-year-long Challenger expedition to investigate the oceans of the earth.Defining the roles of research softwareWorkshop report that tries to define the various roles of research software.This 2,300-Year-Old Mummy Has a Heart (and...\",\"date_published\":\"2023-03-15T20:37:00Z\",\"date_modified\":\"2023-04-13T12:34:10Z\",\"date_indexed\":\"1970-01-01T00:00:00+00:00\",\"authors\":[{\"url\":\"https://orcid.org/0000-0003-1419-2405\",\"name\":\"Martin 
Fenner\"}],\"image\":\"https://digitalpress.fra1.cdn.digitaloceanspaces.com/gtpg506/2023/04/journal.pcbi.1010879.g001.png\",\"content_html\":\"The authors propose 10 (not so) simple rules for researchers wishing to share data, aligned on the clinical trial lifecycle.
Report of a BBC Radio episode about the 1872 four-year-long Challenger expedition to investigate the oceans of the earth.
Workshop report that tries to define the various roles of research software.
To acquire additional insight into the practice of adornment in ancient Egypt, three researchers recently \\\"digitally unwrapped\\\" the mummified body of a wealthy teenage boy from 2,300 years ago.
Fifteen-year outcomes after monitoring, surgery, or radiotherapy for localized prostate cancer. Prostate cancer–specific mortality was low regardless of the treatment assigned.
I tweeted about this but want to bookmark it for later as well. The paper “A molecular-based identification resource for the arthropods of Finland” doi:10.1111/1755-0998.13510 contains the following:
\\n\\n\\n…the annotated barcode records assembled by FinBOL participants represent a tremendous intergenerational transfer of taxonomic knowledge … the time contributed by current taxonomists in identifying and contributing voucher specimens represents a great gift to future generations who will benefit from their expertise when they are no longer able to process new material.
\\n
I think this is a very clever way to characterise the project. In an age of machine learning this may be the commonest way to share knowledge, namely as expert-labelled training data used to build tools for others. Of course, this means the expertise itself may be lost, which has implications for updating the models if the data isn’t complete. But it speaks to Charles Godfrey’s theme of “Taxonomy as information science”.
\\nNote that the knowledge is also transformed in the sense that the underlying expertise of interpreting morphology, ecology, behaviour, genomics, and the past literature is not what is being passed on. Instead it is probabilities that a DNA sequence belongs to a particular taxon.
\\nThis feels different to, say, iNaturalist, where there is a machine learning model to identify images. In that case, the model is built on something the community itself has created, and continues to create. Yes, the underlying idea is the same: “experts” have labelled the data, a model is trained, the model is used. But the benefits of the iNaturalist model are immediately applicable to the people whose data built the model. In the case of barcoding, because the technology itself is still not in the hands of many (relative to, say, digital imaging), the benefits are perhaps less tangible. Obviously researchers working with environmental DNA will find it very useful, but broader impact may await the arrival of citizen science DNA barcoding.
\\nThe other consideration is whether the barcoding helps taxonomists. Is it to be used to help prioritise future work (“we are getting lots of unknown sequences in these taxa, lets do some taxonomy there”), or is it simply capturing the knowledge of a generation that won’t be replaced:
\\n\\n\\nThe need to capture such knowledge is essential because there are, for example, no young Finnish taxonomists who can critically identify species in many key groups of arthropods (e.g., aphids, chewing lice, chalcid wasps, gall midges, most mite lineages).
\\n
The cycle of collect data, test and refine model, collect more data, rinse and repeat that happens with iNaturalist creates a feedback loop. It’s not clear that a similar cycle exists for DNA barcoding.
\\n\\n\",\"tags\":[],\"language\":\"en\",\"references\":[],\"blog_id\":\"tyfqw20\",\"blog\":{\"id\":\"tyfqw20\",\"title\":\"iPhylo\",\"description\":\"Rants, raves (and occasionally considered opinions) on phyloinformatics, taxonomy, and biodiversity informatics. For more ranty and less considered opinions, see my Twitter feed.Written with StackEdit.
\\n
\\n\\nI made a small client for Blogger that takes a tag and transforms it into its own little blog: citation.js.org/blog/?post=542…. No metadata though, as it’s all client-side.
\\n
\\n— Lars Willighagen (@larswillighagen) \ August 6, 2018
I made a Material-themed page showing Citation.js blog posts from Blogger. It supports pagination, tags, search and linking individual posts. Since it’s a single, static page I can’t support meta and link tags for metadata, as that would require JavaScript, which indexers don’t run.
The great thing about the Blogger API is that you can generate feeds for single tags, like Citation.js for example, and search for tags and general queries within that tag. That’s what makes all this possible. The URL scheme is very simple:
\\n# Tag feed\\nhttps://$BLOG.blogspot.com/feeds/posts/default/-/$TAG\\n\\n# Tag-in-tag feed\\nhttps://$BLOG.blogspot.com/feeds/posts/default/-/$TAG/$OTHER_TAG\\n\\n# Search-in-tag feed\\n# Note: don't copy this, there's a ZWS before ?q= for syntax highlighting\\nhttps://$BLOG.blogspot.com/feeds/posts/default/-/$TAG?q=$QUERY\\n\\n# Post\\nhttps://$BLOG.blogspot.com/feeds/posts/default/$POST\\n
\\nPagination and response formats complicate things a little, and are dealt with in the code below.
\\nApart from the Material theme, it only uses vanilla JavaScript to generate the pages. The search bar doesn’t even use JavaScript at all, just good ol’ form semantics. The JavaScript it does use is fairly simple. First, the query is parsed and an API URL is generated.
window.onload = function () {\\n  var params = {}\\n\\n  location.search.slice(1).split('&').map(function (pair) {\\n    pair = pair.split('=')\\n    params[pair[0]] = pair[1]\\n  })\\n\\n  var url\\n\\n
  if (params.post) {\\n    url = 'https://larsgw.blogspot.com/feeds/posts/default/' + params.post + '?alt=json-in-script&callback=cb'\\n
  } else if (params.tag) {\\n    url = 'https://larsgw.blogspot.com/feeds/posts/default/-/Citation.js/' + params.tag + '?alt=json-in-script&callback=cb'\\n
  } else if (params.query) {\\n    url = 'https://larsgw.blogspot.com/feeds/posts/default/-/Citation.js/?q=' + params.query + '&alt=json-in-script&callback=cb'\\n
  } else {\\n    url = 'https://larsgw.blogspot.com/feeds/posts/default/-/Citation.js?alt=json-in-script&callback=cb'\\n
  }\\n\\n  var startIndex = location.href.match(/start-index=(\\\\d+)/)\\n  if (startIndex) {\\n    url += '&' + startIndex[0]\\n  }\\n\\n  load(url)\\n}\\n
\\nSince the only JSON API for Blogger is JSON-in-script, we append a script element loading the resource. This then calls the callback, cb.
function cb (data) {\\n  content.innerHTML = data.feed ? templates.feed(data.feed.entry) : templates.feedItem(data.entry)\\n\\n  // pagination\\n  if (data.feed) {\\n    var href = location.href\\n    var hasIndex = href.indexOf('start-index') > -1\\n    var hasParams = href.indexOf('?') > -1\\n    var indexPattern = /start-index=(\\\\d+)/\\n\\n
    var prev = find(data.feed.link, function (link) { return link.rel === 'previous' })\\n    if (prev) {\\n      prev = 'start-index=' + prev.href.match(indexPattern)[1]\\n      var url = hasIndex ? href.replace(indexPattern, prev) : href + (hasParams ? '?' : '') + prev\\n      paginatePrev.setAttribute('href', url)\\n    }\\n\\n
    var next = find(data.feed.link, function (link) { return link.rel === 'next' })\\n    if (next) {\\n      next = 'start-index=' + next.href.match(indexPattern)[1]\\n      var url = hasIndex ? href.replace(indexPattern, next) : href + (hasParams ? '&' : '?') + next\\n      paginateNext.setAttribute('href', url)\\n    }\\n  }\\n}\\n\\n
function load (url) {\\n  loader.setAttribute('src', url)\\n}\\n
\\nThe callback then uses simple templates, which are just JS functions taking in the API response and outputting HTML to show the results on the page. Then, it figures out the pagination. Below is an example template. It extracts the post id to make links and does some preprocessing, removing stackedit metadata and styling and lowering each heading two levels. Then, it puts together the HTML with some additional util functions and subtemplates.
\\n  feedItem: function (item) {\\n    var id = item.id.$t.replace(/^.*\\\\.post-(\\\\d+)$/, '$1')\\n    var content = item.content.$t\\n      .replace(/^[\\\\s\\\\S]*<div class=\\\"stackedit__html\\\">([\\\\s\\\\S]*)<\\\\/div>[\\\\s\\\\S]*$/, '$1')\\n      .replace(/<(\\\\/?)h([1-6])/g, function (match, slash, level) {\\n        if (+level > 4) {\\n          return '<' + slash + 'b'\\n        } else {\\n          return '<' + slash + 'h' + (+level + 2)\\n        }\\n      })\\n\\n
    return '<div class=\\\"mdl-card mdl-shadow--2dp mdl-cell mdl-cell--12-col\\\">' +\\n      '<div class=\\\"mdl-card__title\\\">' +\\n      '<h2 class=\\\"mdl-card__title-text\\\">' +\\n      '<a href=\\\"?post=' + id + '\\\">' + item.title.$t + '</a>' +\\n      '</h2>' +\\n      '</div>' +\\n
      '<div class=\\\"mdl-card__supporting-text mdl-card--border\\\">' +\\n      '<p>' +\\n      '<span><i class=\\\"material-icons\\\">edit</i> ' + templates.author(item.author[0]) + '</span>' +\\n      '<span><i class=\\\"material-icons\\\">access_time</i> ' + formatDate(item.updated.$t) + '</span>' +\\n      '<span><i class=\\\"material-icons\\\">link</i> <a href=\\\"' + canonical(item.link) + '\\\">Original post</a></span>' +\\n      '</p>' +\\n
      '<p>' +\\n      '<span><i class=\\\"material-icons\\\">bookmark</i> ' + map(item.category, templates.tag).join(' ') + '</span>' +\\n      '</p>' +\\n      '</div>' +\\n      '<div class=\\\"mdl-card__supporting-text\\\">' + content + '</div>' +\\n      '</div>'\\n  },\\n
\\nThe full source is available here, and the page can be viewed here.
\\n
\\nBlog screenshot
The Rogue Scholar archive of scholarly blogs has grown to 34 blogs with about 420 blog posts. In order to implement DOI registration for these blog posts, I needed two things:
Initial work on DOI registration for blog posts focussed on exposing the relevant metadata on the blog landing page, using schema.org and/or HTML meta tags. While this approach worked well for this and similar blogs, it was too complicated and didn't scale well for the large and diverse number of blogs the Rogue Scholar aims to cover.
Therefore I implemented a different workflow taking advantage of the fact that all blogs come with RSS feeds that include content and metadata. More work was needed because there are different formats for these feeds (multiple flavors of RSS, as well as Atom, and the newer JSON Feed). Luckily, libraries in multiple programming languages exist to simplify the parsing of the various RSS feed formats (I use the JavaScript library feed-extractor).
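To make the parsing problem concrete, the different feed flavors can be mapped onto one common post shape. This is only a minimal sketch, not the actual Rogue Scholar code: the field names per format follow the respective feed conventions, but the unified shape and the function name are assumptions for illustration.
```javascript
// Hypothetical sketch: normalize an entry from RSS 2.0, Atom, or
// JSON Feed into one common shape. Not the actual Rogue Scholar
// implementation; the target field names are illustrative.
function normalizeEntry (entry, format) {
  if (format === 'rss') {
    return {
      title: entry.title,
      url: entry.link,
      published: entry.pubDate,
      content: entry['content:encoded'] || entry.description
    }
  } else if (format === 'atom') {
    return {
      title: entry.title,
      url: entry.link && entry.link.href, // Atom links are objects
      published: entry.published || entry.updated,
      content: entry.content
    }
  } else {
    // JSON Feed 1.1 uses flat string fields
    return {
      title: entry.title,
      url: entry.url,
      published: entry.date_published,
      content: entry.content_html
    }
  }
}
```
Once every flavor is reduced to this shape, the downstream DOI-registration metadata only has to deal with a single format.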
The main challenge with metadata for blog posts – and with DOI metadata more generally – is author names. They might not be natural names (for example mfenner instead of Martin Fenner), might be names for organizations and not people, the blogging platform might not support multiple authors, and some work is required to include the ORCID author identifier (or ROR institutional identifier). The Atom format supports an author URL, which can hold the ORCID ID (or ROR ID), and Wordpress can be enhanced with the popular Co-Authors Plus plugin to support multiple authors.
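The mfenner-versus-Martin-Fenner problem can be sketched as a small lookup step (a hypothetical illustration, not the actual Rogue Scholar implementation: the lookup table is an assumed per-blog configuration, and only the example ORCID is taken from real data):
```javascript
// Hypothetical sketch: map a raw feed author string (possibly a
// login handle) to a natural name plus an optional ORCID URL.
// The lookup table is an assumed per-blog configuration.
function normalizeAuthor (raw, lookup) {
  const known = lookup[raw.trim().toLowerCase()]
  if (known) {
    return { name: known.name, url: known.orcid || null }
  }
  // Fall back to the raw string as the display name.
  return { name: raw.trim(), url: null }
}

const lookup = {
  mfenner: { name: 'Martin Fenner', orcid: 'https://orcid.org/0000-0003-1419-2405' }
}
```
Handles not present in the table pass through unchanged, which keeps the step safe for blogs that already emit natural names.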
The other challenge with DOI registration is keeping track of the content that has already been registered, and for this I launched a database, with one record for each post. I need the database also to enable full-text search across all blog posts, something I will implement in the coming weeks.
With all the required pieces coming together, I was finally able to start DOI registrations yesterday. You will easily detect blog posts with a DOI on the Rogue Scholar website (there is a DOI icon next to the title, and the underlying link to the blog post is a DOI):
The process of DOI registration for all included blog posts should be concluded by the end of the month. There is more work needed to resolve issues with some author names, and DOI registration can be further automated (I am currently using GitHub Actions and a cronjob).
What also needs more work is getting the DOIs displayed on the blogs (the DOIs resolve to the blog post and not the Rogue Scholar archive). This is probably straightforward when using a static site generator, but requires more work when a database is involved (e.g. Wordpress). For Ghost blogs like this one, I found the canonical_url field to be a good place to store the DOI.
\",\"tags\":[\"Feature\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"f0m0e38\",\"blog\":{\"id\":\"f0m0e38\",\"title\":\"Front Matter\",\"description\":\"The Front Matter Blog covers the intersection of science and technology since 2007.\",\"language\":\"en\",\"favicon\":\"https://blog.front-matter.io/favicon.png\",\"feed_url\":\"https://blog.front-matter.io/atom/\",\"home_page_url\":\"https://blog.front-matter.io\",\"user_id\":\"8498eaf6-8c58-4b58-bc15-27eda292b1aa\",\"created_at\":\"2023-05-31T07:20:33+00:00\",\"indexed_at\":\"2023-01-02\",\"feed_format\":\"application/atom+xml\",\"license\":\"https://creativecommons.org/licenses/by/4.0/legalcode\",\"generator\":\"Ghost 5.52\",\"category\":\"Engineering and Technology\",\"prefix\":\"10.53731\",\"modified_at\":\"2023-06-05T13:18:24+00:00\",\"version\":\"https://jsonfeed.org/version/1.1\",\"backlog\":true,\"authors\":null}},{\"id\":\"https://doi.org/10.53731/z9v2s-bh329\",\"uuid\":\"2a6fb006-ecb2-45d0-9d73-ca8a0e7f559e\",\"url\":\"https://blog.front-matter.io/posts/rogue-scholar-open-for-business/\",\"title\":\"The Rogue Scholar is now open for business\",\"summary\":\"The Rogue Scholar science blog archive launched with limited functionality on April 3rd. Interested science blogs can go to the sign-up page, provide some basic information via the sign-up form, and then will be added to the Rogue Scholar archive within two business days. 
To be included in the service, your blog needs to:be about science or scholarship and written in English or German (more languages will follow later, reach out to me if you can help),make the full-text content available via...\",\"date_published\":\"2023-04-04T08:43:36Z\",\"date_modified\":\"2023-04-04T09:31:14Z\",\"date_indexed\":\"1970-01-01T00:00:00+00:00\",\"authors\":[{\"url\":\"https://orcid.org/0000-0003-1419-2405\",\"name\":\"Martin Fenner\"}],\"image\":\"https://images.unsplash.com/photo-1575663620136-5ebbfcc2c597?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fG9wZW4lMjBmb3IlMjBidXNpbmVzc3xlbnwwfHx8fDE2ODA1OTI3NTU&ixlib=rb-4.0.3&q=80&w=2000\",\"content_html\":\"The Rogue Scholar science blog archive launched with limited functionality on April 3rd. Interested science blogs can go to the sign-up page, provide some basic information via the sign-up form, and then will be added to the Rogue Scholar archive within two business days.
To be included in the service, your blog needs to:
Blogs that have signed up for the service (more than twenty so far) are listed in the Rogue Scholar catalog of science blogs that launched last week. And since yesterday summaries of the latest fifteen blog posts of each blog are also available.
These summaries (precisely the information you get in the RSS feed) serve two purposes:
The screenshot highlights several considerations when using the RSS Feed to archive a science blog in the Rogue Scholar:
The Rogue Scholar is now open for business, and I hope the limited functionality (or minimum viable product) launched this week makes it an attractive service for blog readers and authors to try out. The next big milestone is the launch of the full-text index for searching and archiving, and that is planned to happen within the next three months. Followed by DOI registration for blog posts.
\",\"tags\":[\"News\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"f0m0e38\",\"blog\":{\"id\":\"f0m0e38\",\"title\":\"Front Matter\",\"description\":\"The Front Matter Blog covers the intersection of science and technology since 2007.\",\"language\":\"en\",\"favicon\":\"https://blog.front-matter.io/favicon.png\",\"feed_url\":\"https://blog.front-matter.io/atom/\",\"home_page_url\":\"https://blog.front-matter.io\",\"user_id\":\"8498eaf6-8c58-4b58-bc15-27eda292b1aa\",\"created_at\":\"2023-05-31T07:20:33+00:00\",\"indexed_at\":\"2023-01-02\",\"feed_format\":\"application/atom+xml\",\"license\":\"https://creativecommons.org/licenses/by/4.0/legalcode\",\"generator\":\"Ghost 5.52\",\"category\":\"Engineering and Technology\",\"prefix\":\"10.53731\",\"modified_at\":\"2023-06-05T13:18:24+00:00\",\"version\":\"https://jsonfeed.org/version/1.1\",\"backlog\":true,\"authors\":null}},{\"id\":\"https://doi.org/10.59350/d3dc0-7an69\",\"uuid\":\"545c177f-cea5-4b79-b554-3ccae9c789d7\",\"url\":\"https://iphylo.blogspot.com/2021/10/reflections-on-macroscope-tool-for-21st.html\",\"title\":\"Reflections on \\\"The Macroscope\\\" - a tool for the 21st Century?\",\"summary\":\"This is a guest post by Tony Rees. It would be difficult to encounter a scientist, or anyone interested in science, who is not familiar with the microscope, a tool for making objects visible that are otherwise too small to be properly seen by the unaided eye, or to reveal otherwise invisible fine detail in larger objects. A select few with a particular interest in microscopy may also have encountered the Wild-Leica \\\"Macroscope\\\", a specialised type of benchtop microscope optimised for...\",\"date_published\":\"2021-10-07T12:38:00Z\",\"date_modified\":\"2021-10-08T10:26:22Z\",\"date_indexed\":\"1909-06-16T10:02:25+00:00\",\"authors\":[{\"url\":null,\"name\":\"Roderic Page\"}],\"image\":null,\"content_html\":\"This is a guest post by Tony Rees.
\\n\\nIt would be difficult to encounter a scientist, or anyone interested in science, who is not familiar with the microscope, a tool for making objects visible that are otherwise too small to be properly seen by the unaided eye, or to reveal otherwise invisible fine detail in larger objects. A select few with a particular interest in microscopy may also have encountered the Wild-Leica \\\"Macroscope\\\", a specialised type of benchtop microscope optimised for low-power macro-photography. However, in this overview I discuss the \\\"Macroscope\\\" in a different sense, which is that of the antithesis to the microscope: namely a method for visualizing subjects too large to be encompassed by a single field of vision, such as the Earth or some subset of its phenomena (the biosphere, for example), or conceptually, the universe.
\\n\\nMy introduction to the term was via addresses given by Jesse Ausubel in the formative years of the 2001-2010 Census of Marine Life, for which he was a key proponent. In Ausubel's view, the Census would perform the function of a macroscope, permitting a view of everything that lives in the global ocean (or at least, that subset which could realistically be sampled in the time frame available) as opposed to more limited subsets available via previous data collection efforts. My view (which could, of course, be wrong) was that his thinking had been informed by a work entitled \\\"Le macroscope, vers une vision globale\\\" published in 1975 by the French thinker Joël de Rosnay, who had expressed such a concept as being globally applicable in many fields, including the physical and natural worlds but also extending to human society, the growth of cities, and more. Yet again, some ecologists may also have encountered the term, sometimes in the guise of \\\"Odum's macroscope\\\", as an approach for obtaining \\\"big picture\\\" analyses of macroecological processes suitable for mathematical modelling, typically by elimination of fine detail so that only the larger patterns remain, as initially advocated by Howard T. Odum in his 1971 book \\\"Environment, Power, and Society\\\".\\n\\nFrom the standpoint of the 21st century, it seems that we are closer to achieving a \\\"macroscope\\\" (or possibly, multiple such tools) than ever before, based on the availability of existing and continuing new data streams, improved technology for data assembly and storage, and advanced ways to query and combine these large streams of data to produce new visualizations, data products, and analytical findings. 
I devote the remainder of this article to examples where either particular workers have employed \\\"macroscope\\\" terminology to describe their activities, or where potentially equivalent actions are taking place without the explicit \\\"macroscope\\\" association, but are equally worthy of consideration. To save space here, references cited here (most or all) can be found via a Wikipedia article entitled \\\"Macroscope (science concept)\\\" that I authored on the subject around a year ago, and have continued to add to on occasion as new thoughts or information come to hand (see edit history for the article).
\\n\\nFirst, one can ask, what constitutes a macroscope, in the present context? In the Wikipedia article I point to a book \\\"Big Data - Related Technologies, Challenges and Future Prospects\\\" by Chen et al. (2014) (doi:10.1007/978-3-319-06245-7), in which the \\\"value chain of big data\\\" is characterised as divisible into four phases, namely data generation, data acquisition (aka data assembly), data storage, and data analysis. To my mind, data generation (which others may term acquisition, differently from the usage by Chen et al.) is obviously the first step, but does not in itself constitute the macroscope, except in rare cases - such as Landsat imagery, perhaps - where on its own, a single co-ordinated data stream is sufficient to meet the need for a particular type of \\\"global view\\\". A variant of this might be a coordinated data collection program - such as that of the ten year Census of Marine Life - which might produce the data required for the desired global view; but again, in reality, such data are collected in a series of discrete chunks, in many and often disparate data formats, and must be \\\"wrangled\\\" into a more coherent whole before any meaningful \\\"macroscope\\\" functionality becomes available.
\\n\\nHere we come to what, in my view, constitutes the heart of the \\\"macroscope\\\": an intelligently organized (i.e. indexable and searchable), coherent data store or repository (where \\\"data\\\" may include imagery and other non numeric data forms, but much else besides). Taking the Census of Marine Life example, the data repository for that project's data (plus other available sources as inputs) is the Ocean Biodiversity Information System or OBIS (previously the Ocean Biogeographic Information System), which according to this view forms the \\\"macroscope\\\" for which the Census data is a feed. (For non habitat-specific biodiversity data, GBIF is an equivalent, and more extensive, operation). Other planetary scale \\\"macroscopes\\\", by this definition (which may or may not have an explicit geographic, i.e. spatial, component) would include inventories of biological taxa such as the Catalogue of Life and so on, all the way back to the pioneering compendia published by Linnaeus in the eighteenth century; while for cartography and topographic imagery, the current \\\"blockbuster\\\" of Google Earth and its predecessors also come well into public consciousness.
\\n\\nIn the view of some workers and/or operations, both of these phases are precursors to the real \\\"work\\\" of the macroscope which is to reveal previously unseen portions of the \\\"big picture\\\" by means either of the availability of large, synoptic datasets, or fusion between different data streams to produce novel insights. Companies such as IBM and Microsoft have used phraseology such as:
\\n\\nBy 2022 we will use machine-learning algorithms and software to help us organize information about the physical world, helping bring the vast and complex data gathered by billions of devices within the range of our vision and understanding. We call this a \\\"macroscope\\\" – but unlike the microscope to see the very small, or the telescope that can see far away, it is a system of software and algorithms to bring all of Earth's complex data together to analyze it by space and time for meaning.\\\" (IBM)\\n\\n
As the Earth becomes increasingly instrumented with low-cost, high-bandwidth sensors, we will gain a better understanding of our environment via a virtual, distributed whole-Earth \\\"macroscope\\\"... Massive-scale data analytics will enable real-time tracking of disease and targeted responses to potential pandemics. Our virtual \\\"macroscope\\\" can now be used on ourselves, as well as on our planet.\\\" (Microsoft) (references available via the Wikipedia article cited above).\\n\\n
Whether or not the analytical capabilities described here are viewed as being an integral part of the \\\"macroscope\\\" concept, or are maybe an add-on, is ultimately a question of semantics and perhaps, personal opinion. Continuing the Census of Marine Life/OBIS example, OBIS offers some (arguably rather basic) visualization and summary tools, but also makes its data available for download to users wishing to analyse it further according to their own particular interests; using OBIS data in this manner, Mark Costello et al. in 2017 were able to demarcate a finite number of data-supported marine biogeographic realms for the first time (Costello et al. 2017: Nature Communications. 8: 1057. doi:10.1038/s41467-017-01121-2), a project which I was able to assist in a small way in an advisory capacity. In a case such as this, perhaps the final function of the macroscope, namely data visualization and analysis, was outsourced to the authors' own research institution. Similarly at an earlier phase, \\\"data aggregation\\\" can also be virtual rather than actual, i.e. avoiding using a single physical system to hold all the data, enabled by open web mapping standards WMS (web map service) and WFS (web feature service) to access a set of distributed data stores, e.g. as implemented on the portal for the Australian Ocean Data Network.
\\n\\nSo, as we pass through the third decade of the twenty-first century, what developments await us in the \\\"macroscope\\\" area? In the biodiversity space, one can reasonably presume that the existing \\\"macroscopic\\\" data assembly projects such as OBIS and GBIF will continue, and hopefully slowly fill current gaps in their coverage - although in the marine area, strategic new data collection exercises may be required (Census 2020, or 2025, anyone?), while (again hopefully), the Catalogue of Life will continue its progress towards a \\\"complete\\\" species inventory for the biosphere. The Landsat project, with imagery dating back to 1972, continues with the launch of its latest satellite Landsat 9 just this year (21 September 2021) with a planned mission duration for the next 5 years, so the \\\"macroscope\\\" functionality of that project seems set to continue for the medium term at least. Meanwhile the ongoing development of sensor networks, both on land and in the ocean, offers an exciting new method of \\\"instrumenting the earth\\\" to obtain much more real-time data than has ever been available in the past, offering scope for many more, use case-specific \\\"macroscopes\\\" to be constructed that can fuse (e.g.) satellite imagery with much more that is happening at a local level.
\\n\\nSo, the \\\"macroscope\\\" concept appears to be alive and well, even though the nomenclature can change from time to time (IBM's \\\"Macroscope\\\", foreshadowed in 2017, became the \\\"IBM Pairs Geoscope\\\" on implementation, and is now simply the \\\"Geospatial Analytics component within the IBM Environmental Intelligence Suite\\\" according to available IBM publicity materials). In reality this illustrates a new dichotomy: even if \\\"everyone\\\" in principle has access to huge quantities of publicly available data, maybe only a few well funded entities now have the computational ability to make sense of it, and can charge clients a good fee for their services...
\\n\\nI present this account partly to give a brief picture of \\\"macroscope\\\" concepts today and in the past, for those who may be interested, and partly to present a few personal views which would be out of scope in a \\\"neutral point of view\\\" article such as is required on Wikipedia; also to see if readers of this blog would like to contribute further to discussion of any of the concepts traversed herein.
\",\"tags\":[\"guest post\",\"macroscope\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"tyfqw20\",\"blog\":{\"id\":\"tyfqw20\",\"title\":\"iPhylo\",\"description\":\"Rants, raves (and occasionally considered opinions) on phyloinformatics, taxonomy, and biodiversity informatics. For more ranty and less considered opinions, see my Twitter feed.So a while ago I was looking around for the RIS specification again. I had not found it earlier, only a reference implementation from Zotero, a surprisingly complete list of tags and types on Wikipedia and some examples from various websites and programs exporting RIS files. They did not seem to go together well, however. There were some slight differences in tags here and there, and a bunch of useful tags listed by Wikipedia were labelled “degenerate” in the Zotero codebase, and only used for imports — implying some sort of problem.
\\nWhat could be going on? Well, I checked out the references on the Wikipedia page again, to see if there really was no official specification or some other more reliable source where it got its information from. And, suddenly, there was an actual source this time. I do not know how I missed it earlier, but there was a page (archived) that linked to a zip file containing a PDF file with general specifications and an Excel file with sheets with property lists for all different types.
\\nThat sounded useful, so I spent waaayy too much time automating a script to turn those sheets — with a bunch of user input — into usable mappings for Citation.js. I just finished that today, apart from some… questionable mappings, but I wanted to at least test the final script with an example. As for the results, well, see for yourself. The example, from the Wikipedia page (CC-BY-SA 3.0 Unported) was
\\nTY  - JOUR\\nT1  - On computable numbers, with an application to the Entscheidungsproblem\\nA1  - Turing, Alan Mathison\\nJO  - Proc. of London Mathematical Society\\nVL  - 47\\nIS  - 1\\nSP  - 230\\nEP  - 265\\nY1  - 1937\\nER  -\\n
\\nand my results were
\\n{ issue: 1, page: 230, type: 'article-journal', volume: 47 }\\n
\\nThat looked really weird and disappointing. Again, what could possibly be going on here? The example on Wikipedia is using T1, A1, JO and Y1 while the specs say to use TI, AU, T2 and PY here. Where are these differences coming from?
After some digging around on Wikipedia I found a comment saying that there are in fact two specifications: one from 2011 and one from before. The archived spec I checked out was from 2012 (as linked by Wikipedia!), while they use the version from before 2011; which luckily is still available. To be continued.
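The clash between the two tag sets can be illustrated with a short standalone sketch. This is hypothetical code, not Citation.js's actual implementation: the tag-to-field mappings are simplified, values are kept as strings, and parseRis is a made-up helper.

```javascript
// Hypothetical sketch of the two competing RIS tag sets.
// Not Citation.js code; mappings simplified, values kept as strings.
const oldSpec = { T1: 'title', A1: 'author', JO: 'container-title', Y1: 'issued' }
const newSpec = { TI: 'title', AU: 'author', T2: 'container-title', PY: 'issued' }

function parseRis (text, mapping) {
  const entry = {}
  for (const line of text.trim().split('\n')) {
    const tag = line.slice(0, 2)
    const value = line.slice(line.indexOf('-') + 1).trim()
    if (mapping[tag]) entry[mapping[tag]] = value
    else if (tag === 'TY') entry.type = value === 'JOUR' ? 'article-journal' : 'document'
    else if (tag === 'VL') entry.volume = value
    else if (tag === 'IS') entry.issue = value
    else if (tag === 'SP') entry.page = value
  }
  return entry
}

const example = `TY  - JOUR
T1  - On computable numbers, with an application to the Entscheidungsproblem
A1  - Turing, Alan Mathison
JO  - Proc. of London Mathematical Society
VL  - 47
IS  - 1
SP  - 230
Y1  - 1937
ER  -`

// Parsed against the post-2011 tags, only the shared tags survive:
console.log(parseRis(example, newSpec))
// → { type: 'article-journal', volume: '47', issue: '1', page: '230' }
// Parsed against the pre-2011 tags, everything comes through:
console.log(parseRis(example, oldSpec))
```

This mirrors the stripped-down result shown earlier: the title, author, journal and year tags from the old spec are simply invisible to a parser expecting the new one.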
\\nRecently I moved the Citation.js API documentation from /api to /api/0.3, to put the new documentation on /api/0.5. I fixed all the links to the documentation, but I still got an issue regarding a 404 error after just a few days. All in all, I had to redirect pages from /api/* to /api/0.3/* while all these pages are hosted as static files on GitHub Pages.
There are three ways I found to do this:
\\n1. Add pages on /api/* that redirect to /api/0.3/* via JavaScript or a <meta> tag.\\n2. Use the Jekyll plugin jekyll-redirect-from. This is equivalent to option 1, I think.\\nOption 1 seemed like a hassle and I do not use Jekyll, so option 2 seemed out of the question as well. However, we still have option 3 to consider:
\\nAdd a 404.html to the repository, which gets served automatically on a 404. It then redirects to /api/0.3/* with JavaScript, and gives guidance on how to find the new URL manually if JavaScript is disabled.\\nThe 404.html is just a normal 404 page with 4 lines of JavaScript:
var docsPattern = /(\\\\/api)(\\\\/(?!0.[35]\\\\/)|$)/\\n\\nif (docsPattern.test(location.pathname)) {\\n  location.pathname = location.pathname.replace(docsPattern, '$1/0.3$2')\\n}\\n
\\nBreaking down the RegExp pattern:\\n- (\\\\/api) matches “/api” in the URL\\n- (\\\\/(?!0.[35]\\\\/)|$) matches one of two things, immediately after “/api”:\\n  - $, the end of the string (like “https://citation.js.org/api” without the trailing slash)\\n  - \\\\/(?!0.[35]\\\\/), which matches a forward slash (\\\"/api/\\\") followed by anything except “0.3/” or “0.5/”. This is to avoid matching things like “/apical/” or “/api/0.3/does-not-exist”.\\nThis is not the neatest solution but I like it conceptually. It shows a bit of potential for Single-Page Applications as well: you can serve the same HTML+JavaScript for every possible path without having to rely on URLs like https://example.org/#/path/page. The problem is that you still get the 404 HTTP status (as you should), so if a browser or search crawler decides to care you have a problem.
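The pattern's behaviour can be checked outside the browser with a small standalone snippet. redirect() here is a hypothetical helper that wraps the same replace call the 404 page performs on location.pathname:

```javascript
// Standalone check of the redirect pattern from the 404 page.
// redirect() is a hypothetical helper; the real page mutates location.pathname.
const docsPattern = /(\/api)(\/(?!0.[35]\/)|$)/

const redirect = path =>
  docsPattern.test(path) ? path.replace(docsPattern, '$1/0.3$2') : path

console.log(redirect('/api'))             // → '/api/0.3' (matched via $)
console.log(redirect('/api/Cite.html'))   // → '/api/0.3/Cite.html'
console.log(redirect('/api/0.3/nope'))    // unchanged: lookahead blocks it
console.log(redirect('/apical/'))         // unchanged: no slash after '/api'
```

The last two cases show why the negative lookahead and the bare $ alternative are both needed: already-versioned paths pass through untouched, while the versionless /api root still redirects.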
Try it out now: https://citation.js.org/api/
\\n\",\"tags\":[\"Citation.js\",\"programming\"],\"language\":\"en\",\"references\":[],\"blog_id\":\"y55kq35\",\"blog\":{\"id\":\"y55kq35\",\"title\":\"Syntaxus baccata\",\"description\":\"Thoughts about bibliographic metadata, programming, statistics, taxonomy, and biology.\",\"language\":\"en\",\"favicon\":\"https://blogger.googleusercontent.com/img/a/AVvXsEjHdUshBRlHAgurlZYYubJ9yfTRkteSS8hwgor7bpsdHNByrUXglmpVcCS2McVehJLgWZPyr61hB2Nq8iXql2-0qLFfZZHWdMn8wuKGz5UO1vZXWQkRKn_4EelUV6NneQW6fN1wlggv7skzUBri-T7bJZnlzXkKj6FjF8p0wLDR8DXwO7oN8QSwcN9-=s64\",\"feed_url\":\"https://larsgw.blogspot.com/feeds/posts/default\",\"home_page_url\":\"https://larsgw.blogspot.com/\",\"user_id\":\"8498eaf6-8c58-4b58-bc15-27eda292b1aa\",\"created_at\":\"2023-05-31T16:56:38+00:00\",\"indexed_at\":\"2023-01-08\",\"feed_format\":\"application/atom+xml\",\"license\":\"https://creativecommons.org/licenses/by/4.0/legalcode\",\"generator\":\"Blogger 7.00\",\"category\":\"Engineering and Technology\",\"prefix\":\"10.59350\",\"modified_at\":\"2023-05-08T11:05:00+00:00\",\"version\":\"https://jsonfeed.org/version/1.1\",\"backlog\":true,\"authors\":null}},{\"id\":\"https://doi.org/10.59350/7dz35-q5185\",\"uuid\":\"d2b5f8f3-dde5-4944-a4e1-f9776c86b38b\",\"url\":\"http://sfmatheson.blogspot.com/2023/06/design-without-designer-explaining-and.html\",\"title\":\"Design without a designer: explaining and answering some questions\",\"summary\":\"I've been writing and thinking about design in biology since I started Quintessence of Dust. I want to write and think about it a lot more, so in my last post I introduced my view of the concept and pointed ahead to this post, which consists of edited excerpts from some conversations at a discussion forum at the Peaceful Science site. You will find links to those conversations in the previous post. I have removed people's names and lightly edited for clarity. 
Other people’s words are indented;...\",\"date_published\":\"2023-06-06T20:22:00Z\",\"date_modified\":\"2023-06-07T20:44:58Z\",\"date_indexed\":\"1970-01-01T00:00:00+00:00\",\"authors\":[{\"url\":null,\"name\":\"Stephen Matheson\"}],\"image\":null,\"content_html\":\"I've been writing and thinking about design in biology since I started Quintessence of Dust. I want to write and think about it a lot more, so in my last post I introduced my view of the concept and pointed ahead to this post, which consists of edited excerpts from some conversations at a discussion forum at the Peaceful Science site. You will find links to those conversations in the previous post. I have removed people's names and lightly edited for clarity. Other people’s words are indented; the rest are mine. I would like to understand better how you think about it because it seems quite different from the way I normally think about it. I would really like to learn more about how you see it and expand my understanding of \\\"design\\\".
How do you manage the distinction between design as a \\\"process of making\\\" vs. \\\"plan or schematic\\\" vs. \\\"product of a designing process\\\"?
Is that last one even rightly called design?
So you are trying to legitimize one understanding of design in its own right, separate from other definitions.
Seems like the same thing would have to be done with the term \\\"purpose.\\\"
I don’t think science recognizes design without a model of a designer. So this would be pretty revolutionary.
I tend towards the notion of \\\"appearance of design,\\\" and trying to make sense of what defines that appearance. That is, I think, closely aligned with what you are after.
What would you suggest as terminology for the two cases?
The second misplaced emphasis is Pinker’s phrase \\\"the illusion of design in the natural world.\\\" Richard Dawkins, in a similar vein, says \\\"the illusion of design conjured by Darwinian natural selection is so breathtakingly powerful\\\" in The Ancestors’ Tale (p 457), and elsewhere proposes to speak of \\\"designoid\\\" features of the natural world (e.g., Climbing Mount Improbable, p 4). I disagree with this policy, which can backfire badly. I recently overheard a conversation among some young people in a bar about the marvels of the nano-machinery discovered inside all cells. \\\"When you see all those fantastic little robots working away, how can you possibly believe in evolution!\\\" one exclaimed, and another nodded wisely. Somehow these folks had gotten the impression that evolutionary biologists thought that the intricacies and ingenuities of life’s processes weren’t all that wonderful. These evolution-doubters were not rednecks; they were Harvard Medical students! They hugely underestimated the power of natural selection because they had been told by evolutionary biologists, again and again, that there is no actual design in nature, only the appearance of design. This episode strongly suggested to me that one of the themes that has been gaining ground in \\\"common knowledge\\\" is that evolutionary biologists are reluctant to \\\"admit\\\" or \\\"acknowledge\\\" the manifest design in nature. I recommend instead the expository policy of calling nature’s marvels design, as real as any design in the universe, but just not the products of an intelligent designer.
Research will be different from normal for a while and even in the best cases it will also be slower. The shift to working remotely will limit the kinds of work we can do and everyone doing research is experiencing a dramatic disturbance to their lives. This means the people in our labs will need flexibility and support.
\\nTalk to your lab members to understand their needs: Recognize that these needs will be different for different people. Many will have new responsibilities and stresses that preclude working normally, but some may use work as a coping mechanism.
\\nMake it clear that moving more slowly and delaying things is expected and 100% OK. Push back project timelines, understand that some folks will make little to no progress for a while, consider delaying stressful graduate activities like qualifying exams.
\\nProvide financial reassurance: If true, assure your team that their current funding won’t be cut. If possible, offer extensions on funding. This will help alleviate stress and uncertainty.
\\nRecognize power dynamics when offering flexibility: Make sure that team members are comfortable opting out of “optional” choices and don’t feel pressured to be productive, to work on campus, or to work at certain times.
\\nProvide access to university resources: Inform group members about university programs related to mental health, expanded sick leave, and other forms of support for well being.
\\nGive your team the resources they need to work from home: Encourage your group to move things from campus that they need to work at home including computers, books, and chairs. Universities typically allow this for remote work (there may be a form to fill out). If possible, purchase additional supplies needed for remote work (e.g., headsets).
\\nRemote management recommendations focus on good communication, breaking projects into manageable pieces, keeping everyone on the same page with clear next steps, and tracking progress. This will make your group more efficient and inclusive when working remotely.
\\nUse video conferencing to replace in-person interactions: Do this for any regular meetings you have (e.g., one-on-one meetings, lab group meetings) and also informal interactions (like popping into an advisor’s office or chatting science with labmates). Communicate your availability and how to set up meetings.
\\nUse a group-based discussion tool (e.g., Slack or Microsoft Teams): This supports asking questions and working on group projects and facilitates interactions among lab members with different work hours (important for those with responsibilities like child care). It can also provide an outlet for social interactions. Text does lose subtle social cues, so video or audio is still best for delicate conversations. Check out the getting started documentation for Slack or Microsoft Teams.
\\nRead up on managing remote teams: There are some unique skills to remote management, but there is lots of information on how to do this including: How to overcome your worries about letting people work remotely, How to oversee a remote team’s work, Ten simple rules for a successful remote postdoc.
\\nUse project management and collaboration tools: These tools help you use good remote management practices. Most labs will benefit from a tool for writing and a tool for project management. Labs that write code (including for analyzing data) will also benefit from a code collaboration platform. Check out getting started guides for Google Docs (for writing) and Trello (for project management). Learning version control for managing code is a bigger commitment, but the Software Carpentry lessons are a good starting point.
\\nIt’s important to prioritize the safety of your team over research. This may mean changing your research plans to support social distancing and reduce or eliminate travel.
\\nFocus on analyzing and writing up existing data and ideas: This is the easiest adjustment because it minimizes shifts in research area and need for new skills. Existing data isn’t just what a specific student has already collected, but can include previous data collection from your lab.
\\nSynthesize existing knowledge: Writing reviews lets your team use their expertise to synthesize existing knowledge.
\\nConduct research on open data: There are increasingly large amounts of openly available data in many fields. There may be data that can be used to address questions similar to those you are studying using field or lab based approaches.
\\nCollaborate when extending into new research areas: Computational research, working with large datasets, systematic reviews and meta-analyses all take expertise. To pivot into new methods or topics, consider finding someone with the associated expertise to collaborate with. There may well be experts on your students’ committees or in your department or university.
\\nDevelop new skills/expertise to expand your group’s research horizons: Instead of jumping into a new project requiring new skills, support your team in taking this time to learn new skills (e.g., computing methods or statistical approaches) or develop new expertise (reading up on new areas of the literature) to serve as the foundation for future research.
\\nInitially prepared for the UF/IFAS
Faculty Forum: Living, Working, and Adapting to the New Normal of COVID-19.
Led by Ethan White (@ethanwhite)
(who is responsible for anything bad) with contributions from SK
Morgan Ernest (@skmorgane), Hao
Ye (@Hao_and_Y), Brandon
S. Cooper (@brandonscooper), JJ
Emerson (@JJ_Emerson), Katy
Huff (@katyhuff), Russell
Neches (@ryneches), Auriel
Fournier (@RallidaeRule), Jessica
Burnett (@TrashBirdEcol), Melissa
Rethlefsen (@mlrethlefsen), Eric
Scott (@LeafyEricScott), Kathe
Todd-Brown (@KatheMathBio), itati
en casa (@itatiVCS), Alexey
Shiklomanov (@ashiklom711) (who are responsible for anything awesome).
A lot of the thinking in “Adopt & adapt tools and approaches for managing
remote teams” was influenced by “Ten
simple rules for a successful remote postdoc” by Kevin
Burgio, Caitlin McDonough
MacKenzie, Stephanie
Borrelle, Morgan Ernest,
Jacquelyn Gill, Kurt
Ingeman, Amy Teffer,
and me.
\\n
In their textbook Modern Principles of Economics, Cowen and Tabarrok use asteroid defense as an example of a public good.
Less saturated fat, less sodium, less added sugar; manufacturers say almost no packaged foods would qualify under the agency’s new rules.
A short cultural history of slime.
This report outlines a series of collaborative discussions that brought together an international group of stakeholders and experts.
Short video telling the story of how the Parthenon Marbles were moved from Athens to London 200 years ago.