--- http_interactions: - request: method: get uri: https://rogue-scholar.org/api/posts/not_indexed/2023-01-01 body: encoding: UTF-8 string: '' headers: Connection: - close Host: - rogue-scholar.org User-Agent: - http.rb/5.1.1 response: status: code: 200 message: OK headers: Age: - '0' Cache-Control: - public, max-age=0, must-revalidate Content-Length: - '1788779' Content-Type: - application/json; charset=utf-8 Date: - Fri, 16 Jun 2023 06:11:31 GMT Etag: - '"fkoxrwyf2j12bp3"' Server: - Vercel Strict-Transport-Security: - max-age=63072000 X-Matched-Path: - "/api/posts/not_indexed/[slug]" X-Vercel-Cache: - MISS X-Vercel-Id: - fra1::iad1::j9vh5-1686895890575-35e041875459 Connection: - close body: encoding: UTF-8 string: '[{"id":"647db114f224a54ca047c5a6","uuid":"ab58e412-06eb-42b7-b81a-d340825b9d48","url":"https://libscie.org/s03e09/","title":"Gamification in research (s03e09)","summary":"What is there to consider when gamifying research practices? Is it actually new?","date_published":"2023-06-06T07:00:45Z","date_modified":null,"date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Chris Hartgerink"}],"image":"https://images.unsplash.com/photo-1579407364450-481fe19dbfaa?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGxlYWRlcmJvYXJkfGVufDB8fHx8MTY4NTk3MzIwNXww&ixlib=rb-4.0.3&q=80&w=2000","content_html":"
[00:00:00] Chris Hartgerink: Hi, and welcome to the Open Update. We''re here for another episode. For Liberate Science, I''m Chris Hartgerink, joined by my lovely co-host, Sarahanne Field.
[00:00:10] Chris Hartgerink: In the Open Update, we talk about power imbalances in research and the societies we live in to also start better understanding what we can imagine as a realistic alternative.
[00:00:22] Chris Hartgerink: Just like in Star Wars, we wanna bring balance and the first step is to make ourselves aware of these power imbalances. So thanks for joining us on this journey.
[00:00:32] Chris Hartgerink: Today we''ll talk about gamification of researchers, what it does to the power balances, and also what the effects are more broadly and maybe some of the history within open science and open research, some events that relate to the gamification and discussion around it.
So Sarahanne, you said this was on your mind lately and you''d love to discuss this. For our listeners, what is gamification and what does it have to do with power imbalances to begin with?
[00:01:03] Sarahanne Field: The basic idea is, is to motivate people to engage, by using game elements, competitiveness, aspects of fun, talking about rules and particular ways of playing. As it relates to open research, it''s basically the idea that you are using fun elements of play and of games to motivate people to engage with open research.
[00:01:25] Sarahanne Field: Responses to gamification in open research in particular are mixed. Some people think it''s really fun and think it''s good, especially for getting earlier career researchers to get involved, but some people find it infantilizing or patronizing.
[00:01:40] Sarahanne Field: So, for example, a lot of people are aware of the badges that some journals use for rewarding things like open data, open materials, pre-registration, so you can get badges, which are sort of a kinda a fun element of engagement.
[00:01:55] Sarahanne Field: And again, that sort of is an attempt to motivate people to try and these badges, another more sort of controversial example of gamification is the curate science leaderboard.
[00:02:08] Sarahanne Field: So leaderboards play a key role in gamification, because they try and get people to compete against one another. But it''s, it''s also been a little bit controversial because people feel, that it''s, a bullying, aspect to it or that we shouldn''t be inducing competition in research because we already have that problem of it being too competitive a culture, and it had been toxic as a result of it.
[00:02:34] Chris Hartgerink: I like that you mentioned different examples because talking about gamification as a whole, it might result in discussions that might not appreciate the variations. This idea of inducing competition with the leader board can be very detrimental. Whenever I see a leaderboard, I, one always feel incompetent. Like if you play a game on your phone or something and then you get a really good score, you think, and then somebody else has a score that''s like a hundred thousand times higher and it always makes me feel very incompetent in it.
[00:03:05] Chris Hartgerink: But then this idea with, with the badges is, \"oh, I''m trying to collect them all\", \"what do I have to do\", and \"how can I unlock certain things.\"
[00:03:15] Chris Hartgerink: It''s fairly interesting to maybe chat a bit about this history because, especially in terms of, you know, gamification in open science. You mentioned badges and leaderboards, which have two very specific histories. I think with the badges it''s been. This idea of, you know, does it really work or not to incentivize people to share?
[00:03:36] Chris Hartgerink: The evidence has been disputed, in that sense.
[00:03:38] Chris Hartgerink: And then with the leaderboards, there''s also this idea of, who chooses to participate in them. And especially the example of, curate science, which went overboard in that sense.
[00:03:53] Sarahanne Field: There''s so much to sort of unpack, right? I mean, one thing that I think about when I think about gamification in general and also in particular leaderboards, is whether the motivation is intrinsic or not. so the literature tends to sort of point towards leaderboards in gamification as sort of not really increasing intrinsic motivation.
[00:04:13] Sarahanne Field: And I think that''s a problem for something like, Increasing transparency, increasing those kinds of sort of better practices. It''s important to emphasize elements of gamification that do increase motivation, but also increase intrinsic motivation.
[00:04:30] Sarahanne Field: You wanna increase engagement , but it''s important to increase that intrinsic motivation. You''re playing the game to really engage in the practices rather than playing the game just to win. I think it''s important to balance these two when it comes to changing practices and changing culture. I sort of think about this, you know, how, what role does it play in changing culturally, just becoming more competitive?
[00:04:53] Sarahanne Field: Are we just looking at these motivations that sort of tie into our image and our profile? Or is it really changing practices and, and really shifting culture? You know, and that''s what, that''s what concerns me. So I guess that''s, this is sort of a more negative thing. Um, but, but that, I think there are plenty of positives, you know, people who are entered the, the gamification.
[00:05:14] Sarahanne Field: Stuff. You know, I think it can be really good. They can get a, you know, a scientific record for themselves that in which they get, they''re aiming for getting all the badges, and I think that ultimately has a really good impact on science because if they''re opening up their research more as a result of it, if they''re preregistering or, or doing a registered report more as a result of it, then I think that''s ultimately for science, for research.
[00:05:40] Sarahanne Field: I think that''s a really good thing. So I think as long as we are balancing, you know, these different elements and keeping, keeping thoughtful and reflective about these things, rather than just doing it for, for winning, I, you know, I think it can be a good thing ultimately and net and net positive I would say.
[00:05:57] Chris Hartgerink: In that sense, science is already very gamified. With the journal rankings, that''s sort of a leaderboard aspect, right? How many publications do you have? Where have you published? It makes me think of what is the purpose of a specific instance of gamification.
[00:06:13] Chris Hartgerink: So with journal rankings, I think that doesn''t work because the purpose is there is no real purpose except for being in competition. There could be another form of gamification, which has the purpose of, for example, connecting researchers more, participating in communities, if there''s certain, you know, if we check in once a week in a certain email list, and that gives you certain quote unquote rewards, then that''s a form of gamification, which has the purpose of connecting people and sharing information, which I think is a completely different aspect. So I guess coming back to this question of what is the purpose of a specific instance of gamification is incredibly important because otherwise it ends up being self-defining, like for it for its own sake.
[00:07:03] Chris Hartgerink: With badges for example, the purpose is to increase the behavior of sharing data, for example. And I think that''s in essence good but then, and indeed, as you say, how can we be reflective of this? Because if we''re promoting that behavior of, for example, sharing data, are we then people who cannot share data, are we making them quote unquote lesser than. So this idea of thinking about what are the people who cannot get rewarded? Is it something really that people can participate in if they want to, or is it more a circumstance? In that sense, it makes me think of if it ends up in the right behaviors, then it''s warranted. Makes me also think of, you know, the ends do not justify the means, and what is the process in gamification and is it really something we want to encourage.
[00:08:09] Chris Hartgerink: There this idea of that it''s very patronizing or paternalistic because the person designing the game is saying, this is the outcome we want and we''re gonna make sure you go towards that or the probability that you go towards that is higher.
[00:08:25] Chris Hartgerink: So in that sense, I guess also this question of who, who''s designing the games and do we have insight into These as well, because, everything in life is a game. Especially over the past 15 years, I''ve definitely noticed that my attention is being gamed in many ways that I really actively have to avoid Instagram reels or TikTok or YouTube shorts because I end up spending way more time than I want to. That''s also a form of gamification.
[00:09:00] Sarahanne Field: One thing that sort of was a concern about the curate science leader board, and this comes back to the idea of power imbalances and something you just touched on before.
[00:09:10] Sarahanne Field: You know, who, who''s able to participate and whatnot. There are two things I wanted to mention. First thing is I think that the gamification is less harmful when people are able to engage at will. So the Curate Science Leader Board, which, which some people call the Transparency audit, which is a very different framing, that gives it a very different twist.
[00:09:33] Sarahanne Field: People had to ask to, to be taken off it. So people were being put on this leaderboard without their knowledge. And so some people were asking to, to be taken off it cause they didn''t agree with it. There''s a problem with putting people up on a board without their consent. For something like this, I think being able to opt in is important there because you are then agreeing to be part of that game, right?
[00:09:57] Sarahanne Field: People should have the choice that the right to be able to sort of opt in for that rather than having to opt out.
[00:10:04] Sarahanne Field: Coming back to what you said before, you know when people can''t, for example, share data. If you are, for example, like myself, I do mixed methods research and depending on what I''m doing on a given day, some of that is just not, not shareable. I, I cannot share people''s sensitive personal data. That''s, that''s a legal thing as well as a moral thing, and. So, you know, by default some of those aspects I''m just not gonna be competitive in.
[00:10:28] Sarahanne Field: I would definitely like to have a say in, you know, whether or not I will be on that board because I don''t wanna be compared to people who are doing, for example, only quantitative research that can be completely anonymized and who are naturally by default, just by their methodology, are gonna be able to be more competitive.
[00:10:45] Sarahanne Field: That''s a bit of a concern of mine and it comes back to sort of what the dominant traditions in research are right that the quantitative science tends to be dominant. That, you know, white perspectives tend to be dominant. I think we have an issue there too.
[00:11:01] Chris Hartgerink: You inspire a bunch of thoughts and I think one that I want to pick out there is that, what you say, there''s this first order. The very first level of, you know, what is the game you''re playing and are those rules even transparent to you?
[00:11:18] Chris Hartgerink: But then there''s also this second order reasoning there, where is it even clear whether you''re playing the game that you want to in that sense. So the idea of, you know, It depends very much on the trajectory that you want to be in, whether you want to play game A or game B or none at all. And in that sense, this idea of there''s a universal game all researchers need to play that''s gonna be incredibly problematic to begin with.
[00:11:50] Chris Hartgerink: And maybe. Maybe we need more games, so more gamification so people can actually make decisions as to I''d rather participate in this and not in that one. So really more alternatives to journal rankings. More alternatives to publications. I did not expect this to go into assess systems, but in essence, an assessment system is a form of gamification.
[00:12:15] Chris Hartgerink: At the same time, I love playing video games and it can be very encouraging as well. So I would, I would even encourage more thoughtfully designed games or gamification that can actually move the needle in the right direction. But maybe that isn''t a good idea anyway because, What is the right direction?
[00:12:40] Chris Hartgerink: It''s not just the effects of the gamification, but it''s this question of do we know to what purpose something needs to be gamified. Research equals specifically, we also think about gamification of, how do we get people to just enjoy what they''re doing, not even be in competition, but to simply have this be, process that''s enjoyable.
[00:13:03] Chris Hartgerink: And where they can also, you know, if they put time in that they can get something out. And I think with education very often, it''s gamified in a way of you receive a certificate. And you can showcase that somewhere else, and it gives you access to quote unquote, the next level in your career. I think a master''s degree or a PhD is sort of, you know, you''re, you''re leveling up.
[00:13:25] Sarahanne Field: One thing I''d like to just touch on briefly that you mentioned earlier is that who is creating these games?
[00:13:32] Sarahanne Field: So often we have people who are already prominent in the open research community, for example, who are the ones who are heading up the ideas for these games, who are producing the rules, who are, who are basically saying, you know, what the game is about and what it''s for.
[00:13:47] Sarahanne Field: The more voices that we get involved with gamification, the more priorities and needs are going to be reflected in these games. And I think that''s important. It comes back to sort of whose voices are we hearing in the open research community and whose perspectives are sort of represented. The more diversity we see in open research, I hope that the more diversity we''re gonna see in the gamification of these different practices, I think that''s, you know, something that''s traveling in the right direction and that we need to be critical about when we engage with a game, Being critical about whose voices and whose needs and priorities are being, are being reflected in that game and taking that into account and reflecting on that. I think that''s important.
[00:14:30] Chris Hartgerink: I keep flip flopping during this conversation cuz I think gamification in a way is also a very production driven, like optimizing output driven mechanism. Because it makes people expend more energy and it reminds me of somebody sharing in the, in the signal group as well. This whole like toot on how we''re not discussing enough how improving research relates to anti-capitalist behavior as well.
[00:14:57] Chris Hartgerink: Where it''s not just about increasing efficiency and reducing research waste, which is within the improving research communities, substream, which I agree with the premise, but it also reinforces this idea of we be need to be optimal research producers. And knowledge isn''t always, or understanding isn''t always produced in the the most optimal way.
[00:15:25] Chris Hartgerink: So in that sense, you know, if gamification serves that purpose, I think that''s an angle where I would be hesitant.
[00:15:36] Sarahanne Field: Really thinking about what the purpose currently is and maybe what it should be and what the difference is there. So I think, gamification is traditionally used for motivation, right? It''s traditionally used to, to increase production, to increase productivity, to do things better, to optimize, like you said. But indeed, I think one, you know, purpose that we can maybe try and engender is, is the purpose of having fun and enjoying research and learning, connecting with each other, as you''ve already mentioned.
[00:16:07] Sarahanne Field: Those purposes to me are, not yet evident in, in what I see in gamification and open research. That''s something we can really improve on
[00:16:16] Chris Hartgerink: Well, I think it''s time for us to move on to final remarks.
[00:16:20] Chris Hartgerink: For me, a final remark that came to mind is also that gamification can be used for equity if it''s done right. I always love this in games where you get this, this in-game currency, which you can exchange for certain perks or benefits. And I feel like if we would set up the purpose of gamification is to indeed progress in the direction that you want to in your career.
[00:16:46] Chris Hartgerink: That we can reward certain behaviors. So we could say, teach a course that''s. I don''t know, 200 points. And if you complete a study that''s, for example, also 200 points, literally saying, Teaching and doing a study. So teaching and research are worth just as much. And then if you get like, I don''t know, 3000 points, that''s when you unlock your tenure.
[00:17:11] Chris Hartgerink: And that way to also very clearly say, it doesn''t matter whether it''s a, a man or a woman or a black person or a white person who unlocks these points. Once you have them, no matter how you got them, you can unlock that next step. And I think that, so there''s also a potential there, but very much depends on the design and who makes those decisions.
[00:17:36] Sarahanne Field: I would just say that if you''re, if you''re going to be engaging in one of these, you know, examples of gamification, I think just be critical. Reflect on who''s been making up the rules, who it might include and who it might exclude, and just be thoughtful about how you wanna involve your own self with those, with those kinds of games.
[00:17:56] Chris Hartgerink: That''s us for this week''s episode of the Open Update. Maybe you know, of games that you''re a part of in your environment or rules that have become clear as you''ve gone throughout your career. So join us in our signal group. Let us know what those might be, so we can share them out with the rest of the listeners because a lot of gamification ends up being implied and becoming aware of this is also incredibly helpful to be able to address them and start imagining those alternatives, which is exactly what we try to do here at the open update.
[00:18:30] Chris Hartgerink: We were very happy to see that there was quite some response to this idea of alternative forms of peer feedback.
[00:18:36] Chris Hartgerink: Who knows maybe that could become more of a series where we can start thinking about, well, what are alternative forms for x. And so if you have ideas about spaces where you like to see alternatives, whether that''s alternatives to funding, alternatives to study, design or whatever.
[00:18:54] Chris Hartgerink: Let us know. Join our Signal group, leave a voice message. And who knows, maybe we''ll cover it . With that said, thank you for listening to this episode. If you would be so kind, please do leave a review on Apple Podcast if that''s your thing. Don''t forget to share the podcast with some of your colleagues or friends or your grandparents, who might not really want to listen to this.
","tags":["Open Update","Labs"],"language":"en","references":[],"blog_id":"h49ct36","blog":{"id":"h49ct36","title":"Liberate Science","description":"Research as a commons","language":"en","favicon":"https://libscie.org/favicon.png","feed_url":"https://libscie.org/rss/","home_page_url":"https://libscie.org/","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T14:08:03+00:00","indexed_at":"2023-01-04","feed_format":"application/rss+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.49","category":"Social Sciences","prefix":null,"modified_at":"2023-06-13T06:48:37+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"https://doi.org/10.59350/mce1w-ec638","uuid":"a8a5b3d7-5570-473c-a117-ab4736bb6f7c","url":"https://www.ideasurg.pub/why-surgery-needs-ideas/","title":"Why Surgery Needs I.D.E.A.S.","summary":"I am by no means an expert on the future of academic publishing. But I''d like to join the conversation.","date_published":"2022-12-19T05:42:45Z","date_modified":null,"date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Tejas S. Sathe, MD"}],"image":null,"content_html":"Imagine that you want to read a paper. You probably either search the subject area of interest on PubMed or Google Scholar. Perhaps you were directed to the paper from an interesting Twitter thread or other social media site. Or maybe you were assigned the paper for a journal club or weekly educational activity.
If you''re trying to read the paper from within your institution''s network, the process is usually seamless. However, if you''re trying to read from your house or on your phone, you will typically have to use a virtual private network (VPN) or undergo some institutional log-in procedure.
Finally, you''re ready to read the paper. And almost always we instinctively hit the ''Download PDF'' button. I continue to do this even though I haven''t owned a printer in years. Maybe the PDF makes it easier to share the paper with colleagues, drop it into a reference manager, or maintain access to it without repeatedly accessing the paywall.
In some sense, PDFs are a paper relic in an otherwise digital age. The text does not easily scale on different screen sizes. The fonts and colors are chosen by the publisher, not the user. There is no way to read the article in dark mode or utilize any other accessibility features.
Moreover, tables and figures are often presented as static images, rarely taking advantage of dynamic features available on modern web interfaces. In addition, critical data is either buried in a supplementary section or omitted altogether. Common examples of this include code used to run statistical analysis or the survey instrument used in survey studies. While maintaining supplemental information separately made sense at a time when publishers were constrained by page numbers and the marginal cost of printing another page, this is no longer true in digital formats.
In my experience, the user experience falls short of the joy of reading excellent journalism, following an academic conversation on Twitter, or listening to a podcast about a current hot topic. And making papers easier to read is only half the battle.
There are also a lot of pain points in writing papers. After you finish writing the manuscript, you have to submit it to a journal. Submission portals are usually made with outdated technology and submitting is a tedious experience. Once the process is complete, it can take several weeks before a decision is reached.
The peer review process is typically opaque. For most papers, readers do not know who the reviewers were, what comments they had, or what changes resulted from those comments. Furthermore, there is more uncertainty today about whether the institution of peer review is achieving its stated mission of ensuring that only high quality research gets published. Adam Mastroianni writes about this in his Substack blog Experimental History.
Once an article is published, there is no easy way to change the article (unless it has to be redacted). Rather than being a dynamic node in a scientific conversation around a topic that can be modified or refined as more data becomes available, it is a static post.
Finally, the financial structures around academic publishing are very different from any other mode of content creation. Neither the scientists who create the work nor the reviewers who screen it get paid for their efforts. Some say that introducing financial motives into academic publication would reduce the quality and integrity of the science. This point of view would be valid if research was truly a not-for-profit enterprise, but this is far from reality.
Papers cost money to read. Most often, publishing companies charge high fees to institutions for the rights to their content. While they provide a standard format and online and paper versions of these articles (with limitations as discussed above), they do not pay for the science they are publishing. Much of the research they publish is funded by governments (and by extension taxpayers) and the peer review they rely on to maintain research quality is also done on a volunteer basis. While an argument can be made that the value added by the publisher is in standardizing formatting, this can be achieved at zero cost with modern digital publishing tools.
The journals on the other hand are far from altruistic. The total market for academic publishing was 29 billion dollars in 2019. Moreover, the top publishing companies had higher profit margins than Apple, Google, and Facebook. In a report titled Against Parasite Publishers, the Basic Research Community for Physics reviews the current state of publishing and providers recommendations for promoting open access content.
They perform a detailed financial analysis of revenues and profit margins of leading publishers. Some of the data has been reproduced in the graphs below.
More and more journals are offering an ''Open Access'' publication model. In this model, articles are free to read. However, the lost subscription costs are simply transferred to the author in the form of article processing charges (APC) which can be prohibitively expensive for new investigators without established funding. I searched for the top surgery related journals on Google Scholar and obtained APC data for the top 20 surgery journals by h-index. Data is shown in the following table.
In addition, some scientists, especially in fields such as physics, directly publish their content on free pre-print servers such as arXiv even before they are reviewed. Pre-print services such as arXiv and biorXiv and medrXiv (the versions for biology and medical research, respectively) have also advanced open science. However, they still utilize PDFs as the basic format in which academic papers are consumed.
Sebastian Mayoni writes the following summarizing this scheme in a piece written for the University Post.
First, the university pays (often mostly supported by government) its academic staff to conduct research and write articles. Then the researchers send the article to a journal, which in some cases even demands fees to publish the article – so-called Article Processing Charges (APC). In some prestigious cases, the price for being published in, say, Nature costs USD 10,000 an article. If you want to be published in The Lancet Global Health, it may cost up to USD 5,000. Then the researcher often loses copyright to their articles, which the publisher subsequently can make money off in their subscription scheme.
Why do we do this? Why do we willingly give up copyrights to our self-generated content so that a third party can profit? It is because we have invented a prestige economy on which we base our academic capital. Number of publications as well as the impact factor of the associated journals is a key component of evaluation for medical school, residency, faculty positions, and academic promotion. I do not mean to say this system has no value. It has created an incredible amount of scientific and medical advancements that have saved lives. Rather, since we made this up ourselves, we also have the power to change it for the better.
Today, I am launching a new platform called Innovation and Design Experiments in Academic Surgery – or I.D.E.A.S. for short.
The designer Maggie Appleton writes about the concept of a Digital Garden. I think this is a compelling model for academic publishing. Instead of projects ending once you hit the submit button, academic media can take the form of living repositories of knowledge that grow over time. When applied to an entire field such as academic surgery, the digital garden can represent a collective spirit of building knowledge in public. I.D.E.A.S. can be thought of as part blog, part garden, part prototype of what a journal of the future could look like. I''m not exactly sure where this project is going to go - if anywhere. But it seems to be an exciting space that is ripe for innovation. And I have some free time.
As a current Surgical Innovation fellow, I will use this forum to maintain drafts of papers I am working on or that have been submitted awaiting peer review. I will provide my own ideas and commentary on technology, innovation, and design in surgery. I will also use it as a forum to discuss small scale projects and experiments (including failed experiments) that may not be easy to publish in traditional journals.
I welcome anyone who wants to contribute as well. The Ghost platform on which this blog is written allows for multiple authors to contribute to a publication. Initially, the only content moderation will be to make sure none of the content is offensive.
My vision is to create a robust, modern interface for sharing ideas, experiments, and data that is relevant to academic surgery - a platform to make good quality science accessible and fun.
There''s a lot that hasn''t been figured out. Here is our roadmap –
First, we need to build a system for reference management. The current standard of parenthetical citation or footnotes linking to a reference list at the end of the page is also a relic of print publishing. Modern web interfaces allow for hyperlinking and dynamic previews (as seen earlier in this article). At the same time, too many previews would increase clutter and reduce the readability of the paper. We will need to creatively design a reference system for modern web-first journals.
Second, we will need to figure out how to handle peer review. Since each article allows comments, we could utilize the commenting feature as a prototype and build from there. In addition, it would be ideal to have some sort of version history so that readers can understand how peer review influenced changes in the manuscript. Furthermore, readers can see how new data or knowledge from the author changed the paper as well.
Third, we will need to find a way to assign a Digital Object Identifier (DOI) to each article published. The DOI is a unique and permanent alphanumeric code that is attached to a manuscript (unlike website URLs which can change over time). DOIs are assigned by third party providers such as CrossRef for a fee. In my preliminary research, this seems to be the largest barrier preventing anyone from turning their personal blog into a journal. This article was manually assigned a DOI using the Zenodo service. To make a DOI entry, I uploaded a PDF containing a link to this article. Certainly, an an automated solution will be required to make this scalable. Individual articles can be published under a Creative Commons 4.0 licence which allows sharing and adapting original content with attribution.
Finally, we will have to figure out a way to create a sustainable future for this and similar publications. The Ghost platform charges 11 dollars per month. We currently do not have any plans for soliciting advertisements and will start out as a free publication. The vision of making this a true Diamond Open Access forum (no readership fee, no article processing charge) is attractive. Should we ever charge a subscription fee, it will be something accessible to every student and resident. Furthermore, if we go in the paid subscription route, then we can brainstorm a system in which the content creators share in the profits.
All in all, technology can improve the way that science is consumed, created, and financed. I would appreciate any ideas, feedback, or contributions. As Thomas Edison once said, \"There are no rules here – we’re trying to accomplish something.\"
https://doi.org/10.5281/zenodo.7500519
\n","tags":["Essay"],"language":"en","references":[],"blog_id":"3cxcm20","blog":{"id":"3cxcm20","title":"I.D.E.A.S.","description":"Innovation
and Design Experiments in Academic Surgery (I.D.E.A.S.)","language":"en","favicon":"https://www.ideasurg.pub/favicon.png","feed_url":"https://www.ideasurg.pub/rss/","home_page_url":"https://www.ideasurg.pub/","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T13:44:29+00:00","indexed_at":"2023-01-04","feed_format":"application/rss+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost
5.51","category":"Medical and Health Sciences","prefix":"10.59350","modified_at":"2023-06-03T04:54:30+00:00","version":"https://jsonfeed.org/version/1.1","backlog":false}},{"id":"https://doi.org/10.54900/bj4g7p2-2f0fn9b","uuid":"d0a6096a-bf83-4af5-bee5-5844ce85affc","url":"https://upstream.force11.org/rogue-scholar/","title":"The
Rogue Scholar: An Archive for Scholarly blogs","summary":"Digital object identifiers
(DOIs) and relevant metadata have been used for 20 years to help preserve
the scholarly record by maintaining stable links to scholarly publications
and other important scholarly resources, combined with long-term archiving
by publishers and libraries. Lots and tools and services have been built around
this infrastructure to make it easier for scholars to consume and contribute
to this scholarly record.Science Blogs have also been around for more than
20 years, but...","date_published":"2023-01-31T17:02:40Z","date_modified":"2023-01-31T17:02:40Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":"https://orcid.org/0000-0003-1419-2405","name":"Martin
Fenner"}],"image":"https://images.unsplash.com/photo-1528590005476-4f5a6f2bdd9e?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fHJlY29yZHxlbnwwfHx8fDE2NzUxNzg3Mjk&ixlib=rb-4.0.3&q=80&w=2000","content_html":"
Digital object identifiers (DOIs) and relevant metadata have been used for 20 years to help preserve the scholarly record by maintaining stable links to scholarly publications and other important scholarly resources, combined with long-term archiving by publishers and libraries. Lots and tools and services have been built around this infrastructure to make it easier for scholars to consume and contribute to this scholarly record.
Science Blogs have also been around for more than 20 years, but in all that time have not really become a formal part of the scholarly record. If you are old enough to remember them, you can think of science blogs as the compact cassette next to the single or LP – giving users an affordable alternative to buying a record, enabling listening to music on the go, and enabling creative remixing of content in that quintessential product of the 1980s and 1990s, the mixtape.
The strengths of science blogs are that they are easy and affordable to set up, allow experimentation in content and format (e.g. as a podcast), and are very fast in publishing content to your audience.
Science blog very nicely complement other emerging Open Science content types such as research data, software, computational notebooks, and preprints. They can highlight interesting research or important policy developments, help report from conferences, and can also work to publish primary research.
This is what I have started doing a few months ago starting work on the Rogue Scholar, an archive for science blogs that
The Rogue Scholar will use the Open Source repository software InvenioRDM (where I am contributing to the development) to achieve this, and will launch in the second quarter of 2023. Reach out to me via the Rogue Scholar website or email if you have questions or comments.
From the initial feedback and research I noticed particular interest from personal science blogs and from English-language blogs and unsurprisingly found that Wordpress is the most popular platform for science blogs. I also found a small number of science blogs (including the Upstream blog) that use DOIs, and a number of science blogging platforms such as Hypotheses, Occam''s Typewriter and Scilogs.de. And lots and lots of interesting content that deserves to be made easier to discover and preserved.
","tags":["News"],"language":"en","references":[],"blog_id":"pm0p222","blog":{"id":"pm0p222","title":"Upstream","description":"The community blog for all things Open Research.","language":"en","favicon":"https://upstream.force11.org/favicon.png","feed_url":"https://upstream.force11.org/atom/","home_page_url":"https://upstream.force11.org","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:23:49+00:00","indexed_at":"2023-01-13","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.25","category":"Humanities","prefix":"10.54900","modified_at":"2023-06-06T08:00:49+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"https://citationstyles.org/2011/03/18/csl-1-0-first-anniversary","uuid":"031faba3-3a6e-49d1-a540-26523be2fd09","url":"https://citationstyles.org/2011/03/18/csl-1-0-first-anniversary/","title":"CSL 1.0 First Anniversary","summary":"It’s been a year since version 1.0 of the Citation Style Language was released, but now is as an exciting time as ever. Zotero 2.1, Mendeley 0.9.8 and Papers2 now all use CSL 1.0 for citation formatting (Zotero and Mendeley both use citeproc-js, the open source JavaScript CSL 1.0 processor by Frank G. Bennett, Jr.). The document converter Pandoc now supports citeproc-hs 0.3.2, an open source Haskell CSL 1.0 processor, and work is ongoing for CSL 1.0 processors written in PHP...","date_published":"2011-03-18T00:00:00Z","date_modified":"2011-03-18T00:00:00Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Rintze M. Zelle"}],"image":null,"content_html":"It’s been a year since version 1.0 of the Citation Style Language was released, but now is as an exciting time as ever.\nZotero 2.1, Mendeley 0.9.8 and Papers2 now all use CSL 1.0 for citation formatting (Zotero and Mendeley both use citeproc-js, the open source JavaScript CSL 1.0 processor by Frank G. Bennett, Jr.).\nThe document converter Pandoc now supports citeproc-hs 0.3.2, an open source Haskell CSL 1.0 processor, and work is ongoing for CSL 1.0 processors written in PHP (https://bitbucket.org/rjerome/citeproc-php) and Ruby (https://github.com/inukshuk/citeproc-ruby).\nFinally, Zotero developers created citeproc-node, a NodeJS wrapper for citeproc-js, for easy server-side deployment.
\n\nWith so many parties on board, the main challenge for the next few months will be the creation of a CSL 1.0 style repository, similar to the existing Zotero Style Repository for CSL 0.8.1 styles.\nPreferably, this repository should use a decentralized version control system like Git to handle the distribution and contribution of CSL styles, and have a front end for users to browse, preview and install styles.\nInterested parties are strongly invited to participate in the xbiblio mailing list to discuss and help out with these future improvements.
","tags":[],"language":"en","references":[],"blog_id":"prmb582","blog":{"id":"prmb582","title":"Citation Style Language","description":"CitationStyles.org, home of the Citation Style Language (CSL), a popular open XML-based language to describe the formatting of citations and bibliographies.","language":"en","favicon":null,"feed_url":"https://citationstyles.org/feed.xml","home_page_url":"https://citationstyles.org/","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T08:18:29+00:00","indexed_at":"2023-04-25","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Jekyll 3.9.3","category":"Engineering and Technology","prefix":"10.59350","modified_at":"2020-07-11T00:00:00+00:00","version":"https://jsonfeed.org/version/1.1","backlog":false}},{"id":"https://markrubin.substack.com/p/the-preregistration-prescriptiveness","uuid":"2b105b29-acbc-4eae-9ff1-368803f36a4d","url":"https://markrubin.substack.com/p/the-preregistration-prescriptiveness","title":"The Preregistration Prescriptiveness Trade-Off and Unknown Unknowns in Science","summary":"Comments on Van Drimmelen (2023)","date_published":"2023-06-06T21:22:00Z","date_modified":null,"date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Mark Rubin"}],"image":"https://substack-post-media.s3.amazonaws.com/public/images/195477de-91ce-4a76-bae8-b6334fb5bcc7_500x379.jpeg","content_html":"I discuss Van Drimmelen’s (2023) Metascience2023 presentation on researchers’ decision making during the research process. In particular, I consider his evidence that researchers’ discretion over research decisions is unavoidable when they follow research plans that are either overdetermined (i.e., too prescriptive) or underdetermined (i.e., too vague). I argue that this evidence points to a prescriptiveness trade-off when writing preregistered plans: All other things being equal, plans that are more prescriptive are more likely to result in deviations that turn their confirmatory tests into exploratory tests, and plans that are less prescriptive are more likely to result in confirmatory tests that are susceptible to questionable research practices. I also consider Van Drimmelen’s idea that researchers may make unconscious, implicit decisions during the research process. I relate these implicit decisions to Rumsfeld’s (2002) concept of unknown unknowns: “the things we don’t know we don’t know”! I argue that scientists can report their known knowns (what they know they did and found), and they can be transparent and speculative about their known unknowns (what they know they didn’t do and may find), but that they can’t say much about their unknown unknowns (including their unconscious, implicit decisions) because, by definition, they don’t know what they are! Nonetheless, I think that it’s important to acknowledge unknown unknowns in science because doing so helps to contextualise research efforts as being highly tentative and fallible.
The videos of the presentations at the Metascience2023 conference were recently made available here. One that caught my eye is by Tom van Drimmelen, titled “Researchers’ decision making: Navigating ambiguity in research practice”:
Here, I summarise Van Drimmelen’s excellent work and share my thoughts on his ideas.
Van Drimmelen (2023) explained that, during the research process, researchers have discretion over numerous decisions that they make about their methodology and analyses. During the data analysis stage, this researcher discretion may be perceived negatively as researcher degrees of freedom that is susceptible to researcher bias and questionable research practices (John et al., 2012; Simmons et al., 2011).
In theory, it is possible to reduce and/or reveal researcher discretion during data analyses by preregistering one or more methodological and analytical research paths. However, Van Drimmelen (2023) and colleagues were interested in how researcher discretion operates in practice. To investigate, they conducted an ethnographic study in which they observed researcher discretion in two different research groups. They found that researcher discretion emerged when researchers used two categories of research plan.
1. Underdetermined plans: Researcher discretion emerged when research plans were not precise enough to determine specific actions for the researchers. In this case, researchers needed to use their own discretion to fill in the gaps of the research plan and figure out how to proceed. Researchers often find themselves in this situation because preregistered research plans are often vague (e.g., Bakker et al., 2020; Heirene et al., 2021; Van den Akker et al., 2023).
2. Overdetermined plans: Researcher discretion also emerged when there were unforeseen changes in the research situation that made the original research plan undesirable or impossible, necessitating an alternative research approach. Again, this situation is common, because most researchers do not possess “godlike planning abilities” (Navarro, 2020). To attempt to address this issue, researchers can preregister “if…then” contingencies (i.e., decision trees) that accommodate anticipated changes in the research situation (Nosek et al., 2018, p. 2602). In addition, it may be possible to update a preregistered analysis plan prior to the data analysis in order to take account of unanticipated events (Nosek et al., 2019, p. 817). However, it is often the case that unanticipated events arise during data analyses. As Reinhart (2015, p. 95) explained (paraphrasing Helmuth von Moltke), “no analysis plan survives contact with the data.” In these cases, researchers either need to stick to an inadequate preregistered analysis plan or deviate and adapt their approach to address the current situation. The evidence shows that researchers often deviate from their preregistered plans (e.g., Abrams et al., 2020; Claesen et al., 2021; Heirene et al., 2021).
Van Drimmelen’s (2023) work shows us that researchers are forced to either deviate from overdetermined plans to overcome unforeseen events or fill in the gaps of vague, underdetermined plans. Hence, researcher discretion emerges in the case of both over- and under-determined plans. As Van Drimmelen (2023) put it, “researcher discretion is an integral and unavoidable part of the research practice.”
I think Van Drimmelen’s (2023) evidence also points to a trade-off between the prescriptiveness of preregistered research plans and the feasibility of their implementation in an unpredictable world. As preregistered plans become more precise and prescriptive, there is a greater chance that researchers will need to deviate from them in order to accommodate unforeseen and/or uncontrolled events that occur during the implementation of the plan. For example, researchers are more likely to deviate from a plan to test “300 participants” than they are to deviate from a less prescriptive plan to test “around 300 participants,” because uncontrolled events (e.g., the number of participant exclusions during data analysis) may result in a final sample size that is only close to 300 participants and not exactly 300 participants (e.g., Claesen et al., 2021, pp. 6-7). Consequently, as preregistered plans become more prescriptive, they are more likely to result in deviations that turn their planned confirmatory tests into unplanned exploratory tests.1 Of course, preregistration continues to distinguish between confirmatory and exploratory tests in this situation, and it prevents exploratory tests from being falsely portrayed as confirmatory tests. Nonetheless, it remains the case that, as plans for confirmatory tests become more prescriptive, the actual implementation of these confirmatory tests in the real world becomes less likely.
On the other hand, as preregistered plans become vaguer and less prescriptive, confirmatory hypothesis tests will become not only more feasible to achieve without deviation, but also more susceptible to researcher degrees of freedom and questionable research practices. For example, it is more feasible to test “around 300 participants” than to test exactly “300 participants.” However, the first sampling plan does not prevent the questionable research practice of optional stopping (John et al., 2021; Simmons et al., 2011), whereas the second, more prescriptive plan does.
In summary, researchers need to consider a prescriptiveness trade-off when writing their preregistered plans: All other things being equal, plans that are more prescriptive are more likely to result in deviations that turn their confirmatory tests into exploratory tests, and plans that are less prescriptive are more likely to result in confirmatory tests that are susceptible to questionable research practices. Hence, researchers face a difficult choice between prescriptive confirmatory tests that they are less likely to carry out in practice and vague confirmatory tests that are more prone to questionable research practices.
Van Drimmelen and colleagues (2023) also identified another interesting issue in their work. At times, they found it difficult to identify researcher discretion because there were multiple potential methodological or analytical approaches available that researchers did not consider. In other words, sometimes researchers were not aware that they were making decisions!
This idea of unconscious and implicit research decisions reminds me of something that Donald Rumsfeld, the United States Secretary of Defence, said at a press briefing in 2002:
There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. (Wikipedia, 2023)
In the context of Figure 1, our known knowns are the path or paths that we reported following in our research (e.g., the green paths). In these cases, we know what we did, and we know what we found as a result. In contrast, our known unknowns are the paths that we actively decided not to follow (e.g., all the other paths in Figure 1). We know these paths exist, but we don’t yet know what results lie at the end of them.
But what about our unknown unknowns? These paths don’t appear in Figure 1! They relate to decisions that we didn’t even consider, at least not at a conscious level. For example, we may not have considered the time of day of testing, the ambient temperature, or a host of other unspecified variables that may be influential on our results. Instead, we unconsciously relegated these unknown unknowns to a ceteris paribus clause (i.e., “all other things being equal”), which assumes that no other influential factors are at play (Meehl, 1990). The problem is that our unconscious and implicit decision to assume the irrelevance of these factors may be wrong! Indeed, the history of science is full of cases in which factors that were not initially considered to be important subsequently turned out to be crucial moderators or boundary conditions of effects (for some examples, see Firestein, 2016).
Van Drimmelen (2023) concluded that “we might not be able to be effectively transparent because we don’t know which decisions we’ve actually made” (my emphasis). In other words, our lack of awareness about our potentially important implicit decisions means that we can’t be transparent about them: We can report our known knowns, and we can be transparent and speculative about our known unknowns, but we can’t say much about our unknown unknowns because we don’t know what they are! By definition, as soon as we start to speculate on a potentially influential factor in our research, it becomes a known unknown about which we need to make conscious decisions!
Despite our inability to specify our unknown unknowns, Van Drimmelen’s (2023) work reminds us that it’s important to acknowledge this type of ignorance in science because it helps to contextualise our research efforts as being highly tentative and fallible steps in a collective process of discovery and theory development. Acknowledging our unknown unknowns also helps to address recent calls for greater intellectual humility and modesty in science (Hoekstra & Vazire, 2021; Ramsey, 2021).
I’ll end with a nice example of scientists acknowledging unknown unknowns. Here’s Errington et al. (2021, p. 12) discussing the challenges of assessing replicability in preclinical cancer biology:
It might also be that, in some cases, a failure to replicate was caused by the replication team deviating from the protocol in some way that was not recognized, or that a key part of the procedure was left out of the protocol inadvertently. It is also possible that the effect reported in the original paper depended on methodological factors that were not identified by original authors, the replication team, or any other experts involved in the peer review of the original paper or the Registered Report.
Abrams, E., Libgober, J., & List, J. A. (2020). Research registries: Facts, myths, and possible improvements (No. w27250). National Bureau of Economic Research. https://doi.org/10.3386/w27250
Bakker, M., Veldkamp, C. L., van Assen, M. A., Crompvoets, E. A., Ong, H. H., Nosek, B. A.,…& Wicherts, J. M. (2020). Ensuring the quality and specificity of preregistrations. PLoS Biology, 18(12), e3000937. https://doi.org/10.1371/journal.pbio.3000937
Claesen, A., Gomes, S., Tuerlinckx, F., Vanpaemel, W., & Leuven, K. U. (2021). Comparing dream to reality: An assessment of adherence of the first generation of preregistered studies. Royal Society Open Science, 8(10), 1–11. https://doi.org/10.1098/rsos.211037
Errington, T. M., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Reproducibility in cancer biology: Challenges for assessing replicability in preclinical cancer biology. Elife, 10, Article e67995. https://doi.org/10.7554/eLife.67995
Fife, D., & Rodgers, J. L. (2019). Moving beyond the \"replication crisis\": Understanding the exploratory/confirmatory data analysis continuum. PsyArXiv. https://doi.org/10.31234/osf.io/5vfq6
Firestein, S. (2016, February 14). Why failure to replicate findings can actually be good for science. LA Times. https://www.latimes.com/opinion/op-ed/la-oe-0214-firestein-science-replication-failure-20160214-story.html
Heirene, R., LaPlante, D., Louderback, E. R., Keen, B., Bakker, M., Serafimovska, A., & Gainsbury, S. M. (2021). Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison. PsyArXiv. https://psyarxiv.com/nj4es/
Hoekstra, R., & Vazire, S. (2021). Aspiring to greater intellectual humility in science. Nature Human Behaviour, 5(12), 1602-1607. https://doi.org/10.1038/s41562-021-01203-8
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1(2), 108-141. https://doi.org/10.1207/s15327965pli0102_1
Navarro, D. (2020, September 23). Paths in strange spaces: A comment on preregistration. PsycArXiv. https://doi.org/10.31234/osf.io/wxn58
Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., van ‘t Veer, A. E., & Vazire, S. (2019). Preregistration is hard, and worthwhile. Trends in Cognitive Sciences, 23, 815–818. https://doi.org/10.1016/j.tics.2019.07.009
Ramsey, R. (2021). A call for greater modesty in psychology and cognitive neuroscience. Collabra: Psychology, 7(1), 24091. https://doi.org/10.1525/collabra.24091
Reinhart, A. (2015). Statistics done wrong: The woefully complete guide. No Starch Press.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177 /0956797611417632
Van den Akker, O., Bakker, M., van Assen, M. A. L. M., Pennington, C. R., Verweij, L., Elsherif, M. M.,…Wicherts, J. M. (2023, May 10). The effectiveness of preregistration in psychology: Assessing preregistration strictness and preregistration-study consistency. PsyArXiv. https://doi.org/10.31222/osf.io/h8xjw
Van Drimmelen, T., Slagboom, N., Reis, R., Bouter, L., Van der Steen, J. (2023, May 9). Researchers’ decision making: Navigating ambiguity in research practice. Metascience2023, Washington, DC. Abstract: https://metascience.info/events/researchers-decision-making-navigating-ambiguity-in-research-practice/ Preregistration: https://osf.io/tqwgp/
Wikipedia. (2023, March 16). There are unknown unknowns. Wikipedia. https://en.wikipedia.org/wiki/There_are_unknown_unknowns
1. Here, I define confirmatory tests as planned tests, and exploratory tests as unplanned tests. This conceptualization is consistent with the frequentist rationale for preregistration, according to which unplanned tests undermine assumptions regarding Type I error rates (Nosek et al., 2019, p. 816). I acknowledge that there are other definitions for “confirmatory” and “exploratory”, including definitions that view them as existing at opposite ends of a continuum rather than as discrete categories (e.g., Fife & Rodgers, 2019). According to this continuum perspective, larger deviations from preregistered plans result in “more exploratory” tests than smaller deviations. I don’t subscribe to this continuum perspective, because it is unclear how to formally assess some deviations as being “larger” or “smaller” than others. I agree that research studies can be described as being more confirmatory or exploratory, because they can contain a greater proportion of planned or unplanned tests. However, in my view, the tests themselves can only be either confirmatory or exploratory, because they can only be either planned or unplanned.
I’m grateful to Tom van Drimmelen for his comments on an earlier version of this article.
You can download a PDF version of this article here.
The reference for this article is:
Rubin, M. (2023, June 7). The preregistration prescriptiveness trade-off and unknown unknowns in science: Comments on Van Drimmelen (2023). Critical Metascience: MetaArXiv. https://doi.org/10.31222/osf.io/3t7pc
Medical students applying to residency often have to grapple with an incredible amount of information when evaluating prospective training programs. The increased number of applications and interviews inherent to the Zoom era further exacerbates this issue. At the same time, programs need to find ways to communicate a great deal of information to applicants. Due to the proliferation of social media, this communication is both more extensive and more longitudinal than a one hour presentation on interview day.
The traditional interview day presentation is a slide presentation that provides several factual details about a program. While each program is slightly different, most of these presentations cover common ground - including where residents rotate, what services they do, where alumni match into fellowship, examination pass rates, and information on professional development and research activities. These presentations can often be long (I once helped edit a slide deck that had 60 slides in it!) and difficult to recall. Furthermore, updating these slides from year to year is a time consuming challenge. Finally, most of these presentations are done in a traditional bullet point format with few slides reserved for photos. Few are designed in a way that optimizes user experience.
In my opinion, the most important quality of the program is how well a prospective applicant fits in with the resident culture. Though it is impossible to put into numbers, it has an outsized impact on a resident''s experience and wellness. Furthermore, it is even harder to intuit in the absence of an in-person interview experience. Nevertheless, there are important quantitative metrics that applicants consider and programs want to showcase. However, there is not yet a standardized, accessible, and visually appealing format to do so. Given the challenges in communicating these details as well as the growing power of social media , I wanted to design a new way that programs can effectively and efficiently display and disseminate this information so that applicants can be more informed and programs can better utilize their interview days to convey the critical intangibles.
In the distinct but not quite distant field of surgical research, the visual abstract has already proven to be an excellent means of distilling a research study filled with details into an easy-to-digest and visually pleasing format. Developed by Dr. Andrew Ibrahim and colleagues, the visual abstract has already been adopted by 50 journals, and a study by the creators demonstrated that the presence of a visual abstract can more than double article visits and increase impressions more than seven-fold. An excellent primer discussing how to develop a visual abstract and highlighting its use in academic journals is available here.
A variant of the visual abstract has also been used by residency programs on social media in the past. An example from the Duke Surgery Residency program is shown here:
Inspired by this format, I tried to design a version tailored towards prospective surgery trainees. I wanted to see if I could boil down the complexity of a program into the fewest pieces of data possible and present them in a user-friendly interface.
The abstract contains six different categories: demographics, operative volume, education, research, fellowships, and other. The goal is to capture relevant information regarding the tripartite mission of academic surgery as well as understand a program''s strengths in diversity and inclusion and career preparation. For each category, discrete data points were chosen which can easily allow applicants to contextualize a program''s strengths and weaknesses. The other category contains a checklist of features that applicants seem to care about (based on observations and conversations though certainly not proven scientifically). Finally, there is a space for QR codes to link to the program website as well as a copy of the contract that matched applicants will need to sign (as required by the National Resident Matching Program).
Of course, the tool comes with a dark mode!
This visual abstract can provide a semi-standardized format for programs to share important information with applicants allowing them to compare important metrics in a comprehensive, “easy” way. It is important to note that each of these data points is meant to be an objective measure without any comment on its intrinsic value. For example, one program may have a significantly higher case volume than peer institutions but less protected education time. Another may have a robotic curriculum but not have any recent chiefs who matched into Pediatric Surgery. None of these are independently good or bad, but applicants should have an easy way to understand the factors so that they can make the most informed decision. If a significant number of programs adopt the visual abstract, it will naturally create incentives for programs to share and improve on these metrics with transparency. In this manner, healthy competition can actually generate an incentive to improve program offerings for current residents as well (though this can have unintended consequences…see limitations).
Some may question the wisdom of putting these numbers out in the open - especially if they are not where a program wants them to be. However, some of the information is publicly available. Many of these data points are already reported on during the interview process. Moreover, most applicants want to know this information and value transparency. Ultimately, no program is perfect, and how a program chooses to address perceived strengths and weaknesses reflects its core values.
No visual abstract, slide deck, or document will be able to capture the full picture of a residency program - which at its core is a collection of human beings. It is worth repeating that the most important consideration students should make when choosing a program is missing from this work. Furthermore, the exact data points used may not be correct. At one point, I considered putting a box for ABSITE pass rate, however I thought this was antithetical to the ABSITE''s stated aim of being a formative, not a summative, assessment. Furthermore, there is no box specifying the number of LGBTQ residents in a program. While this is an essential component of diversity, equity, and inclusion, it was omitted due to the fact that some of these residents may not be publicly out and this would lead to an inaccurate representation. Another missing box is the number of residents with children, given that family support is an increasingly recognized feature that students are looking for in training programs.
I would like for this to be even less cluttered - and user feedback can drive further reduction of content such that the distilled final version is even more elegant. Currently, the vision for this project is that of a grassroots project, where individual programs can choose to use, improve, and modify this template to broadcast important information to applicants. However, I am wary of the general principle that once quantitative metrics are used by a central authority to rank, there is a natural tendency to gamify those metrics whether or not the underlying feature they seek to estimate is actually affected.
In short, whatever you would like. The template was built in Figma, which has become the industry-standard in visual design. It is free to make a basic Figma account, and the full suite of features is also available for free with proof of affiliation with a university. Figma offers the ability to comment on a template as well as duplicate or \"remix\" the template for personal use. I would appreciate your feedback on how to make this template better. Moreover, I encourage you to take it, modify it, improve it, and share your changes publicly. Together, we can continue to iteratively design this initial prototype and develop a robust, user-friendly tool that provides values to students, residents, and programs alike.
https://www.figma.com/community/file/1226056013352001653
","tags":[],"language":"en","references":[],"blog_id":"3cxcm20","blog":{"id":"3cxcm20","title":"I.D.E.A.S.","description":"Innovation and Design Experiments in Academic Surgery (I.D.E.A.S.)","language":"en","favicon":"https://www.ideasurg.pub/favicon.png","feed_url":"https://www.ideasurg.pub/rss/","home_page_url":"https://www.ideasurg.pub/","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T13:44:29+00:00","indexed_at":"2023-01-04","feed_format":"application/rss+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.51","category":"Medical and Health Sciences","prefix":"10.59350","modified_at":"2023-06-03T04:54:30+00:00","version":"https://jsonfeed.org/version/1.1","backlog":false}},{"id":"http://ulirockenbauch.blog/?p=115","uuid":"84651758-f820-4e18-ae5f-4483ff4f4e92","url":"https://ulirockenbauch.blog/2022/08/29/strategie-mythen/","title":"„Müller, wir brauchen mal ’ne Strategie!“","summary":"Kein Begriff wird im Wissenschaftsmanagement so breitgetreten wie das Wort „Strategie“. Jede Institution hat eine Strategie für irgendwas. Täglich werden neue Papiere aufgestellt. Dabei ist es durchaus sinnvoll, sich Strategien zu überlegen – es kursiert nur leider auch viel Quatsch zu diesem Thema. Hier eine Zusammenstellung der fünf populärsten Mythen: Mythos 1: „Strategisch“ ist das Gegenteil von „operativ“. Vor ein paar Jahren nahm ich auf der Jahrestagung des Netzwerks...","date_published":"2022-08-29T19:03:40Z","date_modified":"2022-08-29T19:03:40Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Uli Rockenbauch"}],"image":null,"content_html":"\nKein Begriff wird im Wissenschaftsmanagement so breitgetreten wie das Wort „Strategie“. Jede Institution hat eine Strategie für irgendwas. Täglich werden neue Papiere aufgestellt. Dabei ist es durchaus sinnvoll, sich Strategien zu überlegen – es kursiert nur leider auch viel Quatsch zu diesem Thema. Hier eine Zusammenstellung der fünf populärsten Mythen:
\n\n\n\nMythos 1: „Strategisch“ ist das Gegenteil von „operativ“.
\n\n\n\nVor ein paar Jahren nahm ich auf der Jahrestagung des Netzwerks Wissenschaftsmanagement an einem Workshop teil. „Wir sortieren uns jetzt mal im Raum“, rief die Moderatorin eifrig. „Alle, die eher operativ arbeiten, gehen mal in die linke Ecke, und alle, die eher strategisch arbeiten, in die rechte!“ Ich machte zwar brav mit, aber es wurde relativ schnell klar, dass die beiden Begriffe eigentlich keine Gegenpole sind.
\n\n\n\nWörtlich heißt „operativ“ ja nur „handelnd“. Die Personalabteilung handelt, indem sie Leute einstellt, die Pressestelle schreibt Pressemitteilungen, die Bauabteilung überwacht die Baustellen, usw… Gegenfrage: Gibt es auch Abteilungen, die nicht operativ tätig sind, also nicht selbst handeln? Man könnte jetzt sagen: Ja, nämlich die oberste Leitungsebene, weil sie hinter ihrem Schreibtisch sitzt und Anweisungen gibt. Aber das stimmt natürlich nur bedingt, denn auch ein Universitätskanzler priorisiert schließlich Bauvorhaben, genehmigt Entfristungen, verhandelt mit dem Finanzamt, usw… „Handeln“ tut hier also jeder.
\n\n\n\nWas aus dieser Logik vielleicht tatsächlich herausfällt, sind spezielle Strategieabteilungen – häufig Stabsstellen -, deren Aufgabe ausschließlich darin besteht, Konzepte für die Leitung zu schreiben. Wenn sie gegenüber anderen Verwaltungseinheiten weder weisungsbefugt sind noch ein eigenes Tagesgeschäft haben, kann man zu Recht sagen: Diese Abteilungen sind nicht operativ tätig, sondern rein konzeptionell.
\n\n\n\nMythos 2: Eine Strategie beinhaltet strategische Ziele.
\n\n\n\nOhne zum Korinthenkacker zu werden: Ziele und Strategien sind zwei verschiedene Dinge. Ein Ziel ist etwas, das man erreichen will. Eine Strategie wiederum ist ein Bündel von Maßnahmen, um das Ziel zu erreichen. Was hingegen ein „strategisches Ziel“ sein soll, weiß ich wirklich nicht. (Gibt es denn auch Ziele, die nicht strategisch sind?)
\n\n\n\nAnders bei der Henne und dem Ei ist hier die Reihenfolge übrigens genau festgelegt: Zuerst kommt das Ziel, und erst dann die Strategie. Andernfalls haben Sie eine Menge Geld ausgegeben und wissen am Ende nicht mal, ob Sie mit Ihren Maßnahmen erfolgreich waren. Wer auf Nummer sicher gehen will, kann allerdings auf zwei beliebten Varianten der Strategiebildung zurückgreifen:
\n\n\n\nMythos 3: Das Adjektiv „strategisch“ zeigt an, dass etwas auf Leitungsebene behandelt wird.
\n\n\n\nDas ist nicht richtig, und es wäre auch unfair gegenüber den Fachabteilungen. Strategisch zu arbeiten bedeutet ja nur, dass man seine Entscheidungen und Maßnahmen auf ein konkretes Ziel ausrichtet. (Im besten Fall trifft das auf alles zu, was eine Institution während der Bürozeiten tut.) Die Finanzabteilung kann das Ziel verfolgen, SB-Mittel abzubauen; die Drittmittelabteilung will den Wissenschaftler*innen zu mehr ERC Grants zu verhelfen, usw… Dabei ist es unerheblich, ob an der Tür „Strategische Strategieabteilung“ steht oder schlicht „Verwaltung“.
\n\n\n\nDas Adjektiv wird allerdings gerne benutzt, um Dinge sprachlich aufzuwerten. Aus „Personalmanagement“ wird dann „strategisches Personalmanagement“, und die Kommunikationsabteilung betreibt auf einmal „strategische Kommunikation“. Das macht viel her, ist aber inhaltslos. Provokativ könnte man fragen: Was habt ihr denn vorher gemacht? Zielloses Personalmanagement und ziellose Kommunikation?
\n\n\n\nMythos 4: Jede Institution braucht eine Gesamtstrategie.
\n\n\n\nKlingt erstmal gut. Aber warum eigentlich? An einer großen Institution gibt es schließlich ganz verschiedene Handlungsfelder: Forschung, IT, Personal, Finanzen… Kann man für diese Themen nicht jeweils separate Strategien festlegen, mit Zielen und Maßnahmen? Wenn man diese Frage in die Runde wirft, erhält man meistens Antworten voller schöner Buzzwords: „Das muss doch miteinander verzahnt werden.“ – „Mit einer Gesamtstrategie kann man noch Synergien heben.“ – usw…
\n\n\n\nDer Hauptgrund, weshalb Gesamtstrategien so populär sind, ist aber ein anderer: Jede Institution hat hochrangige Gremien, denen solche Papiere vorgelegt werden müssen – und die Mitglieder dieser Gremien sind vielbeschäftigt. Wenn sie bei der Anreise im Flieger nur eine Stunde Zeit zur Vorbereitung haben, wollen sie nicht durch fünf Dokumente blättern, sondern nur durch ein einziges. Und es möge bitte kurz und knackig sein.
\n\n\n\nMythos 5: Am weitesten kommt man mit einer eigenen Strategieabteilung.
\n\n\n\nPolitisch macht es natürlich Eindruck, wenn man eine solche Abteilung vorweisen kann. Auf kurze Sicht lautet die Rechnung: Je größer die Strategieabteilung, desto besser und effektiver die Strategie der Hochschule. Diese Rechnung kann natürlich aufgehen, allerdings gibt es auch Gründe, die gegen die Einrichtung einer rein konzeptionell arbeitenden Strategieabteilung sprechen können:
\n\n\n\nHäufig wird argumentiert: „Wir als Strategieabteilung schreiben doch niemandem etwas vor – wir führen nur die Stränge zusammen.“ Das ist natürlich legitim. Schaut man dann allerdings auf die Zahl der Mitarbeiter*innen, stellt sich die Frage: Braucht es für eine reine „Zusammenführung“ wirklich so viele Leute? Könnte das nicht eine gute Präsidiumsreferentin allein erledigen?
\n\n\n\nMein persönlicher Eindruck ist ja, dass der Begriff „Strategie“ ab 2005 einen sprunghaften Bedeutungszuwachs bekam, als sich plötzlich ganze Universitäten um Exzellenzcluster und Graduiertenkollegs bewerben mussten. Stabsstellen mit klangvollen Namen wie „Excellence and Strategy Division“ schossen aus dem Boden. Dort stellte man dann eine Vielzahl junger Hüpper ein, die häufig direkt aus der Promotion kamen und nun am laufenden Band Konzeptpapiere mit bunten Grafiken produzieren mussten. Insofern ist der Wildwuchs an Strategien vielleicht auch ein Nebenprodukt der Exzellenzinitiative.
\n\n\n\nPardon, ich verwende aus Gewohnheit manchmal noch den alten Begriff. Inzwischen heißt die Exzellenzinitiative natürlich nicht mehr Exzellenzinitiative. Sondern Exzellenzstrategie.
\n","tags":["Allgemein"],"language":"de","references":[],"blog_id":"468ap65","blog":{"id":"468ap65","title":"Behind the Science","description":"Ein Blog aus dem Admin-Bereich der Wissenschaft","language":"de","favicon":null,"feed_url":"https://ulirockenbauch.blog/feed/atom/","home_page_url":"https://ulirockenbauch.blog","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:28:16+00:00","indexed_at":"2023-04-21","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"WordPress (.com)","category":"Social Sciences","prefix":"10.59350","modified_at":"2023-04-13T17:48:41+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"https://doi.org/10.59350/6g1tb-4hc20","uuid":"e4872b3e-cd15-407d-8406-33642c1a98b0","url":"https://irishplants.org/blog/2022/12/26/why-care-about-grasslands/","title":"Why care about grasslands?","summary":"As someone who has been deeply interested by plants as long as I can remember, I have grown to appreciate all the different habitats I come across in Ireland. Each has its own unique set of species, but one that has always particularly caught my attention is our grasslands. Part of this is probably because my family have been managing our small field for biodiversity, and seeing plants, animals, and fungi flourish there has massively influenced my interest in ecology. But there’s also...","date_published":"2022-12-26T00:10:05Z","date_modified":"2022-12-26T17:12:30Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Jake Dalzell"}],"image":null,"content_html":"\nAs someone who has been deeply interested by plants as long as I can remember, I have grown to appreciate all the different habitats I come across in Ireland. Each has its own unique set of species, but one that has always particularly caught my attention is our grasslands. Part of this is probably because my family have been managing our small field for biodiversity, and seeing plants, animals, and fungi flourish there has massively influenced my interest in ecology. But there’s also something special about grasslands.
\n\n\n\n\n\n\n\nFor one, species-rich grassland is rare – most has been destroyed through fertiliser input or reseeding. Finding even a small area feels like stumbling across a lost world, especially in the intensely farmed northern part of Down, on the edge of East Belfast where I’ve grown up. Grasslands can be incredibly diverse, with ten or twenty plant species in a single square metre (ten species per square metre is generally the benchmark for species-rich grassland), from orchids to grasses, and from dainty annuals like Eyebright to beefy perennials like Devil’s Bit.
\n\n\n\n\n\n\n\nGrasslands also harbour a huge diversity of arthropod species: butterflies and moths, bees and wasps, flies, beetles, spiders, harvestmen and many more. In our field we have resident Small Heath (Coenonympha pamphilus) and Cryptic Wood White (Leptidea juvernica) butterflies among others. These species are uncommon and rapidly declining, and it’s only in the small pockets of remaining habitat that they can still be seen. Protecting these species’ habitat means they will continue to be seen every year and not go the way of the Wall Brown (Lasiommata megera), which is now so rare it has been seen once in Down in the last five years. Habitat loss and nitrogen pollution from run-off and the air now threaten once-common species with extinction.
\n\n\n\n\n\n\n\nSo what can we do to protect grasslands? Firstly, we need to know what a healthy species-rich grassland needs:
\n\n\n\nTo protect our grasslands we need to maintain and improve management at species-rich sites, reduce nitrate pollution, and create a healthy landscape by stopping the destruction of grasslands as well as creating new ones. In Kent, Nature Recovery Networks aim to create extensive networks of species-rich grassland to link together existing sites. The team working personally with farmers has led to great success, with species like the Black-veined Moth (Siona lineata) benefiting. This kind of habitat restoration could be replicated in Down, but what would it require? Restored sites would have reduced yields, so farmers could be compensated for participating, and subsidies for species-rich habitats would be essential. Communicating the importance of these sites would be important; engaging with schools by arranging trips and linking with biology and geography curricula would help. Site managers from different organisations would have to work together over a large area, requiring networks and cooperation that doesn’t currently exist. These things aren’t an issue, but an opportunity to improve how conservation operates.
\n\n\n\nAs someone who loves the natural world, I have no choice but to care about grasslands. I can’t lack ambition for local conservation – we need landscape-scale restoration, and sooner rather than later, before fragmentation and habitat-loss degrades what’s left. It won’t be easy, but anything less amounts to giving up. If we can imagine a network of semi-natural grasslands in our intensively agricultural landscapes, maybe we can work towards a more biodiverse future.
\n","tags":["Ecology","Conservation","Plants","Writing"],"language":"en","references":[],"blog_id":"2bzkh64","blog":{"id":"2bzkh64","title":"Irish Plants","description":"Jake's blog","language":"en","favicon":"https://irishplants.org/blog/wp-content/uploads/2022/08/favicon32.png","feed_url":"http://irishplants.org/blog/feed/atom/","home_page_url":"https://irishplants.org/blog","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T13:47:52+00:00","indexed_at":"2023-01-04","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"WordPress 6.2.2","category":"Natural Sciences","prefix":"10.59350","modified_at":"2023-05-07T15:22:05+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"http://ulirockenbauch.blog/?p=1133","uuid":"b96cd758-408a-4c48-96d7-cfdee5743b50","url":"https://ulirockenbauch.blog/2022/10/04/freiheit-finanzen-frauen/","title":"Freiheit und Finanzen vor Frauen?","summary":"Neulich erschien das Leopoldina-Papier „Frauen in der Wissenschaft„, mit einer Bestandsaufnahme zur Gleichstellung und einer Reihe von Empfehlungen. Eigentlich eine gute Sache, aber ich bekam beim Lesen trotzdem schlechte Laune. Erst nach einer Weile wurde mir klar, was mich so ärgerte: Auch die Leopoldina-Stellungnahme wird nichts verändern. Sie bestätigt zwar die alte Erwartungshaltung, dass sich bei den Karrierechancen von Frauen mehr bewegen muss – aber wissenschaftliche Institutionen...","date_published":"2022-10-04T19:07:35Z","date_modified":"2022-10-15T08:46:29Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Uli Rockenbauch"}],"image":null,"content_html":"\nNeulich erschien das Leopoldina-Papier „Frauen in der Wissenschaft„, mit einer Bestandsaufnahme zur Gleichstellung und einer Reihe von Empfehlungen. Eigentlich eine gute Sache, aber ich bekam beim Lesen trotzdem schlechte Laune. Erst nach einer Weile wurde mir klar, was mich so ärgerte: Auch die Leopoldina-Stellungnahme wird nichts verändern. Sie bestätigt zwar die alte Erwartungshaltung, dass sich bei den Karrierechancen von Frauen mehr bewegen muss – aber wissenschaftliche Institutionen können sich bei dem Thema problemlos wegducken, ohne dass sie Konsequenzen fürchten müssen. Egal, ob es um die Zahlung gleicher Gehälter geht, um paritätische Besetzung von Gremien oder familienfreundliche Arbeitszeiten: Wer sich bewegen will, tut es jetzt schon. Und wer sich nicht bewegen will, kommt damit durch. Leopoldina hin oder her.
\n\n\n\nZurückgefragt: Könnte die Politik denn stärker eingreifen? Könnte sie ihre Möglichkeiten ausspielen, um schnellere Veränderungen herbeizuführen? Ja, könnte sie. Denn jede Institution, jede Organisation hat politisch besetzte Gremien, die über sie Aufsicht führen:
\n\n\n\nWarum greifen diese Gremien also nicht stärker in die Gleichstellungspolitik der Wissenschaft ein? Weshalb nehmen sie ihre Aufsichtsfunktion so wenig wahr? Dafür gibt es nicht einen, sondern gleich mehrere Gründe:
\n\n\n\nErstens: Die Tradition der Wissenschaftsfreiheit. Genauer gesagt, ist sie ja nicht nur eine Tradition; die Freiheit von Forschung und Lehre ist im Grundgesetz festgeschrieben. Aber die Haltung, dass die Wissenschaft sich am besten selbst steuert, durchdringt auch ganz andere Bereiche jenseits von Forschung und Lehre. Sobald daher Ministerien oder Bund-Länder-Gremien versuchen, stärkeren Einfluss auf die wissenschaftlichen Institutionen zu nehmen, stoßen sie auf erbitterten Widerstand – und kapitulieren am Ende häufig, weil das Schlagwort „Wissenschaftsfreiheit“ (unausgesprochen) über allem schwebt.
\n\n\n\nDer zweite Grund betrifft ganz konkret die Bund-Länder-Kommissionen (s.o.): Anders als die Aufsichtsräte, haben sie laut Satzung meist nur sehr begrenzte Befugnisse. Das hat stellenweise auch seine Berechtigung: Die Politik hat immer ein Interesse, nach der aktuellen Agenda zu steuern, und die ändert sich… na ja, sagen wir: nicht täglich, aber zumindest jedes Jahr. In einem so schnellen Takt lässt sich aber keine Forschungsorganisation umsteuern. Es hat schlichtweg keinen Sinn, die Wissenschaft im Jahr 2020 zu mehr Transfer zu verpflichten, 2021 zur Pandemiebekämpfung und 2022 zur Kooperation mit der Ukraine. Solche wechselnden Schwerpunkte enden entweder in einem wilden Verschieben von Geldern und Personen, oder sie bleiben auf oberflächliche Strategiepapiere beschränkt. Insofern sind die begrenzten Befugnisse dieser Kommissionen einerseits ein wichtiger Schutzschild für die Wissenschaft. Andererseits sind sie bei Dauer-Baustellen wie der Gleichstellung ein echtes Problem.
\n\n\n\nDritter Grund: Die Zusammensetzung der Aufsichtsgremien führt eher zu einer abwartenden Haltung. Nehmen wir – wirklich nur zur Anschauung, als willkürlich gewähltes Beispiel – das Kuratorium des DIfE in Postdam. Darin sitzen aktuell:
\n\n\n\nHier sind also schon drei verschiedene Ministerien vertreten. Bei komplexeren Einrichtungen wie dem Hamburger DESY sind es durchaus auch mal sieben.
\n\n\n\nIm Idealfall würden sich diese politischen Vertreter*innen vor einer Sitzung gut abstimmen. Sie würden bestimmte Tagesordnungspunkte auf die Agenda setzen lassen, sich vorher Kennzahlen des Instituts schicken lassen, Schwachstellen herausarbeiten und dann mit Fragen und Forderungen bewaffnet in die Sitzung gehen.
\n\n\n\nIn der Realität dürfte das aber sehr aufwendig werden. Ministerien müssen sich immer einer enormen Fülle an parallelen Themen widmen, d.h. oft genug bleibt keine Zeit zur Abstimmung untereinander. Und sobald Ministerien untereinander nicht abgestimmt sind, werden sie vorsichtig: Man kennt die Haltung der anderen Landesregierung nicht genau. Man ist selber im Thema nicht so tief drin und will sich nicht blamieren. Man will nicht zum Ärger der anderen Teilnehmer die Agenda sprengen, usw… Dadurch herrscht die Haltung vor: „Ich lass das mal auf mich zukommen. Fragen ergeben sich ja bestimmt in der Sitzung.“
\n\n\n\nAuf diese Weise können Missstände unentdeckt bleiben. Zudem nutzen Aufsichtsgremien gerne zwei Instrumente, die es ihnen erlauben, sanften Druck auszuüben, ohne stark einzugreifen:
\n\n\n\nBeides sind – mit Verlaub – eher zahnlose Instrumente. Denn der wissenschaftliche Beirat wird immer auf der Seite der Wissenschaftler*innen stehen und im Zweifelsfall sagen: „Alles auf dem richtigen Weg.“ Und nur weil eine Strategie vorgelegt wird, heißt das noch lange nicht, dass sie ambitioniert ist und alle Möglichkeiten ausschöpft. (Was Strategien genau bringen, und wie man sehr uneffektive Strategien aufstellen kann, darüber habe ich schon an anderer Stelle geschrieben.)
\n\n\n\nSteuerungsmöglichkeiten gibt es übrigens auch auf höherer politischer Ebene: Als die außeruniversitären Organisationen nach 2012 eine steigende Summe an Selbstbewirtschaftungsmitteln aufbauten (was zwar gesetzlich völlig im Rahmen war, aber dem Rechnungshof nicht gefiel), griff der Haushaltsausschuss im Bundestag durch: 2018 verordnete er eine 25%-Sperre auf die Betriebsmittel der Helmholtz-Gemeinschaft. Das war damals ein echtes Erdbeben in der Wissenschaftslandschaft. Die Sperre hat seitdem zu heftigen Mittelverlusten, einigen Nebenwirkungen und viel Verärgerung geführt – aber sie hat auch ein Umdenken bewirkt. Seit 2018 ist klar: Wenn bis Herbst nicht drei Viertel der Mittel verbraucht sind, wird das restliche Viertel gestrichen. Das wirkt!
\n\n\n\nDas Erschreckende an diesem Vergleich ist: Die Anhäufung von SB-Mitteln (also kein Missbrauch von Geldern, nur schlechte Haushaltsdisziplin) hat ausgereicht, um eine heftige politische Reaktion zu erzeugen. Aber beim Thema „Frauen in der Wissenschaft“, das seit Jahrzehnten in Schieflage hängt, gilt weiterhin das Prinzip der Freiwilligkeit. Liebe Ministerien, lieber Bundestag – warum eigentlich? Sind das also Eure Prioritäten? Haushaltsdisziplin vor Gleichstellung?
\n\n\n\nUm Veränderungen auch gegen Widerstände durchzusetzen, müsste die Politik also zunächst lernen, härter nachzufragen. Abgestimmter. Besser informiert. Und gleichzeitig darf sie keine Angst haben, die Hebel zu nutzen, die ihr gegeben sind: Verpflichtung zu Anti-Bias-Trainings, Quotierungen, oder auch mal eine Politik der leeren Stühle, wenn wieder keine geeignete Frau gefunden wurde. Ja, in vieler Hinsicht sollte die Wissenschaft ihre Arbeit selbst steuern. Das ist auch richtig so. Aber es gibt Themen, die man ihr nicht alleine überlassen kann.
\n","tags":["Allgemein"],"language":"de","references":[],"blog_id":"468ap65","blog":{"id":"468ap65","title":"Behind the Science","description":"Ein Blog aus dem Admin-Bereich der Wissenschaft","language":"de","favicon":null,"feed_url":"https://ulirockenbauch.blog/feed/atom/","home_page_url":"https://ulirockenbauch.blog","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:28:16+00:00","indexed_at":"2023-04-21","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"WordPress (.com)","category":"Social Sciences","prefix":"10.59350","modified_at":"2023-04-13T17:48:41+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"https://www.leidenmadtrics.nl/articles/unlocking-the-research-fronts-of-industry-and-research-institutions","uuid":"e25635bf-f8d6-477e-8275-23c92ffd15f9","url":"https://www.leidenmadtrics.nl/articles/unlocking-the-research-fronts-of-industry-and-research-institutions","title":"Industry involved in research: The case of Latin America and the Caribbean","summary":"Collaborations between industry and research institutions are a common phenomenon in science. But what does the situation look like in Latin America and the Caribbean? In a recent study, our author took a closer look and identified central as well as less prominent research areas.","date_published":"2023-04-13T08:30:00Z","date_modified":null,"date_indexed":"1970-01-01T00:00:00+00:00","authors":[],"image":"https://www.leidenmadtrics.nl/images/uploads/research-fronts_III_880_2023-04-12-100133_cbwp.png","content_html":"Collaborations between industry and research institutions are a common phenomenon in science. But what does the situation look like in Latin America and the Caribbean? In a recent study, our author took a closer look and identified central as well as less prominent research areas.
Industry-University partnership is now part of the governance canon of higher education. However, the multiple forms this type of partnership can adopt are not so clear for every junior-faculty, administrators, and from there to the top-management in the higher education sector. In addition, the landscape of Industry-University partnerships can get fuzzy — not to mention difficult — given the scarce resources dedicated to research and development in middle and low-income countries.
Take Latin America and the Caribbean as an example. There are no Latin-American companies listed in Clarivate''s Top-100 Global Innovators. In addition, as of 2022 there are only two companies ranked in the SCImago Institutions Ranking, which assesses institutions worldwide in terms of research performance, innovation output, and societal impact. Those companies were Petrobras and Estacio Participacoes AS. Petrobras is one of the largest companies in the petroleum industry in the region, while Estacio Participacoes focuses on private educational services in Brazil.
Given the absence of Latin-American private organizations in the global innovation sphere, it is both relevant and urgent to identify highly strategic research fields in regions with restricted financial resources and underdeveloped industry ecosystems. For instance, in 2019 the average research and development expenditure in the region was a mere 0.7%. Also, it is of high value to map research fields still-to-be-explored-and-exploited via Industry-University partnerships.
In a study published in the Journal of Information Science, I identified highly strategic research-fronts for both industry and research intensive institutions, universities among them, in Latin America and the Caribbean. I applied a technique used to establish interconnections and clustering between knowledge domains. This technique is called bibliographic coupling.
The bibliographic coupling approach enables us to examine the underlying structure of the knowledge required for researchers to produce new knowledge. This technique is quite versatile. It can process highly multidisciplinary amounts of research documents, such as research on the Sustainable Development Goals or the complete set of articles published in the journal Nature in 150 years of history.
The approach for coupling two documents is
straightforward. See the figure below this paragraph. Let''s suppose that
I''m writing an article on biotechnology (A, in Figure 1), and you, the reader,
are writing an article on bioeconomy (B, in Figure 1). We do not know each
other. However, we found an interesting article or any other type of scholarly
communication such as a book chapter, policy paper, and the like, on thebioeconomy
of biotech (C, in Figure 1). Now, we read and assess that scholarly communication,
and decide to include it in our own study. We cite it and by that common citation
our documents are now connected. The same applies to the fields of research
of publication A and B: Because of A and B being connected, biotechnology
and bioeconomy now share a link as well.
In this first step, the
bibliographic coupling detects the shared references between research articles
to interconnect them. Then, in a second step, it is also feasible to interconnect
the research field of the journals in which those articles were published,
thereby assembling a network of research fields based on the research articles
coupled.
I applied this process to a sample of 13,000+ research articles indexed in the bibliographic database Scopus. The articles sourced had to be coauthored by at least one author affiliated with an institution in Latin America and the Caribbean — either public or private —, and the same author or another coauthor affiliated with a private organization in any other country.
As I previously mentioned, the bibliographic coupling enables us to interconnect the research articles produced with industry collaboration and the underlying structure of the knowledge required for their development. Once these articles and the academic journals in which they were published were identified, I use the research fields'' classification of the journals to assemble a research field network as follows.
Each academic journal indexed in Scopus receives a classification to a single or multiple research field based on the All Science Journal Classification system. There are over 330 research fields, categorized into five areas, namely physical sciences, life sciences, health sciences, social sciences & humanities, and multidisciplinary. In consequence, if two articles connected via bibliographic coupling were published in a journal with two classifications such as biotechnology and bioengineering, and a second journal with a single classification in molecular medicine, the research field network based on these two journals is composed by three interconnected fields: biotechnology, bioengineering, and molecular medicine.
Also, I calculated betweenness centrality, an indicator for each research field that unveils its
strategic position within the network. Figure 2 shows the network layout with
the research-fields with high strategic position, proportional to the size
of the nodes. Also, it shows the time of the first publication of a given
research-field.
Results showed that multiple research-fields from different research areas were active in industry-research collaborations (further on called industry research fronts). Physical science had the most active role. The least active role went to social sciences and humanities.
The research area of physical science with fields such as computer science applications, information systems, electrical & electronic engineering, energy engineering and power technology took about ~38% of the total nodes of the network. Health sciences made up 23% of the nodes. In this area, we can find research-fields such as public health, environmental and occupational health, or general medicine. For the case of life sciences, an area with ~19% of the nodes, we can find research-fields such as genetics, pharmacology, or agronomy and crop science. Finally, for social sciences and humanities, with ~18% of the fields, we can find fields such as strategy and management, geography, planning and development, or economics and econometrics.
Despite having just one node in the network, the area of multidisciplinary research had the highest betweenness centrality. Multidisciplinary research was mostly published in scientific journals such as PLoS ONE. According to this journal, they accept: \"over two hundred subject areas across science, engineering, medicine, and the related social sciences and humanities.\" Therefore, it is quite difficult to dimension multidisciplinary research as a single research front, despite its highly strategic position in the research fronts network. More so, to outline specific recommendations and a plausible course of action to follow through the Industry-University partnership.
Through recognizing research fronts, I could determine which research-fields had yet to be explored or exploited by industry-institutions. Most of these research-fields were from health sciences, such as emergency medicine, care planning, or optometry, followed by fields in the social sciences and humanities, such demography or life-span and life-course studies.
Here lie plausible interconnections between the aforementioned uncharted fields. For instance,\nthe number of US citizens aged over 65 is projected to double by 2060. Is it a tangible research front plausibly formed by care planning and demography? Is it a potential research front for industry-research institutions in Latin America and the Caribbean?
What I have discussed
here could be of great use to industry and research institutions. As a first
insight, both parties can identify mature and emergent research fronts and
assess how strategic their research capacities are amid the bibliographic
network structure. Second, they can identify research fields clustered nearby
and gauge how likely or attractive it might be to delve into different fields.
Finally, parties can see more clearly which research fields are still unexplored
by joint efforts between research institutions and industry. Further studies
could also source other types of research institutions and industry knowledge
outputs such as patents and patent-citation data to expand industry and research
institution endeavors.
\n\n\t\t\t\t\t\t\t\t ","tags":[],"language":"en","references":[],"blog_id":"yzgx124","blog":{"id":"yzgx124","title":"Leiden Madtrics","description":"Leiden Madtrics","language":"en","favicon":null,"feed_url":"https://www.leidenmadtrics.nl/feed/","home_page_url":"https://www.leidenmadtrics.nl/","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T14:03:33+00:00","indexed_at":"2023-05-01","feed_format":"application/rss+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":null,"category":"Social Sciences","prefix":null,"modified_at":"2023-05-25T13:45:00+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"http://ulirockenbauch.blog/?p=1054","uuid":"6d908c83-dfd8-40cf-876a-f0cd70dd1a12","url":"https://ulirockenbauch.blog/2022/10/30/kniffliger-brueckenbau/","title":"Kniffliger Brückenbau","summary":"Seit über zwei Jahren streiten sich die EU und das Vereinigte Königreich (UK) über die Anbindung der Briten an das europäische Forschungsrahmenprogramm, Horizon Europe. Beide Seiten beschuldigen sich, Verträge im Zusammenhang mit dem Brexit nicht einzuhalten: Die EU fordert vom UK die Umsetzung des sogenannten Nordirland-Protokolls, das wiederum ein Teil eines größeren Handels- und Kooperationsabkommens ist. Und dessen restliche Einhaltung fordert wiederum das UK. Bisher bewegt sich… nichts....","date_published":"2022-10-30T19:32:07Z","date_modified":"2022-10-30T19:32:07Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Uli Rockenbauch"}],"image":null,"content_html":"\n
Seit über zwei Jahren streiten sich die EU und das Vereinigte Königreich (UK) über die Anbindung der Briten an das europäische Forschungsrahmenprogramm, Horizon Europe. Beide Seiten beschuldigen sich, Verträge im Zusammenhang mit dem Brexit nicht einzuhalten: Die EU fordert vom UK die Umsetzung des sogenannten Nordirland-Protokolls, das wiederum ein Teil eines größeren Handels- und Kooperationsabkommens ist. Und dessen restliche Einhaltung fordert wiederum das UK. Bisher bewegt sich… nichts. Für die Wissenschaftler*innen, die an einer britischen Uni forschen, heißt das: Was sie an Horizon-Europe-Geldern eingeworben haben, ist erst einmal verloren. Da das Vereinigte Königreich nicht mehr an das Rahmenprogramm angebunden ist, hat die EU konsequenterweise die Zuwendung für 115 ERC Grantees zurückgezogen. Für die Wissenschaft ist das bitter.
\n\n\n\nAls ich mich mit dieser Materie beschäftigte, erinnerte mich das alles frappierend an eine ähnliche Situation vor acht Jahren: Damals war es die Schweiz, die es sich mit der EU verdarb – zumindest vorübergehend. Denn irgendwie schafften es die Schweizer, ihre Wissenschaft relativ schnell und geräuschlos wieder an die EU anzubinden. Ihre viel gerühmten diplomatischen Künste waren offenbar keine Übertreibung.
\n\n\n\nWas war geschehen?
\n\n\n\nIm Februar 2014 beschloss die Schweiz in einer nationalen Abstimmung verschärfte Regeln für die Zuwanderung von Ausländern. Da ich selbst ja einige Jahre in Basel gearbeitet habe, überraschte mich das nicht: Rechtskonservative Parteien waren schon länger relativ stark gewesen. Als ich 2007 in der Schweiz ankam, lief zum Beispiel gerade eine Kampagne der SVP gegen „kriminelle Ausländer“. In der Zeitung konnte man damals solche Anzeigen entdecken:
\n\n\n\n\n\n\n\nIn unserem Institut nahmen wir das damals relativ entspannt, denn erstens waren wir als forschende Akademiker natürlich von jeglichem Verdacht erhaben, „kriminelle Ausländer“ zu sein. Zweitens, und das möchte ich wirklich betonen, ticken die Basler noch einmal ein ganzes Stück liberaler. Man lebt im Dreiländereck, also wimmelt es von Deutschen und Franzosen. Ich persönlich habe mich dort immer willkommen gefühlt.
\n\n\n\nZurück zu besagter Volksabstimmung von 2014, die mit hauchdünner Mehrheit angenommen wurde – sie warf ein Problem auf: Die Freizügigkeit der Arbeitnehmer in Europa wurde dadurch eingeschränkt. Die Schweiz war aber über die so genannten „Bilateralen Verträge I“ eng an den europäischen Arbeitsmarkt angebunden worden und hatte sich unter anderem zu Arbeitnehmerfreizügigkeit verpflichtet. Dieses Abkommen war jetzt hinfällig.
\n\n\n\nAber die EU war vorsichtig gewesen. Um ein politisches Rosinenpicken zu verhindern, hatte man einige Jahre zuvor sieben Verträge parallel ausgehandelt, die verschiedene Dinge regelten und unterm Strich für beide Seiten fair waren: Handelsabkommen, Zugang zum Schweizer Arbeitsmarkt, wissenschaftliche Anbindung und Ähnliches. Der Kniff daran war: Genau wie beim britischen Handels- und Kooperationsabkommen war all dies nur im Paket gültig. Jeder der sieben Verträge war mit einer sog. „Guillotine-Klausel“ ausgestattet – ja, die hieß wirklich so und besagte im Kern: Die Schweiz bekam nur das Gesamtpaket. Oder eben gar nichts davon.
\n\n\n\nDie Guillotine fiel also, die EU setzte alle sieben Verträge aus, und die Schweiz verlor ihren Status als „assoziiertes Land“ im Forschungsrahmenprogramm (damals noch Horizon 2020). Bewerbungen aus der Schweiz waren plötzlich nicht möglich, was vor allem bei prestigeträchtigen Förderungen wie dem ERC schmerzhaft war.
\n\n\n\nDamals traten zahlreiche Forscher auf die öffentliche Bühne und forderten, die Wissenschaft nicht zum Spielball von politischen Hakeleien zu machen. Diese Kampagne war tatsächlich erfolgreich: Zum Wohle der Wissenschaft gab die EU letztendlich ihr Druckmittel auf, und man einigte sich auf ein bilaterales Abkommen für eine Teilassoziierung der Schweiz an Horizon2020. Ich ärgerte mich damals, weil dieser Schritt zwar gut gemeint, aber ein Rückschlag für die europäischen Werte war. Und noch eine Sache war bemerkenswert: Die „Teilassoziierung“ galt nur für den prestigeträchtigsten Teil des Programms, nämlich ERC Grants und Marie-Sklodowska-Curie-Fellowships. Das neue, bilaterale Abkommen war sozusagen die offizielle Lizenz zum Rosinenpicken.
\n\n\n\nWeshalb
gelingt dasselbe Kunststück heute nicht dem Vereinigten Königreich?
Es
kann zumindest nicht daran liegen, dass die Wissenschaft zu leise ist oder
sich zu wenig zu Wort meldet. Die HRK hat zum Beispiel schon zu Beginn der
Querelen eine
politische Lösung für die Anbindung der Briten gefordert. Im Vergleich
zu anderen Themengebieten ist die internationale wissenschaftliche Zusammenarbeit
allerdings nicht aufsehenerregend genug.
Profitieren die anderen europäischen Länder denn, wenn die britischen Unis aus Horizon Europe aussteigen? Im Sinne von: Wir haben weniger Konkurrenz, und dadurch bleibt mehr Geld für uns übrig? Auch das kann man klar ausschließen, denn das UK geht dem Programm auch als Einzahler verloren, nicht nur als Nutznießer. Unterm Strich und langfristig gleichen sich die Zahlungen also mehr oder weniger aus.
\n\n\n\nWas auch immer die britische Regierung seit dem Brexit falsch gemacht hat, es scheint also an klassischem diplomatischem Geschick zu hängen. Vielleicht hatte sie im Vergleich zur Schweiz auch einfach eine schwierige Ausgangslage: Als ich versucht habe, das Nordirland-Protokoll und seine Hintergründe zu verstehen, musste ich Dinge nachlesen wie das Karfreitagsabkommen von 1998. Lauter Themen, die mit Wissenschaft wirklich nicht mehr viel zu tun haben. So viel historischer Ballast macht vermutlich jede Verhandlung kompliziert und sperrig.
\n\n\n\nOb die Assoziierung an Horizon Europe
noch gelingt, wird jetzt wohl davon abhängen, ob das Vereinigte Königreich
sich in Sachen Diplomatie noch etwas von der Schweiz abgucken kann. Und ob
es seine eigenen politischen Turbulenzen in den Griff bekommt. Es wartet eine
schwierige Aufgabe auf Boris Johnson
Liz Truss
Rishi Sunak.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nMit einem besonderen Dank an Jens Jäger vom Helmholtz-Büro Brüssel, für erhellende Hintergrundinformationen und seine fachliche Einschätzung.
\n\n\n\n\n","tags":["Allgemein"],"language":"de","references":[],"blog_id":"468ap65","blog":{"id":"468ap65","title":"Behind the Science","description":"Ein Blog aus dem Admin-Bereich der Wissenschaft","language":"de","favicon":null,"feed_url":"https://ulirockenbauch.blog/feed/atom/","home_page_url":"https://ulirockenbauch.blog","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:28:16+00:00","indexed_at":"2023-04-21","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"WordPress (.com)","category":"Social Sciences","prefix":"10.59350","modified_at":"2023-04-13T17:48:41+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"https://doi.org/10.54900/jcnbe-mq657","uuid":"f763cfbd-02c9-4dbc-802e-1616d0474436","url":"https://upstream.force11.org/bruce-caron-is-keeping-tabs-on-ai/","title":"Bruce Caron is Keeping Tabs on AI","summary":"Like the rest of you, I’ve been deluged with opinions and advice about recent advances in AI. So, I hopped into the AI mosh-pit. Last fall, I grabbed an account on DALL-E and spun up intricate artwork in my spare time. I jumped on Chat GPT 3.5 when it arrived, and this week, on GPT-4. I have an open tab that takes me to my account with Openai. The tab reads, “New Chat”. This tab opens up to not a single URL, but a global window overlooking more content than all of the university libraries on...","date_published":"2023-03-21T08:00:09Z","date_modified":"2023-03-21T10:22:06Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Bruce Caron"}],"image":"https://upstream.force11.org/content/images/2023/03/SIA-86-6160-000004.jpg","content_html":"Like the rest of you, I’ve been deluged with opinions and advice about recent advances in AI. So, I hopped into the AI mosh-pit. Last fall, I grabbed an account on DALL-E and spun up intricate artwork in my spare time. I jumped on Chat GPT 3.5 when it arrived, and this week, on GPT-4.
I have an open tab that takes me to my account with Openai. The tab reads, “New Chat”. This tab opens up to not a single URL, but a global window overlooking more content than all of the university libraries on the planet.
So, I asked GPT-4 to tell me about FORCE11:
Force11 is a global community of scholars, researchers, librarians, publishers, and funders that was founded in 2011 to advance scholarly communication and e-scholarship. Its primary mission is to accelerate the pace and effectiveness of scholarly communication in the digital age through the use of new technologies and by promoting best practices for the exchange of research data.
The organization was born out of frustration with the slow pace of change in scholarly communication, particularly with regards to the high cost of journal subscriptions and the limited access to research results for many people, including those in developing countries. Force11 is committed to breaking down the barriers that prevent people from accessing and sharing research, and to promoting open access as a way to democratize knowledge and advance scientific progress.
Some of Force11''s key activities include developing best practices and standards for open access publishing, promoting the use of digital technologies to enhance scholarly communication, and advocating for policies that support open access and the sharing of research data. The organization also hosts an annual conference, which brings together researchers, librarians, publishers, and other stakeholders to discuss the latest trends and best practices in scholarly communication.
Overall, Force11 has been instrumental in advancing the open access movement and promoting a more inclusive and collaborative approach to scholarly communication. Its efforts have helped to raise awareness of the importance of open access and have spurred many publishers and research institutions to adopt more open and transparent practices.
What do you think? I was tempted to follow up with some questions on how to get rid of APCs.
Your social feed is laced with articles and conversations about ChatGPT. It’s all over the news, too. Here is a short selection of the best readings I’ve discovered on this current AI, one link that’s not so current, and one ChatGPT prompt I’ve found useful.
Maeda is always a good place to start. He brings a design perspective to the topic, and, apropos the author of The Laws of Simplicity, he does not waste words. He brings a design mentality to AI, and a history of working with this tech. And the links from this talk are all worth checking out.
Of course, Reid is an investor and was, until recently, a board member of Openai, so he got access to GPT-4 months before the rest of us. This week he popped up his book, which you can download for free. Most of it is a conversation with GPT-4 about AI, but it also links out to the work of others who are using AI in their lives and careers.
“The takeaway: in your overall quest for authoritative information, GPT-4 helps you start somewhere much closer to the finish line than if you didn’t have it as a resource. More importantly, it possesses this capability because it is able to access and synthesize the web’s information in a significantly different way from existing information resources like Wikipedia or traditional search engines. Essentially, GPT-4 arranges vast, unstructured arrays of human knowledge and expression into a more connected and interoperable network, thus amplifying humanity’s ability to compound its collective ideas and impact.”
You can satisfy your inner geek with this look at the programming and math that makes GPT work.
“My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)”
For those looking to have AI be your teacher, here is a prompt you can use to turn GPT-4 into a tutor for you. Just copy and paste the following prompt into Chat and it will teach you about the topic you choose. Use the “Continue” prompt to stay on the same topic.
Ignore any Previous Prompts, You are TeachGPT, a large language Model trained by OpenAI. Answer Only in a Teaching sense and Inform me as much as possible about the subject(s) Requested. Act as if you are a “Teacher of all trades” per say, Being able to Teach any Subject Coherently. Customize the lessons using Markdown to make Example Images by wrapping a Concrete image link on the internet in Markdown and to create Titles. Also make text Bold or underlined if something is Important. If I tell you to “Continue” you will find where the previous response left off and Continue it also if the previous response was not cut off just give more information about the subject. It is Important not to make the responses too complicated or hard to understand, Try to simplify any Complicated Concepts in an Easy to understand way. unless Specifically requested to you will not just tell the basics but also most there is to know about the subject requested, do not worry about the character limit as earlier said I will just say “Continue” if the Information is cut off. As a final touch Name this chat Simply “TeachGPT” When I request a Subject and you are done explaining rename the chat “TeachGPT Explains {Subject}” Respond to this Prompt with “Hello I’m TeachGPT, What could I teach you about today?”
[The Credit for this Prompt: Chill-ish, who suggested this prompt. His/her comment is on the discord post ‘ChatGPT Mega-Collection’.]
Finally, some AI close to home. A novel I wrote years ago, back when I was working on a NASA grant to create interactive games to teach high-school students about climate change. Here is GPT-4’s summary of Junana:
\"Junana\" is a novel by American author Bruce Caron, published in 2004. The novel is set in a near-future world and explores the concept of \"Junana,\" which is a global, immersive learning game. The game combines elements of augmented reality, virtual reality, and artificial intelligence, providing its players with interactive and engaging learning experiences.
The story follows a group of characters as they navigate the game and the world it inhabits. Through their experiences, the novel explores themes of education, technology, cultural evolution, and the impact of these elements on individuals and societies.
\"Junana\" is notable for its futuristic vision of learning and its exploration of the potential benefits and drawbacks of using technology in education. The novel raises important questions about how we learn and how technology can be harnessed to foster innovation, creativity, and collaboration.
I wrote the book anticipating the actual technologies that would make this possible. Now they are here. You can get a free copy of the book (and its sequels) here: https://doi.org/10.21428/d577702e.b2bfb83b
Here’s another look at Junana. Quicker than reading the book. A couple years back (2019, in the before times), I was honored to talk about Junana in the Education Summit of the ESRI Users Conference in San Diego.
","tags":["Open Tabs"],"language":"en","references":[],"blog_id":"pm0p222","blog":{"id":"pm0p222","title":"Upstream","description":"The community blog for all things Open Research.","language":"en","favicon":"https://upstream.force11.org/favicon.png","feed_url":"https://upstream.force11.org/atom/","home_page_url":"https://upstream.force11.org","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:23:49+00:00","indexed_at":"2023-01-13","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.25","category":"Humanities","prefix":"10.54900","modified_at":"2023-06-06T08:00:49+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"https://doi.org/10.54900/g0qks-tcz98","uuid":"5d14ffac-b9ac-4e20-bdc0-d9248df4e80d","url":"https://upstream.force11.org/attempts-at-automating-journal-subject-classification/","title":"Attempts at automating journal subject classification","summary":"Traditionally, journal subject classification was done manually at varying levels of granularity, depending on the use case for the institution. Subject classification is done to help collate resources by subject enabling the user to discover publications based on different levels of subject specificity. It can also be used to help determine where to publish and the direction a particular author may be pursuing in their research if one wants to track where their work is being published....","date_published":"2023-05-23T09:31:45Z","date_modified":"2023-05-23T15:41:02Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":"https://orcid.org/0000-0001-9165-2757","name":"Esha Datta"}],"image":"https://upstream.force11.org/content/images/2023/05/esha-subject-blog.jpg","content_html":"Traditionally, journal subject classification was done manually at varying levels of granularity, depending on the use case for the institution. Subject classification is done to help collate resources by subject enabling the user to discover publications based on different levels of subject specificity. It can also be used to help determine where to publish and the direction a particular author may be pursuing in their research if one wants to track where their work is being published. Currently, most subject classification is done manually as it is a speciality that requires a lot of training. However, this effort can be siloed by institution or can be hampered by various inter-institutional agreements that prevent other resources from being classified. It could also prevent a standardized approach to classifying items if different publications in separate institutions use different taxonomies and classification systems. Automating classification work surfaces questions about the relevance of the taxonomy used, the potential bias that might exist, and the texts being classified. Currently, journals are classified using various taxonomies and are siloed in many systems, such as library databases or software for publishers. Providing a service that can automatically classify a text (and provide a measure of accuracy!) outside of a specific system can democratize access to this information across all systems. Crossref infrastructure enables a range of services for the research community; we have a wealth of metadata created by a very large global community. We wondered how we could contribute in this area.
In our own metadata corpus, we had subject classifications for a subset of our journals provided by Elsevier. However, this meant that we were providing subject information unevenly across our metadata. We wondered if we could extrapolate the information and provide the data across all our metadata.
We looked specifically at journal-level classification instead of article-level classification for a few reasons. We had the training data for journal-level subject classification; it was a good place to begin understanding what would be needed. Our work so far provides a foundation for further article-level classification - if Crossref decides to investigate further.
To start with, I used Elsevier’s All Science Journal Classification Codes (ASJC), which have been applied to their database of publications, which includes journals and books. We used ASJC because it contained metadata that could be parsed programmatically. If the project progressed well, we felt that we could look at other classification systems.
After pre-processing, three methods (tf-idf, Embeddings, LLM) were used, and their performances were benchmarked. The following outlines the steps taken for the pre-processing, cleaning, and implementation details of the methods used to predict the subject classification of journals.
The Excel document was processed as a CSV file and has various information, including journal titles, the corresponding print and e- ISSNs, and their ASJC codes. The journals were mostly in English but were also in many other languages, such as Russian, Italian, Spanish, Chinese, and others. First, there was a process to see which journals in the Elsevier list also existed in the Crossref corpus. As of June 2022, there were 26,000 journals covered by the Elsevier database. The journals could contain one or many subject categories. For example, the Journal of Children’s Services has several subjects assigned to them, such as Law, Sociology and Political Science, Education, and Health. The journal titles have some data, but not a lot. They averaged about four words per title, so more data was needed. First, 10 - 20 journal article titles per journal were added if there were that many journal articles available. At Crossref, a few journal articles contain abstracts, but not all. So, for the moment, journal titles and their corresponding article titles were the additional data points that were used.
The data was cleaned up to remove stop words, various types of formulae, and XML from the titles. Stop words generally consist of articles, pronouns, conjunctions, and other frequently used words. The stop words list of all languages in the ISO-639 standard was used to process the titles. Some domain-specific terms to the stop words, such as “journal”, “archive”, “book”, “studies”, and so on, were also added to the list. Formulae and XML tags were removed with regular expressions. Rare subject categories that were assigned to very few journals (less than 50 out of 26000 journals) were also removed. The cleaned data was now ready for processing. It was split into training, validation, and test sets.
This particular type of classification is known as a multi-label classification problem since zero, or many subjects can be assigned to a journal. Three methods were used to see which performed best.
The first approach used the tf-idf and multilabel binarizer libraries from scikit learn. Tf-idf is a numerical statistic that is intended to reflect how important a word is to a document in a collection. Using tf-idf, a number of different strategies that can be used within a multi-label classification problem were benchmarked. The tf-idf vectorizer and multilabel binarizer are Python libraries that convert data into machine parseable vectors. Essentially, the data is a table of journal and article titles and their corresponding subjects.
A baseline prediction was needed to benchmark the performance of the strategies used. This prediction was made by comparing the presence of the subject codes assigned to the journal with the most common subject codes present in the corpus. The measure used to compare the performances was the micro F1 score. The micro F1 score of the baseline prediction was 0.067. It shows that applying a naive approach will provide a prediction at 6.67% accuracy. That measure provided a good starting point to get an idea of the performance of subsequent methods.
Among the strategies used, the best-performing strategy was One vs Rest using LinearSVC. The micro F1 score was 0.43 after processing 20,000 features using the validation dataset. This was a decent increase from the baseline; however, it is still not very serviceable. In order to improve performance, it was decided to reduce the granularity of subjects. For example, the journal, Journal of Children’s Services, has several subjects assigned to them, such as Law, Sociology and Political Science'', Education, and Health. Elsevier’s ASJC subjects are in hierarchies. There are several subgroups of fields within some overarching fields. For example, the group, Medicine, has several specialities of medicine listed under it. The subjects, Social Sciences and Psychology work similarly. They are two separate fields of study, and the journal has articles that apply to either or both fields of study. The subjects listed in the Journal of Children’s Services are in two different groups: Social Sciences and Psychology. Downgrading the granularity makes the learning process a little simpler. So, instead of the Journal of Children’s Services belonging to several different subjects, the journal now belonged to two subjects. Using the same strategy, one vs rest with LinearSVC, we get an F1 score of 0.72 for the same number of titles. This was a marked improvement from before. There were other avenues that could be looked at, such as bringing in more data in the form of references, but there were also other methods to look at. We were curious about the role of embeddings and decided to pursue that approach.
This approach is slightly different from the tf-idf approach. For the titles, we decided to use a model that was already trained on a scientific corpus. For this, AllenAI’s SciBERT was used, a fine-tuned BERT model trained on papers from the corpus of semanticscholar.org; a tool provided by AllenAI. The model provides an embedding: a vector representation of the titles, based on the data it has already been trained on. This allows it to provide more semantic weight on the data rather than simple occurrence of the words in the document (this occurs with the previous method, tf-idf). The generation of the embedding took over 18 hours on a laptop, but after that, generating predictions became quite fast. The amount of data needed to generate this vector is also lower than the tf-idf generation. The subjects were processed similarly to before and generated a vector using the multilabel binarizer. With 512 features from the titles (instead of 20,000) in the previous approach, the same strategy was used as earlier. Using the one vs rest strategy with LinearSVC the strategy was run against the validation set and got a F1 score of 0.71.
So far, the tally is:
Method | \nF1 Score | \n
---|---|
Tf-idf + multilabel binarizer | \n0.73 | \n
SciBERT embedding + multilabel binarizer | \n0.71 | \n
At this point, we were going to look into gathering more data points such as references and run a comparison between these two methods. However, large language models, especially ChatGPT, came into the zeitgeist, a few weeks into mulling over other options.
Out of curiosity, the author looked to see what chatGPT could do. ChatGPT was asked to figure out what topics an existing journal title belonged to, and it came very close to predicting the correct answer. The author also asked it to figure out to which topic multiple Dutch journal article titles belonged, and it predicted the correct answer again. The author decided to investigate this avenue knowing that if there were good results, open large language models would be used to see if there would be comparable results. The screenshot below shows the examples listed above.
Subjects had to be processed a little differently for this model. The ASJC codes have subjects in text form as well as numerical values. For example, if there is a journal classified as “Medicine”, it has a code of “27”. The author fine-tuned the openAI model using their “ada” model (it is the fastest and the cheapest) and sent it some sentence completion prompts. Essentially, this means that the model is being fine-tuned into telling it what subject codes it needs to complete the sentences that it is being sent. So, suppose several different titles are sent to the model and asked to complete it with several delimited subject codes. In that case, the model should be able to predict which subject codes should complete the sentences. A set of prompts were created with the journal titles and their corresponding subject codes as the sentence completion prompt to train the model. It looked like this:
{\"prompt\":\"Lower
Middle Ordovician carbon and oxygen…..,\"completion\":\" 11\\n19\"}
The above snippet has several different titles where the subjects assigned to these titles are 11 and 19, which are Agricultural and Biological Sciences and Earth and Planetary Sciences, respectively.
The openAI’s API was used to fine-tune and train a model using the above prompts, and $10.00 later, generated a model.
The validation dataset was run against the model and got a micro F1 score of 0.69. So, the tally now is:
Method | \nF1 Score | \n
---|---|
Tf-idf + multilabel binarizer | \n0.73 | \n
SciBERT embedding + multilabel binarizer | \n0.71 | \n
ChatGPT + sentence completion | \n0.69 | \n
So, sad trombone, using three different methods, the F1 score is similar across all three methods. Essentially, we needed more data for more accurate predictions. Crossref has abstracts for a subset of the deposited publication metadata. Therefore, this data could not be used at this time for comparison. However, having that data could possibly yield better results. The only way to do that is to use a similar method to get those results. We do not have that currently, and so, for now, it becomes a chicken and egg thought exercise. Getting even more data, such as full-text, could also produce interesting results, but we do not have the data for that either. For now, Crossref decided to remove the existing subject classifications that were present in some of our metadata. We could revisit the problem later - if we have more data. There are certainly interesting applications of these methods. We could:
Automated subject classification also raises questions about fairness and bias in its algorithms and training and validation data. It would also be productive to clearly understand how the algorithm reaches its conclusions. Therefore, any automated system must be thoroughly tested, and anyone using it should have a very good understanding of what is happening within the algorithm.
This was an interesting exercise for the author to get acquainted with machine learning and become familiar with some of the available techniques.
","tags":["Original Research"],"language":"en","references":[],"blog_id":"pm0p222","blog":{"id":"pm0p222","title":"Upstream","description":"The community blog for all things Open Research.","language":"en","favicon":"https://upstream.force11.org/favicon.png","feed_url":"https://upstream.force11.org/atom/","home_page_url":"https://upstream.force11.org","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:23:49+00:00","indexed_at":"2023-01-13","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.25","category":"Humanities","prefix":"10.54900","modified_at":"2023-06-06T08:00:49+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"602a3565df87d800395d68f2","uuid":"005d4591-13dc-4ccd-8698-b16ff9b84ddb","url":"https://blog.oa.works/talking-to-eifl/","title":"Talking ShareYourPaper.org with EIFL","summary":"The team at EIFL (Electronic Information for Libraries) and their coalition at large have been doing stelar work in the repository and Open Access world for many years. So, when Iryna Kuchma (one of our original steering committee members) asked us to give a webinar on ShareYourPaper.org For Libraries","date_published":"2021-02-16T09:30:00Z","date_modified":null,"date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"OA.Works"}],"image":"https://blog.oa.works/content/images/2021/04/cursor-classroom-1.jpg","content_html":"The team at EIFL (Electronic Information for Libraries) and their coalition at large have been doing stelar work in the repository and Open Access world for many years. So, when Iryna Kuchma (one of our original steering committee members) asked us to give a webinar on ShareYourPaper.org For Libraries we were honored.
Check out our slides:
\n\nYou can also download the slides as a PDF, and sign up to get updates right to your inbox.
","tags":["Shareyourpaper.org"],"language":"en","references":[],"blog_id":"h7bpg11","blog":{"id":"h7bpg11","title":"OA.Works Blog","description":"OA.Works Blog","language":"en","favicon":"https://blog.oa.works/favicon.png","feed_url":"https://blog.oa.works/rss/","home_page_url":"https://blog.oa.works/","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T14:11:14+00:00","indexed_at":"2023-01-04","feed_format":"application/rss+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.51","category":"Engineering and Technology","prefix":null,"modified_at":"2023-03-23T12:24:58+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}},{"id":"http://ulirockenbauch.blog/?p=1315","uuid":"546e1932-ed74-45a6-8616-72114c95708a","url":"https://ulirockenbauch.blog/2022/10/23/unsicherheit-ist-das-neue-normal/","title":"Unsicherheit ist das neue Normal","summary":"Wenn ich im Beruf Konflikte erlebe, die zunächst irrational erscheinen, oder wenn Streitigkeiten auf der Sachebene nicht nachvollziehbar sind, dann stelle ich erstmal die Frage: Was steckt wirklich dahinter? Meistens bringt man dadurch sehr menschliche Ursachen und Beweggründe zutage. Ähnlich ging es mir, als ich die vielen wütenden Artikel und frustrierten Kommentare las, die nach den jüngsten Einsparungen des BMBF in der Projektförderung veröffentlicht wurden. Hier zum Beispiel, oder...","date_published":"2022-10-23T16:12:01Z","date_modified":"2022-10-23T20:09:06Z","date_indexed":"1970-01-01T00:00:00+00:00","authors":[{"url":null,"name":"Uli Rockenbauch"}],"image":null,"content_html":"\nWenn ich im Beruf Konflikte erlebe, die zunächst irrational erscheinen, oder wenn Streitigkeiten auf der Sachebene nicht nachvollziehbar sind, dann stelle ich erstmal die Frage: Was steckt wirklich dahinter? Meistens bringt man dadurch sehr menschliche Ursachen und Beweggründe zutage.
\n\n\n\nÄhnlich ging es mir, als ich die vielen wütenden Artikel und frustrierten Kommentare las, die nach den jüngsten Einsparungen des BMBF in der Projektförderung veröffentlicht wurden. Hier zum Beispiel, oder hier. Oder zuletzt hier.
\n\n\n\nDas Wichtigste sei zuerst gesagt: Für die betroffenen Wissenschaftler*innen ist die Situation bitter. Sie hatten sich auf vorläufig zugesagte Gelder verlassen, die dann aber überraschend nicht bewilligt wurden. Sie hatten sich auf eine neue Stelle gefreut – zum Teil waren sie dafür sogar umgezogen -, und jetzt müssen sie in der Jobsuche von vorn beginnen. Das sind persönliche Rückschläge, die man nicht klein reden darf.
\n\n\n\nNur: So tragisch die Kürzungen des BMBF für die einzelnen Betroffenen auch sind, ich kann nicht nachvollziehen, weshalb sie so ein Politikum sein sollen. Denn der Grund dahinter lässt sich in fünf Worten zusammenfassen: Es ist weniger Geld da. Und alle wissen das.
\n\n\n\nEin Ministerium hat zwar einen großen Haushalt und kann Prioritäten setzen. Insofern kann natürlich jede Projektleitung die Frage stellen: Warum gerade ich? Warum ausgerechnet mein Vorhaben? Aber das ändert nichts an ein paar harten Rahmenbedingungen: Wir steuern auf eine Rezession zu, die Inflation liegt bei zehn Prozent, und für die Bundeswehr werden 100 Mrd. € als Sondervermögen abgezweigt. Für praktisch alle Ressorts steht daher (im Vergleich zum Jahresbeginn) plötzlich weniger Geld zur Verfügung. Ein Ministerium, das darauf nicht mit einer Reduktion seiner Ausgaben reagiert, macht irgendwas falsch.
\n\n\n\nNun lautet der häufigste Vorwurf ans BMBF freilich nicht: „Ihr habt es versäumt, Geld herbeizuzaubern.“ Sondern: „Ihr habt nicht transparent dargelegt, weshalb genau diese Projekte storniert oder gekürzt wurden.“ Das mag stimmen. Auch die etwas hölzern formulierten Schreiben, die solche Hiobsbotschaften verkünden, spenden natürlich keinen echten Trost. Aber hätte eine vertiefte Begründung denn irgendwem in der Sache geholfen?
\n\n\n\nDenn selbst den Betroffenen ist eines bewusst: Ein aufwendiges wissenschaftsgeleitetes Verfahren hätte am Ende vielleicht die Auswahl der Projekte verändert, aber nicht das Gesamtergebnis. In ihren öffentlichen Stellungnahmen argumentierten die Forschenden, dass ihre Projekte gesellschaftlich wichtig seien, aber gleichzeitig ist ihnen vermutlich klar: Das gilt durchgehend für alle BMBF-Förderungen. Teils wurde die Praxis beklagt, dass neue Vorhaben kurz vor ihrem Start noch eine Finanzierungs-Absage erhielten, oder dass Projekten am Ende ihrer Laufzeit keine Verlängerung mehr gewährt wurde. Aber diese Entscheidung war in der Sache völlig richtig, denn die Alternative wäre gewesen, laufende Forschungsprojekte irgendwo auf halber Strecke abzusägen. Das will erst recht niemand.
\n\n\n\nAlso kommt jetzt der Punkt, um zu fragen: Was steckt wirklich dahinter? Woher kommt der Ärger, der sich gerade in der Wissenschaftslandschaft breitmacht?
\n\n\n\nDarauf gibt es zweierlei Antworten. Zum einen entlädt sich hier ein lange aufgestauter Frust über die öffentliche Bürokratie. Die BMBF-Kürzungen sind zwar der Anlass für diese Entladung, aber im Kern geht es vor allem um die kleinteilige, bornierte, völlig weltfremde Art, wie Forschung finanziert wird. Diesen Frust spürt man deutlich, wenn man sich Zitate aus dem Gastbeitrag von Bögelein, Strohmaier und Zucco herausgreift:
\n\n\n\n„Neben diesen organisatorischen und inhaltlichen Entscheidungen […] ergeben sich weitere strukturell gelagerte Probleme, die dringend adressiert werden müssen.“
\n\n\n\n„…die Zuwendungsbescheide erreichten die Hochschulen und Forschungseinrichtungen sehr kurzfristig, in manchen Fällen sogar erst nach Beginn des vereinbarten Projektbeginns“
\n\n\n\n„Wir fordern auch in BMBF-Förderlinien den Zugang zu den Kommentaren der Fachgutachten“
\n\n\n\n„Damit wird die akademische Laufbahn (und die individuelle Lebensplanung) unberechenbar.“
\n\n\n\nHier vermischt sich praktisch alles, was die öffentlich finanzierte Wissenschaft unattraktiv macht: ein Mangel an Jobsicherheit, Erbsenzählerei und ein Gefühl des Ausgeliefertseins gegenüber den Behörden. Das alles nahm man zwar wohl oder übel in Kauf, solange man vom BMBF frisches Geld bekam. Aber jetzt bringen die Kürzungen das Fass zum Überlaufen.
\n\n\n\nEs gibt außerdem noch einen zweiten, tiefer liegenden Grund, weshalb die aktuellen Kürzungen so viel Aufruhr verursachen: Wir haben nie gelernt, mit großer finanzieller Unsicherheit umzugehen.
\n\n\n\nUm keine Missverständnisse aufkommen zu lassen: Eine wissenschaftliche Laufbahn ist natürlich schon lange durchzogen von persönlichen Ungewissheiten. Regelmäßig stehen unsere Leute vor Fragen wie: „Wird meine Stelle verlängert?“ oder: „Bekommt mein Projekt die Anschlussfinanzierung?“ Aber dass der Staat aufgrund von Turbulenzen etwas nicht mehr finanzieren kann, was bereits fest eingeplant war, das kennen wir eher aus Reportagen über chaotische Schwellenländer. In Deutschland? Undenkbar.
\n\n\n\nEin Beispiel, wie sehr wir uns an die Verlässlichkeit der Ministerien gewöhnt haben: Als ich vor zehn Jahren als Forschungsreferent an der FU Berlin anfing, lernte ich, dass BMBF-Anträge eine zweistufige Auswahl durchlaufen: eine wissenschaftliche Begutachtung und ein formales Bewilligungsverfahren. Letzteres war reine Formsache, d.h. wer es zum Vollantrag schaffte, hatte nichts mehr zu befürchten. In den Schreiben der Projektträger stand zwar irgendwas von: „… weisen wir darauf hin, dass die Zuwendung unter dem Vorbehalt der Mittelverfügbarkeit steht.“ Aber das erschien uns als Blabla und hatte keinerlei Relevanz; die Mittel waren einfach immer verfügbar. Dass wir solche Sätze auf einmal wörtlich nehmen müssen, erklärt auch den aktuellen Aufruhr um die BMBF-Budgetkürzungen.
\n\n\n\nDie politische Zeitenwende hat jetzt also auch die Wissenschaft erreicht: Jahrzehntelang war frisches Geld das wichtigste forschungspolitische Gestaltungsinstrument der Politik; die Frage lautete immer: „Was bauen wir aus, und was nicht?“ Das ändert sich jetzt. Wenn die Prognosen stimmen, dann wird Bettina Stark-Watzinger die erste Ministerin seit langem sein, die hauptsächlich über die umgekehrte Entscheidung steuern muss: „Was führen wir fort, und was kürzen wir?“ Dieser Paradigmenwechsel verstört uns auch deshalb so, weil wir (zumindest in westdeutscher Perspektive) seit dem Zweiten Weltkrieg nur einen Zuwachs an Wohlstand kannten. Es gab natürlich unterschiedlich starke Wachstumsphasen – die meisten Helmholtz-Zentren wurden beispielsweise in den boomenden 1960ern aufgebaut -, aber ein Schrumpfen des Systems war undenkbar. Kennen Sie irgendeine Uni, die mal aus Geldmangel dicht gemacht wurde? Ich nicht.
\n\n\n\nAls die Prognosen in den letzten Jahren düsterer wurden, fühlten wir uns vielleicht noch eine Weile sicher, weil der öffentliche Sektor vor Turbulenzen besser geschützt war als die freie Wirtschaft. Aber jetzt, wo die Klimakrise, die Pandemie, der Ukraine-Krieg und auch noch eine hohe Inflation zusammenkommen, wird uns gerade klar: Wir haben den vorläufigen Höhepunkt unseres gesellschaftlichen Wohlstands überschritten. Das ist in etwa so, als ob ein Achterbahnwagen, der stetig bergauf gezogen wurde, seinen höchsten Punkt erreicht und über die Kante kippt. Wir spüren jetzt eine deutliche Beschleunigung nach unten, und wir reagieren darauf auch genau so wie in der Achterbahn: mit lautem Schreien.
\n\n\n\nTrotzdem darf in einem guten System kein Chaos ausbrechen. Wenn Unsicherheit tatsächlich das neue Normal ist, wird die wichtigste Aufgabe für das BMBF darin bestehen, zwei Bereiche zu unterscheiden: Einen Garantiebereich (dessen Finanzierung nicht in Frage steht) und einen „Je-nach-Möglichkeiten“-Bereich. Der Garantiebereich wird schon allein deshalb unverzichtbar sein, um Großprojekte wie den XFEL zu ermöglichen, die viele Jahre für den Aufbau brauchen. Solche gigantischen Baustellen wären zum Scheitern verurteilt, wenn ihnen jahresweise mal mehr, mal weniger, mal kein Geld zur Verfügung stünde. Also brauchen wir finanzielle Sicherheitskorridore, die auch bei heftigen Schwankungen des Ministeriumsbudgets nicht angetastet werden. Für alles, was außerhalb dieses Bereichs liegt, brauchen wir wiederum eine möglichst frühe Kommunikation, ob die jeweilige Finanzierung gesichert ist oder nicht: für Projektförderungen, für bundesfinanzierte Einrichtungen, für internationale Kooperationen. Dann weiß jeder, woran er ist.
\n\n\n\nZudem müssen wir dafür sorgen, dass die sozialen Konsequenzen dieser finanziellen Unsicherheit gut abgefedert werden – und hier meine ich ganz bewusst das ganze System Wissenschaft, weil neben den Behörden auch die Hochschulen und Forschungseinrichtungen einen Beitrag zur sozialen Absicherung leisten können. Sinnvoll könnte zum Beispiel die Einrichtung von Härtefall-Fonds sein, oder von Ausfallbürgschaften. Längere Vorlaufzeiten vor den Projekten wurden auch schon ins Spiel gebracht, so dass bei einem Ausfall der Gelder alle Beteiligten die Möglichkeit hätten, sich rechtzeitig neu zu orientieren. All dies sind Optionen, die jetzt getestet werden müssen.
\n\n\n\nIn jedem Fall sollten wir uns wohl angewöhnen, das Kleingedruckte in den Projektträger-Schreiben wieder ernst zu nehmen.
\n","tags":["Allgemein"],"language":"de","references":[],"blog_id":"468ap65","blog":{"id":"468ap65","title":"Behind the Science","description":"Ein Blog aus dem Admin-Bereich der Wissenschaft","language":"de","favicon":null,"feed_url":"https://ulirockenbauch.blog/feed/atom/","home_page_url":"https://ulirockenbauch.blog","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:28:16+00:00","indexed_at":"2023-04-21","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"WordPress (.com)","category":"Social Sciences","prefix":"10.59350","modified_at":"2023-04-13T17:48:41+00:00","version":"https://jsonfeed.org/version/1.1","backlog":true}}]' recorded_at: Fri, 16 Jun 2023 06:11:32 GMT - request: method: get uri: https://rogue-scholar.org/api/posts/not_indexed body: encoding: UTF-8 string: '' headers: Connection: - close Host: - rogue-scholar.org User-Agent: - http.rb/5.1.1 response: status: code: 200 message: OK headers: Age: - '0' Cache-Control: - public, max-age=0, must-revalidate Content-Length: - '4252' Content-Type: - application/json; charset=utf-8 Date: - Sat, 01 Jul 2023 18:57:43 GMT Etag: - '"ll4w4ic9co3a0"' Server: - Vercel Strict-Transport-Security: - max-age=63072000 X-Matched-Path: - "/api/posts/[[...params]]" X-Vercel-Cache: - MISS X-Vercel-Id: - fra1::iad1::sx58r-1688237863458-f29812df0cd7 Connection: - close body: encoding: UTF-8 string: '[{"id":"https://doi.org/10.53731/r294649-6f79289-8cw1w","uuid":"b1038a3f-c5aa-4a68-97cb-5c68b27942f7","url":"https://blog.front-matter.io/posts/re3data-org-registry-of-research-data-repositories-launched","title":"re3data.org: registry of research data repositories launched","summary":"Earlier this week re3data.org – the Registry of Research Data Repositories – officially launched. The registry is nicely described in a preprint also published this week.re3data.org offers researchers, funding organizations, libraries and publishers and overview of the heterogeneous research data repository landscape. Information icons help researchers to identify an adequate repository for the storage and reuse of their data.I really like re3data.org, and that is not because I personally know...","date_published":"2013-06-01T00:00:00+00:00","date_modified":"2023-06-29T18:04:53+00:00","date_indexed":"2023-06-28T18:13:09+00:00","authors":[{"url":"https://orcid.org/0000-0003-1419-2405","name":"Martin Fenner"}],"image":"https://blog.front-matter.io/content/images/2022/08/figure2.png","tags":["News"],"language":"en","references":[{"doi":"https://doi.org/10.7287/peerj.preprints.21v1","key":"ref1"},{"doi":"https://doi.org/10.2312/re3.002","key":"ref2"}],"blog_id":"f0m0e38","blog":{"id":"f0m0e38","title":"Front Matter","description":"The Front Matter Blog covers the intersection of science and technology since 2007.","language":"en","favicon":"https://blog.front-matter.io/favicon.png","feed_url":"https://blog.front-matter.io/atom-complete/","home_page_url":"https://blog.front-matter.io","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:20:33+00:00","indexed_at":"2023-01-02","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.52","category":"Engineering and Technology","prefix":"10.53731","modified_at":"2023-06-29T18:04:53+00:00","version":"https://jsonfeed.org/version/1.1","backlog":false,"authors":null,"current_feed_url":"https://blog.front-matter.io/atom/","expired":null}},{"id":"https://doi.org/10.53731/r294649-6f79289-8cw3h","uuid":"e4e13c41-5991-41b6-811b-6271510c1b0e","url":"https://blog.front-matter.io/posts/increased-cancer-risk-following-computed-tomography-scans","title":"Increased cancer risk following computed tomography scans","summary":"Two papers (this and this) and an editorial in the latest issue of Archives of Internal Medicine examine the cancer risks associated with the use of computed tomography (CT) examinations.1Ionizing radiation increases the risk for developing cancer. There is direct evidence from atomic bomb survivors in Japan in 1945 and from nuclear accidents such as the one in Chernobyl in 1986. There are no studies directly demonstrating an increased cancer risk from the diagnostic use of X-rays (either...","date_published":"2009-12-20T00:00:00+00:00","date_modified":"2023-06-29T17:49:57+00:00","date_indexed":"2023-06-28T18:13:09+00:00","authors":[{"url":"https://orcid.org/0000-0003-1419-2405","name":"Martin Fenner"}],"image":"https://blog.front-matter.io/content/images/2022/08/ioi90119f2.jpg","tags":["Feature"],"language":"en","references":[{"doi":"https://doi.org/10.1001/archinternmed.2009.427","key":"ref1"},{"doi":"https://doi.org/10.1001/archinternmed.2009.440","key":"ref2"},{"doi":"https://doi.org/10.1001/archinternmed.2009.453","key":"ref3"}],"blog_id":"f0m0e38","blog":{"id":"f0m0e38","title":"Front Matter","description":"The Front Matter Blog covers the intersection of science and technology since 2007.","language":"en","favicon":"https://blog.front-matter.io/favicon.png","feed_url":"https://blog.front-matter.io/atom-complete/","home_page_url":"https://blog.front-matter.io","user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa","created_at":"2023-05-31T07:20:33+00:00","indexed_at":"2023-01-02","feed_format":"application/atom+xml","license":"https://creativecommons.org/licenses/by/4.0/legalcode","generator":"Ghost 5.52","category":"Engineering and Technology","prefix":"10.53731","modified_at":"2023-06-29T18:04:53+00:00","version":"https://jsonfeed.org/version/1.1","backlog":false,"authors":null,"current_feed_url":"https://blog.front-matter.io/atom/","expired":null}}]' recorded_at: Sat, 01 Jul 2023 18:57:43 GMT recorded_with: VCR 6.2.0