10 August 2017

Me, 2017

My selfie game is not strong. And I generally feel super scruffy when I am out at the beach collecting. But I like how this pic of me in the field Tuesday came out.

I was scruffy, but the day was gorgeous. I was very lucky to be out on the beach that morning.

But this is a good opportunity to draw your attention to Paige Jarreau’s project on scientist selfies. There is an Instagram account of other scientist selfies, a Flickr collection, and a Twitter hashtag, #ScientistsWhoSelfie (of which the picture above is one). And you can support her project on Experiment!

External links

04 August 2017

The Journal Loyalty Index

Happy coincidences make good prompts for blog posts. Earlier this week, I returned a review of a manuscript for a journal. It was one I had never reviewed for before, so I added it to a list of reviews I keep in my CV. It was the thirty-ninth journal I had reviewed a paper for.

I was curious if this was a particularly high number. People often talk about how many reviews they do (or are asked to do), but not how many different journals are asking. I asked on Twitter. 39 different journals does seem to be on the high end:

Coincidentally, Stephen Heard published this post about how many journals he had published in, which he, following the example of twitchers, called a “life list.” This interested me, because I deliberately try to publish in as many different journals as possible. It’s a science macho thing: I want to see how many different editors I can fool... I mean, convince.

As with reviewing, I knew about how many papers I had published, but not the journal distribution. Stephen had proposed the Journal Diversity Index (JDI): the number of journals divided by the number of papers. Everyone has a JDI of 1.0 when they publish their first paper, and it can only decline from there.

My JDI is higher than Stephen’s, but several commenters on his post beat us both.

But this got me thinking about the two things together. What’s the relationship between the journals we publish in, and the journals we review for? I expected there to be substantial overlap:

After all, the reasons you submit to a journal and review for a journal are the same: you have expertise in that area. Moreover, editors look at their lists of authors when looking for reviewers, which I would expect would lead to greater overlap between the journals you publish in and the journals you review for.

I mean, it would be weird if there was almost no connection between your papers and your reviews, right?

But when I crunched the numbers, I was a bit surprised. The three sets were almost equal in size.

There are 18 journals I have published in, but have never reviewed for.

There are 15 journals for which I have both published and reviewed papers. What I was expecting to be the biggest category turned out to be the smallest.

There are 24 journals I have reviewed papers for, but never published in. That the “review only” part of the Venn diagram is bigger than the “publish only” part of the diagram doesn’t surprise me, because my online friends on editorial boards are constantly talking about how hard it is to find reviewers for manuscripts.

Following Stephen’s example, we can make a simple index: the number of journals you have both published in and reviewed for (B), divided by the total number of journals whose editors you have interacted with (E). We’ll call it the Journal Loyalty Index (JLI).

There are 15 journals I have both published in and reviewed for, out of 57 journals I have interacted with, making my JLI (15/57) a mere 0.26.
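The JLI arithmetic can be sketched in a few lines of Python, using the counts above (the variable names are mine, just for illustration):

```python
# Journal Loyalty Index (JLI): journals both published in and reviewed for (B),
# divided by all journals interacted with (E). Counts are from the post.

published_only = 18   # journals published in, never reviewed for
both = 15             # journals both published in and reviewed for (B)
reviewed_only = 24    # journals reviewed for, never published in

total_journals = published_only + both + reviewed_only  # E = 57
jli = both / total_journals                             # B / E

print(total_journals)   # 57
print(round(jli, 2))    # 0.26
```

In practice you could compute the three counts from two sets of journal names (published in, reviewed for) with set intersection and difference.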

I am, of course, curious how loyal other researchers are to their journals.

Related posts

Peer review pariah, update

External links

My journal life list
Journal life list

26 July 2017

World’s worst... scientific papers

I have a new project to share! Just for fun, I spent the last few days making another little ebook, similar to what I did with Presentation Tips.

Stinging the Predators is a collection of deliberately horrible papers that were created to punk predatory journals. There have been six such pranks in the last two years. The most recent, which sort of triggered this project, was on Neuroskeptic’s blog last Saturday. Thinking about all the “sting” papers I’d seen over the years, it occurred to me that fake papers were practically their own emerging genre. And what better way to draw attention to a genre than with a curated anthology?

I collected all the sting papers I knew about. There turned out to be thirteen, and collecting them convinced me that it was useful to have all these examples in one place. Each paper has a short new introduction, and links to articles about it. I rounded off the collection with some short essays, some of which appeared here on the blog before, and a couple of which were new.

Once I got started with this project, I couldn’t let it go. I promised myself I would only work on it for a few days, and then get back to writing that could be published by other people.

The ebook is available on figshare and on DoctorZen.net.

Update, 28 July 2017: After I posted the first version, I was reminded of another sting paper on Google Plus (see? It’s not a ghost town). I found another abstract after that. I decided to make a quick turnaround from version 1 to 2. There are now fifteen entries in this anthology.

The easy to remember link is http://bit.ly/StingPred. (Capitalization matters! “stingpred” will not work.)

Update, 31 July 2017: I know, two revisions in less than a week? I learned of another sting paper, and another conference abstract, bringing the collection to 169 pages of mostly rubbish. (Some will probably say all of it is rubbish.)

Update, 7 August 2017: This little project is featured in Times Higher Education today and the Improbable Research blog!

External links

Stinging the Predators on Figshare
Predatory Journals Hit By ‘Star Wars’ Sting
Worst ever research papers revealed
“All these papers were deliberately bad”

18 July 2017

Rough rides at tenure time

Yesterday, I wrote about Dr. Becca’s tumultuous ride through her tenure process. (And thank you to all who read, liked, tweeted, shared, and commented!) Becca has done many early career scientists a favour by documenting this difficult process.

I’ve talked from time to time about how important it is to share our failures. But we particularly don’t like to draw attention to issues that came up at tenure time. I wrote about my problems with tenure after I squeaked through the process. I was not writing under a pseudonym, and I didn’t blog about the process much while I was going through it. I was very mad about it then. I don’t get visibly upset talking about it like I used to, but I can’t say I’ve made peace with it. With the better part of a decade between then and now, I can see why I got a hard time at tenure, but I still feel I was not treated well.

Terry McGlynn had an even rougher time. He was denied tenure, which he wrote about extensively in the Chronicle of Higher Education.

But what I haven’t done specifically is talk about what came afterwards for me. A lot of people think that after academics get tenure, they slack off and take it easy.

I got better.

After tenure, I finally had everything in place. The gears were turning, and the researcher in me started coming out much more consistently, with more original, data-driven papers. And I took to heart the adage, “Don’t let the perfect be the enemy of the good.” I lowered my standards and stopped waiting for projects to get that one last bit of data. And stuff started to happen for me. I became one of the most published faculty in the department.

I am not trying to brag here. I know many people would look at my research track record and deem it second rate (at best). “Sand crabs, Zen? Nobody cares about your sand crabs.”

The moral of the story? It’s to remind people that trouble, even at this critical point in an academic career, does not have to cripple the rest of your career.

And that publishing well is the best revenge.

Related posts

Now part of the problem
Low points
Nevertheless, she persisted

External links

Coming out of the closet, tenure denial edition

17 July 2017

Nevertheless, she persisted

Sometimes, you get to watch a friend win one. And that win is practically as sweet as one of your own.

Friend of the blog Dr. Becca has been getting a rough ride at tenure time. Until today:


First things first: Congratulations, Becca! I am so happy for you! Wooo!

Other things: Becca’s win is important beyond just the obvious significance for her and her students and collaborators. It needs to be seen and discussed widely for two reasons.

First, her case needs to be talked about because the grief she was getting was all about one thing: money. Scratch that: it was because she didn’t get the right kind of money. Her job was being threatened because she hadn’t brought in a stand-alone research grant from the National Institutes of Health (an NIH R01, to use the jargon).

Becca’s situation is the nightmare scenario that many early career scientists are staring down. The NIH budget is flat, applications are up, and most recognize that the success rate in applying for NIH grants is now so low that many perfectly good projects go unfunded.

In other words, getting a grant has a healthy dose of luck to it, and no amount of granting savvy can ensure you will pull down any particular grant. Lack of a grant does not mean your colleagues think you’re doing crummy science.

Becca’s situation shows how dire and destructive this habit of “outsourcing” tenure decisions to granting agencies has become. Professors and administrators need to talk about this and adjust their expectations to line up with reality, and not expect to get blood from a stone, no matter how much you “incentivize” the stone.

This is something that has been buzzing in the background for a long time, but the situation has worsened in the last 6-7 years. Academics are used to stability at much longer time scales and aren’t prepared to adjust to the ground shifting underfoot in the time it takes a professor to go from hiring to tenure review.

Second, Becca’s case matters more generally than her alone because, as Neil Gaiman (channeling G.K. Chesterton) says:

Fairy tales are more than true: not because they tell us that dragons exist, but because they tell us that dragons can be beaten.

Becca shows that you can fight the dragons of university administration, and you can win. And a lot of early career academics need to know that. Because dragons are big and scary and it is easy to give up and concede the battle.

Becca was confronted with career dragons.

Nevertheless, she persisted.

Related posts

The secret life of a banner
The secret life of a banner, part 2

16 July 2017

The future is female

This year has seen something special. There’s been a hunger for new heroes. You can see it in these projects.

Hidden Figures. It challenged the pop culture juggernaut Star Wars at the box office, and got Oscar nominations, too.

Wonder Woman. The biggest hit of the summer, still going strong.

And now... the thirteenth Doctor.

It’s going to be fantastic.

Congratulations, Jodie Whittaker! I look forward to seeing you pilot the TARDIS and fight the monsters!

Added: Reaction to the latter.

13 July 2017

Five years for seven points of data

I was very excited yesterday. I got to add another data point to this graph:

It’s taken me five years to get those seven data points. Five. Years.

It’s not for lack of trying. Each data point depends on me catching a rare event. There’s a limited amount I can do to try to catch those rare events, so this graph is building up slowly. It’s not quite a pitch-drop experiment, but I am seriously wondering if I am ever going to have enough data that I will feel confident about publishing it.

I share this because there are a lot of people fretting about the speed of science these days. People want fast review and fast publication. Some are turning to pre-prints for greater speed. But sometimes, try as we might, some questions force you to take a long, slow slog to get to the answer.