Isn't it weird that acceptance rate is a thing we look for in a conference/journal?
Publishing a paper should not be competitive like "we take the top 20% of papers"; it should be "we accept all papers that are good enough according to our standards". The acceptance rate could then be very low or very high, depending on the quality of the papers submitted.
@solalnathan @academicchatter @phdstudents I wonder if it's all due to a lack of substantive understanding and expertise (or concern) at the level of decision-makers. "Top x% is excellent!", whether it's accepted papers or grants awarded, is a totally substance-free metric. Any monkey can apply it and claim they're measuring exceptionality (just not what kind, exactly).
I would guess there is some competitive pressure imposed by neoliberalism, yes, but also some historical debt.
When you run a print journal you cannot accept an unlimited number of papers, so you create artificial competition (a fixed number of slots rather than a percentage, in this case). But the cost of hosting a PDF on a web server is now marginal and can no longer justify that scarcity.
It's also interesting to note that the rejection/acceptance rate has no correlation with the Journal Impact Factor™ (though the JIF is itself a rather silly and dubious calculation, so I'm less certain what the non-correlation really tells us. Notable nonetheless, particularly for JIF-worshippers, of which I am not one).
@solalnathan @TEG @academicchatter @phdstudents Sadly, LLMs have made paper mills overwhelmingly efficient. Even before, the imbalance between authors and reviewers constrained the number of papers that could be carefully evaluated; now it is getting worse. In this context, the acceptance rate makes even less sense as a proxy for reviewing quality and selectivity, since cheap submissions drive it artificially down. But I believe the whole process is no longer sustainable. Alternatives, anyone?