It's set to fade in from 0 opacity, for some sort of unnecessary "ooh look it's fancy" effect. My guess is that if you check the console you'll find that it hit some exception before it completed its little fade-in effect.
It seems to be caused by the FediAct addon, which doesn't make any sense. Disabling the addon fixes the page without needing a reload, so it must be a CSS issue rather than an exception. Edit: it seems the addon is overriding the fadeIn keyframe, but it looks fixed in the github version.
Might be worth reaching out to the addon authors. Hard to say whether the page or the addon is at fault, but they might be interested to know it even if it's the page's fault.
It looks like it's the addon's fault, and has already been fixed in the github version. It's also been abandoned, so it's probably not worth keeping around anyway.
I think that's the overlap between the dense region of PINs that start with 11, 12, or 13 (similar to the dense regions that start with 21, 22, 23, 31, etc), and the dense square region of month+day dates.
I got a PIN assigned by my bank back in the 1980s, and it is in that range. I always assumed it was random, because how easy is it to generate a 4-digit random number? But maybe they gave out PINs more like safe combinations. I don't think you could change them back then, either.
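The "dense square region of month+day dates" is easy to sanity-check: just count the 4-digit PINs that parse as a valid MMDD date. A quick sketch (using year 2000 so Feb 29 counts):

```python
from datetime import datetime

# Count 4-digit PINs that parse as a valid MMDD date --
# these form the dense "date" block on the heatmap.
date_pins = []
for pin in range(10000):
    mm, dd = divmod(pin, 100)
    try:
        datetime(2000, mm, dd)  # leap year, so 0229 counts too
        date_pins.append(f"{pin:04d}")
    except ValueError:
        pass

print(len(date_pins))  # 366 valid month+day combinations
```

So only 366 of the 10,000 possible PINs are dates, yet they soak up a disproportionate share of actual PIN choices, which is what makes that square light up.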
Just eyeballing the linked image... it looks like most of them agree?
The bias almost certainly exists, according to nearly all of the analyses here. They just disagree on its magnitude, and for the most part they don't disagree by much.

Why have 4 of the studies seemingly not used error bars at all‽ Like I get that different analyses will arrive at different results, but they should always have error bars, right?
Scientists who fiddle around like this — just about all of them do, Simonsohn told me — aren’t usually committing fraud, nor are they intending to. They’re just falling prey to natural human biases that lead them to tip the scales and set up studies to produce false-positive results.
Since publishing novel results can garner a scientist rewards such as tenure and jobs, there’s ample incentive to p-hack.
I mean really, claiming they aren't committing fraud and then in the very next paragraph describing their motivation... to commit fraud.
Never mind the numerous cases of published papers turning out to be bunk, and that something like 80% of published science isn't reproducible... which is part of what publishing is supposed to enable.
I'm no expert on statistics, but I know enough that repeated experiments should not yield wildly different results unless: 1) the phenomenon under observation is extremely subtle so results are getting lost in noise, 2) the experiments were performed incorrectly, or 3) the results aren't wildly divergent after all.
The whole point of statistics is to extract subtle signals from noise; if you're getting wildly different results, the problem is that you're under-powered.
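A toy simulation of the under-powered point: the same true effect, but small samples scatter wildly while large samples converge. The effect size (0.2) and sample sizes here are made up purely for illustration.

```python
import random
import statistics

random.seed(0)

def estimate_effect(n, true_effect=0.2):
    """Mean difference between a treatment and a control sample,
    each of size n, with unit-variance Gaussian noise."""
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treat) - statistics.mean(ctrl)

# Repeat each "study" 1000 times and look at the spread of estimates.
for n in (20, 2000):
    estimates = [estimate_effect(n) for _ in range(1000)]
    print(f"n={n}: spread of estimates (sd) = {statistics.stdev(estimates):.3f}")
```

With n=20 the replications land all over the place; with n=2000 they cluster tightly around the true effect. Same phenomenon, same method, just different power.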
Thanks for taking the time to post these links, just letting you know your efforts have benefited at least one person who's gonna enjoy reading this.
What do you mean, all different? Most are essentially the same. The first 4 are a bit low and the last 3 a bit high, but the last 2 and the first are also extremely wide, so irrelevant anyway. Everything else agrees, most with more than 99% confidence, with only slight differences in the absolute values.
9 of the teams reaching a different conclusion is a pretty large group. Nearly a third of the teams, using what I assume are legitimate methods, disagree with the findings of the other 20 teams.
Sure, not all teams disagree, but a lot do. So the issue is whether or not the current research paradigm correctly answers "subjective" questions such as these.
If we only look at those with p < 0.05 (green) and a 95% confidence interval, then there are 17 teams left. And they all(!) agree with more than 95% confidence.
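The significance filter is just a check of whether each team's 95% confidence interval excludes an odds ratio of 1. A sketch with hypothetical numbers (these are not the study's real estimates) shows how two teams with the same point estimate can land on opposite sides of p < 0.05:

```python
import math

def ci95(log_or, se):
    """Normal-approximation 95% CI for an odds ratio, given a
    log-odds-ratio estimate and its standard error."""
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# Hypothetical team results (log OR, SE) -- illustration only.
teams = {"team A": (0.3, 0.10), "team B": (0.3, 0.25)}
for name, (b, se) in teams.items():
    lo, hi = ci95(b, se)
    significant = lo > 1 or hi < 1  # CI excludes OR = 1
    print(f"{name}: CI = ({lo:.2f}, {hi:.2f}), p < 0.05: {significant}")
```

Both teams estimate the same effect, but team B's wider interval crosses 1, so it shows up as "non-significant" even though it doesn't actually disagree with team A.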
So ignore all non-significant results? What's to say those methods produce findings closer to the truth than the methods with non-significant results?
The issue is that so many seemingly legitimate methods produce different findings with the same data.
Data Is Beautiful