If A doesn’t prove B is true, then A proving B doesn’t matter anyway. An assumption is being made through the “then” part of the argument, which is why this would even be considered circular logic to begin with: once the first assumption is in place, the second assumption follows.

I agree that valid circular arguments can be made, but technically here you are pre-granting yourself an out through the If statement. If you change the If to a When, it becomes harder to escape the fallacy. So let’s change the setup to “When A” while still granting the assumption “then B”: when A is true, B follows, and if B is true, A follows.

Here we arrive at an issue with the assumed status of “then B” being true. This, again, is why most circular logic is invalid: the “then” step has no direct proof behind it, so the further step from B back to A is also set up to be possibly false.

In reality this second step is even weaker for a careful rational analyst, because the first assumption was never proven to begin with. By combining these two assumptions you have created a third, middle assumption between them: when A is true then (assumption) B is true, then (assumption) if B is true then (assumption) A is true. Hence the unproven “then” is a large part of the problem with circular logic, in addition to the When-versus-If status. Using If as a mere possibility, rather than as an if-and-only-if, negates the circular reasoning.

Using When instead of If, the argument would most certainly be circular at this point, but not necessarily valid. The circular argument isn’t valid unless you can prove that what makes A true is B and what makes B true is A. Here’s why: A is granted as true through already-established proof once the If changes to When, and at that moment you don’t need B to prove A’s validity anyway. That puts us right back at “A is true when A is true”; but why is A true? An outside variable must be established for the When prior to A.

However, suppose we follow the circular terms as first given: A is true because of B, coming after the “if A then B” sequence. This can actually hold up if and only if you provide proof, remove the (assumption) status from the “then” steps, and insert a variable for the When in exchanging it for the If. Even then, A would in that regard be true merely because of A, by way of C and the change of If into When. The B variant, or any other variable, can of course still exist in this cycle, but it wouldn’t really matter. Therefore A by itself is a circular theory that remains invalid until you use an extra outside variable during the When-for-If exchange. Otherwise A is still self-defined, and “A is true because of A” doesn’t hold in an argumentative process. That is no different from saying “when (or if) A is A, then A is A.” Obviously A is A, but what makes A would be C.

It must be that A is true because of C, and when A then B, and when B then A; but it starts with C. When there is a C, the chain is not totally circular in theory, because the line still has a start and a stop point. This is why you can’t start with just If or When: you must have an actual placeholder, which becomes the outside variant C and turns If into When (C).

When C, then A and then B, and then when B, then A. This is technically the only truly valid circular argument one could make. The problem is that people don’t start with the outside variant and instead lean on the “then” status by way of assumptions. Whereas “when C then A, and then A” (remember, any other variable doesn’t matter, because it all comes back to A eventually) is by and large not circular at all, because of the outermost variant C.
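The “When C” structure can be sketched in propositional logic (this is my formalization of the text’s C, A, and B, not the author’s own notation):

```latex
\[
\begin{aligned}
&\text{1. } C && \text{(outside variable, established by independent proof)}\\
&\text{2. } C \rightarrow A && \text{(when } C \text{, then } A\text{)}\\
&\text{3. } A \rightarrow B && \text{(when } A \text{, then } B\text{)}\\
&\text{4. } B \rightarrow A && \text{(when } B \text{, then } A\text{)}\\
&\text{5. } A && \text{(from 1 and 2, modus ponens)}\\
&\text{6. } B && \text{(from 5 and 3, modus ponens)}
\end{aligned}
\]
```

Note that step 4 loops B back to A, yet the loop is harmless: A was already derived at step 5 from C, not from B, so the argument has the start point the text insists on.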

In closing: yes, you can make circular arguments, but the If mode here is actually what produces something that isn’t circular at all. The If must be changed to (When), and at that moment the argument is circular but not valid. When the When status is utilized, the variant C is supplied as the outside value before the circular argument is ever made, and proof is given that A does in fact lead back to A (just not from outside A), then the circular path through B and back to A can be valid; but for simplicity’s sake it doesn’t matter. All that does is account for the extra variables that keep perpetuating A, when it is really C that began the proof. However, we often don’t know what the C variant is, which is why we resort to false circular logic and can’t escape it.

Perhaps what I should have said is that you will often have data with a sampling distribution that converges only slowly on a normal distribution, relative to the typical size of the samples. While the long-run nature of CIs is still preserved, this does tend to affect accuracy (at least when seen from a “CIs are computationally effective credible intervals” perspective).

In many sciences CIs are constructed for population or sample means, and by the central limit theorem the sampling distribution of the mean is approximately normal regardless of the underlying distribution. So you may have a binomial or power-law distribution or any irregular distribution you want: the distribution of the sample mean will still be approximately normal, with its spread given by the standard error of the mean (SEM), and the intervals will hold. This is easily demonstrated through simple simulations.
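A quick sketch of the kind of simulation meant here, assuming a strongly skewed exponential population and a plain normal-theory interval (the population, sample size, and seed are my illustrative choices, not the commenter’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 10_000
true_mean = 1.0  # mean of Exponential(1), a strongly right-skewed population

covered = 0
for _ in range(reps):
    sample = rng.exponential(true_mean, size=n)
    sem = sample.std(ddof=1) / np.sqrt(n)           # standard error of the mean
    lo = sample.mean() - 1.96 * sem                 # normal-theory 95% CI
    hi = sample.mean() + 1.96 * sem
    covered += lo <= true_mean <= hi

# Long-run coverage lands close to (typically slightly below) the nominal .95,
# despite the skewed population.
print(covered / reps)
```

With larger n the coverage creeps closer to .95, which is the slow-convergence point the previous comment was making.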

Therefore, the simplest CIs known are much more applicable than many people think.

https://osf.io/preprints/bitss/fzpcy/

Should it be useful in some way or form, and/or for your possible entertainment, I tried to get attention for this one final, perhaps somewhat desperate, time here (just before the preprint by Hardwicke & Ioannidis was published):

I am done with it all, but I hope you will (continue to) keep a critical eye on things. Keep it open, keep it real. Thank you for all your efforts trying to help improve Psychological Science.

Kind regards,

Alexander A. Aarts

https://osf.io/eqbas/

Ps. Love your tagline “I am wrong most of the time.” I try to keep that in mind myself.

If I understand Harry’s calcs, they assume the typical p-hacker will respond to a change from .05 to .005 by increasing their n to maintain power. The assumption, then, is that moving to p < .005 would not discourage p-hacking practices (or may even encourage them).

My intuition is that the "typical" p-hacker is someone who is not that sophisticated, and quite possibly has little understanding of power to begin with. That is, I think the typical p-hacker is more the bumbling fool than the deliberate charlatan. And if that's true, expecting them to double their n in response to a change in alpha may be unfounded. They might instead focus their research a bit better on more valid questions, or look for safer options like joining the "let's replicate everything" crowd as a way to get published.

I think anyone clever about stats prefers to go on about how unclever everyone else is rather than use their 'powers' for evil. 😉 The idea that there are these diabolically clever p-hackers out there prepared to game the system no matter what strikes me as implausible.

And if I'm right about who the p-hackers are, the false-positive-rate (FPR) advantage of a lower alpha is greater when power is low, making it fairly sensitive to the practices of people who are ignorant about power.

Now, one could argue that p-hackers are actually quite sophisticated indeed and are deliberately setting out to game the system by running complex designs with umpteen variables, low 'real' power, and then deliberately failing to adjust for multiple comparisons. And maybe there are some out there doing that. But I don't think it's a large number, maybe 5% tops, and certainly nowhere near 15%. I suppose you could argue in response that a lot ARE doing that, though they aren't doing it with diabolical intentions but rather exactly because they are bumbling, and I admit I wouldn't have a good riposte to that.

But overall, the reason I think the % p-hacked number is smaller relates to my sense of the proportion of the replicability problem that arises from p-hacking vs. the proportion that arises from other causes. I haven't looked at this too closely myself, but by all accounts there are a lot of underpowered studies out there, some of them massively underpowered. Lack of power, combined with p<.05, could well alone account for the vast majority of failures to replicate.

Interested to hear what you think.

If I am using a procedure that is right 95% of the time (when its assumptions are met), why is it flawed to say that, without any additional knowledge, the probability that a specific application of this procedure is right is also 95%?

(I think that’s also what Jan asked in the sixth paragraph of his comment…)

We also send every faculty job candidate out to lunch with the graduate students, and then ask the students for a report. I have been in this department a long time and I will say, no matter what the candidate’s credentials are, no matter where they’ve published or who they know, if the graduate students instantly dislike them, *don’t hire them.* It never works out well. (Yes, we’ve done this. Repeatedly. But I will never again vote for such a candidate.)

That aside, sympathies on the job hunt. If it’s any comfort, applying for grants is much the same…. Please bear in mind that it’s *not about you.* It’s about too many candidates for too few positions/dollars. The field is asking you to play musical chairs and then blaming you if you don’t sit down, but obviously someone won’t get to sit down. The real problem is that there are not enough chairs.

I also like the grad school application advice to somehow figure out in advance (and in depth) what kind of person your potential advisor is and what kind of atmosphere the department has. As if unhappy grad students are willing to suddenly open up to a total stranger who comes for a brief visit.
