Stops: CMS sees an excess over two sigma in their search for top squarks, see page 3, Figure 2, lower left. \(600\GeV\) stops may be "indicated" by that picture.



Adam Falkowski often acts as a reasonable (and educated) man. But I think that his title
CMS: Higgs to mu tau is going away
is a typical example of the insane exaggeration or distortion that I would expect from unethical journalistic hyenas, not from a particle physicist.

What's going on? Both ATLAS and CMS have shown some weak hints of a possible "flavor violation", namely the possible decay of the \(125\GeV\) Higgs boson to a mixed-flavor lepton pair\[

h \to \mu^\pm \tau^\mp

\] Note that the muon and the tau are "similar" but, so far, we've always created a muon-antimuon or tau-antitau pair. The individual lepton numbers \(L_e, L_\mu, L_\tau\) for the generations have been preserved. And the Standard Model makes such a state of affairs natural (although one may predict some really tiny flavor-violating processes even within the Standard Model).

Because the muon of one sign is combined with the tau of the other sign – with a particle from a different generation of leptons – the process above, if possible, would be one of the so-called (and so far unseen) flavor-violating processes.




ATLAS and CMS have seen excesses in the 2012 run. They have large error margins, so nothing is conclusive at all, but the branching ratio (the fraction of Higgses that decay according to this template) for \(h\to \mu\tau\) was measured to be "somewhat positive" by both ATLAS and CMS.




In 2012, ATLAS and CMS had \[

\eq{
{\rm ATLAS:} & B(h\to \tau\mu) = 0.53\%\pm 0.51\%\\
{\rm CMS:} & B(h\to \tau\mu) = 0.84\%\pm 0.37\%
}

\] which are 1-sigma and 2.3-sigma excesses, respectively, combining to 2.5 sigma or so (that's Falkowski's figure). Now, in the 2015 dataset, which is six times smaller or so, CMS found\[

{\rm CMS:}\quad B(h\to \tau\mu) = -0.76\%\pm 0.81\%

\] The mean value of the branching ratio is reported to be negative. It's a sign of a deficit – except that we know that the branching ratio cannot be negative. So a treatment that acknowledges the asymmetry of the distribution and of the error margins would probably be highly appropriate here.
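
As a minimal sketch of such an asymmetric treatment (my own illustration, not anything CMS published): impose the physical constraint \(B\geq 0\) with a flat prior, so the Gaussian likelihood gets truncated at zero and renormalized.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical illustration: flat prior on B >= 0 times the Gaussian
# likelihood reported by CMS in 2015 (all numbers in percent).
mu, sigma = -0.76, 0.81

B = np.linspace(0.0, 5.0, 100_001)    # grid over the physical region
dB = B[1] - B[0]
posterior = norm.pdf(B, loc=mu, scale=sigma)
posterior /= posterior.sum() * dB     # renormalize after the truncation

mean_B = (B * posterior).sum() * dB
upper95 = B[np.searchsorted(np.cumsum(posterior) * dB, 0.95)]
print(f"posterior mean: {mean_B:.2f}%   95% upper limit: {upper95:.2f}%")
# roughly: posterior mean 0.43%, 95% upper limit 1.17% – an asymmetric
# result that never pretends the branching ratio itself is negative.
```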

But OK, let us ignore this issue and just combine the excesses and deficits from the assumed Gaussians blindly.

If you combine the 2012 and 2015 data, you get something like\[

{\rm combo:}\quad B(h\to \tau\mu) = +0.55\%\pm 0.30\%

\] or so. I calculated it using my "brain analog computer". When switching to the combo, the mean value has dropped from 2012, and the significance of its being nonzero is some 1.8 sigma. It's less than 2.5 sigma, but it's still an excess, a well over 90% confidence level that something is there.
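
The blind combination is just inverse-variance weighting of the assumed Gaussians. A quick check of the "brain analog computer" (my own sketch; it treats the three measurements as independent and ignores all correlations and systematics):

```python
import numpy as np

def combine(means, sigmas):
    """Inverse-variance weighted combination of independent Gaussian measurements."""
    w = 1.0 / np.asarray(sigmas) ** 2
    mean = np.sum(w * np.asarray(means)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# 2012 only: ATLAS and CMS (all numbers in percent)
m, s = combine([0.53, 0.84], [0.51, 0.37])
print(f"2012 combo: {m:+.2f} +- {s:.2f} %  ({m/s:.1f} sigma)")

# 2012 + 2015: add the CMS deficit
m, s = combine([0.53, 0.84, -0.76], [0.51, 0.37, 0.81])
print(f"full combo: {m:+.2f} +- {s:.2f} %  ({m/s:.1f} sigma)")
```

This naive weighting gives about \(+0.73\%\pm 0.30\%\) (2.4 sigma) for 2012 alone and \(+0.55\%\pm 0.28\%\) (about 2 sigma) for the full combination – the same ballpark as the numbers quoted above.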

When the significance level decreases from 2.5 to 1.8 sigma, it's a decrease but it's in no way a decisive decrease. 1.8 sigma will be found attractive by a smaller number of physicists than 2.5 sigma, but not a "dramatically smaller" number (perhaps three times smaller?). In particular, the title
CMS: Higgs to mu tau is going away
is just rubbish. CMS has recorded a deficit, but in a smaller amount of data. In this small dataset, no confirmation or refutation of the previous excess could have been expected, and none was given. If this small amount of data were enough for a (5-sigma) discovery of the flavor-violating processes, then the 2015 data would pretty much sharply contradict the 2012 data. If the 2015 data were enough to "safely rule out" the flavor-violating decays around 0.5%–1%, then they would need such a large deficit that it would disagree with the 2012 data (and the Standard Model), too.
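
To see how little the 2015 dataset could have decided, here is a back-of-the-envelope check (my own sketch, assuming Gaussian errors and taking the CMS 2012 central value as the hypothetical truth): even a real 0.84% signal would fairly often produce a negative central value at the 2015 resolution.

```python
from scipy.stats import norm

# Hypothetical check: suppose the true branching ratio equals the CMS 2012
# central value, and the 2015 measurement has a Gaussian error of 0.81%.
true_B, sigma_2015 = 0.84, 0.81   # in percent

# Probability that the 2015 central value comes out negative anyway
p_negative = norm.cdf((0.0 - true_B) / sigma_2015)
print(f"P(measured B < 0 | true B = {true_B}%) ~ {p_negative:.0%}")   # ~15%

# A 5-sigma discovery was never on the table: even a perfectly central
# measurement would sit only 0.84/0.81 ~ 1 sigma away from zero.
print(f"best achievable significance: {true_B / sigma_2015:.1f} sigma")
```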

It just isn't possible to make up one's mind about the tantalizing signal quickly, and it couldn't have been decided in this way.

What's really untrue about the title is the tense, "is going away". This title pretty much explicitly says that (according to Adam Falkowski), there has been an established downward trend that will continue. But this is just a plain lie. In the short run, the confidence level behaves as a random walk.

So whether the latest changes in a very small dataset (or amount of time) have increased or decreased the confidence level is a matter of chance. Both possibilities are equally likely for a short enough period of time – and this was clearly an example of a short enough period of time. A decrease in this short period of time does not imply a decrease in the future, because Nature's random generator in individual collisions (or their small enough sets) acts independently.
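
A toy version of this random walk (entirely my own illustration, with an assumed true branching ratio of 0.55%): accumulate data in 2015-sized chunks and watch the running significance wander.

```python
import numpy as np

rng = np.random.default_rng(0)

true_B = 0.55        # assumed true branching ratio in percent (illustration only)
sigma_chunk = 0.81   # Gaussian error of a single 2015-sized chunk, in percent
n_chunks = 50

# Each chunk is an independent Gaussian measurement of true_B
chunks = rng.normal(true_B, sigma_chunk, size=n_chunks)

# With equal per-chunk errors, the inverse-variance combination is a plain
# mean, and the combined error shrinks as 1/sqrt(n).
for n in (1, 5, 10, 25, 50):
    mean = chunks[:n].mean()
    sigma = sigma_chunk / np.sqrt(n)
    print(f"after {n:2d} chunks: {mean:+.2f} +- {sigma:.2f} %  ({mean/sigma:+.1f} sigma)")

# The running significance drifts upward on average, but short stretches in
# which it drops – the signal seemingly "going away" – are entirely common.
```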

I am not saying that the flavor-violating decay is there. I actually believe it's unlikely, maybe it has a 10% probability at this point. But even if the claim is untrue – or reasonably believed by an informed physicist to be untrue – I can still see that somebody isn't describing the evidence honestly.

The point is that the 2.5-sigma hint is a weak one, and Falkowski has all the good reasons to be skeptical, because flavor violation is a somewhat extraordinary claim (although not as extraordinary as some people like to suggest). However, the drop from 2.5 sigma to 1.8 sigma is a decrease by 0.7 sigma, which is even (much) smaller than 2.5 sigma.

If somebody is laughing at 2.5 sigma but a change by 0.7 sigma is enough for him to say that the "debate is over", he is simply not acting fairly.

Needless to say, the misinterpretation of some random wiggles in a random walk as a "long-term trend" or a "law of physics" isn't special to particle physics. An identical discussion has repeatedly – and much more characteristically – taken place over the weather data. In a 1-year or 10-year or 40-year period, a change of the global mean temperature was observed, and some people misinterpreted it as a "long-term trend that has to continue".

For a long enough period of time, there could be some reason for such an interpretation. But if you pick a six times shorter period and start to pretend that the trend in this six times shorter period may be trusted as much as the trend from the longer dataset, you are simply a demagogue. The shorter the periods of time (or the smaller the collections of collisions) you consider, the more likely it is for the excesses or deficits to be due to chance.

Falkowski's "is going away" spin is virtually identical to the stupid pissing contests of low-brow climate skeptics as well as low-brow climate alarmists who saw some weather condition inward a recent calendar week as well as role it as a prediction of the weather condition inward 2100.

The CMS 2015 data are formally a 1-sigma deficit relative to the Standard Model – which is almost certainly due to chance. If you believe that the branching ratio is around 0.8%, as (optimistically) indicated in 2012, the deficit shown in the 2015 data is 1.8 sigma relative to the flavor-violating extension of the Standard Model. That's larger but not "dramatically different from 1 sigma" – a deficit that is there, anyway – and it's not the same 1.8 as the 1.8-sigma excess in the bigger dataset, simply because a larger dataset measures the excesses or deficits more accurately than the smaller one.
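
For the record, the naive Gaussian arithmetic behind these two deficits (my own count, using the 0.8% benchmark) is\[

\eq{
{\rm vs.\ SM:} & \frac{-0.76 - 0}{0.81} \approx -0.9\,\sigma\\
{\rm vs.\ FLV:} & \frac{-0.76 - 0.8}{0.81} \approx -1.9\,\sigma
}

\] which is in the same ballpark as the rounded figures above.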

The smaller datasets and the shorter periods are unavoidably more affected by the random numbers than their larger siblings.

To exclude, at 2 sigma, the nonzero branching ratios indicated by the mild 2-sigma excesses in 2012, we will probably need an amount of collisions at least as large as the 2012 dataset. To "decide" much earlier than that is simply statistically indefensible. And you need at least a 2-sigma exclusion of the reasonable theory with a nonzero FLV branching ratio to claim to have evidence that "the signal is going away". Falkowski doesn't have it. He has basically used 0.7-sigma evidence to "settle" a question – and that's much worse than using some 2.5 sigma to do the same thing.
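
A crude estimate of the required dataset (my own sketch, assuming purely statistical errors that shrink as \(1/\sqrt{N}\) with the number of collisions \(N\), and ignoring systematics):

```python
# Hypothetical estimate of the data needed to exclude the 2012-favored
# branching ratio at 2 sigma, assuming errors scale as 1/sqrt(N).
target_B = 0.55       # percent: combined 2012-favored central value
sigma_2015 = 0.81     # percent: error achieved with the 2015 dataset

needed_sigma = target_B / 2.0               # error needed for a 2-sigma exclusion
scale = (sigma_2015 / needed_sigma) ** 2    # multiple of the 2015 dataset needed
print(f"needed error ~ {needed_sigma:.2f}%  ->  ~{scale:.0f}x the 2015 dataset")
# ~9x the 2015 dataset, i.e. somewhat more than the ~6x larger 2012 run –
# consistent with "at least as large as the 2012 dataset" above.
```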
