In a previous post I used a dataset taken from my Facebook friends to debunk the claim that more babies are born under a new moon. I now have a chance to increase the statistics significantly, not because I have made many more friends in the meantime, but because the people contributing to the FiveThirtyEight blog made available two datasets from the US. See their post “Some people are too superstitious to have a baby on Friday the 13th”.

These very large datasets span the period 1994–2014 and are provided in a very convenient CSV format, which can easily be manipulated with the Python csv reader. I then re-used the CERN ROOT-based analysis program from the previous post.

To summarize the method, I tested four different models:

The “null hypothesis”, i.e. that the distribution is flat and can be described by a constant function;

The “main hypothesis”, i.e. that the distribution peaks during the first days of the lunar cycle. In this case, the Gaussian is scaled by a signal-strength parameter µ, and the standard deviation is set to 5 days (less than a week);

The “modified main hypothesis”, i.e. the distribution peaks in the first days, but can account for a very long tail (the standard deviation σ is a free parameter to be fitted);

A periodic model, described by a cosine function.

As you can imagine, the best result is obtained for the null hypothesis. Interestingly, the other two models can be fitted successfully only if the signal strength is allowed to be very small, essentially zero for all practical purposes. Finally, the periodic model is still allowed by the fit, but the amplitude is strongly constrained and the fitted period is around one lunar cycle.
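The comparison between the null and the main hypothesis can be sketched in Python with SciPy. This is only an illustration on synthetic, moon-independent data (not the real FiveThirtyEight dataset), and the model parameters (peak position at day 2, σ = 5 days) follow the description above:

```python
# Sketch of the model comparison on synthetic daily birth counts,
# binned by day of the lunar cycle (0-29). Illustrative only: the
# counts are generated flat, so no lunar signal is present.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
days = np.arange(30, dtype=float)                    # day within the lunar cycle
counts = rng.poisson(10000, size=30).astype(float)   # flat "null" data

def flat(d, c):
    # null hypothesis: constant birth rate
    return np.full_like(d, c)

def peaked(d, c, mu):
    # main hypothesis: constant + Gaussian peak in the first days,
    # with the standard deviation fixed to 5 days and signal strength mu
    return c + mu * np.exp(-0.5 * ((d - 2.0) / 5.0) ** 2)

p_flat, _ = curve_fit(flat, days, counts, p0=[10000])
p_peak, _ = curve_fit(peaked, days, counts, p0=[10000, 100])

print(p_flat[0])   # close to the true rate
print(p_peak[1])   # fitted signal strength: compatible with zero
```

On data with no lunar modulation, the fitted signal strength µ comes out compatible with zero, mirroring the result described above.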

But the question remains: is there any scientific basis for the claim? A possible explanation hinges on the McClintock effect, also known as menstrual synchrony, combined with the observation that on average the menstrual cycle has approximately the same length as the lunar month. Allegedly, women who begin living in close proximity experience their menstrual cycle onsets becoming closer together in time. However, even in this case, the scientific evidence for this hypothesis is, to say the least, very shaky. Similarly, the relationship between the two cycles is now believed to be a coincidence.

If you want to play around with the same dataset using iPython, here’s some code provided by Marcello.

After the two sophons arrive on Earth, their first mission is to locate the high-energy particle accelerators used by humans for physics research and hide within them. At the level of science development on Earth, the basic method for exploring the deep structure of matter is to use accelerated high-energy particles to collide with target particles. After the target particles have been smashed, they analyze the results to try to find information reflecting the deep structure of matter. […] Successful collisions are very rare. […] This gives the sophon an opening. A sophon can take the place of a target particle and accept the collision. Because they are very intelligent, they can precisely determine through the quantum sensing formation the paths that the accelerated particles will follow within a very short period of time and move to the appropriate location. […] After a sophon is struck, it can deliberately give out wrong and chaotic results. Earth physicists will not be able to tell the correct result from the numerous erroneous results.

The Three-Body Problem, Cixin Liu

Or do they?

Quantum trajectories

In the scenario imagined by the Chinese sci-fi author Cixin Liu, an advanced civilization on its way to conquer Earth sends two engineered 11-dimensional protons (“sophons”) to trick scientists and interfere with basic research. The idea is quite a clever one: to develop space travel and other high-tech marvels, humans first have to understand the structure of matter. If high-energy experimental particle physics is maimed at its core, no serious advance will be possible, and when the two civilizations meet, there is going to be only one possible outcome. The sophons are programmed to take the place of protons in real proton-proton collisions and give random results, confusing the subsequent data analysis and physical interpretation.

Let’s assume that creating a sophon is actually possible. Do the laws of physics allow them to play this trick? To try to answer this question, we first have to understand whether a sophon can predict the trajectory of another proton, so that it can “catch” it as described in the novel. According to Heisenberg’s uncertainty principle, the precision with which one can measure at the same time the momentum (say, the direction of motion) and the position is limited by the fact that subatomic particles can be described as matter waves:

Δx · Δp ≥ ħ/2

Assuming for simplicity that the proton travels at the speed of light, and that its momentum can be measured with a precision of 1% (Δp ≈ 0.01 mₚc), the minimum position uncertainty is:

Δx ≥ ħ / (2 Δp)

Substituting the numerical values, one obtains an estimate of the uncertainty on the position that is roughly an order of magnitude larger than the size of the proton. This is just a back-of-the-envelope calculation, but it suggests that a sophon would have a hard time tricking the physicists, unless it can violate the uncertainty principle.
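The substitution is easy to check numerically. The sketch below uses SciPy's physical constants and takes ~0.84 fm for the proton charge radius:

```python
# Back-of-the-envelope check: position uncertainty from Heisenberg's
# relation dx >= hbar / (2 dp), with dp = 1% of m_p * c.
# (Treating the proton as ultra-relativistic is a crude simplification.)
from scipy.constants import hbar, m_p, c

dp = 0.01 * m_p * c           # 1% precision on the momentum
dx = hbar / (2 * dp)          # minimum position uncertainty, in metres
proton_radius = 0.84e-15      # proton charge radius, about 0.84 fm

print(dx / proton_radius)     # → about 12.5
```

So the unavoidable position uncertainty is indeed an order of magnitude bigger than the proton itself.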

So is there an alternative explanation for how it could accomplish this trick? In order to violate the Heisenberg principle, the sophon must have access to so-called hidden variables. In a famous alternative interpretation of quantum mechanics due to David Bohm and Louis de Broglie (the pilot-wave theory), particles travel along trajectories determined by an exact function of position and momentum. The theory is deterministic and non-local, meaning that information must somehow travel faster than light, yielding what Einstein called “spooky action at a distance”.

So do the sophons have a chance to access information about the pilot wave and predict the exact future position of an incoming proton? You should ask Aephraim Steinberg of the University of Toronto, who observed in an experiment the non-local influence on a photon of another photon it had been entangled with (though Steinberg points out that both the standard interpretation of quantum mechanics and the de Broglie-Bohm interpretation are consistent with the experimental evidence, and are mathematically equivalent).

To summarize: from a theoretical point of view, sophons can play the trick only if reality is non-local, quantum mechanics is incomplete, and they can access information about the pilot wave or some similar mechanism.

Pile-up events, to the rescue!

In the LHC, or any other similar accelerator, two beams of protons are accelerated as they travel along the ring. The beams consist of billions of protons, and they cross at four points (called interaction points) surrounded by the detectors ATLAS, CMS, ALICE and LHCb. When the beams cross each other (a bunch crossing), usually more than one proton-proton interaction happens during the same crossing. If we are lucky, only one interaction is really interesting (the hard scattering), while the others represent a nasty background noise (pile-up events) that we can get rid of by measuring the tracks of charged particles and extrapolating them back to the interaction point. The vertex where the highest-energy tracks converge is called the primary vertex and is the starting point of the subsequent analyses.
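The idea behind primary-vertex selection can be sketched in a few lines. A common criterion (the details vary by experiment) is to pick the vertex whose tracks carry the largest sum of squared transverse momenta; the vertices and track momenta below are made-up toy values:

```python
# Toy sketch of primary-vertex selection in a pile-up environment:
# among several simultaneous interaction vertices, pick the one whose
# associated tracks have the largest sum of pT^2.
vertices = {
    "v0": [0.7, 1.2, 0.9],          # soft pile-up tracks (pT in GeV)
    "v1": [45.0, 38.0, 12.0, 5.0],  # hard-scatter candidate
    "v2": [1.1, 0.8],               # another pile-up vertex
}

def sum_pt2(tracks):
    return sum(pt ** 2 for pt in tracks)

primary = max(vertices, key=lambda v: sum_pt2(vertices[v]))
print(primary)  # → v1
```

The vertex with the high-momentum tracks wins, which is exactly why pile-up interactions can usually be separated from the hard scattering.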

My main objection to the sophon trick is thus that the author must also assume that only one interaction happens per bunch crossing, while this is not true in general. How can a sophon know which interaction vertex will generate, for example, a Higgs boson or a pair of top quarks?

Also in this case, it is a consequence of quantum mechanics (or, more precisely, of quantum field theory) that the outcome of an interaction is a superposition of different possible states, and it cannot be predicted exactly before the observation is actually performed.

To summarize: even if we assume that a sophon can hit an incoming proton head-on and scramble the outgoing particles, it seems hard to believe that it would also be able to predict which proton-proton interaction among the many simultaneous ones will give rise to interesting physics, while all the others just generate background noise.

Sophons [reprised]: Is the Standard Model all there is?

One thing that confuses me about the scenario proposed by the author is the “random outcome” of the experiments. In fact, there is a serious theory which predicts such a scenario: the production of quantum black holes! According to this theory, if extra dimensions exist beyond the usual 3+1, gravity spreads across the “bulk”, explaining why this force appears to be so weak compared to the other three known forces. If the energy of the collision is high enough, a microscopic and very short-lived black hole can be produced. Such black holes are predicted to emit particles almost at random (“democratically”), according to a statistical distribution devised by Stephen Hawking. And no, their creation is not really a safety issue at CERN, even though some people in the past believed that the LHC was about to create a black hole that might swallow the Earth.

On the other hand, the sophons may trick physicists in an even smarter and more startling way: by presenting us only with results compatible with our current theory, the so-called Standard Model of particle physics! In such a scenario, we would be almost certain that there must be some unknown theory explaining a large number of experimental observations (dark matter, neutrino masses, or the accelerating expansion of the universe, just to name a few), but we would not be able to find any experimental clue pointing to that theory. We would be stuck forever in a theoretical rut called the Standard Model…

Is this what’s happening right now? Is an alien civilization tricking us, hiding from us vital experimental results which would guide us towards a Theory of Everything?

The publication of the novel “The Three-Body Problem” by the Chinese author Cixin Liu rekindled my interest in this long-standing issue in classical dynamics. As also pointed out in the novel, empty space is stationary (and boring). Empty space plus a spherical body is also stationary (still boring). Empty space plus two bodies is a little more interesting, as the two will orbit around their common centre of mass. But they will do so forever. It is only when you add a third object that things start to get very interesting. A complex dynamics arises from Newton’s three laws of motion and the law of universal gravitation.

Newton was the first to attack this problem, as it is relevant to the study of the motion of the Sun-Earth-Moon system. However, the problem first gained its “official” name in 1747 thanks to the work of the French mathematician Jean d’Alembert, although he is mostly known for having found the solution to the wave equation.

If you are not familiar with differential equations, finding a solution means discovering a function that satisfies all the constraints and gives a valid result for every possible value of the independent variable (usually time or the spatial coordinates).

The most famous result about the three-body problem is probably due to Henri Poincaré. He showed that there is no general analytical solution to the three-body problem given by algebraic expressions and integrals. The motion of three bodies is generally non-repeating, except in special cases. The most relevant is probably the case in which the three masses lie in the same plane, as may happen in a solar system. More complex solutions have been found for three-dimensional arrangements, but the masses have to appear in peculiar ratios, or the initial positions have to coincide with the vertices of a 3:4:5 triangle. As a matter of fact, the three-body problem can be considered a special case of the N-body problem, for which Karl Sundman found a series solution, but the convergence of the series is so slow that it has no practical application.

Another possibility is to give up on finding an analytical solution, and instead integrate the differential equations numerically. This usually requires large computational power if the equations are very complex or if the required level of accuracy is very high. Especially when dealing with planetary orbits, it is not uncommon to run simulations that span a few billion years (our solar system is in fact about 5 billion years old). Small uncertainties accrue over the iterations and can lead to wildly divergent solutions when the calculation is repeated a very large number of times (see also The Copernicus Complex by Caleb Scharf). The basic method is called the Euler forward method; improvements on it lead to the so-called Runge-Kutta family of methods.
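To make the Euler forward method concrete, here is a minimal Python sketch for the simplest gravitational case, a single body orbiting a fixed central mass, in units where G·M = 1 (the three-body case works the same way, just with more coordinates):

```python
# Euler forward integration of a body orbiting a fixed central mass,
# in units where G*M = 1. With these initial conditions the exact
# solution is a circular orbit of radius 1; forward Euler slowly
# drifts away from it, which is why higher-order Runge-Kutta schemes
# are preferred for long integrations.
import math

x, y = 1.0, 0.0        # initial position
vx, vy = 0.0, 1.0      # initial velocity (circular orbit for G*M = 1)
dt = 1e-4              # time step

for _ in range(10000):                 # integrate for one time unit
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3          # inverse-square gravitational pull
    x, y = x + vx * dt, y + vy * dt    # update position with old velocity
    vx, vy = vx + ax * dt, vy + ay * dt

r = math.hypot(x, y)
print(r)   # stays close to 1 over short times, but the error accumulates
```

Shrinking the time step reduces the per-step error, but over billions of simulated years even tiny errors accumulate, which is exactly the accuracy problem described above.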

I’m showing here a program I created using Processing that you can use to play around with the three-body problem. It is based on the Solar System model shown in previous posts on this blog, and performs a numerical integration based on the Runge-Kutta method. In this particular case, I defined three “suns” of mass 0.5, 1.0 and 1.5 solar masses, plus a planet with a mass equal to one Earth mass. Every time you run the simulation, the initial state of the suns and the planet will be different. As you can see, in some cases the planet ends up in an almost stable orbit, but most of the time an instability builds up and the planet is expelled from the system. You can see chaos theory acting right in front of you!

If you think this is an uncommon scenario, think twice: observations suggest that half of the exoplanets discovered so far actually orbit binary systems! The scenario depicted in the novel The Three-Body Problem may not be so far-fetched after all.

The big news in particle physics during the winter of 2015-2016 is certainly the excess found by the ATLAS and CMS Collaborations in the invariant mass of two high-energy photons in the data acquired in 2015 at a centre-of-mass energy of 13 TeV. If you don’t know what I’m talking about, then it’s time to catch up: the Resonaances blog and the Boston Review offer two nice summaries.

The main questions we as physicists want to answer are: is this real? And if so, what is it? As an experimentalist, I should try to answer the first one, but the truth is: we don’t know yet, and we need more data. However, in the latest update given at the Moriond conference, the statistical significance of this excess over the background increased a little, as one would naïvely expect if it were due to a real particle. Many colleagues are arguing that, based on basic statistical arguments, we are in principle already able to claim an evidence, if not a discovery: if you sum in quadrature the local significances found by the two separate experiments, the figure is √(3.4² + 3.9²) = 5.2 standard deviations, higher than the gold standard of 5σ. I can’t deny I’m excited about this, but I know that extraordinary claims require extraordinary evidence. If this is real, it would be so big that the discovery of the Higgs boson would pale in comparison. It would be comparable to the discovery of the constituents of the proton (quarks and gluons) or of the muon (“who ordered that?”).
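The quadrature combination quoted above takes one line to verify (keeping in mind that naïvely summing local significances in quadrature ignores correlations and the look-elsewhere effect, so it is indicative at best):

```python
# Naive quadrature combination of the two local significances
# (3.4 sigma from one experiment, 3.9 sigma from the other).
import math

combined = math.sqrt(3.4 ** 2 + 3.9 ** 2)
print(round(combined, 1))  # → 5.2
```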

So what’s the story? Albert Einstein once said that subtle is the Lord, but malicious He is not. Or is He? If we are producing this particle via gluon-gluon fusion (by far the most likely process in proton-proton interactions), then it should also mostly decay to a pair of gluons, which in turn give rise to two particle jets. But the problem is that both experiments are almost blinded by light (sorry for the pun) in that mass region, because of the very high production rate of the di-jet process. Moreover, in Run 1 the di-photon analyses focused on lower or higher mass regions, almost overlooking this intermediate range. Now that we know that something might be there, with hindsight both ATLAS and CMS have looked more carefully in that region, and in fact they have found some more events: ATLAS actually has a 2σ peak in the 2012 data, which could be just a fluctuation, or a positive fluctuation of this very process.

I’m an experimentalist, but I’ve always been fascinated by theory. Even though I’m certainly not going to write any serious hep-ph paper about the interpretation of the 750 GeV excess, I would like to share with my readers two wild ideas. Who knows, maybe there is some truth in my craziness (or just ignorance):

We know that there is a U(1) interaction (electromagnetism), an SU(2) interaction (the weak force), and an SU(3) interaction (the strong/colour force)… how about SU(4)? Is there any argument against the existence of a strongly coupled SU(4) interaction, not necessarily embedding the already-known forces? What would such a force look like?

My favourite scenario remains the spin-2 interpretation, although I understand that the non-observation of a similar excess in the di-lepton channel strongly constrains this possibility. Anyway, what I have asked some colleagues (without getting any answer so far) is whether it would be possible to have a Brout-Englert-Higgs (BEH) mechanism for spin-2 particles. In my head, the massless graviton would play the role of the photon, and this allegedly new particle would be a massive spin-2 relative of the Z boson (I assume it is charge-neutral). Given that a tensor has more degrees of freedom than a vector, I imagine that there may be other states associated with these two. Would such a theory be non-renormalizable? And then what: is renormalizability an absolute necessity, or just a mathematical trick? And if such a theory were not yet the ultimate one, but just an intermediate step, should we really care about renormalization?
Even if quite unrelated to this idea, it seems that this spin-2 BEH symmetry breaking somehow resonated in the heads of three famous theorists (arXiv:1602.07993). In their scenario, the Higgs boson “triggers” the appearance of gravity, or alternatively, at very high energies the change in the shape of the Higgs potential “turns off” gravity at once.

If you are interested in this saga, stay tuned and expect some update by this summer!

How big is the biggest apple you could buy from your favorite supermarket? Surprisingly enough, you can actually give a reasonable answer just buying a bag of apples.

In the example below, I weighed 11 apples and created a histogram with 10 bins in the range 100–150 g using CERN ROOT.

TH1D * h = new TH1D("h", "h", 10, 100, 150); // 10 bins between 100 and 150 g
h->Fill(108);
h->Fill(120);
h->Fill(124, 3); // three apples weighed 124 g
h->Fill(126);
h->Fill(127);
h->Fill(129);
h->Fill(130);
h->Fill(131);
h->Fill(147);
h->Draw();

Then, I fitted the distribution with a Gaussian function using h->Fit("gaus"), which returns a mean of about 126.3 g and a standard deviation of about 14.0 g.

With these parameters, we can answer the question. Let’s say that it’s quite uncommon to find an apple that is 3 standard deviations above the average, which corresponds roughly to 1 case in 1000. How much would it weigh? We can use the z-score to calculate this value:

z = (x - avg) / stdev = (x-126.3)/14.0 > 3

x > 3 * 14.0 + 126.3 = 168.3 g

Finding such an apple in this kind of bag seems quite unlikely, but how about the most uncommon apple we may expect to find from this producer? Usually, the threshold is set at 5 standard deviations:

X = 5.0 * 14.0 + 126.3 = 196.3 g
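Both thresholds (and the corresponding tail probabilities) can be reproduced in a few lines of Python, using the fit parameters above:

```python
# Threshold weights from the fitted Gaussian (mean 126.3 g, sigma 14.0 g),
# together with the one-sided tail probability beyond each z-score.
from scipy.stats import norm

mean, sigma = 126.3, 14.0

for z in (3, 5):
    weight = mean + z * sigma
    tail = norm.sf(z)   # one-sided probability of exceeding z sigma
    print(f"{z} sigma: {weight:.1f} g  (P ~ {tail:.1e})")
```

This gives 168.3 g at 3σ (about 1 in 1000) and 196.3 g at 5σ, matching the hand calculation.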

Now, think about it. Most of the apples you can buy from this supermarket have a weight within just 14 grams of the average. Do they have trees that make apples so uniformly, or is there a selection bias? The most unlikely apple you could find is just about twice as big as the common ones, not 10 times bigger or more!

After I moved to Toronto, Canada, many friends kept asking me whether their hometown could fit within my new city’s borders. Intrigued by their questions, I superimposed some maps taken from Google, all at the same scale (1 cm = 5 km). I also sketched an approximate border of Toronto to make the comparisons easier. What I’ve found is quite impressive. While cities like London and, to some extent, Rome can challenge the size of Toronto, all the other places I have lived in (or that my friends and relatives are from) fit amazingly well within its borders! So entertain yourself with my maps, and let me know if you want to see some other comparison with your favourite place (a national park, a lake, or even an island?)

Toronto alone, with my approximate city borders

Rome (Italy). The inner part of the city within the GRA (Rome’s circular motorway) fits nicely into Toronto.

Milan (Italy) and a large part of its hinterland fit into Toronto.

London (UK) is huge!

The Fucino region of Italy is almost as big as Toronto. This plateau used to be a lake until it was drained in 1875.

Toronto is as big as a large part of the province of Macerata (Italy).

Göteborg (Sweden) is a very nice city with its river estuary and archipelago, but it can’t compare in size to Toronto.

Toronto and Zurich (CH) often figure among the top 10 cities in the world for their quality of life. Surely not for their size, though.

Not only is Toronto way larger than Geneva (CH): even the LHC accelerator (red circle) could fit nicely inside the Canadian city!

While the city of Bologna is very compact in size, its surroundings spread over the countryside for quite a distance. However, most of its northern province is well-contained inside Toronto.