Lattice calculation of $F_K/F_\pi$ from a mixed domain-wall on HISQ action [talk]

American Physical Society bulletin [PDF of slides]

Title slide

Today I’m here to talk about my lattice determination of the ratio of the pseudoscalar decay constants $F_K$ and $F_\pi$ using a mixed domain-wall on HISQ action, work that was only possible due to the efforts of other members of CalLat.

Why $F_K/F_\pi$?

As we all know, the quark eigenstates of the weak and strong interactions are different. One way this difference manifests is through $K^0$, $\bar{K}^0$ mixing, in which the quarks oscillate between flavors.

In the Standard Model, the difference between the quark eigenstates of the weak and strong interactions is encoded in the CKM matrix. If the eigenstates were the same, the matrix would be diagonal. However, they are not, so there are off-diagonal entries that allow mixing between different generations and flavors.
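For reference, the CKM matrix is the $3 \times 3$ unitary matrix

$$
V_{\mathrm{CKM}} =
\begin{pmatrix}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{pmatrix},
$$

whose entries quantify the mixing between quark generations.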

Unitarity of the CKM Matrix

According to the Standard Model, the CKM matrix is unitary. The top row of the CKM matrix then gives the relation shown below. The entry $V_{ud}$ can be precisely determined experimentally through superallowed beta decays; however, $V_{us}$ cannot and must instead be determined with lattice input. The last entry, $V_{ub}$, is comparatively small, so the relation predominantly ties $V_{ud}$ to $V_{us}$.
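$$
|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 1
$$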

Marciano [mar-see-an-o] has related $F_K/F_\pi$ and $\vert V_{us} \vert/\vert V_{ud} \vert$ to the ratio of kaon and pion leptonic decay rates. By combining our $F_K/F_\pi$ result with the measured decay rates and with $V_{ud}$ determined via superallowed nuclear beta decays, we can precisely determine $V_{us}$.
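Schematically, Marciano’s relation takes the form

$$
\frac{\Gamma(K \to \mu \bar{\nu}_\mu(\gamma))}{\Gamma(\pi \to \mu \bar{\nu}_\mu(\gamma))}
= \frac{|V_{us}|^2}{|V_{ud}|^2} \, \frac{F_K^2}{F_\pi^2} \,
\frac{m_K \left(1 - m_\mu^2/m_K^2\right)^2}{m_\pi \left(1 - m_\mu^2/m_\pi^2\right)^2}
\left(1 + \delta_{\mathrm{EM}}\right),
$$

where $\delta_{\mathrm{EM}}$ is a small radiative correction; the measured widths on the left plus $F_K/F_\pi$ from the lattice then fix $|V_{us}|/|V_{ud}|$.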

Here are the definitions of the pseudoscalar decay constants, which tell us what matrix elements to compute on the lattice.
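In one common normalization convention (in which $F_\pi \approx 92\,\mathrm{MeV}$), the decay constants are defined through axial-current matrix elements:

$$
\langle 0 | \bar{d} \gamma^\mu \gamma_5 u | \pi^+(p) \rangle = i F_\pi p^\mu, \qquad
\langle 0 | \bar{s} \gamma^\mu \gamma_5 u | K^+(p) \rangle = i F_K p^\mu .
$$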

Why Lattice QCD?

As previously stated, $V_{us}$ is more easily accessed through the lattice than through experiment, even if you have a few hundred million dollars to spare.

So what is lattice QCD? Lattice QCD is a non-perturbative approach to QCD, which is particularly useful in the low-energy limit where the coupling constant becomes greater than 1. The basic idea behind lattice QCD is to imagine what would happen if the quark and gluon fields were discretized on a lattice, rather than being permitted to lie anywhere in spacetime, and then to take the limit where the lattice spacing goes to zero. Perhaps unsurprisingly, there are infinitely many ways of discretizing the QCD action, but they aren’t all equally useful.
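Concretely, the discretized theory lets one estimate Euclidean path-integral expectation values by Monte Carlo:

$$
\langle \mathcal{O} \rangle
= \frac{1}{Z} \int \mathcal{D}U \, \mathcal{O}[U] \, e^{-S[U]}
\approx \frac{1}{N} \sum_{i=1}^{N} \mathcal{O}[U_i],
$$

where the $U_i$ are gauge-field configurations sampled with weight $e^{-S[U]}$, and the continuum limit $a \to 0$ is taken at the end.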

In contrast with experimentalists, lattice practitioners have the advantage of being able to tune QCD parameters, allowing us to perform lattice “experiments” in “alternative” universes, thereby probing how QCD observables are impacted by their underlying parameters.

Lattice methods can also be used in conjunction with effective field theory, increasing the precision of our results.

Why $F_K/F_\pi$ via Lattice QCD?

$F_K/F_\pi$ is a “gold-plated” quantity. Unlike many other QCD observables, it can be calculated to high precision on the lattice. The quantity is dimensionless, so we don’t have to worry about scale setting. The numerator and denominator are correlated, further improving statistics. The quantity is mesonic, so it doesn’t have the signal-to-noise problems associated with baryonic observables. And the full chiral expansion is known to NNLO, so we’re limited by our statistics, not by theory.

Comparison of Lattice Actions

As I previously stated, there are infinitely many ways of discretizing QCD. We use a mixed action, which is to say that we discretize the sea and valence quarks differently. Since the sea quarks are generally less important than the valence quarks, we use a sea action that allows us to cheaply produce many field configurations. This also means we can generate additional pion and kaon data for the same amount of computational resources. Our action, unlike some others, has no $O(a)$ discretization errors.

$F_K/F_\pi$ models

The goal of this work is to determine $F_K/F_\pi$ at the physical point, that is, at the physical pion and kaon masses and in the continuum, infinite volume limit. To this end, we use chiral perturbation theory to expand $F_K/F_\pi$ in terms of the pseudoscalar masses.

At LO we expect $F_K/F_\pi = 1$, as this is the SU(3) flavor limit, in which kaons and pions are identical. The top row, therefore, contains the corrections to $F_K/F_\pi$ from $\chi$PT. The terms in the bottom row are lattice artifacts that must be accounted for.
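Schematically (assuming the artifact terms enter as sketched here), the extrapolation function has the form

$$
\frac{F_K}{F_\pi}
= 1 + \delta_{\mathrm{NLO}}\!\left(m_\pi, m_K; L_5\right)
+ \delta_{\mathrm{N^2LO}} + \delta_{\mathrm{N^3LO}}
+ \delta_{a^2} + \delta_{\alpha_s a^2} + \delta_{L},
$$

where the first three corrections are the chiral terms carrying the LECs and the last three are discretization and finite-volume artifacts; the precise forms depend on the model choices described next.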

When we perform our extrapolation, we don’t limit ourselves to a single model. Instead we consider 24 different models and then take the model average. The 24 models come from the following choices:

  1. At NLO, whether we (a) use the NLO expressions for $F_K$ and $F_\pi$ in the numerator and denominator or (b) take the Taylor expansion of the ratio. It sounds pedantic, but the latter choice removes one LEC at NLO (see the sketch after this list).
  2. At N2LO, whether we use the full $\chi$PT expression, which includes chiral logs, or just a Taylor expansion. Regardless, the N3LO correction is always a pure Taylor-series correction.
  3. What we use for our renormalization/chiral cutoff
  4. Whether or not we include the $\alpha_S$ term, which is a lattice correction from radiative gluons and is a quirk particular to some action discretizations.
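
To see why choice 1(b) removes an LEC, write the NLO expressions schematically as $F_K = F(1 + \delta_4 + \delta_K)$ and $F_\pi = F(1 + \delta_4 + \delta_\pi)$, where $\delta_4$, the piece proportional to $L_4$, is common to both. Taylor expanding the ratio,

$$
\frac{F_K}{F_\pi}
= \frac{1 + \delta_4 + \delta_K}{1 + \delta_4 + \delta_\pi}
= 1 + \delta_K - \delta_\pi + \mathcal{O}(\delta^2),
$$

so the common $L_4$ piece drops out exactly, leaving $L_5$ as the only NLO LEC.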

Model Parameters

Because we’re fitting a chiral expansion, we need to determine the parameters in this expansion. At LO, there are no parameters to be determined since $F_K/F_\pi$ is 1. At NLO, there is only a single chiral LEC, assuming we Taylor-expand the ratio: the Gasser-Leutwyler [gawh-ser loot-why-lehr] constant $L_5$. But at higher orders, there are many more parameters: 11 more at N2LO and 6 more at N3LO, for 18 in total.

We use 18 different ensembles in our lattice calculation, each of which provides a datapoint in our fit. So we have essentially 18 parameters to fit with only 18 datapoints. While a frequentist might deem the endeavor hopeless at this point, a Bayesian would not: we can constrain the parameters by assigning them prior distributions. And from the graph, we see the fit improves even as we add more parameters: the widest band has only two parameters (including a lattice spacing correction), but the narrowest band has as many parameters as we have datapoints.
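As a minimal sketch of this style of prior-constrained fitting, using Lepage’s lsqfit library (the data and the toy two-term expansion below are invented for illustration, not the actual CalLat analysis):

```python
import numpy as np
import gvar as gv
import lsqfit

# Toy stand-in for the chiral expansion parameter on each ensemble
eps2 = np.array([0.06, 0.08, 0.10, 0.12])
# Invented F_K/F_pi "data"
y = gv.gvar(['1.190(8)', '1.185(8)', '1.178(9)', '1.171(9)'])

# Priors encode naturalness: expansion coefficients should be O(1)
prior = {
    'c1': gv.gvar(0.0, 1.0),  # NLO-like coefficient
    'c2': gv.gvar(0.0, 1.0),  # N2LO-like coefficient
}

def fcn(x, p):
    # LO is exactly 1 (the SU(3) flavor limit); corrections form a series
    return 1 + p['c1'] * x + p['c2'] * x**2

fit = lsqfit.nonlinear_fit(data=(eps2, y), fcn=fcn, prior=prior)
print(fit)  # posteriors for c1, c2, chi^2/dof, and logGBF (log Bayes factor)
```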

We have a rough idea of what the widths of our priors should be based on the size of our expansion parameters. Regardless, we can check whether our priors are reasonable by using the empirical Bayes method, which uses the data to determine the most likely priors that would support that data.
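Continuing the toy example above, lsqfit’s `empbayes_fit` implements this idea by scanning for the prior width that maximizes the Bayes factor:

```python
# Empirical Bayes (toy continuation): let the data choose a common
# prior width z for the expansion coefficients.
def fitargs(z):
    prior = {'c1': gv.gvar(0.0, z), 'c2': gv.gvar(0.0, z)}
    return dict(data=(eps2, y), fcn=fcn, prior=prior)

fit, z = lsqfit.empbayes_fit(1.0, fitargs)  # start the scan at width 1.0
print('most plausible prior width:', z)
```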

Model Averaging

Again, we have 24 different candidate models to describe our data. We give each model a weight in accordance with the model’s Bayes factor, which we then use to average the models’ extrapolations to the physical point. The Bayes factor is calculated by marginalizing over each model’s parameters and therefore allows us to compare models with different parameters. Additionally, it automatically penalizes overcomplicated models.
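A minimal sketch of such Bayes-factor weighting, with made-up logGBF values and per-model results (not our actual numbers):

```python
import numpy as np
import gvar as gv

# Hypothetical per-model log Bayes factors (lsqfit reports fit.logGBF)
logGBF = np.array([112.3, 111.8, 110.1])
w = np.exp(logGBF - logGBF.max())
w /= w.sum()  # normalized model weights

# Hypothetical physical-point extrapolations from each model
results = [gv.gvar('1.196(5)'), gv.gvar('1.198(4)'), gv.gvar('1.192(6)')]

# Model-average mean; the variance picks up the spread between models
mean = sum(wi * ri.mean for wi, ri in zip(w, results))
var = sum(wi * (ri.sdev**2 + ri.mean**2) for wi, ri in zip(w, results)) - mean**2
print('model average:', gv.gvar(mean, var**0.5))
```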

Comparison of Models

In this slide we see how our different model choices impact the model average.

In the top plot, we see that the data prefers $F_\pi$ for the cutoff. In the bottom right plot, we see that the data heavily prefers a pure Taylor expansion at N2LO, suggesting we have insufficient data to discern the N2LO chiral logs. In the final plot, we see that Taylor-expanding the ratio at NLO is not preferred.

While not shown here, models with and without the $\alpha_S$ correction have about equal weight.

Error Budget

Next we can break down our sources of error. The plot on the right largely reiterates what I said before, but there are a few additional things we can glean from it. For example, we have a single ensemble generated at $a=0.06$ fm. If we hadn’t generated this ensemble, our uncertainty would’ve slightly increased and our extrapolation would’ve shifted down by roughly half a sigma.

Looking at our error budget, we find that the largest source of error came from statistics, and the second largest source came from discretization, giving us a clear path for improving our result: simply increase the number of configurations and ensembles.

Finally, because the up and down sea quarks are degenerate in our action, we also calculate an SU(2) isospin-breaking correction.

Previous Results

Comparing our result with other collaborations’, we see that our result is in good agreement. The blue band is our result, and the green band is the FLAG average, which is essentially the lattice equivalent of the PDG.

Again, we emphasize that each of these groups is using a different lattice action. Our goal here isn’t to determine the most precise value of $F_K/F_\pi$ but to check that our action is behaving reasonably, in much the same way that experimentalists calculate the same quantity in different ways to check that their methods are valid.

So while our result might not be the most precise, we have accomplished the goal we set out to do, which was to verify that our action yields reasonable results.

$|V_{us}|$ from $F_K/ F_\pi$

Finally, as I mentioned at the start of my presentation, we can use $F_K/F_\pi$ to determine $V_{us}$. Using Marciano’s relation, we get the red band. The blue band is the FLAG average for $V_{us}$, which was determined by a different method using semileptonic form factors and the Ademollo–Gatto [ah-di-mall-o gat-o] theorem.

The green band is the experimental result for $V_{ud}$ as determined by superallowed nuclear beta decays. The intersection of the green band and the red band, therefore, yields our determination of $V_{us}$. There’s a little bit of tension between our result and the FLAG average.

Finally, we calculate the unitarity condition for the CKM matrix mentioned before and find that our result supports it.

Summary

In conclusion, we can calculate $V_{us}$ from $F_K/F_\pi$, which allows us to test the unitarity condition of the CKM matrix. Further, $F_K/F_\pi$ is a gold-plated quantity, which we can use to compare lattice actions. We see that our action gives a result congruent with previous determinations of $F_K/F_\pi$. Finally, we see that model averaging allows us to evaluate the fitness of many models without biasing our result by committing to a single one.

I’d like to once again thank my collaborators in CalLat. Thanks for listening!
