The existence of dark matter bound states could arise in a simple dark sector model in which a dark photon (A′) is light enough to create an attractive force between dark fermions. Limits on the kinetic mixing strength are set for a large part of the parameter space.
Introduction
THE STANDARD MODEL OF PARTICLE PHYSICS
The Standard Model describes the Higgs field and how it gives mass to the gauge and fermion fields. Twelve of its fields are fermions, which are the building blocks of matter in our universe.
BEYOND STANDARD MODEL PHYSICS
The main purpose of the experiment is to study CP violation in the decays of B mesons. With its high luminosity, a sensitive measurement can be made of the CKM matrix element Vub, a fundamental parameter of the Standard Model.
THE PEP-II ACCELERATOR
- Silicon Vertex Tracker (SVT)
- Drift Chamber (DCH)
- The Silicon Microstrip Detectors
- Detector of Internally Reflected Cherenkov Light (DIRC)
- Drift Chamber Design
- Electromagnetic Calorimeter (EMC)
- Instrumented Flux Return (IFR)
The front end plate is made thinner in the detector acceptance region than the rear end plate.
EMC layout: side view showing the dimensions (in mm) of the calorimeter barrel and front end cap.
DATA RECORD AND LUMINOSITY
SEARCH FOR DARKONIUM IN ELECTRON-POSITRON COLLISIONS
Introduction
The dark photon can be massive and its mass can arise via Higgs or Stueckelberg mechanisms. In addition to the vector portal, there are several other indirect interactions that can connect the dark sector to the SM, including the scalar and fermion portals.
Dark Matter Bound States Model
Left: dark photon parameter space constraints from the BABAR dark Higgsstrahlung search, applied to the production and decay of the dark bound states ηD and ΥD. Right: current limits in the (mχ, mV) plane for the SIDM scenario, shown for ε² = 10⁻⁷ and different values of αD.
As discussed in the introduction, sufficiently strong
Dataset and Signal Monte Carlo Simulations
Signal MC events with non-zero dark photon lifetimes are also generated with MadGraph and reconstructed with the same software chain as the experimental data, using the prompt reconstruction sequence. This signal MC is used to evaluate the impact of the dark photon lifetime on the signal efficiency.
Candidate Reconstruction
The ISR photon kinematics can then be derived from the dark photon and ΥD kinematics. The mass difference between same-sign candidates tends to be smaller for background events than for signal events.
Event Selection
For our analysis, with 6 charged tracks grouped into 3 neutral dark photon pairs, same-sign A′ candidates are obtained by exchanging two oppositely charged particles between different pairs, leaving each affected pair with two tracks of the same charge. The ISR photon is considered to have been detected if a neutral cluster is found in the calorimeter with an energy within 10% of the energy reconstructed from the ΥD measurement, and the cosine of the angle between the neutral cluster and the reconstructed ISR photon direction is less than 0.1. A_poslepton_helicity: the average helicity angle of the positive lepton over the dark photon decays, as illustrated in Fig.
We apply a binned fit to the ml_score histogram of the data events in the range 0 ≤ ml_score ≤ 8, using a function of the form f(x) = e^(a·x² + b·x + c). This range ensures that the fit is completely dominated by background events while still providing a good description of the data.
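This background fit can be sketched numerically. Since ln f(x) is quadratic in x, a weighted polynomial fit to the logarithm of the bin contents suffices for illustration (the analysis itself performs a proper binned fit); all numbers below are toy values.

```python
import numpy as np

def fit_background(ml_scores, lo=0.0, hi=8.0, nbins=40):
    """Binned fit of f(x) = exp(a*x^2 + b*x + c) to the ml_score
    histogram.  Because ln f(x) is quadratic in x, a weighted
    polynomial fit to ln(counts) is enough for this sketch."""
    counts, edges = np.histogram(ml_scores, bins=nbins, range=(lo, hi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0                       # keep only populated bins
    # weight high-count bins more: sigma of ln(N) is roughly 1/sqrt(N)
    a, b, c = np.polyfit(centers[mask], np.log(counts[mask]), deg=2,
                         w=np.sqrt(counts[mask]))
    return a, b, c

# toy background falling roughly exponentially in ml_score
rng = np.random.default_rng(0)
scores = rng.exponential(scale=2.0, size=100_000)
scores = scores[scores < 8.0]
a, b, c = fit_background(scores)   # expect a ~ 0, b ~ -0.5
```

For a purely exponential toy sample the fitted quadratic coefficient comes out near zero, confirming that the functional form can also accommodate a simple falling background.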
Signal Modeling and Efficiency
Mass resolution
The size of the signal window is set by the mass resolutions, namely 4∆mΥ and 4∆mA′. To evaluate the signal efficiency as a function of (mΥD, mA′), we use a two-dimensional smoothing spline for interpolation, fitting each category separately. The signal acceptance, selection efficiency, and signal efficiency are shown in Fig.
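The interpolation step can be illustrated with a minimal sketch. The analysis proper uses a 2-D smoothing spline fitted per category; plain bilinear interpolation on a toy efficiency grid stands in for it here, and the grid ranges and efficiency values are purely hypothetical.

```python
import numpy as np

def interp_efficiency(m_Y, m_A, grid_mY, grid_mA, eff_grid):
    """Bilinear interpolation of a signal-efficiency grid at (m_Y, m_A);
    a stand-in for the 2-D smoothing spline used in the analysis."""
    i = np.clip(np.searchsorted(grid_mY, m_Y) - 1, 0, len(grid_mY) - 2)
    j = np.clip(np.searchsorted(grid_mA, m_A) - 1, 0, len(grid_mA) - 2)
    tx = (m_Y - grid_mY[i]) / (grid_mY[i + 1] - grid_mY[i])
    ty = (m_A - grid_mA[j]) / (grid_mA[j + 1] - grid_mA[j])
    return ((1 - tx) * (1 - ty) * eff_grid[i, j]
            + tx * (1 - ty) * eff_grid[i + 1, j]
            + (1 - tx) * ty * eff_grid[i, j + 1]
            + tx * ty * eff_grid[i + 1, j + 1])

# toy grid: efficiency varying smoothly with the two masses
grid_mY = np.linspace(1.0, 9.0, 9)     # hypothetical m_YD points (GeV)
grid_mA = np.linspace(0.1, 4.0, 8)     # hypothetical m_A' points (GeV)
eff = 0.30 - 0.01 * grid_mY[:, None] - 0.02 * grid_mA[None, :]
e = interp_efficiency(5.5, 2.0, grid_mY, grid_mA, eff)
```

Because the toy efficiency surface is linear in both masses, the bilinear result reproduces it exactly; a spline additionally smooths out statistical fluctuations of the MC grid points.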
For each signal window, the signal efficiencies of categories C0, C1, and C2 are weighted by the branching fractions to obtain the total signal efficiency of the window. When the flight length of the dark photon is less than 100 µm, we use the efficiency determined for prompt decays, because such short displaced decay vertices have a negligible effect on the signal efficiency.
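The branching-fraction weighting amounts to a weighted average; the category efficiencies and branching fractions below are purely illustrative.

```python
import numpy as np

# Branching-fraction weighting of the per-category efficiencies
# (the numbers for C0, C1, C2 are hypothetical placeholders).
eff = np.array([0.22, 0.18, 0.15])   # efficiencies of C0, C1, C2
bf = np.array([0.55, 0.30, 0.15])    # hypothetical branching fractions
total_eff = float(np.dot(bf, eff))   # total efficiency for this window
```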
Systematic Uncertainties
For dark photon flight lengths above 100 µm, this interpolated relationship is used to correct the signal efficiency and determine the upper limit on the kinetic mixing strength. Efficiency interpolation (σint): the uncertainty due to the interpolation of the signal efficiency as a function of mΥD and mA′. The uncertainty on the measured ratio R [21] propagates to the product of branching fractions.
All systematic uncertainties listed above are summed in quadrature to obtain the total systematic uncertainty for each category. The average size of the systematic uncertainty is ~11%; the dominant contribution is the PID uncertainty.
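The quadrature sum takes only a few lines; the individual values below are illustrative, with PID chosen as the dominant source as stated above.

```python
import numpy as np

# Hypothetical per-source systematic uncertainties (relative, in %);
# only the dominance of PID reflects the text, the values do not.
systematics = {
    "PID": 8.0,
    "tracking": 3.0,
    "efficiency interpolation": 4.0,
    "luminosity": 0.6,
    "ISR photon": 1.8,
}
# total systematic uncertainty: sum in quadrature
total = float(np.sqrt(sum(v**2 for v in systematics.values())))
```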
Signal Significance Estimation
The distributions of sideband events provide a recipe for estimating the number of background events in the scan windows. Under the assumption that the background distribution in the signal region is uniform, a nearest-neighbor method can be applied to estimate the number of background events. When the scan window is above the dimuon threshold, the background uniformity guarantees that the estimated number of background events is unbiased.
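A nearest-neighbor background estimate can be sketched as a local density estimate: assuming the background is locally uniform, the density near a window center is the number of nearby sideband events divided by the area that contains them. The toy data and all parameter values are illustrative.

```python
import numpy as np

def nn_background_estimate(sideband, center, window_area, k=50):
    """k-nearest-neighbour estimate of the expected background in a
    scan window.  `sideband` holds (m_Y, m_A) of sideband events;
    the local density at `center` is k over the area of the disc
    containing the k nearest events."""
    d = np.linalg.norm(sideband - np.asarray(center), axis=1)
    r_k = np.sort(d)[k - 1]                 # distance to k-th neighbour
    density = k / (np.pi * r_k**2)          # events per unit area
    return density * window_area

# toy closure test: uniform background of known density 200 / unit^2
rng = np.random.default_rng(1)
events = rng.uniform(0.0, 10.0, size=(20_000, 2))
n_exp = nn_background_estimate(events, center=(5.0, 5.0),
                               window_area=0.01)   # true value: 2.0
```

The relative spread of this estimator scales like 1/√k, so k trades statistical precision against sensitivity to non-uniformity of the background.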
The test statistic we use is the maximum number of observed signal events, Nsig^max, among all signal windows. First, we estimate the expected number of background events in each signal window from the data.
Signal Extraction and Upper Limit of Coupling
Signal significance is obtained from the distribution of the maximum number of signal events derived from the bootstrap procedure (Fig. 6.24). We do not see a clear signal; therefore, upper bounds on the signal cross section are derived, as described below. We also assume that the estimated signal efficiency ε̄ follows a Gaussian distribution whose mean is the true signal efficiency and whose standard deviation is the estimated total systematic uncertainty σtot.
To take the signal lifetime into account, we apply an iterative algorithm to find the converged signal efficiency and converged upper bounds. We use equation 6.7 to calculate the theoretical dark photon lifetime and equation 6.10 to correct the signal efficiency.
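The iteration is a fixed-point loop: the limit on the mixing strength sets the lifetime, the lifetime corrects the efficiency, and the corrected efficiency gives a new limit. The sketch below uses hypothetical stand-ins (`lifetime_of`, `eff_correction`, `limit_from_eff`) for Eqs. 6.7 and 6.10 and the limit-setting step; the functional forms and numbers are illustrative only.

```python
def converge_upper_limit(eps2_init, eff0, lifetime_of, eff_correction,
                         limit_from_eff, tol=1e-6, max_iter=100):
    """Iterate limit -> lifetime -> efficiency -> limit until the
    upper limit on epsilon^2 is stable (fixed-point iteration)."""
    eps2 = eps2_init
    for _ in range(max_iter):
        tau = lifetime_of(eps2)            # stand-in for Eq. 6.7
        eff = eff0 * eff_correction(tau)   # stand-in for Eq. 6.10
        new_eps2 = limit_from_eff(eff)     # limit scales as 1/efficiency
        if abs(new_eps2 - eps2) < tol * eps2:
            return new_eps2, eff
        eps2 = new_eps2
    return eps2, eff

# toy closure test with hypothetical dependencies
tau0, n_limit, sigma0 = 1.0, 10.0, 1e6
lim, eff = converge_upper_limit(
    eps2_init=1e-4, eff0=0.2,
    lifetime_of=lambda e2: tau0 * 1e-4 / e2,        # tau ~ 1/eps^2
    eff_correction=lambda tau: 1.0 / (1.0 + 0.1 * tau),
    limit_from_eff=lambda eff: n_limit / (sigma0 * eff))
```

For these toy forms the map is a contraction, so the loop converges in a handful of iterations to the unique fixed point.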
Summary and Outlook
Precise Measurement of Semileptonic B Meson Decays
Introduction
Individual decay modes suffer from large hadronic uncertainties associated with form factors and Cabibbo–Kobayashi–Maskawa (CKM) matrix elements. Therefore, any deviation between the SM prediction and the observed value of R(D(∗)) may imply the existence of NP, including supersymmetry: in the Minimal Supersymmetric Standard Model (MSSM), a charged Higgs boson is introduced whose couplings are proportional to the fermion masses.
The charged Higgs couples significantly only to the τ, so it contributes to the deviation between the SM prediction and the observed value of the measured quantity. Taking into account the R(D)–R(D∗) correlation of −0.38, the difference from the SM predictions reported above corresponds to approximately 3.08σ.
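The role of the correlation in the combined deviation can be illustrated with a χ² built from the covariance matrix. Only the correlation coefficient ρ = −0.38 is taken from the text; the deviations and uncertainties below are hypothetical placeholders, and quoting √χ² as a single deviation is itself a simplification (the quoted significance uses the two-dof p-value).

```python
import numpy as np

# Hypothetical (measurement - SM) values and total uncertainties
delta = np.array([0.040, 0.010])     # [Delta R(D), Delta R(D*)]
sigma = np.array([0.029, 0.008])
rho = -0.38                          # R(D)-R(D*) correlation (from text)

cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])
chi2 = float(delta @ np.linalg.solve(cov, delta))
n_sigma = float(np.sqrt(chi2))       # simplified one-number deviation
```

A negative correlation increases the χ² when both deviations have the same sign, which is why the combined tension exceeds what the individual pulls alone would suggest.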
Analysis Strategy Overview
Event types definition
One of the most important background categories arises from B → D∗∗lν events, where D∗∗ indicates excited charm meson states heavier than the D∗, because of their decay topology's similarity to the signal events. At least one B decays to D∗∗(l/τ)ν, where D∗∗ denotes any excited charm meson state not in the 1S ground-state doublet. When reconstructing signal events, we first reconstruct a Btag, then search for D(∗)l among the remaining tracks and calorimeter clusters.
Signal events are categorized into four disjoint subsets based on the reconstructed D meson type: D+l, D0l, D∗+l, D∗0l. Using Eq. 7.5, N̂(signalD), N̂(signalD∗), N̂(normD), and N̂(normD∗) can be estimated as the extracted yields divided by the corresponding efficiencies.
Simulation Samples
The multiplier is the factor by which the size of the corresponding on-peak data set exceeds that of the given simulated data set.
Event Reconstruction
The reconstruction considers the D decay modes listed in Table 7.7, which together account for 19.2% and 30.1% of the D+ and D0 branching fractions, respectively. D decay without a π0 in the final state: the mass resolution (σmD) for this type of D decay mode is about 5 MeV. D+ decay with a π0 in the final state: the mass resolutions for this type of D decay mode are around 12 MeV.
D0 decay with a π0 in the final state: the mass resolutions for this type of D decay mode are around 15 MeV. In each event with a selected Btag candidate, we search for a D(∗) among the remaining tracks and calorimeter clusters to reconstruct the Bsig, using the B decay modes described above.
Signal Detection
cosθ B−D(∗)l: cosine of the angle between the 3-momentum of the Btag and the 3-momentum sum of its D and lepton daughters in the CM frame. The training sample is first used to train the classifier; the validation sample is then used to evaluate the classifier's performance. cosθsig B−D(∗)l: cosine of the angle between the 3-momentum of the Bsig and the 3-momentum sum of its D and lepton daughters.
For cosθsig B−D(∗)l, normalization events have only a single neutrino, so the value must lie between −1 and 1. Signal events tend to have a higher z2 score, while normalization events tend to have a lower z2 score.
Signal Extraction
The learned densities of all event components for each of the four subsets are shown in the figure. Given a subset with C components and the corresponding learned densities fj(z1, z2) (j = 1, 2, ..., C), the z = (z1, z2) distribution of the subset can be written as a yield-weighted sum of the component densities. We first check the PDF modeling of normalization events by comparing the MC PDFs with the data in the normalization-enriched region.
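Such a mixture of component densities can be sketched as follows; the Gaussian component shapes and the yields are purely illustrative stand-ins for the learned densities and fitted yields.

```python
import numpy as np

def gauss2d(z, mu, s):
    """Uncorrelated 2-D Gaussian density, a stand-in for a learned f_j."""
    z, mu, s = map(np.asarray, (z, mu, s))
    return np.exp(-0.5 * np.sum(((z - mu) / s) ** 2)) / (2 * np.pi * s[0] * s[1])

# (yield, density) pairs -- purely illustrative component models
components = [
    (900.0, lambda z: gauss2d(z, (2.5, -5.0), (0.8, 1.0))),  # "normalization"
    (100.0, lambda z: gauss2d(z, (0.0,  2.0), (1.0, 1.5))),  # "signal"
]
n_tot = sum(n for n, _ in components)

def mixture_pdf(z):
    """F(z) = sum_j n_j f_j(z) / sum_j n_j."""
    return sum(n * f(z) for n, f in components) / n_tot

p = mixture_pdf((2.5, -5.0))   # dominated by the first component here
```

In the analysis the yields n_j are the free parameters of the fit, while the shapes f_j are fixed from the learned densities.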
The normalization-enriched region is defined in the (z1, z2) space as z1 > 2, z2 < −4, in which 94% of the events are normalization events. We then check the PDF modeling of the backgrounds (after the peaking-background calibration) by comparing the MC PDFs with the data in the background-enriched region.
Systematic Uncertainties
Procedure
Systematic uncertainties from semileptonic B meson decays involving a D∗∗, where D∗∗ denotes the L = 1 excitations of the ground-state D and D∗ mesons, are also estimated. B → D(∗)lν decays are among the dominant B meson decays in this analysis; their branching fractions are corrected, and we fluctuated them within the uncertainties given in Table 7.12 to evaluate the systematic uncertainties. To evaluate the resonant D∗∗(1P) contributions to B → D∗∗(l/τ)ν decays, the corresponding decays in the generic MC sample were fluctuated as listed in Table 7.15.
Samples of simulated B → D∗∗lν decays are used to evaluate the systematic uncertainties from resonant B → D∗∗(2S)lν decays, as given in Table 7.17. For non-resonant B → D∗∗lν decays, we fluctuated the B → D(∗)π+lν and B → D(∗)π0lν decay modes simultaneously; the corresponding systematic uncertainties are listed in Table 7.19.
Results
Summary
It is also made feasible by the custom package developed to build the best PDF decay models from millions of MC samples. Averaging the two semileptonic tagging measurements and comparing them to the SM prediction, the total difference is about 0.46σ. However, with the target luminosity of 50 ab−1 at the future Belle II experiment, the statistical uncertainty of the R(D(∗)) measurements can be reduced to only a few percent, making it comparable to the systematic uncertainties and better suited for probing new physics.
Conclusion
CONCLUSION
In the appendix, we demonstrate the idea of applying deep transfer learning algorithms to reduce systematic uncertainties. With an adversarial neural network architecture, multivariate classifiers can be trained to be insensitive to small variations of the event distributions. This framework can be applied to most high-energy physics analyses to reduce the systematics from mismodeling of the signal or background samples.
BIBLIOGRAPHY
APPENDIX A: DEEP ADVERSARIAL NETWORK FOR SYSTEMATIC UNCERTAINTY REDUCTION
Reverse validation
- Using the training set, which contains both source and target domain data, train the model and extract the optimal set of parameters θ, which does not include the
- Using the model defined by θ, predict the classes of the target domain portion of the training set, which gives us a pseudo-labeled target data set (x_t, ŷ_t)
- Now, swap the roles of the source and target domains: treat (x_t, ŷ_t) as the labeled source domain data and x_s as the unlabeled target domain data
- Train a new model on the new dataset, giving us a new set of model parameters θ′
- The reverse validation error is the error of the new model with θ′ predicting on x_s, which has known labels y_s
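The steps above can be sketched numerically. A plain logistic regression stands in for the adversarial model, and the source/target domain data are synthetic (the target domain is the same task with shifted features); everything here is illustrative, not the analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.1, steps=500):
    """Minimal logistic regression via batch gradient descent -> theta."""
    Xb = np.c_[X, np.ones(len(X))]          # add a bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))
        theta -= lr * Xb.T @ (p - y) / len(y)
    return theta

def predict(theta, X):
    Xb = np.c_[X, np.ones(len(X))]
    return (Xb @ theta > 0).astype(float)

# synthetic domains: source is labeled, target has a small feature shift
n = 2000
y_s = rng.integers(0, 2, n).astype(float)
X_s = rng.normal(0, 1, (n, 2)) + np.outer(y_s, [2.0, 0.0])
y_t_true = rng.integers(0, 2, n).astype(float)          # never used below
X_t = rng.normal(0, 1, (n, 2)) + np.outer(y_t_true, [2.0, 0.0]) + 0.3

# 1) train on the labeled source; 2) pseudo-label the target
theta = fit_logreg(X_s, y_s)
y_t_hat = predict(theta, X_t)
# 3) swap roles: train on (X_t, y_t_hat); 4) evaluate on the source
theta_rv = fit_logreg(X_t, y_t_hat)
rv_error = float(np.mean(predict(theta_rv, X_s) != y_s))
```

A small reverse validation error indicates that the model's decisions transfer back from the pseudo-labeled target to the labeled source, which is the quantity used to select hyperparameters without target labels.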