%\chapter{Near Detector Executive Summary}
\appendix
\chapter{The Near Detector Purpose and Conceptual Design}
\label{ch:appx-nd}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Overview of the DUNE Near Detector}
\label{sec:appx-nd-overview}
\subsection{Motivation} %Need for the Near Detector}
\label{sec:appx-nd:BriefOverview-need}
A primary aim of the \dword{dune} experiment is to measure the oscillation probabilities for muon neutrinos and muon antineutrinos to either remain the same flavor or oscillate to electron (anti)neutrinos.
Measuring these probabilities as a function of the neutrino energy will allow definitive determination of the neutrino mass ordering, observation of leptonic \dword{cp} violation for a significant range of $\delta_{\rm{CP}}$ values, and precision measurement of \dword{pmns} parameters.
The role of the \dword{nd} is to serve as the experiment's control. The \dword{nd} establishes the null hypothesis (i.e., no oscillations) and constrains systematic errors. It measures the initial unoscillated \numu and \nue energy spectra, and those of the corresponding antineutrinos. Of course, neutrino energy is not measured directly. What is seen in the detector is the convolution of flux, cross section, and detector response to the particles produced in the neutrino interactions, all of which have energy dependence. The neutrino energy is reconstructed from observed quantities.\footnote{In experimental neutrino physics, it is common practice to refer to the neutrino energy (and spectra) when, in fact, it is the reconstructed neutrino energy (spectra) which is meant, along with all of the flux, cross section, and detector response complexities that implies.}
To first order, a ``far/near'' ratio (or migration matrix), derived from the simulation, can predict the unoscillated energy spectra at the \dword{fd} based on the \dword{nd} measurements. The energy spectra at the \dword{fd} are then sensitive to the oscillation parameters, which can be extracted via a fit. The \dword{nd} plays a critical role in establishing what the oscillation signal spectrum should look like in the \dword{fd} because the expectations for the spectra in both the disappearance and appearance signals are based on the precisely measured spectra for $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ interactions in the \dword{nd}.
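As an illustration, and neglecting the smearing between true and reconstructed energy that is addressed below (the full formalism is given in Section~\ref{sec:appx-nd:fluxappendix}), this first-order prediction can be written as
\begin{equation}
\frac{dN^{FD}_{\rm unosc}}{dE}(E) \;\approx\; R(E)\,\frac{dN^{ND}}{dE}(E),
\qquad R(E) \equiv \frac{\Phi^{FD}(E)}{\Phi^{ND}(E)},
\end{equation}
where $R$ is the far/near flux ratio taken from the beam simulation; the oscillation probability is then applied to this unoscillated prediction in the fit. The notation here is schematic and introduced only for illustration.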
To achieve the precision needed for \dword{dune}, the experiment will have to operate beyond this first-order paradigm. With finite energy resolution and nonzero biases, the reconstructed energy spectrum is an unresolved convolution of cross section, flux, and energy response. The \dword{nd} must independently constrain each of these components and provide the information needed to model each of them well. Models of the detector, beam, and interactions fill in the holes and biases left by imperfect understanding, and they are used to estimate the size of many systematic effects. When imperfect models cannot match observations, the \dword{nd} must provide the information needed to diagnose the discrepancy and to estimate the impact of the mismodeling on the final measurements. In general, this requires that the \dword{nd} significantly outperform the \dword{fd}, which is limited by the need for a large, underground mass. The \dword{nd} must have multiple methods for measuring neutrino fluxes, as independently of cross section uncertainties as possible. Because reliance on models is unavoidable, the \dword{nd} needs to measure neutrino interactions in much greater detail than the \dword{fd}. This includes higher efficiency across the kinematically allowed phase space of all relevant reaction channels, superior identification of charged and neutral particles, better energy reconstruction, and better control of experimental biases. The \dword{nd} must also be able to measure events in a way similar to the \dword{fd}, so that it can determine the ramifications of the more limited \dword{fd} performance, provide corrections, and take advantage of effects that cancel to some extent in the near-to-far extrapolation.
The conceptual design of the \dword{nd} is based on the collective experience of the many \dword{dune} collaborators who have had significant roles in the current generation of neutrino experiments (\dword{minos}, MiniBooNE, \dword{t2k}, \dword{nova}, \dword{minerva}, and the \dword{sbn} program). These programs have provided (and will provide) a wealth of useful data and experience that has led to improved neutrino interaction models, powerful new analyses and reconstruction techniques, a deep appreciation of analysis pitfalls, and a better understanding of the error budget.
These experiments, while similar to \dword{dune}, all operated at lower precision, in a different energy range, or with very different detector technologies. While the existing and projected experience and data from those experiments provide a strong base for \dword{dune}, they are not sufficient to enable \dword{dune} to accomplish its physics goals without a high-performing \dword{nd}.
The \dword{dune} \dword{nd} will also have a physics program of its own: measuring cross sections and searching for non-standard interactions (\dshort{nsi}), sterile neutrinos, dark photons, and other exotic particles. These are important aims that expand the physics impact of the \dword{nd} complex.
Furthermore, the cross section program is coupled to the oscillation measurement insofar as the cross sections will be useful as input to theory and model development. (Note that many of the \dword{nd} data samples are incorporated into the oscillation fits directly.) %The \dword{dune} \dword{nd} program of beyond the standard model physics is discussed more in Appendix~\ref{ch:appx-ndbsm:BSMappendix}.
\subsection{Design} %Overview of the Near Detector}
\label{sec:appx-nd:BriefOverview}
The \dword{dune} \dword{nd} is formed from three primary detector components and the capability for two of these components to move off the beam axis. The three detector components serve important individual and overlapping functions with regard to the mission of the \dword{nd}. Because these components have standalone features, the \dword{dune} \dword{nd} is often discussed as a suite or complex of detectors and capabilities. The movement off axis provides a valuable extra degree of freedom in the data which is discussed extensively in this report. The power in the \dword{dune} \dword{nd} concept lies in the collective set of capabilities. It is not unreasonable to think of the component detectors in the \dword{dune} \dword{nd} as being somewhat analogous to subsystems in a collider experiment, the difference being that, with one important exception (higher momentum muons), individual events are contained within the subsystems.
The \dword{dune} \dword{nd} is shown in the \dword{dune} \dword{nd} hall in Figure~\ref{fig:NDHallconfigs}. Table~\ref{tab:NDsummch} provides a high-level overview of the three components of the \dword{dune} \dword{nd} along with the off-axis capability that is sometimes described as a fourth component.
\begin{comment} repeat from ch 1
\begin{dunefigure}[DUNE ND hall; component detectors on- and off-axis]{fig:NDHallconfigs}
{\dword{dune} \dword{nd} hall shown with component detectors all in the on-axis configuration (left) and with the \dword{lartpc} and \dword{mpd} in an off-axis configuration (right). The KLOE magnet containing the \dword{3dst} is shown in position on the beam axis . The beam axis is shown. The beam enters the hall at the bottom of the drawings moving from right to left.}
\includegraphics[width=0.49\textwidth]{graphics/NDHall_onaxis.jpg}
\includegraphics[width=0.49\textwidth]{graphics/NDHall_offaxis.jpg}
\end{dunefigure}
\end{comment}
The core part of the \dword{dune} \dword{nd} is a \dword{lartpc} called \dword{arcube}. The particular implementation of the \dword{lartpc} technology in this detector is described in Section~\ref{sec:appx-nd:lartpc} below.
This detector has the same target nucleus and shares some aspects of form and functionality with the \dword{fd}, while the differences are necessitated by the expected intensity of the beam at the \dword{nd}. This similarity in target nucleus and, to some extent, technology, reduces sensitivity to nuclear effects and detector-driven systematic errors in the extraction of the oscillation signal at the \dword{fd}. The \dword{lartpc} is large enough to provide high statistics (\num{1e8} $\numu$-\dword{cc} events/year on axis) and a sufficient volume to provide good hadron containment. The tracking and energy resolution, combined with the mass of the \dword{lartpc}, will allow for the measurement of the flux in the beam using several techniques, including the rare process of $\nu$-e$^{-}$ scattering.
The \dword{lartpc} begins to lose acceptance for muons above $\sim$\SI{0.7}{GeV/c} due to lack of containment. Because the muon momentum is a critical component of the neutrino energy determination, a magnetic spectrometer is needed downstream of the \dword{lartpc} to measure the charge sign and momentum of these muons. In the \dword{dune} \dword{nd} concept, this function is accomplished by the \dword{mpd}, which consists of a \dword{hpgtpc} surrounded by an \dword{ecal} in a \SI{0.5}{T} magnetic field. The \dword{hpgtpc} provides a lower density medium with excellent tracking resolution for the muons from the \dword{lartpc}. In addition, with this choice of technology for the tracker, neutrinos interacting on the argon in the \dword{hpgtpc} constitute a large (approximately \num{1e6} $\numu$-\dword{cc} events/year on axis) independent sample of $\nu$-Ar events that can be studied with a very low momentum threshold for tracking charged particles, excellent resolution, and systematic errors that differ from those of the liquid detector. These events will be valuable for studying the charged particle activity near the interaction vertex, since this detector can access lower-momentum protons than the \dword{lartpc} and has better particle identification of charged pions. Misidentification of pions as knock-out protons (or vice versa) biases the reconstructed neutrino energy away from its true value by roughly a pion mass. This bias can become quite significant at the lower-energy second oscillation maximum. The gas detector will play an important role in mitigating it, since pions are rarely misidentified as protons in the \dword{hpgtpc}. In addition, the relatively low level of secondary interactions in the gas samples will be helpful for identifying the particles produced in the primary interaction and modeling secondary interactions in denser detectors, which are known to be important effects\cite{Friedland:2018vry}. The high pressure increases the statistics for these studies, improves the particle identification capabilities, and improves the momentum resolution.
The \dword{mpd} is discussed further in Section~\ref{sec:appx-nd:mpd}.
The \dword{lartpc} and \dword{mpd} can move to take data in positions off the beam axis. This capability is referred to as \dword{duneprism}. As the detectors move off-axis, the incident neutrino flux spectrum changes, with the mean energy dropping and the spectrum becoming narrower (more nearly monochromatic). Though the neutrino interaction rate drops off-axis, the intensity of the beam and the size of the \dword{lartpc} combine to yield ample statistics even in the off-axis positions.
Data taken at different off-axis angles allow deconvolution of the neutrino flux and interaction cross section and the mapping of the reconstructed versus true energy response of the detector. This latter mapping is applicable at the \dword{fd} up to the level to which the near and far \dword{lar} detectors are similar. Stated a different way, it is possible to use information from a linear combination of the different fluxes to create a data sample at the \dword{nd} with an effective neutrino energy distribution that is close to that of the oscillated spectrum at the \dword{fd}. This data-driven technique will reduce systematic effects coming from differences in the energy spectra of the oscillated signal events in the \dword{fd} and the \dword{nd} samples used to constrain the interaction model. Finally, the off-axis degree of freedom provides a sensitivity to some forms of mismodeling in the beam and/or interaction models. The \dword{duneprism} program is discussed further in Section~\ref{sec:appx-nd:DP}.
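Schematically, the flux matching that underlies this technique chooses coefficients $c_i$ (for example, via a linear least-squares fit) such that a weighted sum of the fluxes measured at off-axis angles $\theta_i$ reproduces a target oscillated spectrum,
\begin{equation}
\sum_i c_i\, \Phi^{ND}_{\numu}(E_\nu;\theta_i) \;\approx\; P_{\numu\rightarrow\numu}(E_\nu)\,\Phi^{FD}_{\numu}(E_\nu),
\end{equation}
where the notation is introduced here only for illustration. The same coefficients applied to the corresponding off-axis event samples then provide a largely data-driven prediction of the oscillated \dword{fd} spectrum, up to the residuals of the fit and the small near-to-far flux differences.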
The final component of the \dword{dune} \dword{nd} suite is the beam monitor, called the \dword{sand}. The core part of it, the \dword{3dst}, is a plastic scintillator detector made of \SI{1}{\cubic\centi\meter} cubes read out along each of three orthogonal dimensions. The design eliminates the typical planar-strip geometry common to scintillator detectors, leading to improved acceptance at large angles relative to the beam direction. It is mounted
inside an envelope of high-resolution, normal pressure \dwords{tpc} and an \dword{ecal}, all
of which are surrounded by a magnet, as illustrated in Figure~\ref{fig:3dst-geometry}. The reference design uses a repurposed magnet and \dword{ecal} from the \dword{kloe} experiment.
The \dword{3dst} serves as a dedicated neutrino spectrum monitor that never moves off-axis. %stays on-axis when \dword{arcube} and the \dword{mpd} have moved to an off-axis position.
It also provides an excellent on-axis neutrino flux determination using many of the methods discussed in Section~\ref{sec:appx-nd:fluxappendix}.
The neutrino flux determined using this detector, with detector technology, target nuclei, and interaction systematic errors that differ from those of \dword{arcube}, is an important point of comparison and a systematic cross-check for the flux as determined by \dword{arcube}.
\dword{sand} provides very fast timing and can isolate small energy depositions from neutrons in three dimensions. This provides the capability to incorporate neutrons in the event reconstruction using energy determination via time-of-flight with a high efficiency. %In addition, the \dword{3dst} has very fast timing and the ability to isolate small energy depositions from neutrons in three dimensions. This provides the capability to incorporate neutrons in the event reconstruction using energy determination via time-of-flight with a high efficiency.
This capability is expected to be useful for the low-$\nu$ flux determination since it allows for tagging of events with a significant neutron energy component\footnote{The low-$\nu$ technique involves measuring the flux for events with low energy transfer because the cross section is approximately constant with energy for this sample. It provides a nice way to measure the shape of the spectrum. This is discussed further in Section~\ref{sec:appx-nd:fluxappendix}.}.
The inclusion of the neutron reconstruction also provides a handle for improving the neutrino energy reconstruction in $\overline{\nu}_\mu$ \dword{ccqe} events, which is helpful for the $\overline{\nu}_\mu$ flux determination.
The
different mass number $A$ of the carbon target relative to argon may prove to be useful for developing models of nuclear effects and building confidence in the interaction model and the size of numerous systematic errors. The addition of the neutron reconstruction capability extends the \dword{dune} \dword{nd} theme of including regions of phase space in neutrino interactions not seen in previous experiments. This capability may provide insights that foster improvements in the neutrino interaction model on carbon. Though extrapolating such improvements to argon is not straightforward, the development of current generators has benefited from data taken with different nuclear targets, including carbon.
The \dword{sand} component of the \dword{nd} is discussed more in Section~\ref{sec:appx-nd:mpt-3dst}.
Table~\ref{tab:fluxrates} shows the statistics expected in the different \dword{nd} components for a few processes that are important for constraining the neutrino flux. Some additional information on constraining the flux is provided in Section~\ref{sec:appx-nd:fluxappendix}.
\begin{dunetable}[Event rates for flux constraining processes]{llll}{tab:fluxrates}{Event rates for processes that can be used to constrain the neutrino flux. The rates are given per year for a \SI{1}{ton} (\dword{fv}) \dword{hpgtpc}, a \SI{25}{ton} (\dword{fv}) \dword{lartpc} \cite{bib:docdb12388}, and a \SI{9}{ton} (\dword{fv}) \dword{3dst}. The flux for the \dword{hpgtpc} and \dword{lartpc} is from the simulated ``2017 engineered'' \dword{lbnf} beam with a primary momentum of \SI{120}{GeV/c} and \SI{1.1e21}{POT/year}. The flux for the \dword{3dst} is the \SI{80}{GeV}, three-horn, optimized beam with \SI{1.46e21}{POT/year}. The detectors are assumed to be on-axis. Fiducial volumes are analysis dependent and in the case of the \dword{lartpc}, it is likely the volume could be made larger by a factor of two for many analyses.}
Event class & \dword{lartpc} & \dword{hpgtpc} & \dword{3dst} \\ \toprowrule
\numu + $e^-$ elastic ($E_e>\SI{500}{MeV}$) & \num{3.3e3} & \num{1.3e2} & \num{1.1e3} \\ \colhline
\numu low-$\nu$ ($\nu<\SI{250}{MeV})$ & \num{5.3e6} & \num{2.1e5} & \num{1.48e6} \\ \colhline
\numu \dword{cc} coherent & \num{2.2e5} & \num{8.8e3} & \\ \colhline
\anumu \dword{cc} coherent & \num{2.1e4} & \num{8.4e2} & \\
\end{dunetable}
\section{Role of the ND in the DUNE Oscillation Program}
\label{sec:appx-nd:exsum-nd-role}
Oscillation experiments need to accomplish three main tasks. First, they must identify the flavor of interacting neutrinos in \dword{cc} events, or identify the events as \dword{nc} interactions. Second, they need to measure the energy of the neutrinos since oscillations occur as a function of baseline length over neutrino energy, \dword{l/e}. Third, they need to compare the observed event spectrum in the \dword{fd} to predictions based on differing sets of oscillation parameters, subject to constraints from the data observed in the \dword{nd}. That comparison and how it varies with the oscillation parameters allows for the extraction of the measured oscillation parameters and errors.
The connection between the observations in the \dword{nd} and the \dword{fd} is made using a simulation that convolves models of the neutrino flux, neutrino interactions, nuclear effects, and detector response.
This gives rise to a host of complicating effects that
muddy the simple picture. They come from two main sources. First, the identification efficiency is not \SI{100}{\%} and there is
some background (e.g., \dword{nc} events with a $\pi^0$ are a background to \nue \dword{cc} interactions). Both the efficiency and the background are imperfectly known. Generally, it is helpful to have a \dword{nd} that is as similar as feasible to the \dword{fd} because a bias in the efficiency as a function of energy will cancel between the two detectors. Since the background level tends to be similar between the two detectors, it is helpful if the \dword{nd} is more capable than the \dword{fd} at characterizing backgrounds, either due to its technology, or by leveraging the much larger statistics and freedom to take data in alternative beam configuration modes (e.g., different horn currents or movement off the beam axis).
The second major source of complication occurs because the \dword{fd} (and the similar \dword{nd}) has to be made of heavy nuclei rather than hydrogen. Neutrino interactions can be idealized as a three stage process: (1) a neutrino impinges on a nucleus with nucleons in some initial state configuration, (2) scattering occurs with one of the nucleons, perhaps creating mesons, and (3) the hadrons reinteract with the remnant nucleus on their way out (so called \dword{fsi}). The presence of the nucleus impacts all three stages in ways that ultimately drive the design of the \dword{nd} complex. To better understand this it is useful to consider what would happen if the detectors were made of hydrogen.
%*** thoughts in this section were influenced by the S.C. group (R. Petti, S. Mishra) ***
In a detector made of hydrogen, the initial state is a proton at rest and there are no \dword{fsi}. The scattering consists of a variety of processes. The simplest is \dword{qe} scattering: $\bar{\nu}_\ell p \to \ell^+ n$. The detector sees a lepton (which establishes the flavor of the neutrino), no mesons, and perhaps a neutron interaction away from the lepton's vertex. Because there are no mesons, the kinematics is that of two-body scattering and the neutrino energy can be reconstructed from the lepton's angle (with respect to the $\nu$ beam) and energy. This is independent of whether the neutron is observed.
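For reference, a minimal statement of this two-body reconstruction (assuming the target proton is at rest, as is appropriate for hydrogen) expresses the neutrino energy in terms of the charged lepton's total energy $E_\ell$, momentum $p_\ell$, and angle $\theta_\ell$ with respect to the beam:
\begin{equation}
E_\nu = \frac{m_n^2 - m_p^2 - m_\ell^2 + 2\,m_p E_\ell}{2\,(m_p - E_\ell + p_\ell\cos\theta_\ell)}.
\end{equation}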
For $\nu_\ell$ interactions on hydrogen there is no \dword{qe} process. The simplest scattering channel is single pion production $\nu_\ell p \to \ell^- \pi^{(+,0)} (n,p)$. In that case the neutrino energy may be reconstructed from the energy of the lepton and pion, and their angles with respect to the beam\footnote{The nucleon does not need to be observed. This is a consequence of having four energy-momentum conservation constraints, which allows $E_\nu$ and $\vec{p}_N$ to be computed.}. In both cases, the neutrino energy can be measured without bias so long as the detector itself measures lepton and meson momenta and angles without bias. The neutrino energy in more complicated scattering channels, such as those with multiple pions or heavy baryons, can be measured in a similar way (at least in principle).
A key simplifying feature offered by a hypothetical hydrogen detector is simply that there are enough constraints to measure the neutrino energy without needing to measure the single nucleon (especially a neutron escaping the detector). Additionally, the cross sections for different scattering channels (particularly the simpler ones) can be expressed in terms of leptonic and hadronic currents. The leptonic current is well understood. The structural elements of the hadronic current are known on general theoretical grounds. The current is often represented by form factors that are constrained by electron scattering experiments, beta decay, and neutrino scattering measurements that the detector can make itself (or take from other experiments).
The situation is significantly more complicated in a detector with heavy nuclei. The nucleons in the initial state of the nucleus are mutually interacting and exhibit Fermi motion. This motion ruins the key momentum conservation constraint available in hydrogen due to the target being at rest. Scattering at lower momentum transfer is suppressed because the nucleon in the final state would have a momentum that is excluded by the Pauli principle.
The nucleon momentum distribution in heavy nuclei is commonly modeled as a Fermi gas with a cutoff momentum $k_F \approx \SI{250}{MeV/c}$ \cite{Smith:1972xh}.
This picture is overly simplistic. For example, there are nucleons with momenta larger than $k_F$ due to short-range correlated nucleon-nucleon interactions (\dword{src})\cite{Bodek:2014jxa}. Scattering on a nucleon with $p>k_F$ implies that there is a spectator nucleon recoiling against the target with a significant momentum. \dword{src} have been the subject of much investigation but are not fully understood or fully implemented in neutrino event generators.
Additionally, there is a second multi-body effect. For the few-GeV neutrinos of interest to \dword{dune}, the typical momentum transfer corresponds to a probe that has a wavelength on par with the size of a nucleon. In this case, the scattering can occur on two targets in the nucleus which may be closely correlated (\dword{2p2h} scattering). Experiments can easily confuse this process for \dword{qe} scattering since there are no mesons and one or both of the two nucleons may have low energy, evading detection. The presence of two nucleons in the initial and final state again ruins the kinematic constraints available in hydrogen. It is now known that \dword{2p2h} scattering is a significant part of the total scattering cross section at \dword{dune} energies \cite{Ruterbories:2018gub}. The \dword{2p2h} cross section is difficult to compute because it cannot be expressed as the sum over cross sections on individual nucleons. The dependence on atomic number and the fine details of the interaction (e.g., the final energies of the two particles) are also currently unknown. Finally, it is widely expected that there are components of \dword{2p2h} and \dword{src} scattering that result in meson production. Event generators do not currently include such processes.
Neutrino scattering on nuclei is also subject to \dword{fsi}. \dword{fsi} collectively refers to the process by which nucleons and mesons produced by the neutrino interaction traverse the remnant nucleus. The hadrons reinteract with a variety of consequences: additional nucleons can be liberated; ``thermal'' energy can be imparted to the nucleus; pions can be created and absorbed; and pions and nucleons can undergo charge exchange scattering (e.g., $\pi^- p \to \pi^0 n$). Event generators include phenomenological models for \dword{fsi}, anchoring to hadron-nucleus scattering data.
The heavy nuclei in a detector also act as targets for the particles that have escaped the struck nucleus. Generally speaking, the denser the detector and the more crudely it samples deposited energy, the more difficult it is to observe low-energy particles. Negatively and positively charged pions leave different signatures in a detector since the former are readily absorbed while the latter are likely to decay. Neutrons can be produced from the struck nucleus, but also from follow-on interactions of the neutrino's reaction products with other nuclei. The energy carried away by neutrons is challenging to detect and can bias the reconstructed neutrino energy.
Finally, it is important to note that a significant fraction of the neutrino interactions in \dword{dune} will come from deep inelastic scattering rather than the simpler \dword{qe} scattering discussed above. This typically leads to a more complex event morphology (beyond the heavy nucleus complications) and greater challenges for the detector and the modeling.
\section{Lessons Learned}
\label{sec:appx-nd:overview-lessons}
\subsection{Current Experiments}
Neutrino beams are notoriously difficult to model at the precision and accuracy required for modern accelerator-based experiments. Recent \dword{lbl} experiments make use of a \dword{nd} placed close to the beam source, where oscillations are not yet a significant effect. The beam model, the neutrino interaction model, and perhaps the detector response model are tuned, or calibrated, by the data recorded in the \dword{nd}. The tuned model is used in the extraction of the oscillation signal at the \dword{fd}. Known effects that are not understood or modeled well must be propagated into the final results as part of the systematic error budget. Unknown effects that manifest as disagreements between the model and observations in the \dword{nd} also must be propagated into the final results as part of the systematic error budget. These kinds of disagreements have happened historically to every precision accelerator oscillation experiment. When such disagreements arise, some assumption or range of assumptions must be made about the source of the disagreement. Without narrowing down the range of possibilities, this can become a leading systematic error.
Since the final results depend on the comparison of what is seen in the \dword{fd} to that in the \dword{nd}, having functionally identical detectors (i.e., the same target nucleus and similar detector response) is helpful. In a similar vein, differences between the neutrino spectrum at the \dword{nd} and the oscillated spectrum seen at the \dword{fd} lead to increased sensitivity to systematic effects propagated from the \dword{nd} to the \dword{fd}.
The past experience of the neutrino community is a driving force in the design of the \dword{dune} \dword{nd} complex.
The performance of current, state-of-the-art long baseline oscillation experiments provides a practical guide to many of the errors and potential limitations \dword{dune} can expect to encounter, as well as case studies of issues that arose which were unanticipated at the design stage.
The \dword{t2k} experiment uses an off-axis neutrino beam that has a narrow energy distribution peaked below \SI{1}{GeV}. This means, relative to \dword{dune}, interactions in \dword{t2k} are predominantly \dword{ccqe} and have relatively simple morphologies. The data sample has little feed-down from higher energy interactions. The \dword{t2k} \dword{nd} (plastic scintillator and \dword{tpc}) technology is very different from its \dword{fd} (water Cherenkov), though the \dword{nd} contains embedded water targets that provide samples of interactions on the same target used in the \dword{fd}.
The experiment relies on the flux and neutrino interaction models, as well as the \dword{nd} and \dword{fd} response models, to extrapolate the constraint from the \dword{nd} to the \dword{fd}. In the most recent oscillation results released by \dword{t2k}, the \dword{nd} data constraint reduces the flux and interaction model uncertainties at the \dword{fd} from 11--14\% down to 2.5--4\%~\cite{Abe:2018wpn}. Inclusion of the water target data was responsible for a factor of two reduction in the systematic uncertainties, highlighting the importance of measuring interactions on the same target nucleus as the \dword{fd}.\footnote{These numbers are not used directly in the analysis but were extracted to provide an indication of the power of the \dword{nd} constraint.}
The \dword{nova} experiment uses an off-axis neutrino beam from \dword{numi} that has a narrow energy distribution peaked around \SI{2}{GeV}. The \dword{nova} \dword{nd} is functionally identical to its \dword{fd}. Still, it is significantly smaller than the \dword{fd} and it sees a different neutrino spectrum due to geometry and oscillations. Even with the functionally identical near and far detectors, \dword{nova} uses a model to subtract \dword{nc} background and relies on a model-dependent response matrix to translate what is seen in the \dword{nd} to the ``true'' spectrum, which is then extrapolated to the \dword{fd} where it is put through a model again to predict what is seen in the \dword{fd}~\cite{NOvA:2018gge, WolcottNUINT2018}. Within the extrapolation, the functional similarity of the near and far detectors reduces but does not eliminate many systematic effects. Uncertainties arising from the neutrino cross section model dominate the \dword{nova} $\nu_{e}$ appearance systematic error budget and are among the larger errors in the $\nu_{\mu}$ disappearance results. The \dword{nd} constraint is significant. For the $\nu_{e}$ appearance signal sample in the latest \dword{nova} results, for example, a measure of the systematic error arising from cross section uncertainties without using the \dword{nd} extrapolation is 12\,\% and this drops to 5\,\% if the \dword{nd} extrapolation is used \cite{WolcottNUINT2018}.
The process of implementing the \dword{nd} constraint in both \dword{t2k} and \dword{nova} is less straightforward than the typical description implies. It will not be any more straightforward for \dword{dune}. One issue is that there are unavoidable near and far differences. Even in the case of functionally identical detectors, the beam spectrum and intensity are very different near to far. For \dword{dune}, in particular,
\dword{arcube} is smaller than the \dword{fd} and is divided into modular, optically isolated regions that have a pixelated readout rather than the wire readout of the \dword{fd}. Space charge effects will differ near to far. All of this imposes model dependence on the extrapolation from near to far. This is mitigated by collecting data at differing off-axis angles with \dword{duneprism}, where an analysis can be done with an \dword{nd} flux that is similar to the oscillated \dword{fd} flux (see Section~\ref{sec:appx-nd:DP}). (Data from \dword{protodune} will also be useful to understand the energy-dependent detector response for the \dword{fd}.) Regardless, near to far differences will persist and must be accounted for through the beam, detector, and neutrino interaction models.
Although long baseline oscillation experiments use the correlation of fluxes at the \dword{nd} and the \dword{fd} to reduce sensitivity to flux modeling, the beam model is a critical component in understanding this correlation. Recently, the \dword{minerva} experiment used spectral information in the data to diagnose a discrepancy between the expected and observed neutrino event energy distribution in the \dword{numi} medium energy beam \cite{JenaNUINT2018}. In investigating this issue, \dword{minerva} compared the observed and simulated neutrino event energy distribution for low-$\nu$ events, as shown in Figure~\ref{fig:minervameflux}. Since the cross section is known to be relatively flat as a function of neutrino energy for this sample, the observed disagreement as a function of energy indicated a clear problem in the flux model or reconstruction.
\dword{minerva} believes the observed discrepancy between the data and simulation is best described by a mismodeling of the horn focusing combined with an error in the muon energy reconstruction (using range traversed in the downstream spectrometer). This is notable, in part, because the two identified culprits would manifest differently in the extrapolation to the far detector in an oscillation experiment. The spectral analysis provided critical information in arriving at the final conclusion. This experience illustrates the importance of good monitoring and measurements of the neutrino beam spectrum.
\begin{dunefigure}[MINERvA medium energy \dshort{numi} flux for low-$\nu$ events]{fig:minervameflux}
{Reconstructed \dword{minerva} medium energy \dword{numi} neutrino event spectrum for low-energy transfer events compared to simulation (left) and same comparison shown as a ratio (right). From \cite{JenaNUINT2018}.}
\includegraphics[width=0.49\textwidth]{graphics/minerva_enu.jpg}
\includegraphics[width=0.49\textwidth]{graphics/minerva_enuratio.jpg}
\end{dunefigure}
Another important issue is that the neutrino interaction model is not perfect, regardless of the experiment and implementation. With an underlying model that does not describe reality, even a model tuned to \dword{nd} data will have residual disagreements with that data. These disagreements must be accounted for in the systematic error budget of the ultimate oscillation measurements. Although the model(s) may improve before \dword{dune} operation, the degree of that improvement cannot be predicted and the \dword{dune} \dword{nd} complex should have the capability to gather as much information as possible to help improve and tune the model(s) during the lifetime of the experiment. In other words, the \dword{nd} needs to be capable of narrowing the range of plausible possibilities giving rise to data-model differences at the \dword{nd} in order to limit the systematic error incurred in the results extracted from the \dword{fd}.
Recent history provides illustrations of progress and continuing struggles to improve neutrino interaction models. The MiniBooNE collaboration published results in 2010 showing a disagreement between the data and the expected distribution of \dword{ccqe} events as a function of Q$^{2}$ \cite{AguilarArevalo:2010cx,Gran:2006jn}. They brought the model into agreement with the data by increasing the effective axial mass used in the axial form factor of the model. K2K \cite{Gran:2006jn} and \dword{minos} \cite{Adamson:2014pgc} made similar measurements. It has since been shown that the observed disagreement is due to the need to include multi-nucleon processes, and that the use of the large effective axial mass these experiments adopted to fit the data leads to a misreconstruction of the neutrino energy.
The importance of modeling multi-nucleon (\dword{2p2h}) processes for oscillation experiments is underscored by the fact that such interactions when reconstructed as a \dword{ccqe} (1p1h) process lead to a significant low-side tail in the reconstructed neutrino energy \cite{Martini:2012uc}. Multi-nucleon processes also change the hadronic calorimetric response. The first \dword{nova} $\nu_{\mu}$ disappearance oscillation results had a dominant systematic error driven by the disagreement of their model to the data in their hadronic energy distribution \cite{Adamson:2016xxw}. In more recent work, the inclusion of multi-nucleon processes in the interaction model contributed to a substantial reduction of this disagreement \cite{NOvA:2018gge}.
The \dword{minerva} experiment has compiled a significant catalog of neutrino and antineutrino results and recently developed a model tuned to their \dword{qe}-like (\dword{numi} low energy) data \cite{Ruterbories:2018gub}. The tune is based on a modern neutrino interaction generator (\dword{genie} 2.8.4 \cite{Andreopoulos:2009rq}, using a global Fermi gas model \cite{Smith:1972xh} with a Bodek-Ritchie tail \cite{Bodek:1981wr} and the INTRANUKE-hA \dword{fsi} model \cite{Dytman:2007zz}). On top of this, \dword{minerva} scales down non-resonant pion production \cite{Rodrigues:2016xjj}, includes a random phase approximation (RPA) model \cite{Nieves:2004wx,Gran:2017psn}, and incorporates a multi-nucleon model \cite{Nieves:2011pp, Gran:2013kda, Schwehr:2016pvn} with an empirical enhancement in the dip region between the \dword{qe} and $\Delta$ regions that is determined by a fit to the neutrino data \cite{Ruterbories:2018gub}. The same tune, developed on the neutrino data, also fits the \dword{minerva} antineutrino \dword{qe}-like data well (with no additional tuning or ingredients). The required enhancement of the multi-nucleon contribution implies shortcomings in the interaction model, but the decent fit to both the neutrino and antineutrino data implies that the tune is effectively compensating for some of those imperfections.
More recent versions of \dword{genie} include some of the modifications incorporated by \dword{minerva} in the tune discussed above \cite{Alam:2015nkk}. This illustrates the dynamic nature of neutrino interaction modeling and the interplay between the experiments and generator developers. The evolution of the field continues, as illustrated by this snapshot of some current questions and areas of focus:
\begin{itemize}
\item There is a pronounced deficit of pions produced at low Q$^{2}$ in \dword{cc}1$\pi^{0}$ events as compared to expectations \cite{BercellieNUINT2018,Altinok:2017xua,Aliaga:2015wva,McGivern:2016bwh,novaminosPC}. Current models take this into account by tuning to data without any underlying physical explanation for how or why this happens.
\item The \dword{minerva} tune that fits both neutrino and antineutrino \dword{ccqe} data involves a significant enhancement and distortion of the \dword{2p2h} contribution to the cross section. The real physical origin of this cross section strength is unknown. Models of multi-nucleon processes disagree significantly in predicted rates.
\item Multi-nucleon processes likely contribute to resonance production. This is neither modeled nor well constrained.
\item Cross section measurements used for comparison to models are a convolution of what the models view as initial state, hard scattering, and final state physics. Measurements able to deconvolve these contributions are expected to be very useful for model refinements.
\item Most neutrino generators make assumptions about the functional form of the form factors and factorize nuclear effects in neutrino interactions into initial and final state effects via the impulse approximation. These are likely oversimplifications. The models will evolve and the systematic errors will need to be evaluated in light of that evolution.
\item Neutrino detectors are largely blind to neutrons and low-momentum protons and pions (though some $\pi^{+}$ are visible via Michel decay). This leads to smearing in the reconstructed energy and transverse momentum, as well as a reduced ability to accurately identify specific interaction morphologies. The closure of these holes in the reconstructed particle phase space is expected to provide improved handles for model refinement.
\item There may be small but significant differences in the $\nu_{\mu}$ and $\nu_{e}$ \dword{ccqe} cross sections which are poorly constrained \cite{Day-McFarland:2012}.
\end{itemize}
Given the critical importance of neutrino interaction models and the likelihood that the process of refining these models will continue through the lifetime of \dword{dune}, it is important that the \dword{dune} \dword{nd} suite be highly capable.
%\subsection{Past Experience Motivates the ND Design}
\subsection{Past Experience}
\label{sec:appx-nd:overview-experience}
The philosophy driving the \dword{dune} \dword{nd} concept is to provide sufficient redundancy to address areas of known weaknesses in previous experiments and known issues in the interaction modeling insofar as possible, while providing a powerful suite of measurements that is likely to be sensitive to unanticipated issues and useful for continued model improvements. Anything less reduces \dword{dune}'s potential to achieve significantly improved systematic errors over previous experiments in the \dword{lbl} analyses.
The \dword{dune} \dword{nd} incorporates many elements in response to lessons learned from previous experiments.
The massive \dword{nd} \dword{lartpc} has the same target nucleus and a similar technology to the \dword{fd}. These characteristics reduce the detector and target systematic sensitivity in the extrapolation of flux constraints from this detector to the \dword{fd}. This detector is capable of providing the primary sample of \dword{cc} $\nu_{\mu}$ interactions to constrain the flux at the \dword{fd}, along with other important measurements of the flux from processes like $\nu$-e$^{-}$ scattering and low-$\nu$. Samples taken with this detector at off-axis angles (\dword{duneprism}) will allow the deconvolution of the flux and cross section errors and provide potential sensitivity to mismodeling. The off-axis data can, in addition, be used to map out the detector response function and construct effective \dword{nd} samples that mimic the energy distribution of the oscillated sample at the \dword{fd}.
The \dword{dune} \dword{nd} provides access to particles produced in neutrino interactions that have been largely invisible in previous experiments, such as low-momentum protons and charged pions measured in the \dword{hpgtpc} and neutrons in the \dword{3dst} and \dword{ecal}. The \dword{hpgtpc} provides data on interactions that minimize the effect of secondary interactions on the produced particles. These capabilities improve the experiment's ability to identify specific interaction morphologies, study samples with improved energy resolution, and extract samples potentially useful for improved tuning of model(s) of multi-nucleon processes. The neutron content in neutrino and antineutrino interactions is different and this will lead to differences in the detector response. For an experiment that is measuring \dword{cpv}, data on neutron production in neutrino interactions is likely to be an important handle in the tuning of the interaction model and the flavor-dependent detector response function model.
The \dword{3dst} provides dedicated beam spectrum monitoring on axis, as well as high statistics samples useful for the on-axis flux determination as a cross-check on the primary flux determination (which has different detector and target systematic errors). The beam spectrum monitoring is useful for identifying and diagnosing unexpected changes in the beam. This proved useful for \dword{numi} and is likely to be more important for \dword{dune} given the need to associate data taken at different times and off-axis angles.
The large data sets that will be accumulated by the three main detectors in the \dword{nd} suite will allow for differential studies and the use of transverse kinematic imbalance variables, where each detector brings its unique strengths to the study: the \dword{lartpc} has good tracking resolution and containment and massive statistics; the \dword{hpgtpc} has excellent tracking resolution, very low charged particle tracking thresholds, and unambiguous track charge sign determination; and the \dword{3dst} has good containment and can include neutrons on an event-by-event basis. The neutrino interaction samples acquired by this array of detectors will constitute a powerful laboratory for deconvoluting the initial state, hard scattering, and final state physics, which, in turn, will lead to improved modeling and confidence in the final results extracted from the \dword{fd}.
\section{Constraining the Flux in the ND}
\label{sec:appx-nd:fluxappendix}
The \dword{dune} \dword{fd} will not measure the neutrino oscillation probability directly. Instead, it will measure the neutrino interaction rate for different neutrino flavors as a function of the reconstructed neutrino energy. It is useful to formalize the measurements that are performed in the near and far \dwords{detmodule} in the following equations:
\begin{align}
\label{eq:fdrate}
\frac{dN^{FD}_{x}}{dE_{rec}}(E_{rec}) & =
\int \Phi^{FD}_{\numu}(E_\nu)P_{\numu\rightarrow x}(E_\nu)\sigma^{Ar}_x(E_\nu)T^{FD,Ar}_x(E_\nu,E_{rec})dE_\nu\\
\frac{dN^{ND}_{x}}{dE_{rec}}(E_{rec}) & =
\int \Phi^{ND}_{x}(E_\nu)\sigma^m_x(E_\nu)T^{d,m}_x(E_\nu,E_{rec})dE_\nu\
\end{align}
with
\begin{itemize}
\item $x$ = \nue , \numu
\item $d$ = \mbox{detector index}(\dword{nd},\dword{fd})
\item $m$ = \mbox{interaction target/material, (e.g., H, C, or Ar)}
\item $E_\nu$ = \mbox{true neutrino energy}
\item $E_{rec}$ = \mbox{reconstructed neutrino energy}
\item $T^{d,m}_x(E_\nu,E_{rec})$ = \mbox{true-to-reconstruction transfer function}
\item $\sigma^m_x(E_\nu)$ = \mbox{neutrino interaction cross section}
\item $\Phi^{d}_x(E_\nu)$ = \mbox{un-oscillated neutrino flux}
\item $\frac{dN^{d}_{x}}{dE_{rec}}(E_{rec})$ = \mbox{measured differential event rate per target (nucleus/electron)}
\end{itemize}
There are equivalent formulae for antineutrinos. For simplicity, the instrumental backgrounds (wrongly selected events) and the intrinsic beam contaminations (\nue interactions in case of the appearance measurement) have been ignored. But an important function of the \dword{nd} is also to quantify and characterize those backgrounds.
It is not possible to constrain the \dword{fd} neutrino flux directly, but the near-to-far flux ratio is believed to be tightly constrained by existing hadron production data and the beamline optics. As such Equation~\ref{eq:fdrate} can be rewritten as
\begin{align}
\frac{dN^{FD}_{x}}{dE_{rec}}(E_{rec}) & =
\int \Phi^{ND}_{\numu}(E_\nu)R(E_\nu)P_{\numu\rightarrow x}(E_\nu)\sigma^{Ar}_x(E_\nu)T^{FD,Ar}_x(E_\nu,E_{rec})dE_\nu
\end{align}
with
\begin{align}R(E_\nu) = \frac{\Phi^{FD}_{\numu}(E_\nu)}{\Phi^{ND}_{\numu}(E_\nu)}
\end{align}
taken from the beam simulation.
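Numerically, a convolution of this form can be evaluated by binning in true and reconstructed energy, so that the flux becomes a vector, the flux ratio, oscillation probability, and cross section become per-bin weights, and the transfer function becomes a migration matrix. The sketch below is purely illustrative (hypothetical array names, not \dword{dune} analysis code) and shows the structure of such a forward fold.
\begin{verbatim}
# Schematic forward fold of the FD event-rate prediction (illustration only).
# Inputs are hypothetical binned arrays:
#   phi_nd[i]   : ND numu flux in true-energy bin i
#   R[i]        : far/near flux ratio from the beam simulation
#   P_osc[i]    : oscillation probability numu -> nu_x
#   sigma_ar[i] : nu_x cross section on argon
#   T[j, i]     : probability that true-energy bin i is reconstructed in bin j
#   dE[i]       : true-energy bin width
import numpy as np

def predict_fd_rate(phi_nd, R, P_osc, sigma_ar, T, dE):
    # Event rate in each true-energy bin, before detector smearing
    rate_true = phi_nd * R * P_osc * sigma_ar * dE
    # Apply the true-to-reconstructed migration matrix
    return T @ rate_true
\end{verbatim}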
It is not possible to measure only a near-to-far event ratio and extract the oscillation probability since many effects do not cancel trivially. This is due to the non-diagonal true-to-reconstruction matrix, which not only depends on the underlying differential cross section, but also on the detector used to measure a specific reaction.
%\begin{align*}
\begin{align}
\frac{dN^{FD}_{x}}{dE_{rec}}(E_{rec})/{\frac{dN^{ND}_{\numu}}{dE_{rec}}(E_{rec})} & \neq R(E_\nu)P_{\numu\rightarrow x}(E_\nu)\frac{\sigma^{Ar}_x(E_\nu)}{\sigma^{m}_{\numu}(E_\nu)}
\end{align}
%\end{align*}
It is therefore important that the \dword{dune} \dword{nd} suite constrain as many components as possible.
While the near-to-far flux ratio is tightly constrained to the level of \SIrange{1}{2}{\%}, the same is not true for the absolute flux itself. \dword{t2k}, using hadron production data obtained from a replica target, can constrain the absolute flux at the \dword{nd} to \SIrange{5}{6}{\%} in the peak region and to around 10\% in most of its energy range. The \dword{numi} beam has been constrained to 8\% using a suite of thin target hadron production data. The better the \dword{nd} flux is known, the easier it is to constrain modeling uncertainties by measuring flux-integrated cross sections. Predicting the event rate at the \dword{fd} to a few percent will require additional constraints to be placed with the \dword{nd} or substantial improvements in our understanding of the hadron production and focusing uncertainties.
Several handles to constrain the flux are addressed below. Briefly, they offer the following constraints:
\begin{itemize}
\item The overall flux normalization and spectrum can be constrained by measuring neutrino scattering off of atomic electrons.
\item The energy dependence (``shape'') of the \numu and \anumu %\numubar
flux can be constrained using the ``low-$\nu$'' scattering process.
\item The flux ratio $\anumu/\numu$ can be constrained using \dword{cc} coherent neutrino scattering.
\item The $\nue/\numu$ flux ratio in the energy region where standard oscillations occur is well-constrained by the beam simulation. The experiment can also measure the $\nue/\numu$ interaction ratio and constrain the flux ratio using cross section universality.
\end{itemize}
\subsection{Neutrino-Electron Elastic Scattering}
\label{sec:appx-nd:fluxintro-e-nu-scatt}
Neutrino-electron scattering ($\nu \ e \rightarrow \nu \ e$) is a pure electroweak process with calculable cross section at tree level. The final state consists of a single electron, subject to the kinematic constraint
\begin{equation}
1 - \cos \theta = \frac{m_{e}(1-y)}{E_{e}},
\end{equation}
where $\theta$ is the angle between the electron and incoming neutrino, $E_{e}$ and $m_{e}$ are the electron total energy and mass, respectively, and $y = T_{e}/E_{\nu}$ is the fraction of the neutrino energy transferred to the electron. For \dword{dune} energies, $E_{e} \gg m_{e}$, and the angle $\theta$ is very small, such that $E_{e}\theta^{2} < 2m_{e}$.
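For small angles, $1-\cos\theta \approx \theta^{2}/2$, so the constraint above implies
\begin{equation}
E_{e}\,\theta^{2} \approx 2\,m_{e}\,(1-y) \leq 2\,m_{e},
\end{equation}
which is why $E_{e}\theta^{2}$ is the natural variable for selecting signal events and for defining the higher-$E_{e}\theta^{2}$ control samples used to constrain backgrounds.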
The overall flux normalization can be determined by counting $\nu \ e \rightarrow \nu \ e$ events. Events can be identified by searching for a single electromagnetic shower with no other visible particles. Backgrounds from $\nu_{e}$ \dword{cc} scattering can be rejected by looking for large energy deposits near the interaction vertex, which are evidence of nuclear breakup. Photon-induced showers from \dword{nc} $\pi^{0}$ events can be distinguished from electrons by the energy profile at the start of the track. The dominant background is expected to be $\nu_{e}$ \dword{cc} scattering at very low $Q^{2}$, where final-state hadrons are below threshold, and $E_{e}\theta^{2}$ happens to be small. The background rate can be constrained with a control sample at higher $E_{e}\theta^{2}$, but the shape extrapolation to $E_{e}\theta^{2} \rightarrow 0$ is uncertain at the \SIrange{10}{20}{\%} level.
For the \dword{dune} flux, approximately \num{100} events per year per ton of fiducial mass are expected with electron energy above \SI{0.5}{GeV}. For a \dword{lartpc} mass of 25 tons, this corresponds to \num{3300} events per year. The statistical uncertainty on the flux normalization from this technique is expected to be $\sim$1\%. \dword{minerva} has achieved a systematic uncertainty just under 2\%
% 18 June 2020 preparing for JINST: this number in the updated reference is 2.3%. MKordosky.
and it seems plausible that \dune could do at least as well\cite{Valencia:2019mkf}. %{bib:minervanue}.
The \dword{3dst} can also do this measurement with significant statistics and with detector and reconstruction systematics largely uncorrelated with \dword{arcube}. The signal is independent of the atomic number $A$ and the background is small; so, it seems plausible the samples can be combined to good effect.
\subsection{The Low-$\nu$ Method}
\label{ssec:intro-low-nu}
The inclusive cross section for \dword{cc} scattering $(\nu_l+N\rightarrow l^-+X)$ does not depend on the neutrino energy in the limit where the energy transferred to the nucleus $\nu = E_\nu-E_{l} $ is zero~\cite{bib:original_lownu}. In that limit, the event rate is proportional to the flux, and by measuring the rate as a function of energy, one can get the flux ``shape.'' This measurement has been used in previous experiments and has the potential to provide a constraint in \dune with a statistical uncertainty $<1\%$.
In practice, one cannot measure the rate at $\nu=0$. Instead, it is necessary to restrict $\nu$ to be less than a few hundred MeV. This introduces a relatively small $E_\nu$ dependence into the cross section that must be accounted for to obtain the flux shape. Thus the measurement technique depends on the cross section model, but the uncertainty is manageable~\cite{bib:bodek_lownu}. This is particularly true if low-energy protons and neutrons produced in the neutrino interaction can be detected.
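Schematically, the selected sample obeys
\begin{equation}
\left.\frac{d\sigma}{d\nu}\right|_{\nu<\nu_{0}} = \mathcal{A}\left[1 + \mathcal{O}\!\left(\frac{\nu_{0}}{E_\nu}\right)\right]
\quad\Rightarrow\quad
N(E_\nu;\,\nu<\nu_{0}) \propto \Phi(E_\nu)\left[1 + \mathcal{O}\!\left(\frac{\nu_{0}}{E_\nu}\right)\right],
\end{equation}
where $\nu_{0}$ is the cut on the energy transfer, $\mathcal{A}$ is an energy-independent constant, and the notation is schematic; the $\mathcal{O}(\nu_{0}/E_\nu)$ corrections are the model-dependent piece referred to above.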
\subsection{Coherent Neutrino-Nucleus Scattering}
The interactions $\nu_\ell + A \rightarrow \ell^- + \pi^+ + A$ and
$\overline{\nu}_\ell + A \rightarrow \ell^+ + \pi^- + A$
occur with very low three-momentum transfer to the target nucleus ($A$). As such, the interactions proceed coherently with the entire nucleus and do not suffer from nuclear effects (though background channels certainly do). These coherent interactions are most useful as a constraint on the $\anumu/\numu$ flux ratio. Identifying them with high efficiency and purity requires a detector with excellent momentum and angular resolution.
\subsection{Beam \nue Content}
\label{ssec:beam-nue}
Electron neutrinos in a wide-band beam come from two primary sources: kaon decays and muon decays. These ``beam'' \nue are an irreducible background in $\numu \to \nue$ oscillation searches. As such, the \dword{lbnf} beam was optimized to make the \nue flux as small as possible while maximizing the \numu flux. In the energy range relevant for oscillations (\SIrange{0.5}{4.0}{GeV}), the predicted $\nue/\numu$ ratio varies between 0.5\% and 1.2\% as a function of energy. The beam \nue flux in the same energy range is strongly correlated with the \numu flux due to the decay chain $\pi^+\to\mu^+\numu$ followed by $\mu^+ \to \anumu{} e^+ \nue $ (and likewise for \anue). As a result, the \dword{lbnf} beam simulation predicts that the uncertainty on the $\nue/\numu$ ratio varies from \SIrange{2.0}{4.5}{\%}. At the \dword{fd}, in a 3.5 year run, the statistical uncertainty on the beam \nue component is expected to be 7\% for the $\nu$ mode beam and 10\% for the $\bar{\nu}$ mode beam. The systematic uncertainty on the beam \nue flux is therefore subdominant, but not negligible.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Movable components of the ND and the DUNE-PRISM program}
\label{sec:appx-nd:nd-movable}
\subsection{Introduction to DUNE-PRISM}
One of the primary challenges for \dword{dune} will be controlling systematic uncertainties from the modeling of neutrino-argon interactions. The relationship between the observable final state particles from a neutrino interaction and the incident neutrino energy is currently not understood with sufficient precision to achieve \dword{dune} physics goals. This is due in part to mismodeling of the outgoing particle composition and kinematics, to missing energy from undetected particles such as neutrons and low-energy charged pions, and to misidentified particles. The latter effects tend to cause a ``feed-down" in reconstructed neutrino energy relative to the true energy. Since neutrino energy spectra at the \dword{fd} and \dword{nd} have substantially different features due to the presence of oscillations at the \dword{fd}, these mismodeling and neutrino energy feed-down effects do not cancel in a far-to-near ratio as a function of neutrino energy, and lead to biases in the measured oscillation parameters.
Understanding \dword{nd} constraints on neutrino-nucleus interaction uncertainties is challenging, since no complete model of neutrino-argon interactions is available. If it were possible to construct a model that was known to be correct, even with a large number of unknown parameters, then the task of a \dword{nd} would be much simpler: to build a detector that can constrain the unknown parameters of the model. However, in the absence of such a model, this procedure will be subject to unknown biases due to the interaction model itself, which are difficult to quantify or constrain.
The \dword{duneprism} \dword{nd} program consists of a mobile \dword{nd}
%(\dword{arcube} and \dword{mpd}, in particular)
that can perform measurements over a range of angles off-axis from the neutrino beam direction in order to sample many different neutrino energy distributions, as shown in Figure~\ref{fig:offaxisfluxes}. By measuring the neutrino-interaction final state observables over these continuously varying incident neutrino energy spectra, it is possible to experimentally determine the relationship between neutrino energy and what is measured in the detector (i.e., some observable such as reconstructed energy).
\begin{dunefigure}[Variation of neutrino energy spectrum as a function of off-axis angle]{fig:offaxisfluxes}
{The variation in the neutrino energy spectrum is shown as a function of detector off-axis position, assuming the nominal \dword{nd} location \SI{574}{m} downstream from the production target.}
\includegraphics[width=0.8\textwidth]{offaxisfluxes.pdf}
\end{dunefigure}
In the DUNE \dword{nd}, the movable components of the detector that are used in the \dword{duneprism} program are \dword{arcube} and the \dword{mpd}. These components of the \dword{nd} will take data both on the beam axis and off-axis. In the following sections, \dword{arcube} and the \dword{mpd} are described in some detail, followed by a fuller description of the \dword{duneprism} program.
\subsection{LArTPC Component in the DUNE ND: ArgonCube}
\label{sec:appx-nd:lartpc}
%\section{Introduction} \label{sec:Intro}
As the \dword{dune} \dwords{fd} have \dword{lar} targets, there needs to be a major \dword{lar} component in the \dword{dune} \dword{nd} complex in order to reduce cross section and detector systematic uncertainties for oscillation analyses~\cite{Acciarri:2016crz, Acciarri:2015uup}. However, the intense neutrino flux and high event rate at the \dword{nd} makes traditional, monolithic, projective wire readout \dwords{tpc} unsuitable. This has motivated a program of R\&D into a new \dword{lartpc} design, suitable for such a high-rate environment, known as \dword{arcube}~\cite{argoncube_loi}. \dword{arcube} utilizes detector modularization to improve drift field stability, reducing \dword{hv} and the \dword{lar} purity requirements; pixelized charge readout~\cite{Asaadi:2018oxk, larpix}, which provides unambiguous \threed imaging of particle interactions, drastically simplifying the reconstruction; and new dielectric light detection techniques with \dword{arclt}~\cite{Auger:2017flc}, which can be placed inside the \dword{fc} to increase light yield, and improve the localization of light signals. Additionally, \dword{arcube} uses a resistive field shell, instead of traditional field shaping rings, to maximize the active volume, and to minimize the power release in the event of a breakdown~\cite{bib:docdb10419}.
The \dword{arcube} R\&D program, based on small-scale component prototypes, has been very successful to date; it is summarized in Refs.~\cite{ Ereditato:2013xaa, Zeller:2013sva, art_cold_ero, Asaadi:2018oxk, Cavanna:2014iqa, larpix, bib:docdb10419, Auger:2017flc}.
With the various technological developments demonstrated with small-scale \dwords{tpc}, the next step in the \dword{arcube} program is to demonstrate the scalability of the pixelized charge readout and light detection systems, and to show that information from separate modules can be combined to produce high-quality event reconstruction for particle interactions. To that end, a mid-scale (\SI[product-units=repeat]{1.4x1.4x1.2}{\metre}) modular \dword{tpc}, dubbed the \dword{arcube} 2$\times$2 demonstrator, with four independent \dword{lartpc} modules arranged in a 2$\times$2 grid has been designed, and is currently under construction.
After a period of testing at the University of Bern, the \dword{arcube} 2$\times$2 demonstrator will be placed in the \dword{minos} \dword{nd} hall at \dword{fnal} where it will form the core of a prototype \dword{dune} \dword{nd}, \dword{pdnd}~\cite{bib:docdb12571}. As part of \dword{protodune} \dword{nd}, the \dword{arcube} concept can be studied and operated in an intense, few-GeV neutrino beam. This program aims to demonstrate stable operation and the ability to handle backgrounds, relate energy associated with a single event across \dword{arcube} modules, and connect tracks to detector elements outside of \dword{arcube}. The \dword{arcube} 2$\times$2 demonstrator is described below in some detail since the \dword{dune} \dword{nd} modules are anticipated to be very similar.
\subsubsection{ArgonCube in ProtoDUNE-ND}
\label{sec:appx-nd:2x2-design}
The \dword{arcube} concept is a detector made of self-contained \dword{tpc} modules sharing a common cryostat. Each module is a rectangular box with a square footprint and a height optimized to meet the physics goals and/or sensitivity constraints. The \dword{arcube} 2$\times$2 demonstrator will be housed within an existing \lntwo-cooled and vacuum-insulated cryostat,
which is $\sim$\SI{2.2}{\metre} in diameter and $\sim$\SI{2.8}{\metre} deep, for a total volume of $\sim$\SI{10.6}{\metre\cubed}. The size of the cryostat sets the dimensions of the modules for the demonstrator. The square base of each module will be \SI{0.67 x 0.67}{\metre}, and the height will be \SI{1.81}{\metre}. This makes the modules comparable in size to, but slightly smaller than, the proposed \dword{arcube} \dword{dune} \dword{nd} modules, which will have a base of \SI{1 x 1}{\metre} and a \SI{3.5}{\metre} height.
\begin{dunefigure}[\dshort{arcube} 2$\times$2 demonstrator] % module]
{fig:2x2_extraction}
{Illustration of the \dword{arcube} 2$\times$2 demonstrator module. The four modules are visible, with one of them partly extracted, on the right. This figure has been reproduced from Ref.~\cite{argoncube_loi}.}
\includegraphics[width=\textwidth]{graphics/BathAndModule.jpeg}
\end{dunefigure}
Individual modules can be extracted or reinserted into a common \dword{lar} bath as needed, as is illustrated in Figure~\ref{fig:2x2_extraction}. This feature will be demonstrated during a commissioning run at the University of Bern, but is not intended to be part of the detector engineering studies in the \dword{minos}-\dword{nd} hall. The pressure inside the modules is kept close to the bath pressure, putting almost no hydrostatic force on the module walls. This allows the walls to be thin, minimizing the quantity of inactive material in the walls. The purity of the \dword{lar} is maintained within the modules, independent of the bath, as will be described below. The argon surrounding the modules need not meet as stringent purity requirements as the argon inside. Under normal operating conditions, all modules are inserted with clearance distances of only \SI{1.5}{\milli\metre} between modules. Cooling power to the bath is supplied by liquid nitrogen circulated through lines on the outer surface of the inner cryostat vessel.
\begin{dunefigure}[Cutaway drawing of an ArgonCube 2$\times$2 demonstrator module]{fig:ac_module}
{Cutaway drawing of a \SI{0.67 x 0.67 x 1.81}{\metre} \dword{arcube} module for the 2$\times$2 demonstrator module. For illustrative purposes the drawing shows traditional field-shaping rings instead of a resistive field shell. The G10 walls will completely seal the module, isolating it from the neighboring modules and the outer \dword{lar} bath. %It is also worth noting that
The 2$\times$2 modules will not have individual pumps and filters.}
\includegraphics[width=0.8\textwidth]{graphics/Normal-Module-4K_labelled.jpeg}
\end{dunefigure}
A cutaway drawing of an individual 2$\times$2 module is shown in Figure~\ref{fig:ac_module}. The side walls of each module are made from \SI{1}{\centi\metre} G10 sheets, to which the resistive field shell is laminated. The G10 radiation length ($X_{\mathrm{0}} = \SI{19.4}{\centi\metre}$) and hadronic interaction length ($\lambda_{\mathrm{int}} = \SI{53.1}{\centi\metre}$)~\cite{Tanabashi:2018oca} %Yao:2006px}
are both comparable to those of \dword{lar} (\SI{14.0}{\centi\metre} and \SI{83.7}{\centi\metre}, respectively).
G10 provides a strong dielectric, capable of \SI{200}{\kilo\volt\per\centi\metre} when \SI{1}{\centi\metre} thick~\cite{G10Breakdown}. This dielectric shielding eliminates the need for a clearance volume between the \dwords{tpc} and the cryostat, while also shielding the \dword{tpc} from field breakdowns in a neighboring module.
Each module is split into two \dwords{tpc} by a central cathode made of an additional resistive layer on a G10 substrate. The segmented drift length does not require a high cathode voltage, and minimizes stored energy. For the 2$\times$2 module footprint of \SI{0.67 x 0.67}{\metre}, and an \efield of \SI{1}{\kilo\volt\per\centi\metre}, a cathode potential of only \SI{33}{\kilo\volt} is required. Operating a \dword{lartpc} at this voltage is feasible without a prohibitive loss of active volume~\cite{Zeller:2013sva}. The high field is helpful for reducing drift time and the potential for pileup, minimizing the slow component of the scintillation light, reducing space charge effects, and providing robustness against loss of \dword{lar} purity.
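Explicitly, each \dword{tpc} drift length is half the \SI{67}{\centi\metre} footprint, so the required bias is
\begin{equation}
V_{\mathrm{cathode}} = E_{\mathrm{drift}}\, d_{\mathrm{drift}} \approx \SI{1}{\kilo\volt\per\centi\metre} \times \SI{33.5}{\centi\metre} \approx \SI{33}{\kilo\volt}.
\end{equation}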
The detector is oriented such that the cathodes are parallel to the beam. This minimizes the load on the readout electronics by spreading the event over more channels and reducing the required digitization rate for hit channels. In turn, this reduces the heat load generated at the charge readout and prevents localized boiling.
During filling and emptying of the cryostat, the argon flow is controlled by hydrostatic check valves located at the lower flange of the module, which require a minimum differential pressure of \SI{15}{\milli\bar} to open. The purity inside each module is maintained by means of continuous \dword{lar} recirculation through oxygen traps. Dirty argon is extracted from the base of the module and pushed through oxygen traps outside the cryostat; clean argon then re-enters the module above the active volume. For optimal heat transport, the argon flow is directed along the cold electronics. To prevent dirty argon from the bath entering the modules, their interiors are held at a slight over-pressure. For the 2$\times$2, the dirty argon from all four modules is extracted by a single pump at the base of the cryostat with a four-to-one line; after being filtered and cooled, the clean argon is pumped back into the modules via a one-to-four line.
A more extensive version of the same scheme is envisaged for the \dword{dune} \dword{nd}.
\dword{arcube} offers true \threed tracking information using the \dword{larpix} cryogenic \dword{asic}~\cite{Dwyer:2018phu} pixelated charge readout. \dword{larpix} \dwords{asic} amplify and digitize the charge collected at single pixels in the cold to mitigate the need for analogue signal multiplexing, and thus produce unambiguous \threed information. Sixty-four pixels can be connected to a single \dword{larpix} \dword{asic}. The baseline design for the 2$\times$2 is a \SI{4}{\milli\metre} pixel pitch, corresponding to 62.5k pixels m$^{-2}$. Pixelated anode planes are located on the two module walls parallel to the cathode; each plane is \SI[product-units=repeat]{1.28x0.64}{\metre}. The total area across all four modules is \SI{6.6}{\metre\squared}, which corresponds to 410k pixels. The readout electronics utilize two \dword{fpga} boards per module, connected to a single Ethernet switch. It should be noted that the pixel pitch may be reduced as prototypes develop, but this can be accommodated in the readout design.
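As a consistency check, the pixel count follows directly from the pitch and the instrumented area:
\begin{equation}
\left(\frac{\SI{1}{\metre}}{\SI{4}{\milli\metre}}\right)^{2} = \num{62500}~\si{\per\metre\squared},
\qquad
8 \times \bigl(\SI{1.28}{\metre} \times \SI{0.64}{\metre}\bigr) \approx \SI{6.6}{\metre\squared},
\qquad
\num{62500} \times 6.6 \approx \num{4.1e5}\ \mathrm{pixels},
\end{equation}
where the factor of eight counts the two anode planes in each of the four modules.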
\begin{figure}[!ht]
\centering
\subfloat[\dword{arclt} paddle] {\includegraphics[width=0.454\textwidth]{graphics/arclight.jpg}}
\subfloat[\dword{arclt} mounted on a pixel readout PCB] {\includegraphics[width=0.51\textwidth]{graphics/Pixlar.jpeg}}
\caption[A prototype ArgonCube light readout paddle and a mounted ArCLight paddle]{(a) A prototype \dword{arcube} light readout paddle. The paddle is \SI{50}{cm} long and \SI{10}{cm} wide, with four \dwords{sipm} coupled to one end. Reproduced from Ref.~\cite{argoncube_loi}. (b) \dword{arclt} paddle mounted on the PixLAr pixelated charge readout plane, as used in test beam studies at \dword{fnal}.}
\label{fig:arclight}
\end{figure}
The charge readout window (drift time) of \SI{137}{\micro\second} is long compared to the \SI{10}{\micro\second}~\cite{Adamson:2015dkw} beam spill length in the \dword{numi} and \dword{lbnf} beams.
For a \SI{1}{MW} beam intensity, the expected rate of neutrino interactions at the \dword{dune} \dword{nd} is roughly 0.5 per spill per \dword{arcube} module.
With \dword{larpix}, reconstruction issues are greatly simplified compared to a projective readout \dword{tpc}.
Tracks and connected energy deposits will frequently overlap in any \twod projection, but can be resolved easily with the full \threed readout.
However, disconnected energy deposits, such as those from photon conversions or neutron interactions in the detector, cannot be associated easily to a specific neutrino interaction.
This problem can be solved by incorporating fast timing information from the prompt scintillation light emitted in \dword{lar}.
The module's opaque cathode and walls contain scintillation light within each \dword{tpc} (half module), improving the detection efficiency of the prompt component of the scintillation light.
Furthermore, attenuation due to Rayleigh scattering, characterized by an attenuation length of \SI{0.66}{\metre} in \dword{lar}~\cite{Grace:2015yta}, is mitigated by the maximum photon propagation length of \SI{0.3}{\metre}.
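As a rough estimate, and treating Rayleigh scattering as a simple exponential attenuation of the direct (unscattered) light, the fraction of prompt photons that reach the light readout unscattered from the farthest point in a \dword{tpc} is approximately
\begin{equation}
e^{-L/\lambda_{\mathrm{R}}} = e^{-0.3/0.66} \approx 0.6,
\end{equation}
compared to roughly 0.2 for a \SI{1}{\metre} path; scattered photons are in any case retained within the optically isolated \dword{tpc}.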
It is desirable to have a large area \dword{pds} to maximize the utility of scintillation light signals in the detector.
To minimize any dead material within the active volume, it is also desirable that the light detection be as compact as possible.
The solution pursued for the \dword{arcube} effort is \dword{arclt}~\cite{Auger:2017flc}, which is a very compact dielectric light trap that allows for light collection from a large area, inside high \efield{}s.
An example \dword{arclt} sheet is shown in Figure~\ref{fig:arclight}. These sheets are mounted on the walls of the module, inside the field shell, aligned with the drift direction, between the anode and the cathode.
The additional \SI{5}{\milli\metre} deep dead volume is similar to the one caused by the charge readout in the perpendicular direction.
\subsubsection{Dimensions of the ArgonCube Component of the DUNE ND}\label{sec:appx-nd:had_containment}
Since it is unrealistic to build a \SI{25}{\metre} long \dword{lartpc} in order to contain a \SI{5}{\giga\electronvolt} muon, the \dword{lartpc} dimensions have instead been optimized for hadronic shower containment~\cite{lartpcSizeChris}, relying on a downstream spectrometer to analyze crossing muons.
The requirement is that hadronic showers be contained with reasonable efficiency across a wide range of kinematics, with no region of phase space having zero acceptance.
The specific metric used is that \textgreater95\% of hadronic energy has to be contained for interactions in the \dword{fv}, excluding neutrons and their descendants.
To assess the efficiency, detector volumes of varying sizes were simulated in a neutrino beam.
This provides a good measure of the efficiency of a given volume to contain different events, but it is not necessarily a good quantity to assess the required detector size.
Many events are not contained because of their specific location and/or orientation.
Cross section coverage remedies this deficiency by looking at the actual extent of the event, instead of its containment, at a random position inside a realistic detector volume.
However, events extending through the full detector will very likely never be contained in a real detector due to the low probability of such an event happening in exactly the right location (e.g., at the upstream edge of the detector).
Therefore, the maximum event size needs to be smaller than the full detector size.
For the \dword{nd} simulation this buffer was chosen to be \SI{0.5}{\metre} in all directions.
In this way, this measure of cross section coverage allows us to look for phase-space regions which are inaccessible to particular detector volume configurations.
To find the optimal detector size in each dimension, two are held constant at their nominal values, while the third dimension is varied and the cross section coverage is plotted as a function of neutrino energy.
This is shown for the dimension along the beam direction in Figure~\ref{fig:dune-nd_lartpc-size}, which indicates that
\SI{4.5}{\metre} would be sufficient; to avoid model dependencies, \SI{5}{\metre} has been selected.
Increasing the length beyond \SI{5}{\metre} does little to improve cross section coverage, but reducing it to \SI{4}{\metre} begins to limit coverage at higher energies.
Note that 1 minus the cross section coverage gives the fraction of events that cannot be well reconstructed no matter where their vertex is, or how they are rotated within the \dword{fv}. The optimized dimensions found using this technique were \SI{3}{\metre} tall, \SI{4}{\metre} wide, and \SI{5}{\metre} along the beam direction. There is also a need to measure large angle muons that do not go into the \dword{hpgtpc}. Widening the detector to \SI{7}{\metre} accomplishes that goal without the added complication of a side muon detector.
\begin{dunefigure}[Influence of the \dshort{lartpc} size on hadron containment]{fig:dune-nd_lartpc-size}
{Influence of the \dword{lartpc} size on hadron containment, expressed in terms of cross section coverage as a function of neutrino energy.
Two dimensions are held constant at their nominal values, while the third is varied, in this case the height is held at \SI{2.5}{\metre} and the width at \SI{4}{\metre}.
The optimal length is found to be \SI{5}{\metre}.
See text for explanation of cross section coverage~\cite{lartpcSizeChris}.}
\includegraphics[width=0.5\textwidth]{graphics/length.png}
\end{dunefigure}
%\subsection{Module Dimensions}
\subsubsection{ArgonCube Module Dimensions}
The \dword{dune} \dword{nd} \dword{arcube} module dimensions are set to maintain a high drift field, \SI{1}{\kilo\volt\per\centi\metre}, with minimal bias voltage, and to allow for the detection of prompt scintillation light while mitigating the effects of diffusion on drifting electrons.
The prompt scintillation light, $\tau<$\SI{6.2}{\nano\second}~\cite{Heindl:2015yaa}, can be efficiently measured with a dielectric light readout with $\mathcal{O}\left(1\right)\,\mathrm{ns}$ timing resolution, such as \dword{arclt}~\cite{Auger:2017flc}.
To reduce attenuation and smearing due to Rayleigh scattering, the optical path must be kept below the \SI{0.66}{\metre}~\cite{Grace:2015yta} scattering length. Additionally, the slow scintillation component can be further suppressed by operating at higher \efield{}s~\cite{PhysRevB.20.3486}, effectively reducing the ionization density~\cite{PhysRevB.27.5279} required to produce excited states.
A module with a \SI{1x1}{\metre} footprint split into two \dwords{tpc} with drift lengths of \SI{50}{\centi\metre} requires only a \SI{50}{\kilo\volt} bias.
With \dword{arclt} mounted on either side of the \SI{1}{\metre} wide \dword{tpc}, the maximal optical path is only \SI{50}{\centi\metre}.
For a nonzero drift field, diffusion needs to be split into longitudinal and transverse components. Ref.~\cite{gushchin} reports a transverse diffusion coefficient of \SI{13}{\centi\metre\squared\per\second} at \SI{1}{\kilo\volt\per\centi\metre}.
This results~\cite{Chepel:2012sj} in a transverse spread of \SI{0.8}{\milli\metre} for the drift time of \SI{250}{\micro\second}, well below the proposed pixel pitch of \SI{3}{\milli\metre}.
The longitudinal component is smaller than the transverse one~\cite{Chepel:2012sj}, and is therefore negligible.
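For reference, the quoted transverse spread follows from the single-coordinate diffusion relation
\begin{equation}
\sigma_{T} \simeq \sqrt{2\,D_{T}\,t_{\mathrm{drift}}} = \sqrt{2 \times \SI{13}{\centi\metre\squared\per\second} \times \SI{250}{\micro\second}} \approx \SI{0.8}{\milli\metre},
\end{equation}
where $t_{\mathrm{drift}}$ is the maximum drift time for the \SI{50}{\centi\metre} drift length.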
%\subsection{Detector Dimensions}
\subsubsection{ND Dimensions}
\label{sec:appx-nd:det_dimensions}
%Given that the space transverse to the beam is not a constrained commodity in the \dword{nd} hall,
Though the acceptance study discussed in Section~\ref{sec:appx-nd:had_containment} indicated that a width of \SI{4}{\metre} is sufficient to contain the hadronic component of most events of interest, the width has been increased to
\SI{7}{\metre} in order to mitigate the need for a side-going muon spectrometer.
Figure~\ref{fig:actual-size} shows the overall dimensions of the planned \dword{arcube} deployment in the \dword{dune} \dword{nd}.
With an active volume of \SI{1x1x3}{\metre} per module, the full \dword{arcube} detector corresponds to seven modules transverse to the beam direction, and five modules along it.
It should be noted that the cryostat design is currently based on \dword{protodune}~\cite{Abi:2017aow}, and will be optimized for the \dword{nd} pending a full engineering study.
\begin{dunefigure}[The current \dshort{arcube} dimensions for the \dshort{dune} \dshort{nd}]{fig:actual-size}
{The current \dword{arcube} dimensions for the \dword{dune} \dword{nd}. The cryostat is based on \dword{protodune}~\cite{Abi:2017aow}, and has yet to be optimized for the \dword{dune} \dword{nd}.}
\includegraphics[width=0.7\textwidth]{graphics/actual-size.png}
\end{dunefigure}
\subsubsubsection{Statistics in Fiducial Volume}\label{sec:appx-nd::rates}
Figure~\ref{fig:all_ey} shows the 37 million total \dword{cc} $\numu$ neutrino events expected per year within a \SI{25}{\tonne} \dword{fv} in \dword{fhc} mode at \SI{1.07}{\mega\watt} (on-axis). Figure~\ref{fig:hadContNorm_ey} shows only the event rate for events where the visible hadronic system is fully contained, for the same \dword{fv} and beam configuration. Note that for the visible hadronic system to be considered contained, all energy not associated with the outgoing lepton or with outgoing neutrons is required to be contained.
For hadronic containment, there is a \SI{30}{\centi\metre} veto region upstream and on all sides of the active volume, and \SI{50}{\centi\metre} veto region downstream. The \dword{fv} is then defined as \SI{50}{\centi\metre} from all edges, with \SI{150}{\centi\metre} downstream. Within the \SI{25}{\tonne} \dword{fv} in \dword{fhc} mode at \SI{1.07}{\mega\watt} the number of fully reconstructed (contained or matched muon, discussed below, plus contained hadrons) \dword{cc} $\numu$ events per year is 14 million.
\begin{dunefigure}[All neutrino events in the nominal \SI{25}{\tonne} \dshort{arcube} fiducial volume]{fig:all_ey}
{All neutrino events in the nominal \SI{25}{\tonne} \dword{fv}, in \dword{fhc} at \SI{1.07}{\mega\watt}, per year; rates are per bin. The elasticity is the fraction of the original neutrino energy carried by the outgoing lepton.}
\includegraphics[width=0.6\textwidth]{graphics/all_ey.png}
\end{dunefigure}
\begin{dunefigure}[Events where the visible hadronic system is contained in \dshort{arcube}
fiducial volume]{fig:hadContNorm_ey}
{Events where the visible hadronic system is contained within the nominal \SI{25}{\tonne} \dword{fv}, in \dword{fhc} at \SI{1.07}{\mega\watt}, per year; rates are per bin. The elasticity is the fraction of the original neutrino energy that is carried by the outgoing lepton.}
\includegraphics[width=0.6\textwidth]{graphics/hadContNorm_ey.png}
\end{dunefigure}
\subsubsubsection{Muon Acceptance}\label{sec:appx-nd:muacc}
Muons are considered useful for physics if they stop in the active region of \dword{arcube} or if they leave the \dword{lar} detector and are reconstructed in a magnetic spectrometer downstream. Under the assumption that the downstream magnetic spectrometer is the multipurpose detector described in Section~\ref{sec:appx-nd:mpd}, Figure~\ref{fig:muonacc} shows the muon acceptance as a function of true neutrino energy (on the left) and muon energy (on the right). The acceptance dip at \SI{1}{GeV} in muon energy is from muons that exit \dword{arcube} and are not reconstructed in the \dword{mpd} downstream. This dip can be reduced by minimizing the passive material between the liquid argon and high-pressure gaseous argon detectors.
\dword{icarus} and \dword{microboone} have used multiple Coulomb scattering to determine muon momentum \cite{Abratenko:2017nki}.
This technique may prove to be useful for muons in \dword{arcube} and could mitigate somewhat the size of the dip in Figure~\ref{fig:muonacc}.
\begin{dunefigure}[Muon acceptance as a function of true neutrino energy and true muon energy]{fig:muonacc}
{Muon acceptance shown as a function of true neutrino energy (left) and true muon energy (right). The acceptance for muons that stop in \dword{arcube} is shown in red and that for muons reconstructed in the downstream magnetic spectrometer is shown in blue.}
\includegraphics[width=0.45\textwidth]{muReco_Ev.png}
\includegraphics[width=0.45\textwidth]{muReco_Emu.png}
\end{dunefigure}
\subsubsection{Acceptance vs.\ energy and momentum transfer}
The acceptance of \dword{arcube} with the \dword{mpd} acting as a downstream spectrometer can be studied in a more nuanced way by looking at it as a function of the energy $q_0$ and three-momentum $q_3$ transferred to the target nucleus. The energy transfer is simply $q_0=E_\nu - E_\mu$. The three-momentum transfer is related to the four-momentum transfer $Q$ and $q_0$ by $q_3 = \sqrt{Q^2 + q_0^2}$. These variables have long been used to study nuclear structure in electron scattering experiments.
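For \dword{cc} events, and neglecting the neutrino mass, these variables follow directly from the neutrino energy and the outgoing muon kinematics:
\begin{equation}
q_{0} = E_\nu - E_\mu, \qquad Q^{2} = 2E_\nu\bigl(E_\mu - |\vec{p}_\mu|\cos\theta_\mu\bigr) - m_\mu^{2}, \qquad q_{3} = \sqrt{Q^{2} + q_{0}^{2}},
\end{equation}
where $\theta_\mu$ is the muon angle with respect to the incoming neutrino direction.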
\begin{dunefigure}[Neutrino acceptance as a function of energy and momentum transfer]{fig:q0q3acc}
{Neutrino acceptance shown as a function of energy transfer and momentum transfer ($q_0$ and $q_3$) to the target nucleus. The units for $q_0$ and $q_3$ are GeV and GeV/c, respectively. The figures show the event rate (left) and the acceptance (right) for reconstructing the muon and containing the hadronic system. The top row was made for neutrinos with true neutrino energy between \num{1.0} and \SI{2.0}{GeV} and the bottom was made for neutrinos between \num{4.0} and \SI{5.0}{GeV}.}
\includegraphics[width=0.45\textwidth]{graphics/rate_q0q3_Ev_1000_2000.png}
\includegraphics[width=0.45\textwidth]{graphics/eff_q0q3_Ev_1000_2000.png}
\includegraphics[width=0.45\textwidth]{graphics/rate_q0q3_Ev_4000_5000.png}
\includegraphics[width=0.45\textwidth]{graphics/eff_q0q3_Ev_4000_5000.png}
\end{dunefigure}
Figure~\ref{fig:q0q3acc} shows the event rate (left figures) and acceptance (right figures) in bins of $(q_3,q_0)$. The rows correspond to two neutrino energy bins. The top row is for $E_\nu$ between \num{1.0} and \SI{2.0}{GeV}, which covers the first oscillation maximum. The bottom row is for $E_\nu$ between \num{4.0} and \SI{5.0}{GeV}. The rate histograms have ``islands'' corresponding to hadronic systems with fixed invariant mass. The islands are smeared by Fermi motion and by the resonance decay width. The lower island in $(q_3,q_0)$ corresponds to the quasi-elastic peak while the upper corresponds to the $\Delta$ resonance. One should note that the axes in the lower row cover a larger range of kinematic space than those in the upper row.
The acceptance is generally very good in the kinematic region where the vast majority of the events occur, but it is nowhere perfect. This is not necessarily a problem, because the loss is chiefly geometrical. Losses typically occur in events with a vertex near one boundary of the detector, where the muon or hadronic system exits through that boundary. However, for each lost event there is generally a set of symmetric events that are accepted, because the final state is rotated by some angle about the neutrino beam axis ($\phi$ symmetry) or the vertex lies closer to the center of the fiducial volume ($x$,$y$ symmetry).
Regions where the acceptance is zero are problematic because they will introduce model dependence into the prediction of the rate at the far detector (which has a nearly $4\pi$ acceptance). Acceptances of even a few \% in some kinematic regions are not necessarily a problem as long as the event rate is large enough to accumulate a statistically significant number of events. %MK: how do we attach a number to this?
There is a potential danger if the acceptance varies quickly as a function of the kinematic variables because a small mismodeling of the detector boundaries or neutrino cross-sections could translate into a large mismodeling in the number of accepted events.
The size of the accepted event set decreases as a function of both $q_0$ and $q_3$ (and therefore $E_\nu$) due to more energetic hadronic systems and larger angle muons. This can clearly be seen in the transition from the colored region to the black region in the $\num{4.0} < E_\nu < \SI{5.0}{GeV}$ acceptance histogram shown in the lower right-hand corner of Figure~\ref{fig:q0q3acc}. The transition is smooth and gradual. % MK: how to make quantitative?
The acceptance for $\num{1.0} < E_\nu < \SI{2.0}{GeV}$ (shown in the upper right-hand corner of Figure~\ref{fig:q0q3acc}) is larger than 10\% except in a small region at high $q_0$ and $q_3$. Events in that region have a low-energy muon and are misidentified as neutral-current according to the simple event selection applied in the study. The fraction of events in that region is quite small, as can be seen in the upper left-hand plot of Figure~\ref{fig:q0q3acc}.
\begin{dunefigure}[Neutrino acceptance in the $(q_3,q_0)$ plane as a function of neutrino energy]{fig:q0q3acc_vs_enu}
{This figure summarizes the neutrino acceptance in the $(q_3,q_0)$ plane, as shown in Figure~\ref{fig:q0q3acc}, for all bins of neutrino energy (plotted in GeV). Here the quantity on the vertical axis is the fraction of events that come from bins in $(q_3,q_0)$ with an acceptance greater than $A_{cc}$. As an example we consider the \num{4.0}-\SI{5.0}{GeV} neutrino energy bin. The $A_{cc}>0.03$ curve in that neutrino energy bin indicates that 96\% of events come from $(q_3,q_0)$ bins that have an acceptance greater than 3\%. }
\includegraphics[width=0.6\textwidth]{graphics/frac_with_acc.png}
\end{dunefigure}
Figure~\ref{fig:q0q3acc_vs_enu} summarizes the neutrino acceptance in the $(q_3,q_0)$ plane as a function of neutrino energy. The $y$ axis shows the fraction of events coming from $(q_3,q_0)$ bins with an acceptance greater than $A_{cc}$. The $A_{cc}>0.00$ curve shows the fraction of events for which there is nonzero acceptance. For $E_\nu < \SI{5.0}{GeV}$ (the oscillation region) that fraction is greater than 99\%, so there are no significant acceptance holes. In the same energy region, more than 96\% of events come from regions where the acceptance is greater than 3\%. % MK: Is this good enough? How do we know?
\subsubsection{Muon and Electron Momentum Resolution and Scale Error}
%This has not yet been investigated fully
For muons stopping in the \dword{lar} and for those with momentum measured in the downstream tracker (\dword{mpd}), the energy scale uncertainty from \dword{arcube} is driven by the material model of the \dword{lar} and passive materials. This is expected to be known to \textless 1\%. Note that the $B$ field in the \dword{mpd} is expected to be known to about 0.5\% from simulation and from field maps made with Hall and NMR probes.
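To set the scale of this contribution, note that the transverse momentum inferred from track curvature in a magnetic field is, to a good approximation,
\begin{equation}
p_{T}\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\, R\,[\mathrm{m}],
\end{equation}
where $R$ is the radius of curvature; the quoted $\sim$0.5\% knowledge of $B$ therefore maps directly onto a $\sim$0.5\% contribution to the momentum scale for spectrometer-matched tracks.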
For electrons, the energy will be measured calorimetrically, rather than by range. The \dword{mip} energy scale (charge per \si{MeV}) will be set by rock muons. The scaling to the denser deposits from electromagnetic showers can give rise to uncertainties (e.g., the recombination could differ). Such uncertainties can be reduced by taking data with \dword{arcube} modules in a test beam. Beyond this, a useful calibration sample
of electrons up to \SI{50}{MeV} comes from Michel electrons from stopping rock muons. The $\pi^0$ invariant mass peak is another good standard candle.
\subsubsection{Tagging Fast Neutrons}
Studies have shown that contained prompt scintillation light provides an important handle for neutron tagging, allowing for the association of detached energy deposits to the correct neutrino interaction using timing information. Such neutron tagging is important for minimizing the uncertainty on neutrino energy reconstruction, both for neutrons generated at a neutrino vertex and for hadronic showers that fluctuate to neutrons.
Figure~\ref{fig:NDSpill} shows a simulated beam spill in the \SI[product-units=repeat]{5x4x3}{\metre} \dword{lar} component of the \dword{dune} \dword{nd}\footnote{Note that this study was performed before the detector width was increased to \SI{7}{m}, as described in Section~\ref{sec:appx-nd:det_dimensions}.}.
It highlights the problem of associating fast-neutron induced energy deposits to a neutrino vertex using only collected charge.
\begin{dunefigure}[A beam spill in the \dshort{lar} component of the \dshort{dune} ND]{fig:NDSpill}
{A beam spill in the \dword{lar} component of the \dword{dune} \dword{nd}.
The detector volume is \SI[product-units=repeat]{5x4x3}{\metre}.
Fast-neutron-induced recoil proton tracks with energy above a threshold of $\sim$\SI{10}{\mega\electronvolt} are shown in white.
The black tracks are all other energy deposits sufficient to cause charge collected at the pixel planes.}
\includegraphics[width=.7\textwidth]{graphics/NeutronNDSpill.png}
\end{dunefigure}
By containing scintillation light, prompt light signals can be used to associate fast-neutron induced deposits back to a neutrino vertex anywhere within the detector.
Figure~\ref{fig:Timing} shows the temporal distribution of neutrino vertices within a representative, randomly selected, beam spill.
The mean separation of neutrino vertices is \SI{279}{\nano\second}, with all fast-neutron induced energy deposits occurring $<$\SI{10}{\nano\second} after each neutrino interaction.
\begin{dunefigure}[Temporal distribution of $\nu$ vertices within a beam spill in the ND LAr component] % of DUNE ND.]
{fig:Timing}
{The temporal distribution of neutrino vertices (red lines) within a beam spill in the \dword{lar} component of \dword{dune} \dword{nd}.
The mean separation of neutrino vertices is \SI{279}{\nano\second}. The filled bins show the number of hits due to recoiling protons, crosses indicate a hit due to a recoiling $^{2}$H, $^3$H, $^2$He or $^3$He nucleus.
All fast-neutron induced energy deposits occur $<$\SI{10}{\nano\second} after each neutrino interaction.}
\includegraphics[width=0.7\textwidth]{graphics/recoil_proton_edep_vs_vtx_time_a.png}
\end{dunefigure}
\subsubsection{Neutrino-Electron Elastic Scattering}
\label{sec:appx-nd:lartpc-nu-electron-scatt}
Neutrino scattering on atomic shell electrons, $\nu_{l}(\overline{\nu}_{l}) + e^{-} \rightarrow \nu_{l}(\overline{\nu}_{l}) + e^{-}$,
is a purely electroweak process with a known cross section as a function of neutrino energy, $E_{\nu}$, in which all neutrino flavors participate, albeit with different cross sections. This process is not affected by nuclear interactions and has a clean signal of a single, very forward-going electron. \dword{minerva}~\cite{Park:2015eqa} has used this technique to characterize the \dword{numi} beam flux normalization (running in the \dword{numi} low-energy mode), although the rate and detector resolution were insufficient to make a shape constraint. It has been investigated as a cross section model-independent way to constrain the neutrino flux at the \dword{dune} \dword{nd}.
For a neutrino-electron sample, $E_{\nu}$ could, in principle, be reconstructed event-by-event in an ideal detector using the formula
\begin{equation}
E_{\nu} = \frac{E_{e}}{1 - \frac{E_{e}(1-\cos\theta_{e})}{m_{e}}},
\label{eq:nue}
\end{equation}
\noindent where $m_e$ and $E_e$ are the electron mass and outgoing energy, and $\theta_e$ is the angle between the outgoing electron and the incoming neutrino direction. The initial energies of the atomic electrons ($\sim$\SI{10}{keV}) are low enough to be safely neglected. It is clear from Equation~\ref{eq:nue} that the ability to constrain the shape of the flux depends critically on the energy resolution and, in particular, the angular resolution of the electrons. For a realistic detector, the granularity of the $E_{\nu}$ shape constraint (the binning) depends on its performance. Additionally, the divergence of the beam (a few \si{mrad}) at the \dword{dune} \dword{nd} site is a limiting factor to how well the incoming neutrino direction is known.
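To illustrate the sensitivity to the electron angle, consider a hypothetical event with $E_{e} = \SI{2}{GeV}$: Equation~\ref{eq:nue} gives $E_{\nu} \approx \SI{2.1}{GeV}$ for $\theta_{e} = 5$~\si{mrad} but $E_{\nu} \approx \SI{2.5}{GeV}$ for $\theta_{e} = 10$~\si{mrad}, since the correction term
\begin{equation}
\frac{E_{e}(1-\cos\theta_{e})}{m_{e}} \simeq \frac{E_{e}\,\theta_{e}^{2}}{2\,m_{e}}
\end{equation}
grows quadratically with the angle. Milliradian-level angular resolution, together with the few-\si{mrad} beam divergence, therefore sets the achievable granularity of any shape constraint.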
In work described in Ref.~\cite{PhysRevD.101.032002}, the ability of various proposed \dword{dune} \dword{nd} components to constrain the \dword{dune} flux is shown using the latest three-horn optimized flux and including full flavor and correlation information. This was used to determine what is achievable relative to the best performance expected from hadron production target models. When producing the input flux covariance matrix, it was assumed that an \dword{na61}~\cite{Laszlo:2009vg} style replica-target experiment was already used to provide a strong prior shape constraint. Detector reconstruction effects and potential background processes are included, and a constrained flux covariance is produced following the method used in Ref.~\cite{Park:2015eqa}.
\begin{figure}[htbp]
\centering
\subfloat[FHC pre-fit] {\includegraphics[width=0.45\textwidth]{graphics/FHC_LAR_nominal_cov_nom.png}}
\subfloat[FHC post-fit] {\includegraphics[width=0.45\textwidth]{graphics/FHC_LAR_nominal_cov_post.png}}
\caption[\dshort{fhc} flux covariance matrices for nominal \SI{35}{t} \dshort{arcube}]{Pre- and post-fit \dword{fhc} flux covariance matrices for the nominal \SI{35}{t} \dword{arcube} \dword{lar} detector using a five-year exposure.}
\label{fig:LAR_nominal_covariances}
\end{figure}
The impact of the neutrino-electron scattering constraint on the flux covariance is shown in Figure~\ref{fig:LAR_nominal_covariances} for \dword{fhc} and a five year exposure of the nominal \SI{35}{t} \dword{arcube} \dword{lar} detector (corresponding to $\sim$22k neutrino-electron events). It is clear that the overall uncertainty on the flux has decreased dramatically, although, as expected, an anticorrelated component has been introduced between flavors (as it is not possible to tell what flavor contributed to the signal on an event-by-event basis). Similar constraints are obtained for \dword{rhc} running.
\begin{figure}[htbp]
\centering
\subfloat[Rate+shape] {\includegraphics[width=0.45\textwidth]{graphics/FHC_DET_nominal_constraint_numu.png}}
\subfloat[Shape-only] {\includegraphics[width=0.45\textwidth]{graphics/FHC_DET_nominal_constraint_SHAPE_numu.png}}
\caption[Rate+shape and shape-only bin-by-bin flux uncertainties]{Rate+shape and shape-only bin-by-bin flux uncertainties as a function of neutrino energy for a five year exposure with various detector options, compared with the input flux covariance matrix before constraint.}
\label{fig:nominal_det_constraint}
\end{figure}
Figure~\ref{fig:nominal_det_constraint} shows the flux uncertainty as a function of $E_{\nu}$ for the $\nu_{\mu}$-\dword{fhc} flux, for a variety of \dword{nd} options. In each case, the constraint on the full covariance matrix is calculated (as in Figure~\ref{fig:LAR_nominal_covariances}), but only the diagonal of the $\nu_{\mu}$ portion is shown. At the flux peak of $\sim$\SI{2.5}{GeV}, the total flux uncertainty can be constrained to $\sim$2\% for the nominal \dword{lar} scenario, and the constraint from other detector types is largely dictated by the detector mass. Clearly the neutrino-electron scattering sample at the \dword{dune} \dword{nd} will be a powerful flux constraint. However, it is also clear that the constraint on the shape of the flux is not a drastic improvement on the existing flux covariance matrix, and none of the detectors investigated added a significantly stronger shape constraint. That said, the neutrino-electron sample enables in situ measurements of the flux prediction, and can diagnose problems with the flux prediction in a unique way.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Multipurpose Detector}
\label{sec:appx-nd:mpd}
The multipurpose detector (\dword{mpd}) extends and enhances the capabilities of the \dword{lartpc}. It does this by providing a system that will measure the momentum and sign of charged particles exiting the \dword{lartpc} and, for neutrino interactions taking place in the \dword{mpd}, it will extend charged particle measurement capabilities to lower energies than achievable in the far or near \dwords{lartpc}. This capability enables further constraints of systematic uncertainties for the \dword{lbl} oscillation analysis.
The \dword{mpd} is a magnetized system consisting of a high-pressure gaseous argon time projection chamber (\dword{hpgtpc}) and a surrounding \dword{ecal}. The detector design will be discussed in more detail in a later section.
\paragraph{\dword{mpd} goals}
\begin{itemize}
\item {{\bf Measure particles that leave the LAr \dword{nd} component and enter the \dword{mpd}}
The LAr component of the DUNE \dword{nd} will not fully contain high-energy muons or measure lepton charge. A downstream \dword{mpd} will be able to determine the charge sign and measure the momenta of the muons that enter its acceptance, using the curvature of the associated track in the magnetic field. }
\item {{\bf Constrain neutrino-nucleus interaction systematic uncertainties}
In its 1-ton gaseous argon \dword{fv}, the \dword{mpd} will collect \num{1.5e6} \dword{cc} muon neutrino interactions per year (plus \num{5e5} \dword{nc} muon neutrino interactions). The very low energy threshold for reconstruction of tracks in the \dword{hpgtpc} gives it a view of interactions that is more detailed than what is seen in the \dword{lar}, and on the same target nucleus. The associated \dword{ecal} provides excellent ability to detect neutral pions, enabling the \dword{mpd} to measure this important component of the total event energy while also tagging the presence of these pions for interaction model studies.
The ability to constrain ``known unknowns'' is a high priority of the \dword{mpd}. One example is nucleon-nucleon correlation effects and meson exchange currents in neutrino-nucleus scattering. Although a few theoretical models that account for these effects are available in neutrino event generators, no model reproduces well the observed data in \dword{nova}, \dword{minerva}, or \dword{t2k}. These experiments therefore use empirical models tuned to the limited observables in their detector data. Tuning results in better agreement between simulation and data, although still not perfect. In addition, this type of empirical tuning leaves some large uncertainties, such as the four-momentum transfer response, the neutrino energy dependence of the cross sections (where models disagree, and a ``model spread'' is typically used for the uncertainty), and the relative fractions of final state nucleon pairs ($pp$ vs. $np$).
Another example of a ``known unknown'' for which the \dword{mpd} will provide a more stringent cross section constraint than the \dword{lartpc} is the case of single and multiple pion production in \dword{cc} neutrino interactions. An \dword{mpd}-based measurement of these processes will be implemented in the DUNE \dword{lbl} oscillation analysis in the near future, making use of the high-purity samples of \dword{cc}-$0\pi$, \dword{cc}-$1\pi$, and \dword{cc}-multi-$\pi$ events in the gaseous argon, separated into $\pi^+$ and $\pi^-$ subsamples and binned in neutrino energy and other variables of interest. Figure~\ref{fig:chgpi_diffs} illustrates two simple differences among the \dword{hpgtpc} \dword{cc}-$1\pi$ subsamples; it is still to be determined which variables will be the most useful in the \dword{lbl} oscillation analysis.
The relative lack of secondary interactions for particles formed in neutrino interactions in the gaseous argon \dword{fv} will provide samples with a less model dependent connection to the particles produced in the primary interaction. These secondary interactions are a significant effect in denser detectors \cite{Friedland:2018vry} and this crosscheck/validation of the reinteraction model is likely to be useful in understanding the full energy response of the liquid detectors.
The \dword{mpd} will measure ratios of inclusive, semi-exclusive, and exclusive cross sections as functions of neutrino energy, where the flux cancels in the ratio. These ratios will be measured separately for \dword{nc} and \dword{cc} events, and the \dword{nc} to \dword{cc} ratio itself will be measured precisely with the \dword{mpd}. The ratios of cross sections for different pion, proton, and kaon multiplicity will help constrain interaction models used in the near and far liquid detectors.
The \dword{hpgtpc} will have better capability than the \dword{lartpc} to distinguish among particle species at low momentum using $dE/dx$ measurements. Some muon/pion separation is possible via $dE/dx$ for very low momenta, and protons are very easily distinguished from pions, muons, and kaons for momenta below 2~GeV/$c$, as shown in Figure~\ref{fig:ALICE_dEdx}. At higher momenta, the magnet makes it possible to easily distinguish $\pi^+$ from $\mu^-$ (or $\pi^-$ from $\mu^+$), as has been done in T2K near-detector fits for oscillation analyses. The fact that pions will interact hadronically far less often in the gas than in the liquid will give another important handle for constraining uncertainties in the \dword{lartpc}. These aspects give the \dword{mpd} a significant complementarity to the \dword{lartpc}, which is not magnetized.
Since the target nucleus in the \dword{mpd} is the same as that in the near and far \dwords{lartpc}, this information feeds directly into the interaction model constraints without additional complications from nuclear physics.
Finally, having a \dword{nd} that can see one level deeper than the far detector keeps open the possibility to investigate ``unknown unknowns'' as well. Since the \dword{mpd} will identify and measure interactions more accurately than can be done in the \dword{lartpc}, it will provide the ability to investigate more deeply our observations in the liquid argon, and the flexibility to address other unexpected things we may encounter.
}
\item {{\bf Precisely and accurately measure all components of the neutrino flux}
The magnetic field of the \dword{mpd} enables the precise determination of momenta of charged particles escaping the upstream \dword{lartpc}.
Because the \dword{nd} is necessarily smaller than the \dword{fd}, near-far differences arising from the different containment fractions are compensated by the fact that the \dword{nd} has a magnetic spectrometer. Also, higher-energy particles from the neutrino interaction will be measured better in the \dword{mpd} than in the liquid \dword{nd} or \dword{fd} (for example, non-contained muons), constraining the effects of energy feed-down in the liquid detectors.
The ability to separate charge signs will allow the \dword{mpd} to measure the relative contributions of $\nu_\mu$ and $\bar{\nu}_\mu$ in both the neutrino beam and the antineutrino beam, as well as distinguishing the $\nu_e$ from the $\bar{\nu}_e$ components. These components are important to anchor the oscillation fit; otherwise, one must rely on beam modeling to predict the small but uncertain fractions of wrong-sign neutrinos in the beams. Stopping muons' Michel signatures can be used on a statistical basis in the far detector, as the effective decay rates differ for $\mu^+$ and $\mu^-$, but that is after oscillation has distorted the spectrum. No corresponding test exists for $\nu_e$.
}
\item {{\bf Constrain $\pi^0$ backgrounds to $\nu_e$ events}
An accurate measurement of backgrounds to the $\nu_e$ appearance measurement is a critical input for far detector oscillation analyses. In the \dword{lartpc}, the largest background to $\nu_e$'s is \dword{nc}-$\pi^0$ interactions in which one photon is not detected and the other is mistakenly identified as an electron. The \dword{hpgtpc} and \dword{ecal} together provide a unique capability to constrain \dword{nc}-$\pi^0$ backgrounds that are misidentified as $\nu_e$~\dword{cc} in the \dword{lartpc}. The \dword{hpgtpc} will collect a reduced background sample of $\sim20$k $\nu_e$~\dword{cc} events per year. The \dword{lartpc} detector measures $\nu_e$+ mis-ID'ed~$\pi^0$ events, while the \dword{mpd} measures $\nu_e$~\dword{cc} events alone (by rejecting all $\pi^0$ events using the \dword{ecal}). The \dword{mpd} sample will reduce backgrounds from \dword{nc}-$\pi^0$ events because the photon conversion length in gas is much greater than that in the liquid, and photons from $\pi^0$ decays will not often convert in the gas volume of the \dword{hpgtpc} in such a way as to fake $e^\pm$ from $\nu_e$ interactions. The \dword{ecal}, however, will have excellent ability to detect the $\pi^0$ decays, enabling the \dword{mpd} to reject $\pi^0$ events as background to $\nu_e$'s.
The \dword{mpd} measurement of $\nu_e$~\dword{cc} events can be scaled to the \dword{lartpc} density and volume and corrected to the same acceptance as the \dword{lartpc} in order to provide a constraint on the $\pi^0$ mis-ID. The difference of the two, $(\nu_e^{\textrm{LAr}} + \textrm{mis-ID'ed}~\pi^0) - (\nu_e^{\textrm{GAr-scaled-to-LAr}})$, yields the $\pi^0$ mis-ID rate in the \dword{lartpc}. This measurement of the backgrounds to $\nu_e$'s would not be possible at all if the \dword{mpd} were replaced by a simple muon range detector. It would also not be easy to extrapolate to the \dword{lartpc} if the target material of the \dword{mpd} were not argon.
}
\item{{\bf Measure energetic neutrons from $\nu$-Ar interactions via time-of-flight with the \dword{ecal}}
Neutron production in neutrino and antineutrino interactions is highly uncertain, and is a large source of neutrino energy misreconstruction. In the \dword{hpgtpc}+\dword{ecal} system, a preliminary study of the time-of-flight from the \dword{hpgtpc} neutrino interaction point to hits in the \dword{ecal} is encouraging, indicating that ToF can be used to detect and correctly identify neutrons; an illustrative kinematic relation is given after this list. With the current \dword{ecal} design, an average neutron detection efficiency of 60\% is achieved by selecting events in which an \dword{ecal} cell has one hit with more than \SI{3}{MeV}. This is still very preliminary work, and further studies to understand the impact of backgrounds and \dword{ecal} optimization (strip vs. tile, overall thickness) are underway.
}
\item{{\bf Reconstruct neutrino energy via spectrometry and calorimetry}
Although all neutral particles from an event must be measured with the \dword{ecal} in the \dword{mpd}, the \dword{hpgtpc} will be able to make very precise momentum measurements of charged particle tracks with a larger acceptance than the upstream \dword{lartpc}, including tracks created by high-momentum exiting particles, which allows the measurement of the entire neutrino spectrum. In addition, short and/or stopping tracks will be measured via $dE/dx$. The sum of this capability provides a complementary event sample to that obtained in the \dword{lar}, whose much higher density leads to many secondary interactions for charged particles. The two methods of measurement in the \dword{mpd} will help in understanding the \dword{lar} energy resolution.
}
\item{{\bf Constrain LArTPC detector response and selection efficiency}
The \dword{mpd} will collect large amounts of data in each of the exclusive neutrino interaction channels, with the exception of $\nu-e$ elastic scattering, where the \dword{hpgtpc} sample will be too small to be useful. The high statistics $\nu$-Ar interaction samples will make it possible to directly crosscheck every kinematic distribution that will be used to constrain the fluxes and cross sections. Typically these checks will be over an extended range of that variable. The high purity of the \dword{mpd} samples and low detection threshold for final state particles in the \dword{hpgtpc} will give a benchmark or constraint on \dword{lartpc} detector response and selection efficiencies for each of the interaction channels.
Using the events collected in the \dword{hpgtpc} (where selection and energy reconstruction are easy), the performance of \dword{lar} event selection and energy reconstruction metrics can be tested by simulating the well-measured \dword{hpgtpc} four-vectors in the \dword{lartpc}. This allows the validation of the \dword{lartpc} reconstruction performance on these events. This process is expected to reduce the errors in the \dword{lartpc} detector energy response model.
}
\end{itemize}
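As an illustration of the time-of-flight technique referenced above, for a flight path $L$ from the interaction point in the \dword{hpgtpc} to a hit in the \dword{ecal} and a measured flight time $t$, the neutron kinetic energy follows from
\begin{equation}
\beta = \frac{L}{c\,t}, \qquad T_{n} = m_{n}\left(\frac{1}{\sqrt{1-\beta^{2}}}-1\right),
\end{equation}
so the achievable energy resolution is set by the \dword{ecal} timing resolution, the lever arm $L$, and the correct association of the hit with its parent vertex.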
\paragraph{\dword{mpd} strengths}
The strengths of the \dword{mpd} enable it to reach the goals above and to augment the capabilities of the \dword{lartpc}. Below are a few examples of its strengths relative to the \dword{lartpc}:
\begin{itemize}
\item High-fidelity particle charge determination via magnetic curvature. This is the only detector that can measure electron and positron charge.
\item Precise and independent measurement of particle momentum, via magnetic curvature, will allow the measurement of the momentum of higher-energy charged particles without requiring containment. This extends the utility of the \dword{nd}, especially for the high-energy beam tune. The absolute momentum scale is easily calibrated in the magnetic spectrometer and provides a cross-check on energy loss through ionization measurements. Calibration strategies for the magnetic tracking include pre-assembly field maps, {\it in situ} NMR probes, and $K^0_s$ and $\Lambda\!^0$ reconstruction.
\item Particle identification through $dE/dx$. The gaseous argon TPC has better power to separate particle species by $dE/dx$ than the liquid, because the momentum can be measured along with the energy loss.
\item High-resolution imaging of particles emerging from the $\nu$-Ar vertex (including nucleons). The reconstruction threshold in the gas phase is much lower than the threshold in liquid because particles travel further in the low density medium, e.g., a proton requires only \SI{3.7}{MeV} kinetic energy to make a \SI{2}{cm} track in 10~atmospheres of gaseous argon, while a \SI{3.7}{MeV} proton in liquid can only travel \SI{0.02}{cm}. Figure~\ref{fig:LArGArThresholds} demonstrates the difference in the thresholds for reconstructing protons in the \dword{hpgtpc} and the \dword{lartpc} in light of the energy spectra of final state protons from a selection of types of neutrino interactions at the DUNE \dword{nd}. The \dword{lartpc} threshold is what has been achieved in \dword{microboone} up to now, and the \dword{hpgtpc} threshold is what has been achieved with the tools discussed in Section~\ref{sec:TPC_ML}.
\item Separation of tracks and showers for less-ambiguous reconstruction. Particle tracks are locally helical and tend to bend away from each other in the magnetic field as they travel from a dense vertex. Electromagnetic showers do not occur in the gas, but in the physically separate \dword{ecal}. By contrast, in a \dword{lartpc} tracks and showers frequently overlap. The measurement resolution scales are comparable between the liquid and the gas, but the distance scales on which interactions occur are much longer in the gas, allowing particles to be identified and measured separately more easily.
\item The measurement of energetic neutrons through time-of-flight with \dword{ecal} is a potential game-changer for validating energy reconstruction. Preliminary studies of the \dword{hpgtpc}+\dword{ecal} system indicate that an average neutron detection efficiency of 60\% can be achieved via a time-of-flight analysis. A study of the impact of backgrounds is underway, but initial studies are encouraging.
\item{The \dword{hpgtpc} is able to measure the momentum of particles over almost the full solid angle. Particles that are emitted at a large angle with respect to the beam have a high probability of exiting the \dword{lar} without leaving a matching track in the \dword{mpd}. However, events collected in the \dword{hpgtpc}, with its $\simeq 4\pi$ coverage, can be used in the regions of phase space where the exiting fraction is high in the liquid argon to ensure that the events are accurately modeled in all directions in the \dword{fd}. }
\item{The \dword{mpd} neutrino event sample, while smaller than the \dword{lartpc} sample, is statistically independent. Moreover, the systematic uncertainties of the \dword{mpd} will be very different from those of the \dword{lartpc} and likely smaller in many cases. This will allow the \dword{mpd} to act as a systematics constraint for the \dword{lartpc}.
}
\end{itemize}
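To make the curvature-based momentum measurement mentioned above concrete, the following minimal Python sketch converts a measured bending radius (or chord and sagitta) into a transverse momentum in the nominal \SI{0.5}{T} field; the example chord and sagitta values are illustrative assumptions only.
\begin{verbatim}
# Minimal sketch: transverse momentum from measured track curvature
# in the 0.5 T field quoted in the text.  The example chord/sagitta
# values are illustrative assumptions only.
B_FIELD_T = 0.5  # analyzing field [T]

def pt_from_radius(radius_m, b_field_t=B_FIELD_T):
    """p_T [GeV/c] of a unit-charge particle with bending radius R [m]."""
    return 0.3 * b_field_t * radius_m

def radius_from_sagitta(chord_m, sagitta_m):
    """Bending radius [m] from a measured chord and sagitta."""
    return chord_m ** 2 / (8.0 * sagitta_m) + sagitta_m / 2.0

# Example: a 1 m chord with a 1 cm sagitta corresponds to R ~ 12.5 m,
# i.e. p_T ~ 1.9 GeV/c at 0.5 T.
print(pt_from_radius(radius_from_sagitta(1.0, 0.01)))
\end{verbatim}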
\begin{dunefigure}[Reconstructed $\nu$ energy spectra for \dshort{cc} $\nu_{\mu}$ interactions with charged pions]{fig:chgpi_diffs}
{Representative differences among subsamples of \dword{cc} $\nu_{\mu}$ interactions with one $\pi^+$ (solid lines) and \dword{cc} $\bar{\nu}_{\mu}$ interactions with one $\pi^-$ (dashed lines). The forward- and reverse-horn-current samples are shown in black and red, respectively. Left: Reconstructed neutrino energy spectra, normalized to the same number of protons on target. Right: Angle of outgoing muon relative to neutrino direction, normalized to unit area for shape comparison.}
\includegraphics[width=0.49\textwidth]{graphics/Ereco_numu_1pi.png}
\includegraphics[width=0.49\textwidth]{graphics/mu_theta_numu_6.png}
\end{dunefigure}
\begin{dunefigure}[Momentum spectra of protons ejected from $\nu$ interactions in Ar] %; several %categories of interaction types]
{fig:LArGArThresholds}
{The momentum spectra of protons ejected from neutrino interactions in argon, for several categories of interaction types. The vertical lines indicate the lowest momentum protons that have been reconstructed using existing automated reconstruction tools, where the dotted line is the \dword{hpgtpc} threshold, and the solid line is the \dword{lartpc} threshold (from \dword{microboone}).}
\includegraphics[width=0.6\textwidth]{graphics/Threshold.pdf}
\end{dunefigure}
%====================
%\newpage
\subsubsection{\dshort{mpd} Technical Details}
\subsubsubsection{High-Pressure Gaseous Argon TPC}
The basic geometry of the \dword{hpgtpc} is a gas-filled cylinder with a \dword{hv} electrode at its mid-plane, providing the drift field for ionization electrons. It is oriented inside the magnet such that the magnetic and \efield{}s are parallel, reducing transverse diffusion to give better point resolution. Primary ionization electrons drift to the end plates of the cylinder, which are instrumented with multi-wire proportional chambers to initiate avalanches (gas gain) at the anode wires. Signals proportional to the avalanches are induced on cathode pads situated behind the wires; readout of the induced pad signals provides the hit coordinates in two dimensions. The drift time provides the third coordinate of the hit.
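The following minimal Python sketch illustrates how a \threed{} hit is assembled from the pad-plane coordinates and the drift time, as described above; the drift velocity and anode position used here are placeholder values, not measured \dword{hpgtpc} parameters.
\begin{verbatim}
# Minimal sketch: building a 3D hit from a pad-plane measurement plus
# drift time.  The drift velocity and anode position are placeholder
# values, not measured HPgTPC parameters.
DRIFT_VELOCITY_CM_PER_US = 2.5   # assumed, gas-mixture dependent
ANODE_Z_CM = 250.0               # assumed readout-plane position

def hit_position(pad_x_cm, pad_y_cm, drift_time_us, t0_us=0.0):
    """Return (x, y, z) of a hit: pads give x,y; drift time gives z."""
    drift_len = DRIFT_VELOCITY_CM_PER_US * (drift_time_us - t0_us)
    return (pad_x_cm, pad_y_cm, ANODE_Z_CM - drift_len)

print(hit_position(12.3, -40.0, 36.0))  # hit 90 cm from the anode plane
\end{verbatim}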
The details of the \dword{hpgtpc} design will be based closely on the design of the ALICE detector~\cite{Dellacasa:2000bm} shown in Figure~\ref{fig:ALICETPC}. Two readout planes sandwich a central electrode (25$\,\mu$m of aluminized mylar) held at \dword{hv}, which generates the drift field, parallel to a \SI{0.5}{T} magnetic field. On each side of the electrode, primary ionization electrons drift up to \SI{2.5}{m} to reach the endplates, which are segmented azimuthally into 18 trapezoidal regions instrumented with \dwords{roc} that consist of \dword{mwpc} amplification regions and cathode pad planes to read out the signals. A cross sectional view of an ALICE MWPC-based \dword{roc} is shown in Figure~\ref{fig:ALICE_ROC_MWPC}. The \dwords{roc} are built in two sizes: a smaller \dword{iroc} and a larger \dword{oroc}. The trapezoidal segments of the endplates are divided radially into inner and outer sections, and the \dwords{iroc} and \dwords{oroc} are installed in those sections. The existing \dwords{iroc} and \dwords{oroc} will become available in 2019, when they are scheduled to be replaced by new GEM-based \dwords{roc} for upgraded pileup capability in the high-rate environment of the \dword{lhc}. For the DUNE \dword{hpgtpc}, the existing \dwords{roc} are more than capable of providing the necessary performance in a neutrino beam.
\begin{dunefigure}[The ALICE TPC]{fig:ALICETPC}
{Diagram of the ALICE \dword{tpc}, from Ref.~\cite{Alme:2010ke}. The drift \dword{hv} cathode is located at the center of the \dword{tpc}, defining two drift volumes, each with \SI{2.5}{m} of drift along the axis of the cylinder toward the endplate. The endplates are divided into 18 sectors, and each endplate holds 36 readout chambers.}
\includegraphics[width=0.7\textwidth]{graphics/alice_tpc_highres.jpg}
\end{dunefigure}
\begin{dunefigure}[The ALICE MWPC-based ROC with pad plane readout]{fig:ALICE_ROC_MWPC}
{Schematic diagram of the ALICE MWPC-based \dword{roc} with pad plane readout, from Ref.~\cite{Alme:2010ke}.}
\includegraphics{graphics/TPC_ROC_MWPC.jpg}
\end{dunefigure}
%====================
In the ALICE design, the innermost barrel region was isolated from the \dword{tpc} and instrumented with a silicon-based inner tracker; for the DUNE \dword{hpgtpc}, the inner field cage labeled in Figure~\ref{fig:ALICETPC} will be removed, and the entire inner region will be combined to make a single gas volume for the \dword{tpc}. New \dwords{roc} will be built to fill in the central uninstrumented region (\SI{1.6}{m} in diameter) left by reusing the existing ALICE chambers. The active dimensions of the \dword{hpgtpc} will be \SI{5.2}{m} in diameter and \SI{5}{m} long, which yields an active mass of $\simeq$ \SI{1.8}{t}.
\paragraph{\dword{mpd} Pressure Vessel}\label{sec:TPC_PV}
The preliminary design of the pressure vessel, presented in Figure~\ref{fig:TPC_PV}, accounts for the additional volume needed to accommodate the TPC field cage, the \dword{roc} support structure, \dword{fe} electronics, and possibly part of the \dword{ecal}.
The pressure vessel can be fabricated from aluminum or stainless steel. It has a cylindrical section that is \SI{6}{m} in diameter and \SI{6}{m} long and utilizes two semi-elliptical end pieces with flanges. The walls of the cylindrical barrel section are $\simeq$~1.6\,X$_0$ in thickness in the case of stainless steel or $\simeq$~0.3\,X$_0$ in the case of Al 5083. Further reduction of the thickness in radiation lengths can be accomplished with the addition of stiffening rings. This preliminary design includes two flanged endcaps. However, these large flanges are the cost driver for the pressure vessel and, therefore, vessel designs with a single flange will also be considered. As an example, DOE/NETL-2002/1169 (Process Equipment Cost Estimation Final Report) indicates that a horizontal pressure vessel of the size indicated here and rated at \SI{1034}{kPag} (\SI{150}{psig}, approximately 10~atmospheres) is estimated to cost \$150k ($\simeq$ \$210k in 2019 dollars). The budgetary estimate for a vessel with two flanges was \$1.2M, with the flanges driving the cost. DOE/NETL-2002/1169 also indicates that pressure is not a significant cost driver: reducing the pressure from \SI{1034}{kPag} to \SI{103}{kPag} (\SI{15}{psig}) only reduces the basic (\$150k) vessel cost by a factor of two.
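As a simple cross-check of the wall-thickness numbers quoted above, the Python sketch below converts an assumed wall thickness into radiation lengths using standard $X_0$ values for iron (a proxy for stainless steel) and aluminum; the \SI{28}{mm} thickness is an illustrative assumption, not the engineering value.
\begin{verbatim}
# Minimal sketch: expressing a pressure-vessel wall in radiation lengths.
# The 2.8 cm wall thickness is an illustrative assumption; the radiation
# lengths are standard values for iron (~stainless steel) and aluminum.
X0_CM = {"stainless steel": 1.76, "aluminum": 8.90}  # approximate X0 [cm]

def wall_in_x0(thickness_cm, material):
    return thickness_cm / X0_CM[material]

for material in X0_CM:
    print(material, round(wall_in_x0(2.8, material), 2), "X0")
# -> roughly 1.6 X0 for steel and 0.3 X0 for aluminum, consistent
#    with the numbers quoted in the text
\end{verbatim}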
%====================
\begin{dunefigure}[Pressure vessel preliminary design]{fig:TPC_PV}
{Pressure vessel preliminary design.}
\includegraphics[width=0.6\textwidth]{graphics/tpc_pressurevessel.png}
\end{dunefigure}
%====================
\subsubsubsection{Electromagnetic Calorimeter (\dshort{ecal})}
The \dword{mpd} \dword{ecal} concept is based on a high-granularity calorimeter that provides direction information, in addition to the energy measurement of electromagnetic showers, and efficient rejection of backgrounds. The principal role of the \dword{ecal} is to reconstruct photons produced directly in neutrino interactions and originating from $\pi^0$ decays, providing a measurement of each photon's energy and direction so that photons can be associated with interactions observed in the \dword{hpgtpc} and the decay vertices of the $\pi^0$s can be determined. In the case of $\nu_e$ measurements in the \dword{hpgtpc}, the \dword{ecal} will play an important role in rejecting events with $\pi^0$ decays, which represent a background to $\nu_e$ interactions in the \dword{lartpc}. The \dword{ecal} can also be used to reject external backgrounds, such as rock neutrons and muons, by providing a sub-nanosecond timestamp \cite{Simon:2013zya} for each hit in the detector. Because the \dword{ecal} uses hydrogen-rich scintillator, it is also expected to provide neutron detection capability, and studies are underway to quantify this performance.
\paragraph{\dword{ecal} Design}
The \dword{ecal} design is inspired by the design of the CALICE analog hadron calorimeter (AHCAL) \cite{collaboration:2010hb}.
\begin{dunefigure}[\dshort{mpd} \dshort{ecal} conceptual design]{fig:ConceptDesign_NDECAL}
{On the left, the conceptual design of the \dword{mpd} system for the \dword{nd}. The \dword{ecal} (orange) is located outside the \dword{hpgtpc} pressure vessel. On the right, a conceptual design of the \dword{ecal} endcap system.}
\includegraphics[width=0.45\textwidth]{graphics/MPDdrawing.jpg}
\includegraphics[width=0.42\textwidth]{graphics/ECAL_Endcap_System.png}
\end{dunefigure}
The \dword{ecal} is shown in Figure~\ref{fig:ConceptDesign_NDECAL}. The barrel has an octagonal shape with each quadrant composed of several trapezoidal modules. The \dword{ecal} endcap has a similar design, providing hermeticity and a large solid-angle coverage. Each module consists of scintillating layers of polystyrene as active material read out by \dwords{sipm}, sandwiched between absorber sheets. The scintillating layers consist of a mix of tiles with dimensions between $2\times2$ cm$^2$ and $3\times3$ cm$^2$ (see Figure~\ref{fig:ConceptTile_NDECAL}) and cross-strips with embedded wavelength shifting fibers to achieve a comparable effective granularity. The high-granularity layers are concentrated in the front part of the detector, since that has been shown to be the most relevant factor for the angular resolution \cite{Emberger:2018pgr}. With the current design, the number of channels is of order 2.5 to 3 million. A first \dword{ecal} design and its simulated performance have already been studied in \cite{Emberger:2018pgr}.
\begin{dunefigure}[Conceptual layout of the \dshort{mpd} \dshort{ecal}] % showing the absorber structure, scintillator tiles, SiPM and PCB.]
{fig:ConceptTile_NDECAL}
{Conceptual layout of the \dword{ecal} showing the absorber structure, scintillator tiles, \dwords{sipm} and \dword{pcb}.}
\includegraphics[width=0.8\textwidth]{graphics/TileConcept.png}
\end{dunefigure}
In the preliminary design, it was assumed that the full \dword{ecal} barrel is outside the pressure vessel. The thickness of the pressure vessel has an impact on the calorimeter energy resolution \cite{Emberger:2018pgr}, and more recent designs of the pressure vessel have reduced its thickness.
Currently, the \dword{ecal} design is undergoing a detailed design study in order to further optimize the detector design, cost, and performance.
%
\subsubsubsection{Magnet}
\label{sssec:nd:appx:mpd-magnet}
%
Two magnet designs are under consideration to house the \dword{hpgtpc} and the \dword{ecal}. One is a \dword{ua1}-type conventional electromagnet; the other is based on a superconducting Helmholtz-coil-like design. The common requirement is a central magnetic field of 0.5\,T with $\pm$20\% uniformity over the TPC volume (5\,m long and 5\,m in diameter). With the current design of the access shaft (11.8\,m diameter), the clear diameter is about 7.8\,m. Recent studies for the construction of an electromagnet similar to the \dword{ua1} magnet predict that the cost of the design, procurement, infrastructure (power and cooling), and assembly will be in excess of \$20 million, with operation costs of approximately \$1.6M per year of running. Because of this, the main focus has been on the superconducting design.
%
\paragraph{Superconducting Magnet}
%
The superconducting magnet design is an air-core, five-coil system in a Helmholtz-like configuration. Three central coils produce the analyzing field and two outer shielding coils help contain the stray field. The advantage of this design is that little or no iron is used for field containment or shaping. This eliminates background coming from neutrino interactions in the iron, which for the normal-conducting magnet case is the largest component of the background. Figure~\ref{fig:dune_nd_magnet_sc_layout} shows the magnet concept indicating the five-coil arrangement and support structure.
\begin{dunefigure}[Helmholtz coil arrangement]{fig:dune_nd_magnet_sc_layout}
{Helmholtz coil arrangement for the \dword{mpd} superconducting magnet.}
\includegraphics[width=0.95\textwidth]{graphics/SC_mag_st.png}
\end{dunefigure}
%
All five coils have the same inner radius of \SI{3.5}{m}. The center and shielding coils are identical, with the same number of ampere-turns. The side coils are placed at \SI{2.5}{m} and the shielding coils at \SI{6}{m} from the magnet center along $z$. A configuration with the shielding coils at \SI{5}{m} from the magnet center, so that the magnet system would be the same width as the \dword{lar} detector, is also being examined. The magnet system will have a stored energy of about \SI{110}{MJ}, using a conventional NbTi superconducting cable design: a \dword{ssc}-type Rutherford cable soldered in a copper channel with a 50\% margin. All coils should be wired in series to reduce imbalanced forces during a possible quench. Small transverse centering force components are possible due to coil de-centering from mechanical errors.
%
Figure~\ref{fig:dune_nd_magnet_sc_fieldmap} shows the field along the $z$-axis at different radii. The peak field in the coils is \SI{2.14}{T} (center), \SI{5}{T} (side), and \SI{2.03}{T} (shield). The resulting net forces are directed only along the $z$-axis: $F_{z}$ is \SI{0.0}{MN} (center), \SI{-6.81}{MN} (side), and \SI{2.2}{MN} (shield). The fringe field at the shielding coil is rather large but can be reduced further; more studies will be needed. There is a preliminary mechanical support design. A first estimate of the radiative heat load assumes a coil and support surface of 180\,m$^{2}$, resulting in a load of \SI{5.4}{W} from \SI{77}{K} to \SI{4.5}{K}. The coil supports and leads will likely have a much larger contribution (power leads usually contribute \SI{15}{W} for \SI{10}{kA}). With a mass of \SI{42}{t}, the magnets are in some respects similar to the \dword{mu2e} solenoids.
\begin{dunefigure}[Field map of the superconducting magnet along the $z$ axis]{fig:dune_nd_magnet_sc_fieldmap}
{Field map of the superconducting magnet along the $z$ axis. The colors represent different radii from the center line.}
\includegraphics[width=0.70\textwidth]{graphics/dune_nd_magnet_sc_fieldmap.png}
\end{dunefigure}
%
%\newpage
%
\paragraph{Normal Conducting Magnet}
Although the SC magnet design is the favored option, the normal-conducting magnet design produced for the LBNE CDR is also being revised and studied. Due to the cylindrical geometry imposed by the \dword{hpgtpc}, a cylindrical coil design for the normal-conducting magnet is the baseline. The cooling requirement of the coil is approximately \SI{3.5}{MW} and involves a substantial cooling-water flow. A thermal shield between the coils and the detector volume is required in order to minimize heat flow to the \dword{hpgtpc} and the \dword{ecal}. Without the thermal shield, the coil thickness needed to maintain a maximum 5$^\circ$C temperature in the coil becomes excessive. The shield does, however, take up space in the magnet volume. %Figure~\ref{fig:dune_nd_magnet_nc_layout} shows the current concept for the normal conducting magnet.
Note: the iron end-walls will most likely not be needed. The estimated magnet weight is well over 1\,kt, and this mass provides a significant source of background for the high pressure gas TPC and, perhaps, the \dword{lar}. There is a significant amount of material between the \dword{lartpc} and the \dword{hpgtpc} in the \dword{mpd} in this configuration, which will affect the acceptance for muons emanating from events in the \dword{lar}. This option will continue to be studied as part of the optimization process.
\subsubsubsection{Size optimization}
The process of optimizing the design of the \dword{mpd} is in progress. One of the more critical issues is the size of the \dword{mpd}, which is an important factor in the angular acceptance for particles exiting the upstream \dword{lartpc}. A preliminary study of geometries shows that reducing the \dword{hpgtpc} diameter by more than 1 meter, or reducing the length by more than 1.5 meters, would have significant consequences for the acceptance. Reducing the \dword{hpgtpc} diameter from its nominal 5~meters to a slightly smaller 4.5~meters while increasing its length in the direction transverse to the neutrino beam improves acceptance, since the \dword{hpgtpc} would better match the 7-meter width of the \dword{lartpc} in the transverse direction. It should be noted, however, that reducing the diameter may actually result in a higher-cost \dword{mpd}, since the ALICE TPC readout chambers would not be used in the configuration for which they were designed. Increasing the length of the \dword{hpgtpc} is feasible, but will require additional studies of high-voltage stability in the gas, since HV breakdown in gas is proportional to the pressure (in the absence of field enhancements). The \dword{hpgtpc} operating pressure will be nominally 10 times that of ALICE, so extending the drift distance from 2.5 meters to 3 meters while keeping the same drift velocity will require raising the drift HV by approximately 20~kV, as the simple estimate below illustrates.
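For illustration, if the drift field is held at roughly the ALICE value of about 400\,V/cm, the additional cathode voltage required for the longer drift is simply the field times the added drift length (a rough estimate, not an engineering specification):
\[
\Delta V_{\rm drift} \simeq E_{\rm drift}\,\Delta L \approx 400\,\mathrm{V/cm} \times (300-250)\,\mathrm{cm} = 20\,\mathrm{kV}.
\]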
\subsubsection{\dshort{mpd} performance}
The expected performance of the \dword{mpd} is summarized in Table~\ref{tab:TPCperformance}. Details of the \dword{hpgtpc} performance are based upon experience from operation of the PEP-4~\cite{PEP4_results_Layter,PEP4_Stork,Madaras:1982cj} and ALICE~\cite{Alessandro:2006yt} time projection chambers. Performance of the \dword{ecal} is based on experience from operation of similar \dwords{ecal} and on simulations.
\begin{dunetable}[\dshort{mpd} performance parameters]{l|c|c}{tab:TPCperformance}{Expected \dword{mpd} performance, extrapolated from \dword{alice}}
Parameter & Value & units \\ \toprowrule
$\sigma_x$ & 250 & $\mu$m\\ \colhline
$\sigma_y$ & 250 & $\mu$m\\ \colhline
$\sigma_z$ & 1500 & $\mu$m\\ \colhline
$\sigma_{r\phi}$ & <1000 & $\mu$m\\ \colhline
Two-track separation & 1 & cm \\ \colhline
Angular resolution & 2-4 & mrad \\ \colhline
$\sigma$($dE/dx$) & 5 & \% \\ \colhline
$\sigma_{p_T}/p_T$ & 0.7 & \% (\SIrange{10}{1}{GeV/c})\\ \colhline
$\sigma_{p_T}/p_T$ & 1-2 & \% (\SIrange{1}{0.1}{GeV/c})\\ \colhline
Energy scale uncertainty & $\lesssim$ 1 & \% (dominated by $\delta p/p$) \\ \colhline
Charged particle detection thresh. & 5 & MeV (K.E.)\\ \colhline
ECAL energy resolution & 5-7/$\sqrt{E/{\rm{GeV}}}$ & \% \\ \colhline
ECAL pointing resolution & $\simeq 6$ at 500 MeV & degrees\\
\end{dunetable}
%
\subsubsubsection{Track Reconstruction and Particle Identification}
%The multi-purpose detector will provide unparalleled event reconstruction capability.
The combination of very high resolution magnetic analysis and superb particle identification from the \dword{hpgtpc}, coupled with a high-performance \dword{ecal}, will lead to excellent event reconstruction capabilities and potent tools for neutrino event analysis.
As an example of this capability, the top panel of Figure~\ref{fig:GAr} shows a $\nu_e + {}^{(N)}\mathrm{Ar} \rightarrow e^- + \pi^+ + n + {}^{(N-1)}\mathrm{Ar}$ event in the \dword{hpgtpc} with automatically reconstructed tracks. The same event was simulated in a \dword{fd} \dword{spmod}
and is shown in the bottom panel of Figure~\ref{fig:GAr}.
\begin{dunefigure}[Track-reconstructed $\nu_e$ \dshort{cc} event in the \dshort{hpgtpc}]{fig:GAr}
{(Top) Track-reconstructed $\nu_e$ \dword{cc} event in the \dword{hpgtpc}, simulated and reconstructed with GArSoft. The annotations are from \dword{mc} truth. (Bottom) The same $\nu_e$ \dword{cc} event, but simulated in a \dword{spmod} using \dword{larsoft}. The topmost blue panel shows the collection-plane view, the middle blue panel shows the $V$ view, and the bottom blue panel shows the $U$ view. Wire number increases on the horizontal axes and sample time along the vertical axes. The wire number in the collection view is labeled on the top of the panel, while the $V$ and $U$ wire numbers are below their respective panels. Simulated \dword{adc} values are indicated by the colors. The curve in the bottom-most panel is a simulated waveform from a collection-plane wire. The annotations are from \dword{mc} truth.}
\includegraphics[width=0.99\textwidth]{graphics/Garsoft_evt.png}
\includegraphics[width=0.99\textwidth]{graphics/nuenppinliquid.png}
\end{dunefigure}
Since important components of the hardware and design for the \dword{hpgtpc} are taken from or duplicated from the ALICE detector, the ALICE reconstruction is a useful point of reference in this discussion.
Track reconstruction in ALICE is achieved by combining hits recorded on the ROC pads into tracks that follow the trajectory a charged particle traveled through the TPC drift volume. The \dword{hpgtpc} is oriented so that the neutrino beam is perpendicular to the magnetic field, which is the most favorable orientation for measuring charged particles traveling along the neutrino beam direction.
The GArSoft simulation and reconstruction package borrows heavily from \dword{larsoft}, and is based on the {\it art} event processing framework and {\tt GEANT4}. It is designed to reconstruct tracks with full $4\pi$ acceptance. GArSoft simulates a 10-atmosphere gaseous argon detector with readout chambers filling in the central holes in the ALICE geometry. GArSoft's tracking efficiency has been evaluated in a large sample of \dword{genie} $\nu_\mu$ events interacting in the TPC gas at least \SI{40}{cm} from the edges, generated using the optimized \dword{lbnf} forward horn current beam spectra. The efficiency
for reconstructing tracks associated with pions and muons as a function of track momentum $p$ is shown in Figure~\ref{fig:garsoft_efficiency}. The efficiency is above 90\% for tracks with $p>40$~MeV/$c$, and it steadily rises with increasing momentum.
Also shown is the efficiency for reconstructing all charged particles with $p>\SI{200}{MeV/c}$ as a function of $\lambda$, the track angle with respect to the center plane. The tracking efficiency for protons is shown in Figure~\ref{fig:TEpr} as a function of kinetic energy, $T_p$. Currently, the tracking works well down to $T_p \simeq \SI{20}{MeV}$. For $T_p < \SI{20}{MeV}$, a machine-learning algorithm is in development, targeting short tracks near the primary vertex. This algorithm, although currently in a very early stage of development, is already showing good performance, and efficiency improvements are expected with more development. The machine learning algorithm is described in Section~\ref{sec:TPC_ML}.
The ALICE detector, as it runs at the LHC, typically operates with particle densities ranging from 2000 to 8000 charged particles per unit rapidity ($dN/dy$) for central Pb-Pb interactions~\cite{Cheshkov:2006ym}. The expected particle densities in the DUNE neutrino beam will be much lower and less of a challenge for the reconstruction.
\begin{dunefigure}[Efficiency of track finding in the HPgTPC]{fig:garsoft_efficiency}
{(Left) The efficiency to find tracks in the \dword{hpgtpc} as a function of momentum, $p$, for tracks in a sample of \dword{genie} events simulating \SI{2}{GeV} $\nu_\mu$ interactions in the gas, using GArSoft. (Right) The efficiency to find tracks as a function of $\lambda$, the angle with respect to the center plane, for tracks with $p>200\,$MeV/$c$.}
\includegraphics[width=0.49\textwidth]{graphics/effvsp.png}\includegraphics[width=0.49\textwidth]{graphics/effvslambdagt200MeV.png}
\end{dunefigure}
\begin{dunefigure}[Tracking efficiency for protons in the HPgTPC]{fig:TEpr}{Tracking efficiency for protons in the \dword{hpgtpc} as a function of kinetic energy.}
\includegraphics[width=0.65\columnwidth]{graphics/effvske.png}
\end{dunefigure}
ALICE chose to use neon, rather than argon, for the primary gas in their first run; the decision was driven by a number of factors, but two-track separation capability was one of the primary motivations due to the extremely high track multiplicities in the experiment. Neon performs better than argon in this regard. A better comparison for the \dword{hpgtpc}'s operation in DUNE is the two-track separation that was obtained in PEP4~\cite{PEP4_Stork}. PEP4 ran an 80-20 mixture of Ar-CH$_4$ at 8.5~atmospheres, yielding a two-track separation performance of \SI{1}{cm}.
In ALICE, the ionization produced by charged particle tracks is sampled by the TPC pad rows (there are 159 pad rows in the TPC) and a truncated mean is used for the calculation of the PID signal. Figure~\ref{fig:ALICE_dEdx} (left) shows the ionization signals of charged particle tracks in ALICE for pp collisions at $\sqrt{s} = 7$~TeV. The different characteristic bands for various particles are clearly visible and distinct at momenta below a few GeV. When repurposing ALICE as the \dword{hpgtpc} component of the \dword{mpd}, better performance is expected for particles leaving the active volume, since the detector will be operating at higher pressure (10~atmospheres vs. the nominal ALICE 1~atmosphere operation), resulting in ten times more ionization per unit track length available for collection. Figure~\ref{fig:ALICE_dEdx} (right) shows the charged particle identification for PEP-4/9~\cite{Grupen:1999by}, a higher pressure gas TPC that operated at 8.5~atmospheres, which is very close to the baseline argon gas mixture and pressure of the DUNE \dword{hpgtpc}.
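The following minimal Python sketch illustrates the truncated-mean $dE/dx$ estimator described above; the 70\% truncation fraction and the toy ionization samples are illustrative choices, not the ALICE or \dword{hpgtpc} tuning.
\begin{verbatim}
# Minimal sketch of a truncated-mean dE/dx estimator: the highest-
# ionization samples are discarded to suppress Landau fluctuations
# before averaging.  The 70% truncation fraction and toy samples are
# illustrative choices only.
import numpy as np

def truncated_mean_dedx(samples, keep_fraction=0.7):
    """Average of the lowest keep_fraction of per-pad-row dE/dx samples."""
    ordered = np.sort(np.asarray(samples, dtype=float))
    n_keep = max(1, int(round(keep_fraction * ordered.size)))
    return ordered[:n_keep].mean()

# Toy samples with an exaggerated high-side tail, one per pad row:
rng = np.random.default_rng(42)
toy = rng.normal(2.0, 0.2, size=159) + rng.exponential(0.5, size=159)
print(truncated_mean_dedx(toy))  # tail-suppressed estimate of dE/dx
\end{verbatim}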
\begin{dunefigure}[ALICE and PEP-4 $dE/dx$-based particle identification as a function of momentum]{fig:ALICE_dEdx}
{Left: ALICE TPC $dE/dx$-based particle identification as a function of momentum (from~\cite{ALICE_Lippmann}). Right: PEP-4/9 TPC (80:20 Ar-CH4, operated at 8.5~Atm, from~\cite{Grupen:1999by}) $dE/dx$-based particle identification.}
\includegraphics[width=0.49\textwidth]{graphics/ALICE_TPC_dEdx_Lippmann_2012.png}
\includegraphics[width=0.49\textwidth]{graphics/PEP4-TPC-80Ar-20CH4-8_5atm_dEdx.png}
\end{dunefigure}
\subsubsubsection{Momentum and Angular Resolution for Charged Particles}
%
The ability to determine the sign of the charge of a particle in the \dword{hpgtpc} tracking volume is limited by the spatial resolution of the measured drift points in the plane perpendicular to the magnetic field, as well as multiple Coulomb scattering (MCS) in the gas. For a fixed detector configuration, the visibility of the curvature depends on the particle's $p_{\rm{T}}$, its track length in the plane perpendicular to the field, and the number and proximity of nearby tracks. Because primary vertices are distributed throughout the tracking volume, the distribution of the lengths of charged-particle tracks is expected to start at very short tracks, unless sufficient \dword{fv} cuts are made to ensure that enough active volume remains to determine the sign of a particle's charge. The kinetic energies of particles that leave short tracks and stop in the detector will be better measured from their tracks' lengths than from their curvatures. Protons generally stop before their tracks curl around, but low-energy electrons loop many times before coming to rest in the gas.
Within the \dword{fv} of the \dword{hpgtpc}, charged particles can be tracked over the full 4$\pi$ solid angle. Even near the central electrode, tracking performance will not be degraded, because the electrode is very thin (25\,$\mu$m of mylar). Indeed, tracks crossing the cathode provide an independent measurement of the event time, since the two portions of such a track will only line up when the correct event time is assumed in computing the drift distances. This 4$\pi$ coverage applies to all charged particles. ALICE ran with a central field of 0.5~T, and its momentum resolution from $p$--Pb data~\cite{Abelev:2014ffa} is shown in Figure~\ref{fig:ALICE_MOMres}.
\begin{dunefigure}
[The TPC stand-alone p$_T$ resolution in ALICE for $p$--Pb collisions]
{fig:ALICE_MOMres}
{The black squares show the TPC stand-alone p$_T$ resolution in ALICE for $p$--Pb collisions. From Ref.~\cite{Abelev:2014ffa}.}
\includegraphics[width=0.65\columnwidth]{ALICE_mom_res.png}
\end{dunefigure}
%
The momentum resolution of muons in neutrino scatters using the GArSoft simulation and reconstruction package is shown in Figure~\ref{fig:garsoftpamuonres1}, using a sample of 2~GeV $\nu_\mu$~\dword{cc} events. This resolution differs from ALICE's achieved resolution due to the higher pressure, the heavier argon nucleus compared with neon, the non-centrality of muons produced throughout the detector, and the fact that the GArSoft simulation and reconstruction tools have yet to be fully optimized. The momentum resolution achieved for muons is $\Delta p/p = 4.2$\%, and is expected to improve with optimization of the simulation and reconstruction tools. The 3D angular resolution of muons is approximately 0.8~degrees, as shown in Figure~\ref{fig:garsoftpamuonres1}.
\begin{dunefigure}[Momentum and angular resolutions for muons in GArSoft]{fig:garsoftpamuonres1}
{Left: the momentum resolution for reconstructed muons in GArSoft, in a sample of \SI{2}{GeV} $\nu_\mu$~\dword{cc} events simulated with \dword{genie}. The Gaussian fit to the $\Delta p/p$ distribution has a width of 4.2\%. Right: the \threed angular resolution for the same sample of muons in GArSoft.}
\includegraphics[width=0.49\columnwidth]{graphics/dpmuon.png}\includegraphics[width=0.49\columnwidth]{graphics/anglediffmuon.png}
\end{dunefigure}
\subsubsubsection{Machine Learning for Low Energy Protons}\label{sec:TPC_ML}
As a complement to the existing reconstruction, an initial exploration of several machine learning methods has been performed.
The main goal of this effort has been to reconstruct very low energy protons and pions, for which traditional tracking methods might struggle.
While this study is still at a very early stage, a fully connected multi-layer perceptron (MLP) has been used successfully both to regress the kinetic energy of protons and pions and to classify between the two species. Additionally, a Random Sample Consensus (RANSAC)-based clustering algorithm has been developed to group hits into short tracks for events containing multiple particles.
Together, these two algorithms can be used to measure the kinetic energy of multiple particles in a single event.
As a demonstration, a test sample of multiple proton events was generated where each event has:
\begin{itemize}
\item 0-4 protons, number determined randomly with equal probabilities
\item all protons share a common starting point (vertex) whose position in the TPC is randomly determined
\item each proton is assigned independently and randomly:
\begin{itemize}
\item a direction in space (isotropically distributed)
\item a scalar momentum between 0 and 200 MeV/$c$ (uniformly distributed)
\end{itemize}
\end{itemize}
The RANSAC-based clustering algorithm assigns individual hits to proton-candidate hit sets, which are passed to an MLP, trained on a set of single-proton events in the TPC, to predict the kinetic energy. Figure~\ref{fig:ML_residuals} shows the kinetic energy residuals, the reconstruction efficiency, and a 2D scatter plot of the measured versus true kinetic energy for each individual proton with kinetic energy between 3 and \SI{15}{MeV} in the test sample. Additionally, the residual for the total kinetic energy in each multi-proton event is given. A simplified sketch of this two-step approach is given below.
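The following Python sketch illustrates the spirit of this two-step approach using off-the-shelf scikit-learn components rather than the actual GArSoft-based implementation; the clustering parameters, the MLP features (track length and total charge), and the toy training data are illustrative assumptions only.
\begin{verbatim}
# Illustrative sketch (not the GArSoft implementation): RANSAC line fits
# peel hits off into track candidates, and an MLP regressor maps simple
# per-candidate features to a kinetic-energy estimate.  Features and toy
# data are assumptions made for this example only.
import numpy as np
from sklearn.linear_model import RANSACRegressor, LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def peel_track_candidates(hits_xy, min_hits=5, max_tracks=4):
    """Iteratively fit y(x) lines with RANSAC, splitting hits into candidates."""
    remaining = hits_xy.copy()
    candidates = []
    while len(remaining) >= min_hits and len(candidates) < max_tracks:
        ransac = RANSACRegressor(LinearRegression(), residual_threshold=0.5)
        ransac.fit(remaining[:, :1], remaining[:, 1])
        inliers = ransac.inlier_mask_
        candidates.append(remaining[inliers])
        remaining = remaining[~inliers]
    return candidates

# Toy hits from two crossing "tracks" sharing a vertex at the origin.
t = np.linspace(0.0, 10.0, 40)
hits = np.vstack([np.column_stack([t,  1.5 * t + rng.normal(0, 0.1, 40)]),
                  np.column_stack([t, -0.8 * t + rng.normal(0, 0.1, 40)])])
print(len(peel_track_candidates(hits)), "track candidates found")

# Toy MLP regression: kinetic energy from (track length, total charge).
length  = rng.uniform(1.0, 20.0, 500)                 # cm, toy values
charge  = 50.0 * length + rng.normal(0.0, 10.0, 500)  # arbitrary units, toy
ke_true = 0.7 * length + 0.01 * charge                # toy MeV relation
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(np.column_stack([length, charge]), ke_true)
print(mlp.predict([[10.0, 500.0]]))  # KE estimate for one toy candidate
\end{verbatim}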
\begin{dunefigure}[Machine learning residuals for protons in the \dshort{mpd}]{fig:ML_residuals}
{(Top left) Kinetic energy residual, (Top right) measured KE vs. true KE, and (Bottom right) reconstruction efficiency for individual protons with \SIrange{3}{15}{MeV} KE in the test set. (Bottom left) Residual of the total kinetic energy of all protons in each event in the test sample.}
\includegraphics[width=0.49\textwidth]{graphics/residuals_hist_KE.png}
\includegraphics[width=0.49\textwidth]{graphics/pred_true_2D_KE_hist.png}
\vspace{1mm}
\includegraphics[width=0.49\textwidth]{graphics/total_KE_residual.png}
\includegraphics[width=0.49\textwidth]{graphics/efficiency_wasgood_KE.png}
\end{dunefigure}
%
%
\subsubsubsection{\dword{ecal} Performance}
The expected performance of the calorimeter was studied with Geant4-based \cite{Agostinelli:2002hh} simulations and GArSoft \cite{GArSoftwebsite}. In the following, a first scenario, referred to as scenario A (shown by the red curve in the figures below), in which the \dword{ecal} is located inside the pressure vessel, is considered. The barrel geometry consists of 55 layers with the following layout:
\begin{itemize}
\item 8 layers of \SI{2}{\mm} copper + \SI{10}{\mm} of $2.5\times2.5$ cm$^2$ tiles + \SI{1}{\mm} FR4
\item 47 layers of \SI{4}{\mm} copper + \SI{10}{\mm} of cross-strips \SI{4}{\cm} wide
\end{itemize}
For the present studies, copper has been chosen as the absorber material, since initial studies have shown that it provides a good compromise between calorimeter compactness, energy resolution, and angular resolution. However, the choice of absorber material is still under study. The granularity, scintillator thickness, and arrangement of tiles and strips are still being optimized in order to reduce the number of readout channels while maintaining the calorimeter performance. Two alternative scenarios are shown below: scenario B (black curve) has a different arrangement of the tile and strip layers, and scenario C (blue curve) has thinner absorbers in the front layers.
Digitization effects are accounted for by introducing an energy threshold of 0.25~MIPs ($\sim$\SI{200}{\keV}) for each detector cell/strip, a Gaussian smearing of \SI{0.1}{\MeV} for the electronic noise, SiPM saturation effects, and single photon statistics.
\paragraph{Energy Resolution} The energy resolution is determined by fitting the visible energy with a Gaussian. Converted photons are rejected based on Monte Carlo information. A fit function of the form $\frac{\sigma_{E}}{E} = \frac{A}{\sqrt{E}} \oplus \frac{B}{E} \oplus C$ is used, where $A$ denotes the stochastic term, $B$ the noise term, $C$ the constant term, and $E$ is in GeV. Figure~\ref{fig:EResARes_NDECAL} shows the energy resolution as a function of the photon energy. For scenario A, shown in red, the energy resolution is around $\frac{6.7\%}{\sqrt{E}}$. With further optimization, it is believed that an energy resolution of (or below) $\frac{6\%}{\sqrt{E}}$ is achievable. It should be noted that because non-uniformities, dead cells, and other such effects are not included in the simulation, the energy resolution is slightly optimistic.
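The quadrature fit quoted above can be illustrated with the short Python sketch below; the resolution points are invented placeholders used only to make the example self-contained, not simulated \dword{ecal} results.
\begin{verbatim}
# Minimal sketch of the resolution fit: sigma_E/E is fitted with the
# quadrature form A/sqrt(E) (+) B/E (+) C using scipy.  The data points
# below are invented placeholders, not simulated ECAL results.
import numpy as np
from scipy.optimize import curve_fit

def resolution_model(E, A, B, C):
    """sigma_E/E with stochastic (A), noise (B) and constant (C) terms; E in GeV."""
    return np.sqrt((A / np.sqrt(E)) ** 2 + (B / E) ** 2 + C ** 2)

E   = np.array([0.2, 0.5, 1.0, 2.0, 5.0])            # GeV, placeholder points
res = np.array([0.16, 0.10, 0.072, 0.053, 0.037])    # placeholder sigma_E/E
popt, _ = curve_fit(resolution_model, E, res, p0=[0.07, 0.01, 0.01])
print("A, B, C =", popt)
\end{verbatim}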
\begin{dunefigure}[Energy and angular resolutions for photons in the \dshort{mpd} ECAL]{fig:EResARes_NDECAL}
{Left: energy resolution in the barrel as a function of the photon energy for three \dword{ecal} scenarios. The energy resolution is determined by a Gaussian fit to the visible energy. Right: the angular resolution in the barrel as a function of the photon energy for the three \dword{ecal} scenarios. The angular resolution is determined by a Gaussian fit to the 68\% quantile distribution. For both figures, the scenario A is shown by the red curve, scenario B by the black curve and scenario C by the blue curve. The fit function is of the form $\frac{\sigma_{E}}{E} = \frac{A}{\sqrt{E}} \oplus \frac{B}{E} \oplus C$.}
\includegraphics[width=0.45\textwidth]{graphics/Comparison_Setups_EnergyResolution.pdf}
\includegraphics[width=0.45\textwidth]{graphics/Comparison_Setups_AngularResolution.pdf}
\end{dunefigure}
\paragraph{Angular Resolution} The angular resolution of the calorimeter has been determined using a principal component analysis (PCA) of all reconstructed calorimeter hits. The direction is taken as the first eigenvector (main axis) of all the reconstructed hits. The angular resolution is determined by taking the 68\% quantile of the reconstructed angle distribution and fitting a Gaussian distribution. The mean of the Gaussian is taken as the angular resolution, and its variance as the uncertainty. Figure~\ref{fig:EResARes_NDECAL} shows the angular resolution as a function of the photon energy. In scenario A, shown in red, an angular resolution of $\frac{\SI{3.85}{\degree}}{\sqrt{E}} \oplus \SI{2.12}{\degree}$ can be achieved. This can potentially be further improved with a different arrangement of the tile and strip layers, an optimization of the absorber thickness, and an improved reconstruction method. However, the requirements will be further refined and will impact the detector optimization. The angular resolution is mainly driven by the energy deposits in the first layers of the \dword{ecal}. Using an absorber with a large $X_{0}$ creates an elongated shower that helps in determining the direction of the shower. In general, higher granularity leads to a better angular resolution; however, studies have shown that there is no additional benefit to having cell sizes below $2\times2$ cm$^2$ \cite{Emberger:2018pgr}.
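The PCA-based direction estimate described above can be sketched in a few lines of Python; the toy hit positions below are illustrative, and the optional energy weighting is an assumption of this example.
\begin{verbatim}
# Minimal sketch of a PCA direction estimate: the shower axis is taken as
# the principal eigenvector of the covariance matrix of the reconstructed
# hit positions (optionally energy-weighted).  Toy hits are illustrative.
import numpy as np

def shower_axis(hit_xyz, weights=None):
    """Return (centroid, unit direction) from a PCA of hit positions."""
    hits = np.asarray(hit_xyz, dtype=float)
    centroid = np.average(hits, axis=0, weights=weights)
    cov = np.cov((hits - centroid).T, aweights=weights)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]    # principal component
    return centroid, axis / np.linalg.norm(axis)

# Toy shower: hits scattered around a line along (1, 0.2, 0.1).
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 30.0, 200)[:, None]
hits = t * np.array([1.0, 0.2, 0.1]) + rng.normal(0.0, 0.5, (200, 3))
print(shower_axis(hits)[1])
\end{verbatim}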
\paragraph{Neutron detection} The \dword{ecal} is sensitive to neutrons because the scintillator contains hydrogen. Previous simulation studies showed that a detection efficiency above 60\% can be achieved for neutron energies greater than \SI{50}{\MeV}. However, the energy measurement is not very accurate (around 50-60\% below \SI{600}{\MeV}) \cite{Emberger:2018pgr}. Other detection methods, such as time of flight (ToF), could be used to improve the neutron energy measurement by precisely measuring the hit time of the neutron and its travel distance in the calorimeter. This is currently under study.
\paragraph{$\pi^0$ reconstruction} For the identification of neutral pions, both the energy and the angular resolution are relevant. In an initial study, the position of the neutral pion is determined using a $\chi^2$-minimization procedure that takes into account the reconstructed energies of the two photons and the reconstructed directions of the photon showers \cite{Emberger:2018pgr}. The location of the decay vertex of the neutral pion can be determined with an accuracy of \SIrange{10}{40}{\cm}, depending on the distance from the downstream calorimeter and the $\pi^0$ kinetic energy. This is sufficient to associate the $\pi^0$ with an interaction in the \dword{hpgtpc}, since the gas will have less than one neutrino interaction per beam spill.
%However, this study needs to be reviewed again with the new design.
The pointing accuracy to the pion decay vertex may be further improved by a more sophisticated analysis technique and by using precision timing information, and is a subject of current study.
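As an illustration of the kind of $\chi^2$-style vertex scan described above (and not the actual analysis code), the Python sketch below scores candidate vertices along the beam axis by how well the implied diphoton opening angle reproduces the $\pi^0$ mass; the shower positions and energies are toy inputs.
\begin{verbatim}
# Illustrative sketch of a chi^2-style pi0 vertex scan (not the actual
# analysis code): candidate vertices along the z axis are scored by how
# well the implied diphoton opening angle reproduces the pi0 mass.
# Shower positions and energies below are toy inputs.
import numpy as np

M_PI0 = 134.977  # MeV/c^2

def diphoton_mass(vertex, shower1, shower2, e1, e2):
    d1 = shower1 - vertex
    d2 = shower2 - vertex
    cos_theta = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.sqrt(max(2.0 * e1 * e2 * (1.0 - cos_theta), 0.0))

def scan_vertex(shower1, shower2, e1, e2, z_candidates):
    """Return the candidate vertex z minimising (m_gg - m_pi0)^2."""
    scores = [(diphoton_mass(np.array([0.0, 0.0, z]), shower1, shower2,
                             e1, e2) - M_PI0) ** 2 for z in z_candidates]
    return z_candidates[int(np.argmin(scores))]

# Toy example: two ECAL showers (cm) with 200 and 300 MeV photons.
s1 = np.array([150.0, 60.0, 20.0])
s2 = np.array([150.0, -40.0, 35.0])
print(scan_vertex(s1, s2, 200.0, 300.0, np.linspace(-100.0, 100.0, 201)))
\end{verbatim}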
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The DUNE-PRISM Program}
\label{sec:appx-nd:DP}
The goals of the off-axis measurements are twofold:
\begin{itemize}