
10 October 2024

Mashy Green edited this page Oct 18, 2024 · 1 revision

Notes from meeting with Ed to run through the code

Using the test in: Grid/tests/sp2n/Test_hmc_Sp_WF_2_Fund_3_2AS.cc

  • TheHMC.Run();

  • GenericSpHMCRunnerHirep is an HMCWrapperTemplate wrapper object (HMC = Hamiltonian Monte Carlo). Location: Grid/qcd/hmc/GenericHMCrunner.h

  • The Run() method calls void Runner (GenericHMCrunner.h:184).

    • Creates the Field class U(Ugrid)
    • Does some more setup with HybridMonteCarlo<TheIntegrator> HMC(...)
    • calls:
  • HMC.evolve() This calls void evolve(void) (Grid/qcd/hmc/HMC.h:248)

    • The main outer loop over the trajectories.
    • calls:
  • evolve_hmc_step()

    • Returns the difference in H (H1-H0), where H is the Hamiltonian (i.e. the total energy?) of the field (we are still on the lattice, not individual points).
    • calls:
  • TheIntegrator.integrate(U)

    • Representation policy (I think this is defined in the test).
    • calls:
  • void integrate(Field& U)

    • Location: Grid/qcd/hmc/integrators/Integrator.h
    • Steps of MD (molecular dynamics).
      • Pseudo molecular dynamics in the spectral space? Steps are in MD time; this is not the same as physical time, so not the time dimension of the lattice.
    • calls:
  • this->step(U, 0, first_step, last_step) is the virtual method for the time integration (step method) of the pseudo-MD. The scheme is a multi-level integration scheme that uses recursion.

    • Implementation in Grid/qcd/hmc/integrators/Integrator_algorithm.h
    • Three step implementations exist: LeapFrog, MinimumNorm2, and ForceGradient; the choice is defined in the test.
    • Using MinimumNorm2.
    • calls:
  • this->update_P(...)

    • The momentum update, performed within the multi-level time-stepping scheme.
    • void update_P(MomentaField& Mom, Field& U, int level, double ep)
    • calls:
  • as[level].actions.at(a)->deriv(Smearer, force)

    • Smearer is the smearing policy - what is it?
    • Used for different actions (the type of action(s) is defined in the test, I think with TwoFlavourPseudoFermionAction?)
    • deriv(...) is a virtual method for derivatives, implemented in the actions.
    • In our case, this is in Grid/qcd/action/pseudofermion/TwoFlavourEvenOdd.h (not sure why…)
  • virtual void deriv(const GaugeField &U,GaugeField & dSdU)

    • calls:
  • DerivativeSolver(Mpc,PhiOdd,X)

    • Not sure how, but this is implemented in:
  • void MpcDeriv(GaugeField &Force,const FermionField &U,const FermionField &V)

    • Location: Grid/qcd/action/pseudofermion/EvenOddSchurDifferentiable.h
  • this->_Mat.Meoooe(V,temp1)

    • _Mat defined in fermion/SchurDiagTwoKappa -> algorithms/LinearOperator.h
    • Inherited from SchurDiagMooeeOperator (in LinearOperator.h), templated on Matrix.
    • This is instantiated in the implementation policy

Skipped some stuff as we weren’t sure of the link. Restarting at:

  • WilsonFermionImplementation.h

  • Kernels::DhopDirKernel(st, U, st.CommBuf(), Ls, B.Grid()->oSites(), B, Btilde, mu, gamma)

    • Called in a loop providing different gamma values (investigate what this is).
    • Implemented in WilsonKernelsImplementation.h
  • void WilsonKernels<Impl>::DhopDirKernel() (line 369)

    • U is the Gauge Field
    • A lot of autoView calls.
    • LoopBody(Dir) defined here.
    • Defines the lambda for the loop using accelerator_for(ss, Nsite, Simd::Nsimd(), {...})
    • XYZp/m means the plus/minus directions; sp is spinor.
    • Generic stencils are defined in the same file, starting at line 57.

From here we got tired and things are less clear…

  • WilsonImpl.h (Not sure what we looked at here, examples of accelerator_for?)

  • Uses coalescedRead and coalescedWrite

    • Location: tensors/Tensor_SIMT.h
    • Suspected to be a movement of memory between global GPU and shared GPU memory (VRAM to L1 cache).
  • Grid/threads/Accelerator.h

  • void LambdaApply()

  • accelerator_for is a wrapper for LambdaApply with a for loop.

  • Cshift_common.h

    • Suspected of executing Copy_plane kernels in LambdaApply
  • Syntax similar to: accelerator_for(i, end, Nsimd(), { coalescedWrite(...); coalescedRead(...); })

    • Location: tensors/Tensor_SIMT.h
  • void coalescedWrite() ... insertLane

    • Location: tensors/Tensor_extract_merge.h

Notes:

  • Data structures defined in the Tensor directory.

  • New implementation of WilsonKernelsImplementation.h (this is where we can manipulate the gauge field; the links have the symplectic block structure [A, B; -B*, A*]).

  • Preference to avoid creating new data structures; instead, manipulate the existing ones at the WilsonKernelsImplementation.h level.