
Using pre-computed velocity solutions in one-way coupled physics #202

Open
1 task done
zasexton opened this issue Mar 2, 2024 · 16 comments
Labels
enhancement New feature or request

Comments

@zasexton
Contributor

zasexton commented Mar 2, 2024

Problem

The general problem: We wish to use the (potentially time-varying) velocity solutions obtained from a previous simulation within a subsequent physics (namely, scalar advection-diffusion).

Solution

This has been done previously in svFSI on side branches, where residence time calculations or thrombosis modeling have been desired. See Gutierrez et al. 2021.

Although it is not clear whether this exact framework would work well for a more object-oriented svFSIplus, the following scheme is generally implemented in svFSI:

  1. A True/False flag is selected for Use pre-computed velocity
  2. If True, the following subroutine is called in svFSI to read in the velocity solution vector fields across some set of time points. This subroutine can be found in the VTKXML.f file of the attached svFSI source code. Note that this will not be found on the main branches of svFSI, as this functionality was never formally included.
!**********************************************************************

      SUBROUTINE READVELOCITYVTU(fName)
      
      USE COMMOD
      USE LISTMOD
      USE ALLFUN
      USE vtkXMLMod
      
      IMPLICIT NONE

      CHARACTER(LEN=STDL) :: fName, sName
      
      TYPE(vtkXMLType) :: vtu
      
      INTEGER :: iStat, iEq, iOut, iM, l, s, e, a, b, Ac, nNo, oGrp
      INTEGER :: i, j, iTS, iVel, zp
      CHARACTER(LEN=stdL) :: varName, velName
      REAL(KIND=8), ALLOCATABLE :: tmpS(:,:), tmpGS(:,:)
      i = (lStep - fStep)/iStep + 1
      ALLOCATE (allU(nsd,gtnNo,i))
      iStat = 0
      CALL loadVTK(vtu, fName, iStat)
      IF (iStat .LT. 0) err = "VTU file read error (init)"
      
      CALL getVTK_numPoints(vtu, nNo, iStat)
      IF (iStat .LT. 0) err = "VTU file read error (num points)"
      IF (nNo .NE. SUM(msh(:)%gnNo)) 
     2   err = "Incompatible mesh and "//TRIM(fName)
      j=0
      ALLOCATE(tmpGS(nsd,gtnNo))

      iStat = -1
      iVel = 0
      zp = 0

      IF (fStep .GE. 1000) THEN
         sName = STR(fStep)
      ELSE
         WRITE(sName,'(I3.3)') fStep
      END IF

      DO WHILE(iStat .LT. 0)
         iVel = iVel + 1
         iStat = 0
         SELECT CASE (iVel)
         CASE(1)
            velName = "velocity"
         CASE(2)
            velName = "NS_Velocity"
         CASE(3)
            velName = "FS_Velocity"
         CASE(4)
            zp = 1
            WRITE(sName,'(I4.4)') fStep
            velName = "velocity"
         CASE(5)
            err ="VTU file read error (point data)"
         END SELECT
         varName = TRIM(velName)//"_0"//TRIM(ADJUSTL(sName))
         IF (iVel .EQ. 4) THEN
            PRINT *, "Try format <"//TRIM(velName)//"_00>"
         ELSE
            PRINT *, "Try format <"//TRIM(velName)//"_0>"
         END IF
         CALL getVTK_pointData(vtu, TRIM(varName), tmpGS, iStat)
      END DO


      DO iTS=fStep, lStep, iStep
         IF (iTS .GE. 1000) THEN
            sName = STR(iTS)
         ELSEIF (zp .EQ. 1) THEN
            WRITE(sName,'(I4.4)') iTS
         ELSE
            WRITE(sName,'(I3.3)') iTS
         END IF

         varName = TRIM(velName)//"_0"//TRIM(ADJUSTL(sName))
         CALL getVTK_pointData(vtu, TRIM(varName), tmpGS, iStat)
         IF (iStat .LT. 0) err ="VTU file read error (point data)"

         j=j+1
         allU(:,:,j) = tmpGS

      END DO
      CALL flushVTK(vtu)
      
      RETURN
      END SUBROUTINE READVELOCITYVTU
!**********************************************************************
  3. The read velocity solutions are distributed across the processors. This code snippet is taken from the DISTRIBUTE subroutine in DISTRIBUTE.f
!     Distributing allU to processors
      IF (velFileFlag) THEN
         IF (cm%mas()) THEN
            ALLOCATE(tmpU(nsd,gtnNo,ntpoints))
            tmpU = allU
            DEALLOCATE(allU)
         ELSE
            ALLOCATE(tmpU(0,0,0))
         END IF
         ALLOCATE(allU(nsd,tnNo, ntpoints))
         allU = LOCAL(tmpU)
         DEALLOCATE(tmpU)
      END IF

!     Distributing tagFile node info to processors
      flag = (ALLOCATED(tagRT)) 
      CALL cm%bcast(flag)
      IF (flag) THEN     
         IF (cm%mas()) THEN
            ALLOCATE(tmpRT(gtnNo))
            tmpRT = tagRT
            DEALLOCATE(tagRT)
         ELSE
            ALLOCATE(tmpRT(0))
         END IF
         ALLOCATE(tagRT(tnNo))
         tagRT = LOCAL(tmpRT)
         DEALLOCATE(tmpRT)
      END IF
  4. In the predictor step of the time iteration loop, the new solution values are obtained from the allU solutions
!     Predictor step
         CALL PICP
         CALL SETBCDIR(An, Yn, Dn)
         IF (velFileFlag) THEN
             n = MOD(cTS,ntpoints*iStep)

             n1 = n/iStep + 1
             n2 = n1 + 1

             i = (lStep - fStep)/iStep + 2
             IF(n2 .EQ. i) n2 = 1

             alpha = REAL(MOD(n,iStep),8)/REAL(iStep,8)
             Yn(1:nsd,:) = allU(:,:,n1)*(1D0 - alpha) + 
     2          allU(:,:,n2)*alpha
         END IF 
  5. The physics problem is solved at the current iteration, and the loop continues until the prescribed number of time steps is reached

Attached files include the source code and example input files for residence time calculations.
svFSI_input.txt
tagFile.txt
Code.tar.gz

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct and Contributing Guidelines
@zasexton zasexton added the enhancement New feature or request label Mar 2, 2024
@zasexton
Contributor Author

zasexton commented Mar 4, 2024

The following function is how I propose to read in time-varying solution field data across a given vtu mesh for svFSIplus. This would replicate the behavior of the subroutine READVELOCITYVTU called in svFSI. However, there are some notable differences.

  1. The field_name, such as "Velocity", "Pressure", etc., must be supplied by the user and will not be hard-coded into the function.
  2. The time step of the field data is assumed to come at the end of the vtkDataArray name. This conforms to how such time-varying velocity and pressure data are written out to vtk files from svPost. However, because we do not hard-code the array name format, we need to define a robust method for extracting the time step information.
  3. Last, this function assumes that the mshType objects are equipped with an additional Array3<double> attribute (called Ys) capable of storing the solution field of interest.
void load_time_varying_field_vtu(const std::string file_name, const std::string field_name, mshType& mesh) {

  #define n_debug_load_vtu
  #ifdef debug_load_vtu
  std::cout << "[load_vtu] " << std::endl;
  std::cout << "[load_vtu] ===== vtk_xml_parser::load_time_varying_field_vtu ===== " << std::endl;
  std::cout << "[load_vtu] file_name: " << file_name << std::endl;
  #endif

    auto reader = vtkSmartPointer<vtkXMLUnstructuredGridReader>::New();
    reader->SetFileName(file_name.c_str());
    reader->Update();
    vtkSmartPointer<vtkUnstructuredGrid> vtk_ugrid = reader->GetOutput();
    vtkIdType num_nodes = vtk_ugrid->GetNumberOfPoints();
    int array_count = 0;
    std::vector<std::pair<std::string, int>> array_names;

    if (num_nodes == 0) {
        throw std::runtime_error("Failed reading the VTK file '" + file_name + "'.");
    }
    // Store all array names
    for (int i = 0; i < vtk_ugrid->GetPointData()->GetNumberOfArrays(); i++) {
        std::string array_name = vtk_ugrid->GetPointData()->GetArrayName(i);
        size_t pos = array_name.find(field_name.c_str());
        if (pos != std::string::npos) {
            auto not_digit = [](char c) { return !std::isdigit(c); };
            auto it = std::find_if(array_name.rbegin(), array_name.rend(), not_digit);
            std::string time_step = std::string(it.base(), array_name.end());
            array_count++;
            if (!time_step.empty()) {
                array_names.push_back({array_name, std::stoi(time_step)});
            } else {
                array_names.push_back({array_name, 0});
            }
        }
    }

    if (array_count == 0) {
        throw std::runtime_error("No '" + field_name + "' data found in the VTK file '" + file_name + "'.");
    }

    // Order all array names
    std::sort(array_names.begin(), array_names.end(), [](const std::pair<std::string, int>& a, const std::pair<std::string, int>& b) {
        return a.second < b.second;
    });

    // Use the first sorted array to determine the number of components
    auto first_array = vtk_ugrid->GetPointData()->GetArray(array_names[0].first.c_str());
    int num_components = first_array->GetNumberOfComponents();
    mesh.Ys.resize(num_components, num_nodes, array_count);

    for (int i = 0; i < array_count; i++) {
        auto array = vtk_ugrid->GetPointData()->GetArray(array_names[i].first.c_str());
        if (array == nullptr) {
            throw std::runtime_error("No '" + array_names[i].first + "' data found in the VTK file '" + file_name + "'.");
        }
        for (int j = 0; j < num_nodes; j++) {
            for (int k = 0; k < num_components; k++) {
                mesh.Ys(k, j, i) = array->GetComponent(j, k);
            }
        }
    }
}
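The time-step suffix extraction used above can be illustrated in isolation. Below is a minimal, self-contained sketch of that logic; the helper name `trailing_time_step` and the sample array names are hypothetical illustrations, not part of the proposed API:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Hypothetical helper: extract the trailing integer time step from a
// vtkDataArray name such as "velocity_001"; returns -1 when the name
// has no trailing digits.
int trailing_time_step(const std::string& array_name) {
  auto not_digit = [](char c) { return !std::isdigit(static_cast<unsigned char>(c)); };
  // Scan backwards to the last non-digit character; everything after it
  // is the numeric suffix.
  auto it = std::find_if(array_name.rbegin(), array_name.rend(), not_digit);
  std::string time_step(it.base(), array_name.end());
  return time_step.empty() ? -1 : std::stoi(time_step);
}
```

This works for any prefix format, which is why no particular array name convention needs to be hard-coded.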

@zasexton
Contributor Author

zasexton commented Mar 4, 2024

For partitioned multiphysics, I believe this will be a powerful and flexible approach to incorporating previous solution data into new physics simulations. Because we do not assume a particular structure for the solution field, we can potentially use this same functionality for scalar or tensor fields (should a partitioned scheme become relevant for such cases).

zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 4, 2024
@zasexton
Contributor Author

zasexton commented Mar 5, 2024

The branch above implements functionality for reading in specified solution fields but does not yet distribute the global data across processors (necessary for any parallel simulation). This will be added next; it will replace the functionality of the added code in svFSI's DISTRIBUTE.f and is necessary for partitioned multi-physics schemes (agnostic of the physics).

@zasexton
Contributor Author

zasexton commented Mar 7, 2024

The following function is how I propose to distribute precomputed state variables. Similar to the all_fun::local for Vector<double> and Array<double>, this function allows for the distribution of Array3<double>.

Array3<double>
local(const ComMod& com_mod, const CmMod& cm_mod, const cmType& cm, Array3<double>& U){
  if (com_mod.ltg.size() == 0) {
    throw std::runtime_error("ltg is not set yet");
  }

  Array3<double> local_array;
  int m;
  int n;
  int r;

  if (cm.mas(cm_mod)) {
    m = U.nrows();
    r = U.ncols();
    n = U.nslices();
    if (U.ncols() != com_mod.gtnNo) {
      throw std::runtime_error("local is only defined for arrays with gtnNo columns");
    }
  }

  if (cm.seq()) {
    local_array.resize(m, com_mod.gtnNo, U.nslices());
    local_array = U;
    return local_array;
  }

  cm.bcast(cm_mod, &m);
  cm.bcast(cm_mod, &n);
  cm.bcast(cm_mod, &r);

  local_array.resize(m, com_mod.tnNo, n);
  Vector<double> tmpU(m * com_mod.gtnNo * n);

  if (cm.mas(cm_mod)) {
    for (int a = 0; a < com_mod.gtnNo; a++) {
      int s = m * a;
      for (int i = 0; i < n; i++) {
        // Slice stride in the flattened vector is m * gtnNo
        int e = i * m * com_mod.gtnNo;
        for (int j = 0; j < m; j++) {
          tmpU(j+s+e) = U(j, a, i);
        }
      }
    }
  }

  cm.bcast(cm_mod, tmpU);

  for (int a = 0; a < com_mod.tnNo; a++) {
    int Ac = com_mod.ltg[a];
    int s = m * Ac;
    for (int i = 0; i < n; i++) {
      // Must match the slice stride used when flattening on the master
      int e = i * m * com_mod.gtnNo;
      for (int j = 0; j < m; j++) {
        local_array(j, a, i) = tmpU(j+s+e);
      }
    }
  }

  return local_array;
}
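The flattening above relies on a single consistent index mapping between the 3D array (components x nodes x slices) and the broadcast vector. A small self-contained sketch of that index math, using a plain function in place of the project's Array3/Vector types (the component-fastest, then node, then slice layout is my assumption here):

```cpp
// Hypothetical helper: flattened index for component j (of m per node),
// node a (of g global nodes), and slice i. Node stride is m; slice
// stride is m * g.
inline int flat_index(int j, int a, int i, int m, int g) {
  return j + m * a + i * m * g;
}
```

Packing on the master and unpacking on each processor must both use this same mapping (with the global node index on the unpack side), otherwise slices will be scrambled after the broadcast.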

@ktbolt
Collaborator

ktbolt commented Mar 7, 2024

@zasexton Good to show the proposed code!

Some comments

  • Add comments to the code so I can understand what it is supposed to do. For example, what is U?

  • It is good not to use single-letter variable names; be descriptive: the rows, columns and slices of U actually do have meaning, I am guessing (e.g. U.nslices() = number of time steps?).

  • It seems tmpU is just a subset of U, so should it not be an Array3?

@zasexton
Contributor Author

zasexton commented Mar 7, 2024

@ktbolt regarding your comments.

  • I will add comments to the code to generally describe what is happening.
  • For U, and generally for all parameter names, I copy the same format used for all_fun::local(const ComMod& com_mod, const CmMod& cm_mod, const cmType& cm, Array<double>& U) and all_fun::local(const ComMod& com_mod, const CmMod& cm_mod, const cmType& cm, Vector<int>& U). See here.
    https://github.com/SimVascular/svFSIplus/blob/29a94c3b8641118d66fc56357ef212a1cce0c63d/Code/Source/svFSI/all_fun.cpp#L989-L1095
    I agree that the naming convention is confusing, but I decided to copy it so that, if a separate issue is made for renaming parameters, it would be obvious to a future developer that they should do this for all local functions in all_fun.cpp.
  • For the rows, cols, and slices I will rename m, n, and r to be more descriptive.
  • Yes, tmpU is a subset of U; however, the other local function that distributes Array<double> data uses a similar scheme by creating a Vector<double> tmpU. It was unclear to me whether there are further limitations in cm.bcast for the types Array<double> and Array3<double>, so I chose this flattened scheme. If necessary, I can check whether cm.bcast has implemented functions for these types.

@zasexton
Contributor Author

The final component necessary to integrate precomputed state variables for partitioned physics allows for time-varying solutions. In general, we may not assume that the time step size from a previous physics simulation will be the same as that of the current physics being computed (the reasons for this are practical and numerous; for example, partitioned multi-physics schemes often seek to bridge different time scales explicitly). Instead, we assume that pre-computed solutions can be linearly interpolated to establish temporal agreement. This seems to be a valid assumption, since any well-resolved solution from a previous simulation should be well-approximated by temporal linearity. In the special cases where the governing equations of the previous simulation are linear (e.g. diffusion, steady porous-media flow), linear interpolation is of course mathematically well-justified.

To do this we must add the following code to the main iteration loop of the main.cpp file (sorry @ktbolt for adding code here). Specifically, we must inject the interpolated, precomputed time-varying solutions we need into the next new state-variable array for the predictor/multi-corrector algorithm within the outer loop. This will be accomplished via the following:

    if (com_mod.usePrecomp) {
        #ifdef debug_iterate_solution
        dmsg << "Use precomputed values ..." << std::endl;
        #endif
        // This loop is used to interpolate between known time values of the precomputed
        // state-variable solution
        for (int l = 0; l < com_mod.nMsh; l++) {
            auto lM = com_mod.msh[l];
            if (lM.Ys.nslices() == 1) {
                // If there is only one temporal slice, then the solution is assumed constant
                // in time and no interpolation is performed
                continue;
            } else {
                // If there are multiple temporal slices, then the solution is linearly interpolated
                // between the known time values and the current time.
                double precompDt = com_mod.precompDt;
                double preTT = precompDt * (lM.Ys.nslices() - 1);
                double cT = cTS * dt;
                double rT = std::fmod(cT, preTT);
                int n1 = static_cast<int>(rT / precompDt);
                int n2 = n1 + 1;
                // Normalize the interpolation weight to [0, 1)
                double alpha = std::fmod(rT, precompDt) / precompDt;
                for (int i = 0; i < nsd; i++) {
                    for (int j = 0; j < tnNo; j++) {
                        if (alpha == 0.0) {
                            // Exactly on a stored slice; no interpolation needed
                            Yn(i, j) = lM.Ys(i, j, n1);
                        } else {
                            Yn(i, j) = (1.0 - alpha) * lM.Ys(i, j, n1) + alpha * lM.Ys(i, j, n2);
                        }
                    }
                }
                    }
                }
            }
        }
    }
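The bracketing-slice computation can be checked in isolation. A hedged sketch of the index and weight math, with the weight normalized to [0, 1) (the struct and function names are illustrative, not part of the proposal):

```cpp
#include <cmath>

// Hypothetical helper types for checking the interpolation indexing.
struct Bracket {
  int n1;       // lower bracketing slice index
  int n2;       // upper bracketing slice index
  double alpha; // normalized interpolation weight in [0, 1)
};

// Given current time cT, precomputed step size precompDt, and nslices
// stored slices spanning [0, precompDt * (nslices - 1)], return the two
// bracketing slice indices and the interpolation weight.
Bracket bracket_slices(double cT, double precompDt, int nslices) {
  double preTT = precompDt * (nslices - 1); // total span of stored slices
  double rT = std::fmod(cT, preTT);         // wrap current time into the span
  int n1 = static_cast<int>(rT / precompDt);
  double alpha = std::fmod(rT, precompDt) / precompDt;
  return {n1, n1 + 1, alpha};
}
```

Because rT is always strictly less than preTT, n2 never exceeds the last stored slice, so no wraparound of n2 is needed within a period.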

zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 15, 2024
@zasexton
Contributor Author

I will now work on adding the necessary error checking and creating a test case against the default scalar advection test within svFSIplus-Tests.

@zasexton
Contributor Author

When testing this scheme I am noticing differences between the concentration values obtained for time step 1 when comparing test case 02-dye-AD against the same problem which takes the velocity field from 02-dye-AD as an input. While I obtain a qualitatively similar answer, the concentration values only agree to within an absolute tolerance of 1E-02. I would think that we should obtain closer numerical agreement. I have attached my test case folder here. result_001_check.vtu is the solution file that I am checking against for spatial agreement in concentration values obtained from the 8-procs/result_001.vtu.

Any suggestions on why I might be noticing such differences would be incredibly helpful!

08-AD-precompute.zip

@zasexton
Contributor Author

Perhaps there are contributions from the acceleration matrix Ag of the state variables that I am neglecting... I will check this now.


@ktbolt
Collaborator

ktbolt commented Mar 20, 2024

@zasexton This may be an indexing bug.

@zasexton
Contributor Author

@ktbolt this was a Zack bug. The 4th degree of freedom is pressure, so I'll need to add another print statement to check again.

@zasexton
Contributor Author

When rechecking the updates to the Yo, Yn, Yg, Ao, An, and Ag matrices within the inner loop of the test problem 02-dye_AD, we get the following for node 0:

---------------------------------------------------------------------
 Eq     N-i     T       dB  Ri/R1   Ri/R0    R/Ri     lsIt   dB  %t
---------------------------------------------------------------------
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yg(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -0.0000000000000000e+00 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
 HF 1-1  2.670e-01  [0 1.000e+00 1.000e+00 3.747e-07]  [15 -148 10]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -9.3323161016075558e-06 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.3998474152411330e-03 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
 HF 1-2  5.480e-01  [-6 4.499e-01 4.499e-01 7.123e-07]  [20 -142 15]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -4.7320534883353537e-06 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -6.2215440677383700e-06 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -7.0980802325030286e-04 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.1665395127009443e-03 0.0000000000000000e+00
 HF 1-3  8.810e-01  [-18 1.158e-01 1.158e-01 6.809e-07]  [24 -142 17]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -8.7446218908725520e-07 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -3.1547023255569025e-06 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.3116932836308834e-04 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -5.9150668604191913e-04 0.0000000000000000e+00
 HF 1-4  1.202e+00  [-26 4.892e-02 4.892e-02 6.112e-07]  [26 -143 20]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -1.8497082606738441e-07 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -5.8297479272483680e-07 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -2.7745623910107754e-05 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.0930777363590695e-04 0.0000000000000000e+00
 HF 1-5s 1.531e+00  [-32 2.386e-02 2.386e-02 9.198e-07]  [26 -139 20]
 HF 1-5s 1.531e+00  [-32 2.386e-02 2.386e-02 9.198e-07]  [26 -139 20]

We do have an indexing issue for the update to the acceleration Ag. The acceleration updates are accumulating in the 4th degree of freedom (pressure) instead of the scalar field (5th degree of freedom). This is most likely due to the mismatch in the physics equations actually being called in the problem (i.e. we do not actually add the fluid physics equation to the simulation), while the degrees of freedom for the simulation remain the same since we inject the precomputed solution. I will resolve this now.

@zasexton
Contributor Author

zasexton commented Mar 21, 2024

It seems that the numerical differences are from the Dirichlet velocity boundary condition imposed on the fluid physics, where the value for Yg is prescribed as Yn. Of course, if scalar advection is computed without prescribing an accompanying fluid simulation, and instead we have only the velocity field solutions across time, we will not guarantee this strict satisfaction (nor would we necessarily want to!). To recover the solution of svFSIplus-Test/04-fluid/02-dye-AD in the first time step, we can thus impose the velocity at the fluid Dirichlet boundary in the precomputed velocity files to ensure Yg agreement. When we do this, we recover the same solutions at the first time step regardless of whether we use the partitioned sequential physics (as has traditionally been done) or a precomputed velocity solution. See below:

precomputed_concentration_error

The maximum absolute node-wise error is 6.44449771e-16 and the mean absolute node-wise error is 2.45571103e-19. This makes sense and agrees with our intuition of numerical precision that we would expect of this simulation.
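The reported metrics are simple node-wise comparisons of the two scalar fields. A minimal sketch of the max/mean absolute node-wise error computation over two flattened solution arrays (the struct and function names here are hypothetical, for illustration only):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper types for node-wise error reporting.
struct NodeError {
  double max_abs;  // maximum absolute node-wise error
  double mean_abs; // mean absolute node-wise error
};

// Compare two nodal solutions of equal length, element by element.
NodeError node_wise_error(const std::vector<double>& a, const std::vector<double>& b) {
  double max_abs = 0.0, sum_abs = 0.0;
  for (std::size_t i = 0; i < a.size(); i++) {
    double d = std::fabs(a[i] - b[i]);
    max_abs = std::max(max_abs, d);
    sum_abs += d;
  }
  return {max_abs, sum_abs / static_cast<double>(a.size())};
}
```

Errors at the level of 1e-16 to 1e-19 under this metric are consistent with double-precision round-off, which is the agreement reported above.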

@zasexton
Contributor Author

Attached is the relevant test case for verifying agreement of the scalar-advection equation using a steady, precomputed velocity field solution. The two solutions for the scalar field demonstrate parity to the desired floating-point precision. Unfortunately, such a test case for time-varying inlet flow rates is not as easy to construct, due to assumptions about Yo, Yn, and Yg between the two simulation schemes. For now, this implementation seems sufficient and rational for the applications in which it would be used.

I will begin a draft merge request now...

02-precompute-velocities.zip

zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 25, 2024
zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 28, 2024
zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 28, 2024
…tion, removing commented lines for testing in heatf.cpp, fixing spelling mistakes in ComMod.h (SimVascular#202)
zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 29, 2024
…nd vtk_xml_parser.cpp, encapsulated precomputed linear interpolation time advancement in a function to remove from main.cpp (SimVascular#202)
zasexton added a commit to zasexton/svFSIplus_public that referenced this issue Mar 29, 2024
MatteoSalvador pushed a commit that referenced this issue Apr 2, 2024
* including functionality to read in solution fields and store them within the mesh

* adding function for reading in precomputed solutions and parameters to mesh loading (#202)

* adding distribution functionality for Array3 and testing AD integration

* adding linear temporal interpolation for precomputed state-variables and adding a velocity output for scalar advection (#202)

* fixing initialize check for seperate fluid equation in AD simulation (#202)

* removing print statements for checking

* removing print statements and adding error checking for precomputed solution dimension consistency (#202)

* adding test case for precomputed velocity solutions to the heatf equation, removing commented lines for testing in heatf.cpp, fixing spelling mistakes in ComMod.h (#202)

* adding Doxygen compatible comments for new functions in vtk_xml.cpp and vtk_xml_parser.cpp, encapsulated precomputed linear interpolation time advancement in a function to remove from main.cpp (#202)

* fixing minor formating in main.cpp (#202)