Using pre-computed velocity solutions in one-way coupled physics #202
The following function is how I propose to read in time-varying solution field data across a given mesh:
```cpp
void load_time_varying_field_vtu(const std::string file_name, const std::string field_name, mshType& mesh)
{
  #define n_debug_load_vtu
  #ifdef debug_load_vtu
  std::cout << "[load_vtu] " << std::endl;
  std::cout << "[load_vtu] ===== vtk_xml_parser::load_time_varying_field_vtu ===== " << std::endl;
  std::cout << "[load_vtu] file_name: " << file_name << std::endl;
  #endif

  auto reader = vtkSmartPointer<vtkXMLUnstructuredGridReader>::New();
  reader->SetFileName(file_name.c_str());
  reader->Update();

  vtkSmartPointer<vtkUnstructuredGrid> vtk_ugrid = reader->GetOutput();
  vtkIdType num_nodes = vtk_ugrid->GetNumberOfPoints();
  int array_count = 0;
  std::vector<std::pair<std::string, int>> array_names;

  if (num_nodes == 0) {
    throw std::runtime_error("Failed reading the VTK file '" + file_name + "'.");
  }

  // Store the names of all point data arrays matching 'field_name', together
  // with the trailing time-step index parsed from each array name.
  for (int i = 0; i < vtk_ugrid->GetPointData()->GetNumberOfArrays(); i++) {
    std::string array_name = vtk_ugrid->GetPointData()->GetArrayName(i);
    size_t pos = array_name.find(field_name);

    if (pos != std::string::npos) {
      // Extract the trailing digits (e.g. "Velocity_00100" -> 100).
      auto not_digit = [](char c) { return !std::isdigit(c); };
      auto it = std::find_if(array_name.rbegin(), array_name.rend(), not_digit);
      std::string time_step = std::string(it.base(), array_name.end());
      array_count++;

      if (!time_step.empty()) {
        array_names.push_back({array_name, std::stoi(time_step)});
      } else {
        array_names.push_back({array_name, 0});
      }
    }
  }

  if (array_count == 0) {
    throw std::runtime_error("No '" + field_name + "' data found in the VTK file '" + file_name + "'.");
  }

  // Order the arrays by their parsed time-step index.
  std::sort(array_names.begin(), array_names.end(),
      [](const std::pair<std::string, int>& a, const std::pair<std::string, int>& b) {
    return a.second < b.second;
  });

  // Size the storage from the first matching array; all matching arrays are
  // assumed to have the same number of components.
  auto first_array = vtk_ugrid->GetPointData()->GetArray(array_names[0].first.c_str());
  int num_components = first_array->GetNumberOfComponents();
  mesh.Ys.resize(num_components, num_nodes, array_count);

  for (int i = 0; i < array_count; i++) {
    auto array = vtk_ugrid->GetPointData()->GetArray(array_names[i].first.c_str());
    if (array == nullptr) {
      throw std::runtime_error("No '" + array_names[i].first + "' data found in the VTK file '" + file_name + "'.");
    }
    for (int j = 0; j < num_nodes; j++) {
      for (int k = 0; k < num_components; k++) {
        mesh.Ys(k, j, i) = array->GetComponent(j, k);
      }
    }
  }
}
```
For partitioned multiphysics, I believe this will be a powerful and flexible approach for incorporating previous solution data into new physics simulations. Because we do not assume a particular structure for the solution field, we can potentially use this same functionality for scalar or tensor fields (should a partitioned scheme become relevant for such cases).
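As a minimal usage sketch (assuming, as above, that the solutions were written as point-data arrays named `Velocity_<step>` and that `mshType` stores the result in an `Array3<double>` member `Ys`; the file name here is hypothetical):

```cpp
// Hypothetical call site: read all "Velocity_*" point-data arrays from one
// file into mesh.Ys, shaped (num_components, num_nodes, num_time_steps).
mshType mesh;
vtk_xml_parser::load_time_varying_field_vtu("precomputed_flow.vtu", "Velocity", mesh);

std::cout << "components:  " << mesh.Ys.nrows()   << std::endl;
std::cout << "nodes:       " << mesh.Ys.ncols()   << std::endl;
std::cout << "time slices: " << mesh.Ys.nslices() << std::endl;
```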
The branch above implements functionality for reading in specified solution fields but does not yet distribute the global data across processors (necessary for any parallel simulation). This will be added next and will seek to replace the functionality of the corresponding added code.
The following function is how I propose to distribute precomputed state variables, similar to the existing `local` overloads:

```cpp
Array3<double>
local(const ComMod& com_mod, const CmMod& cm_mod, const cmType& cm, Array3<double>& U)
{
  if (com_mod.ltg.size() == 0) {
    throw std::runtime_error("ltg is not set yet");
  }

  Array3<double> local_array;
  int m;  // number of components
  int n;  // number of time slices
  int r;  // number of global nodes

  if (cm.mas(cm_mod)) {
    m = U.nrows();
    r = U.ncols();
    n = U.nslices();
    if (U.ncols() != com_mod.gtnNo) {
      throw std::runtime_error("local is only specified for an array sized by gtnNo");
    }
  }

  // Sequential run: no distribution is needed.
  if (cm.seq()) {
    local_array.resize(m, com_mod.gtnNo, n);
    local_array = U;
    return local_array;
  }

  cm.bcast(cm_mod, &m);
  cm.bcast(cm_mod, &n);
  cm.bcast(cm_mod, &r);

  local_array.resize(m, com_mod.tnNo, n);

  // Flatten the global array on the master so it can be broadcast, using the
  // layout index = j + m*a + m*r*i for component j, global node a, slice i.
  Vector<double> tmpU(m * r * n);

  if (cm.mas(cm_mod)) {
    for (int a = 0; a < r; a++) {
      int s = m * a;
      for (int i = 0; i < n; i++) {
        int e = i * m * r;
        for (int j = 0; j < m; j++) {
          tmpU(j + s + e) = U(j, a, i);
        }
      }
    }
  }

  cm.bcast(cm_mod, tmpU);

  // Each processor extracts its local nodes using the local-to-global map.
  for (int a = 0; a < com_mod.tnNo; a++) {
    int Ac = com_mod.ltg[a];
    int s = m * Ac;
    for (int i = 0; i < n; i++) {
      int e = i * m * r;
      for (int j = 0; j < m; j++) {
        local_array(j, a, i) = tmpU(j + s + e);
      }
    }
  }

  return local_array;
}
```
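A sketch of how this might be called during the distribution phase (assuming the function lives alongside the existing `local` overloads, e.g. in `all_fun`, and that each mesh carries its precomputed solutions in `msh.Ys`):

```cpp
// Hypothetical call site: replace each mesh's global precomputed solution
// with the processor-local portion after partitioning.
for (int iM = 0; iM < com_mod.nMsh; iM++) {
  auto& msh = com_mod.msh[iM];
  if (msh.Ys.nslices() != 0) {
    msh.Ys = all_fun::local(com_mod, cm_mod, cm, msh.Ys);
  }
}
```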
@zasexton Good to show the proposed code! Some comments:
@ktbolt, regarding your comments:
The final component necessary to integrate precomputed state variables for partitioned physics allows for time-varying solutions. In general we may not assume that the time step size from a previous physics simulation will be the same as that of the current physics being computed (the reasons for this are practical and numerous; for example, partitioned multi-physics schemes often seek to bridge different time scales explicitly). Instead we assume that precomputed solutions can be linearly interpolated to establish temporal agreement. This seems to be a valid assumption, since any well-resolved solution from a previous simulation should be well-approximated by piecewise-linear interpolation in time. In the special cases where the governing equations of the previous simulation are linear (e.g. diffusion, steady porous-media flow), linear interpolation is, of course, mathematically well-justified. To do this we must add the following code to the main iteration loop in `main.cpp`:

```cpp
if (com_mod.usePrecomp) {
  #ifdef debug_iterate_solution
  dmsg << "Use precomputed values ..." << std::endl;
  #endif
  // This loop interpolates between known time values of the precomputed
  // state-variable solution.
  for (int l = 0; l < com_mod.nMsh; l++) {
    auto& lM = com_mod.msh[l];
    if (lM.Ys.nslices() == 1) {
      // If there is only one temporal slice, the solution is assumed constant
      // in time and no interpolation is performed.
      continue;
    } else {
      // If there are multiple temporal slices, the solution is linearly
      // interpolated between the two known time values bracketing the
      // current time (periodic over the precomputed time window).
      double precompDt = com_mod.precompDt;
      double preTT = precompDt * (lM.Ys.nslices() - 1);
      double cT = cTS * dt;
      double rT = std::fmod(cT, preTT);
      int n1 = static_cast<int>(rT / precompDt);
      // Normalize the interpolation weight to [0, 1).
      double alpha = std::fmod(rT, precompDt) / precompDt;
      int n2 = n1 + 1;
      for (int i = 0; i < nsd; i++) {
        for (int j = 0; j < tnNo; j++) {
          if (alpha == 0.0) {
            Yn(i, j) = lM.Ys(i, j, n2);
          } else {
            Yn(i, j) = (1.0 - alpha) * lM.Ys(i, j, n1) + alpha * lM.Ys(i, j, n2);
          }
        }
      }
    }
  }
}
```
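For clarity, the interpolation this block implements can be written as

$$
Y_n \approx (1-\alpha)\,Y_s(n_1) + \alpha\,Y_s(n_2), \qquad
n_1 = \left\lfloor \frac{\tilde t}{\Delta t_{\text{pre}}} \right\rfloor, \quad
n_2 = n_1 + 1, \quad
\alpha = \frac{\tilde t \bmod \Delta t_{\text{pre}}}{\Delta t_{\text{pre}}},
$$

where $\tilde t = t \bmod T_{\text{pre}}$ is the current time wrapped into the precomputed window $T_{\text{pre}} = \Delta t_{\text{pre}}\,(N_{\text{slices}} - 1)$.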
I will now work on adding the necessary error checking and creating a test case against the default scalar advection test.
When testing this scheme I am noticing differences between the concentration values obtained for time step 1 when comparing the two test cases. Any suggestions on why I might be noticing such differences would be incredibly helpful!
Perhaps there are contributions from the acceleration matrix.
@zasexton This may be an indexing bug.
@ktbolt this was a Zack bug. The 4th degree of freedom is pressure, so I'll need to add another print statement to check again.
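For reference, a minimal sketch of the kind of check intended here (hypothetical; for fluid in 3D the degrees of freedom are ordered velocity then pressure, so only the first `nsd` components should be compared against the precomputed velocity):

```cpp
// Hypothetical debug print: compare only the velocity components
// (dofs 0..nsd-1) at node 0, skipping the pressure dof.
for (int i = 0; i < nsd; i++) {
  std::cout << "Yn(" << i << ", 0) = " << com_mod.Yn(i, 0)
            << "  Ys(" << i << ", 0, 0) = " << com_mod.msh[0].Ys(i, 0, 0) << std::endl;
}
```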
When rechecking the updates to the acceleration, I see the following output:

```
---------------------------------------------------------------------
 Eq   N-i     T       dB   Ri/R1    Ri/R0     R/Ri    lsIt   dB  %t
---------------------------------------------------------------------
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yg(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -0.0000000000000000e+00 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
 HF 1-1  2.670e-01  [0 1.000e+00 1.000e+00 3.747e-07]  [15 -148 10]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -9.3323161016075558e-06 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.3998474152411330e-03 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
 HF 1-2  5.480e-01  [-6 4.499e-01 4.499e-01 7.123e-07]  [20 -142 15]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -4.7320534883353537e-06 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -6.2215440677383700e-06 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -7.0980802325030286e-04 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.1665395127009443e-03 0.0000000000000000e+00
 HF 1-3  8.810e-01  [-18 1.158e-01 1.158e-01 6.809e-07]  [24 -142 17]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -8.7446218908725520e-07 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -3.1547023255569025e-06 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.3116932836308834e-04 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -5.9150668604191913e-04 0.0000000000000000e+00
 HF 1-4  1.202e+00  [-26 4.892e-02 4.892e-02 6.112e-07]  [26 -143 20]
Processor: 0 Testing velocity solution 1 time: 1.0000000000000000e-02
Yo(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 0.0000000000000000e+00 8.8645060761327133e-04
Yn(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -1.8497082606738441e-07 8.8645060761327133e-04
Yg(0:4, 0) = -5.3360849097713642e-04 -1.1668874366280032e-03 -1.3422206473249587e-01 -5.8297479272483680e-07 8.8645060761327133e-04
Ao(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
An(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -2.7745623910107754e-05 0.0000000000000000e+00
Ag(0:4, 0) = 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 -1.0930777363590695e-04 0.0000000000000000e+00
 HF 1-5s  1.531e+00  [-32 2.386e-02 2.386e-02 9.198e-07]  [26 -139 20]
```

We do have an indexing issue for the update to acceleration.
It seems that the numerical differences are from the Dirichlet velocity boundary condition imposed on the fluid physics, where the imposed value differs slightly between the two cases. The maximum absolute node-wise error is small.
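A minimal sketch of the node-wise comparison described above (hypothetical names `Yn_precomp` and `Yn_ref` for the two solutions being compared):

```cpp
// Hypothetical check: maximum absolute node-wise difference between the
// solution using the precomputed velocity and the reference solution.
double max_err = 0.0;
for (int a = 0; a < tnNo; a++) {
  for (int i = 0; i < nsd; i++) {
    max_err = std::max(max_err, std::fabs(Yn_precomp(i, a) - Yn_ref(i, a)));
  }
}
std::cout << "Maximum absolute node-wise error: " << max_err << std::endl;
```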
Attached is the relevant test case for verifying agreement of the solutions. I will begin a draft merge request now...
* including functionality to read in solution fields and store them within the mesh
* adding function for reading in precomputed solutions and parameters to mesh loading (#202)
* adding distribution functionality for Array3 and testing AD integration
* adding linear temporal interpolation for precomputed state-variables and adding a velocity output for scalar advection (#202)
* fixing initialize check for separate fluid equation in AD simulation (#202)
* removing print statements for checking
* removing print statements and adding error checking for precomputed solution dimension consistency (#202)
* adding test case for precomputed velocity solutions to the heatf equation, removing commented lines for testing in heatf.cpp, fixing spelling mistakes in ComMod.h (#202)
* adding Doxygen compatible comments for new functions in vtk_xml.cpp and vtk_xml_parser.cpp, encapsulated precomputed linear interpolation time advancement in a function to remove from main.cpp (#202)
* fixing minor formatting in main.cpp (#202)
Problem
The general problem: we wish to use the (potentially time-varying) velocity solutions obtained from a previous simulation within a subsequent physics simulation (namely, scalar advection-diffusion).
Solution
This has been done previously in `svFSI` on side branches where residence-time calculations have been desired or where thrombosis modeling has been done; see Gutierrez et al. 2021. Although it is not clear if this exact framework would work well for a more object-oriented `svFSIplus`, the following scheme is generally implemented in `svFSI`:

1. A `True`/`False` flag is selected for `Use pre-computed velocity` (see the input sketch after this list).
2. If `True`, a subroutine is called in `svFSI` to read in the velocity solution vector fields across some set of time points. This subroutine can be found in the `VTKXML.f` file of the attached `svFSI` source code. (Reminder: this will not be found on the main branches of `svFSI`, as this functionality was never formally included.)
3. The `DISTRIBUTE` subroutine in `DISTRIBUTE.f` distributes all `U` solutions across processors.
Attached files include the source code and example input files for residence-time calculations:
svFSI_input.txt
tagFile.txt
Code.tar.gz