
How to run separate OpenSeesPy interpreters in parallel #219

Open
Kang-SungKu opened this issue Feb 6, 2021 · 0 comments


Dear Dr. Zhu,

I have been enjoying using OpenSeesPy for my research work, and I would like to understand the best practice for running separate OpenSeesPy interpreters in parallel (i.e., creating multiple OpenSeesPy interpreters such that the nodes/elements created in one interpreter cannot be accessed by the others). Here are the specific details of my issue.

-Goal: I am implementing a design exploration framework that utilizes OpenSeesPy to evaluate a number of different structures and identify an optimal solution. For efficient design exploration on a multi-core CPU or a cluster with multiple nodes, I would like to run multiple OpenSeesPy sessions in parallel.

-What I did: First, I referred to the examples in the official documentation (https://openseespydoc.readthedocs.io/en/latest/src/paralleltruss2.html) and your recent session (https://www.youtube.com/watch?v=vjGm2kM5Ihc&feature=youtu.be), but those examples are about analyzing the same nodes/elements with multiple properties in parallel, not about analyzing different sets of nodes/elements in parallel. To achieve my goal, I tried parallelizing a for-loop involving structural analysis using a library called Dask (https://dask.org/) coupled with Microsoft MPI, where each iteration of the loop creates a Python instance that imports OpenSeesPy and performs some analysis.

-Issue: However, when I parallelized the for-loop, I got errors like "node with tag xx already exists in model", which implies that a node created in one Python instance is shared with the other instances. So even though multiple instances have been created, they are relying on the same OpenSees(Py) interpreter under the hood, which is not the intended configuration. I believe I will need to find a way to run completely separate OpenSeesPy interpreters in parallel to address this.

The simplified version of my scripts is shown below (please note that the simplified scripts present what I intend to do; they are not meant to reproduce the issue). Script 1 defines a class (MasonryAnalysisModule) that imports OpenSeesPy and performs some analysis, and Script 2 creates multiple instances of MasonryAnalysisModule and parallelizes the analysis using Dask with the for-loop. Any comments or suggestions (whether with Dask or other methods of parallelization) are greatly appreciated. Thank you very much for your time!

Best,
SungKu


Configuration:

Python 3.8.5
OpenSeesPy 3.2.2.5
Dask 2021.1.1


Script 1: masonry_analysis_module.py

import openseespy.opensees as ops  # import OpenSeesPy
import Get_Rendering as opsplt  # import OpenSeesPy plotting commands

class MasonryAnalysisModule:
    def test(self, input_seed):
        self.seed = input_seed

        # Initialize OpenSees Model
        # Clean previous model
        ops.wipe()
        ops.model('basic', '-ndm', 3, '-ndf', 3)    # For 'block', ndm and ndf should be 3
        print("This instance has seed of ", self.seed)

        for node_index in range(10):
            x = node_index * self.seed
            y = node_index * self.seed
            z = node_index * self.seed
            ops.node(node_index, x, y, z)

        # The result can be some analysis results, but for test, just return the node tags
        result = ops.getNodeTags()
        print("\t--> complete seed", self.seed)
        return result
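A stopgap I also considered for Script 1 (a hypothetical helper, not a real fix for interpreter isolation) is to give each analysis its own tag range, so that even if two analyses end up in the same interpreter, their node tags cannot collide:

```python
# Hypothetical helper: reserve a disjoint block of node tags per seed.
# STRIDE must exceed the number of nodes any single analysis creates.
STRIDE = 1000

def node_tag(seed, node_index, stride=STRIDE):
    """Map (seed, node_index) to a tag unique across analyses."""
    return seed * stride + node_index

# Usage inside test(): ops.node(node_tag(self.seed, node_index), x, y, z)
print(node_tag(3, 7))  # 3007
```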


Script 2: parallel_openseespy_test.py

from dask.distributed import Client
from dask import delayed, compute

def ops_test(input_seed):
    import masonry_analysis_module as ma
    masonry_analysis_module = ma.MasonryAnalysisModule()
    analysis_result = masonry_analysis_module.test(input_seed)
    return analysis_result

if __name__ == "__main__":
    NUM_FEM_WORKERS = 4
    client = Client(n_workers=NUM_FEM_WORKERS)

    # For-loop using DASK-delayed
    list_result = []
    for input_seed in range(10):
        result = delayed(ops_test)(input_seed)
        list_result.append((input_seed, result))

    # Here, initiate the computation of the for-loop and retrieve the results
    list_result = compute(*list_result)

    for result in list_result:
        print("X:", result)
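For reference, one workaround I am considering (shown here with a stand-in analysis passed via python -c, not my actual OpenSeesPy model) is to launch each evaluation as its own Python subprocess, so every analysis gets a brand-new interpreter and nothing is shared:

```python
import json
import subprocess
import sys

# Stand-in analysis script; a real version would be a file that imports
# openseespy.opensees, builds the model, and prints its results as JSON.
ANALYSIS_SCRIPT = """
import json, sys
seed = int(sys.argv[1])
node_tags = list(range(10))  # stand-in for the ops.node(...) calls
print(json.dumps({"seed": seed, "num_nodes": len(node_tags)}))
"""

def run_isolated(seed):
    # Each call starts a fresh Python interpreter: no shared OpenSees state.
    out = subprocess.run(
        [sys.executable, "-c", ANALYSIS_SCRIPT, str(seed)],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

results = [run_isolated(seed) for seed in range(4)]
print(results)
```

A Dask task (or any process pool) could then call run_isolated instead of touching OpenSeesPy directly, keeping the driver and the solver in separate interpreters.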

