
Test Networks developed at KTH SmarTS Lab #4

Closed
6 of 18 tasks
lvanfretti opened this issue Nov 5, 2015 · 21 comments

@lvanfretti
Contributor

@tinrabuzin we need to upload the test networks we have developed over time.

There are several of them; I think some are already in the development examples. However, we could create a folder called "test networks" under examples with the network models, which (from what I remember) include:

  • SMIB (ref Kundur Book / PSAT)
  • 2 Area system (ref ?)
  • IEEE 9 Bus (ref PSAT)
  • IEEE 14 Bus (ref PSAT)
  • Nordic44 (ref PSSE)
  • KTH Nordic 32 (ref Simulink)
  • KTH Nordic 32 (ref PSAT)
  • System A (ref Eurostag)
  • System B (ref Eurostag)
  • AKD (Giuseppe)
  • AKA (Giuseppe)
  • HSD (Giuseppe)
  • iGrGen (Tetiana)
  • Other small systems from Tetiana
  • Norwegian distribution network (Ahsan)
  • Models developed by Ahsan to replicate those in Yuwa's thesis

Let's try to track which larger test networks are available, and see if I forgot something.

Important

  • Create another repo with the source files of each test system for the Development and Application examples, including the files from the source software used to develop them.

New

  • Replicate all test models from PSAT (Ahsan)
@lvanfretti
Contributor Author

More systems:

  • AKD (Giuseppe)
  • AKA (Giuseppe)
  • HSD (Giuseppe)

@MaximeBaudette
Member

@lvanfretti
I propose that we create a separate repository for all these test systems, so we don't mix them with the library.
We can host it on the SmarTS-Lab organization.

@lvanfretti
Contributor Author

@MaximeBaudette
I don't like that idea because if it is not in the /itesla/ipsl repo it will not be part of the distro, and therefore will not be maintained.

This is a major issue for the future, when I don't have any students to help me maintain the models... I don't want people to have to go through the same effort that we had to go through.

@MaximeBaudette
Member

@lvanfretti
We can also host it on the itesla account but in a different repository.

@lvanfretti
Contributor Author

@MaximeBaudette
Talk to Geoffrey to check what will be taken care of by the iTesla consortium and what won't.
But this is confusing to new users anyway. Why should it not be in the library? Every other library ships its examples with the library.

@lvanfretti
Contributor Author

@MaximeBaudette see the comment here

@MaximeBaudette
Member

@lvanfretti
My point is that this repository should be dedicated to the library only. I personally think that other, more complex, networks should be developed as independent packages. And because of this I think it is more suitable to host our more complex networks on a separate repository.
We have examples in the library, but I think we should only keep the examples that have been used for small scale validation.

@MaximeBaudette
Member

The networks are being uploaded in the ApplicationExamples branch. Preliminary versions of some systems have been uploaded, and I updated Luigi's comment with a clickable list.

When the first batch of networks has been checked and validated, it will be merged into the master branch.

@lvanfretti
Contributor Author

@MaximeBaudette @tinrabuzin and Ahsan you have updated and validated a lot of test systems.

However, we don't provide access to the validation files from the reference software. They are not part of iPSL, but we should make them available somewhere.

I suggest we figure out a way of making them available. As I don't know Git very well, I don't know whether we can have another branch for this, but we should edit the models with an annotation saying where to find the files, while preserving both the Development Examples and Application Examples structure.
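An annotation of the kind suggested above might look like the following Modelica sketch (the model name, validation reference, and wording are illustrative assumptions, not actual iPSL content):

```modelica
model IEEE9Bus "IEEE 9-bus test network"
  // ... component declarations ...
  annotation (Documentation(info="<html>
    <p>Validated against PSAT. The original PSAT source files used for
    software-to-software validation are available in the companion
    repository (see the library documentation for the link).</p>
  </html>"));
end IEEE9Bus;
```

Keeping the pointer inside the model's `Documentation` annotation means it travels with the library distribution, regardless of where the raw files end up being hosted.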

@lvanfretti lvanfretti modified the milestones: Release 1.0, Release 0.9 Feb 21, 2016
@MaximeBaudette
Member

@lvanfretti
Git is really not made to store large files. To give you an idea, Bitbucket limits repositories to 100 MB and GitHub to 1 GB; this translates into very few validation cases that we could store before reaching the limit (one validation case is 10-50 MB, depending on network size).
I have some ideas about how to do that and asked Tin to start thinking about it.
I put down some of my ideas over there:
https://trello.com/c/IQ9o6RAO

@lvanfretti
Contributor Author

But Maxime, we should not need so much storage!
The files we need to store, from PSAT, PSS/E, and Eurostag, are only a few KB each.
We don't need to store the output results.

In any case, it is really important to give people access to these files. We can't claim that a system is "sw-to-sw validated" unless the source files are available for anyone to reproduce the result.

And finally, if I have to pay cash to get more GitHub storage for SmarTS Lab, we do it, even if I have to pay from my own pocket.

Have a nice Sunday!


@MaximeBaudette
Member

I think we should store the output results, so that in the future we can have automatic validation of the library (I talked to Tin about continuous integration a while ago). For that it would be more practical to keep the whole "trace" of the models for scripts to run against. (It seems really unnecessary to re-run all simulations if we had to run the check at every push.)

As for the source files of the reference systems, we could collect them in a separate repository... but I'm afraid we might run into copyright problems. For many of these systems we are not the copyright holder, which makes it complicated to collect the files while keeping clear what is ours and what is somebody else's.
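A trace-based regression check of the kind described could be sketched as follows (a minimal illustration; the CSV layout and tolerance are assumptions, not part of iPSL):

```python
import csv

def load_trace(path):
    """Load a simulation trace stored as CSV: a time column plus signal columns."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = [[float(x) for x in row] for row in reader]
    return header, rows

def max_deviation(reference_rows, candidate_rows):
    """Largest absolute difference between two traces sampled at the same times."""
    return max(
        abs(r - c)
        for ref_row, cand_row in zip(reference_rows, candidate_rows)
        for r, c in zip(ref_row[1:], cand_row[1:])  # skip the time column
    )

def traces_match(reference_path, candidate_path, tol=1e-3):
    """Return True if the candidate trace stays within `tol` of the reference."""
    _, ref = load_trace(reference_path)
    _, cand = load_trace(candidate_path)
    if len(ref) != len(cand):
        return False  # different number of samples: traces are not comparable
    return max_deviation(ref, cand) <= tol
```

With reference traces stored alongside each model, a CI job could re-simulate and call `traces_match` instead of inspecting results by hand.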

@MaximeBaudette
Member

@lvanfretti you can ping ahsan like this @ahsanKTH

@tinrabuzin
Member

@MaximeBaudette @lvanfretti Here you can find an interesting way of testing the library: https://github.com/lbl-srg/modelica-buildings/wiki/Unit-Tests. I guess we could do the same with our Development Examples and Application Examples... though this doesn't solve the issue of data storage.

@dietmarw
Contributor

Regarding storing simulation results you might wanna have a look at https://git-lfs.github.com/
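With Git LFS, large result files are marked for LFS storage via patterns in a `.gitattributes` file at the repository root; a sketch (the `.mat` and `.csv` patterns are assumptions about the result formats used here):

```
*.mat filter=lfs diff=lfs merge=lfs -text
*.csv filter=lfs diff=lfs merge=lfs -text
```

These are the lines that `git lfs track "*.mat"` writes; matching files are then stored as small pointers in Git while the actual content lives on the LFS server, which sidesteps the repository size limits discussed above.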

@tinrabuzin
Member

@dietmarw Thanks for the tip! GIT LFS looks nice.
I have another question. Have you ever seen automatic tests run on Modelica code when it is committed to a repository? More specifically, do you have any suggestions for how we could check that our test systems still work correctly when someone makes changes to the repo?

@dietmarw
Contributor

@tinrabuzin I don't have much experience with automatic library testing, to be honest. I know the Buildings library has some pre-release scripts in place. You might also be able to reuse some of the setup that OpenModelica uses for its library coverage tests, but better check with them :-)

@petitrenaudseb

We could also use something like Jenkins (https://jenkins-ci.org/), which is designed for automated builds, tests, etc.

You could use it to check that everything still works after every commit, or every night, for example.

@tinrabuzin
Member

@petitrenaudseb Yes, I've seen Jenkins and looked into Travis, but I'm not sure how I would do it for Modelica code.

@dietmarw
Contributor

For the ModelicaBook (which uses OpenModelica to generate plots etc.), @xie-dongping recently implemented a CI build using CircleCI (it could probably be done with Travis too, I guess). Have a look at the setup: https://github.com/xogeny/ModelicaBook/blob/master/circle.yml, which basically just launches the Docker container: https://github.com/xogeny/ModelicaBook/tree/master/docker
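A CI configuration in that style could be sketched roughly as below (a hypothetical circle.yml; the image tag, directory, and script name are assumptions, not taken from the ModelicaBook setup):

```yaml
# Build a Docker image with OpenModelica installed, then run the
# library's regression script inside it on every push.
machine:
  services:
    - docker
test:
  override:
    - docker build -t ipsl-ci ./docker
    - docker run ipsl-ci python run_regression_tests.py
```

The appeal of the container approach is that the simulation environment is pinned, so the CI results do not depend on whatever tool versions happen to be on the build machine.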

@lvanfretti
Contributor Author

I am closing this item as we will not continue to support iPSL within my team.
