Is there a specific reason we are storing the datasets in their serialised form rather than static fields/properties in the assembly? #973
Replies: 4 comments
-
Agreed that there is a lot of knowledge in what I am describing, and that this knowledge most probably needs to be documented better. Saying this, I still think it is easier for a new contributor to understand that all you need to do is place a file in a folder, compared to having to manage it through some other mechanism. But I would disagree that writing the data into a .cs file is as easy as doing it in something like Grasshopper. A lot of this data is more or less procedurally generated to some extent (as you are stating), generally from large tables of data (from Excel, for example) that are then used to generate the objects. Having to sit and type in all the values for all the dimensions and constants for all of our 2000+ steel sections into a .cs file would be close to impossible, whilst just generating the sections you need is relatively simple.
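To illustrate the scale point, here is a minimal sketch of that kind of procedural generation, assuming a hypothetical, simplified `SteelSection` record and a CSV export of the section table (name, height, width, web thickness per row); the real BHoM section types and table layout will of course differ.

```csharp
// Minimal sketch of procedurally generating sections from a table export,
// rather than hand-typing 2000+ entries into a .cs file. `SteelSection` and
// the CSV column layout (name, height, width, web thickness) are hypothetical
// stand-ins for the real BHoM section types and source spreadsheet.
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;

public record SteelSection(string Name, double Height, double Width, double WebThickness);

public static class SectionTableReader
{
    // Reads one section per CSV row, skipping the header line.
    public static List<SteelSection> FromCsv(string path)
    {
        return File.ReadLines(path)
            .Skip(1)
            .Select(line => line.Split(','))
            .Select(cols => new SteelSection(
                cols[0],
                double.Parse(cols[1], CultureInfo.InvariantCulture),
                double.Parse(cols[2], CultureInfo.InvariantCulture),
                double.Parse(cols[3], CultureInfo.InvariantCulture)))
            .ToList();
    }
}
```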
Agree on the version drawback. This requires the discipline/toolkit lead to be aware of the datasets and make sure they are upgraded as appropriate. For the structures datasets, which are the oldest, this has had to happen a few times already, but it has gone relatively smoothly. Not sure having it as compiled code would really help there or make it easier, but agreed, it would at least flag it. Maybe this could be solved by some additional compliance/run check or similar, @FraserGreenroyd, something that makes sure that datasets that were previously de-serialising still are after a specific change.
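Purely as an illustration of what such a check could look like, a rough sketch is below: it walks a datasets folder and reports any .json file that no longer deserialises. The `deserialise` delegate is a placeholder for whatever deserialiser the toolkit actually uses; this is not an existing BHoM check.

```csharp
// Rough sketch of a compliance check: walk a datasets folder and report any
// .json file that no longer deserialises. The `deserialise` delegate is a
// placeholder for whatever deserialiser the toolkit actually uses.
using System;
using System.IO;

public static class DatasetComplianceCheck
{
    public static bool AllDatasetsDeserialise(string datasetFolder, Func<string, object> deserialise)
    {
        bool allOk = true;
        foreach (string file in Directory.EnumerateFiles(datasetFolder, "*.json", SearchOption.AllDirectories))
        {
            try
            {
                // Treat a null result the same as an exception.
                if (deserialise(File.ReadAllText(file)) == null)
                    throw new Exception("Deserialiser returned null.");
            }
            catch (Exception e)
            {
                Console.WriteLine($"FAILED: {file} - {e.Message}");
                allOk = false;
            }
        }
        return allOk;
    }
}
```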
Ok, sorry, misunderstood your point there :)
What kind of data are you talking about here, so I can understand your context a bit better?
-
Yes agreed, but my point is to allow doing things without reading all the wiki, just by looking at the code that is there already and replicating it. But of course a minimum level of knowledge is always necessary. So, I have a case in which my data is just a list of metadata: the information needed to retrieve (either download or load from disk) a file. It's much easier for me to type the information in by hand than to do it in Grasshopper, which goes through layers of manipulation and wrappers.
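For context, a minimal sketch of what that hand-typed metadata could look like in C# is below, using a hypothetical `FileMetadata` record and made-up entries; the real shape of the data will differ.

```csharp
// Minimal sketch of hand-typed file metadata. `FileMetadata` and the entries
// are made up for illustration; the point is simply that a short list like
// this is easy to write directly in C#.
using System.Collections.Generic;

public record FileMetadata(string Name, string Url, string LocalPath);

public static class SampleFileRegistry
{
    public static IReadOnlyList<FileMetadata> Files { get; } = new List<FileMetadata>
    {
        new("weather-2020", "https://example.org/data/weather-2020.csv", @"C:\Data\weather-2020.csv"),
        new("weather-2021", "https://example.org/data/weather-2021.csv", @"C:\Data\weather-2021.csv"),
    };
}
```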
-
I cannot agree more.
-
I get your point on ease of creation for the case of potentially small, surgical datasets, and for someone already in the code, @epignatelli, although arguably that also describes the serialised datasets I have created. The origin of simply storing the serialised JSON data for "datasets" was, for me, to make it trivial for anyone to turn any collection of BHoM objects into a "dataset" for repeated use. In the same way we use JSON files to pass around serialised analysis or BIM models, or anything else, you can memorialise those exact same instances of objects for repeated use by putting the same .json in the datasets folder. I also have historically been a fan of a clear division of the oM:
But as always, if the use cases are there, it is good to enable flexibility. It would be great to see in some sense what data you are dealing with and understand your workflow. Can you share?
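As a generic illustration of the "drop a .json in the datasets folder" workflow described above, the sketch below serialises a small collection of objects and writes the result into a datasets folder. System.Text.Json stands in for BHoM's own serialiser, and the `Section` type, folder and file names are assumptions.

```csharp
// Generic illustration of writing a collection of objects to a dataset .json.
// System.Text.Json stands in for BHoM's own serialiser; the Section type,
// folder and file names are assumptions.
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public record Section(string Name, double Height, double Width);

public static class DatasetWriter
{
    public static void WriteDataset(IEnumerable<Section> objects, string datasetFolder, string datasetName)
    {
        var options = new JsonSerializerOptions { WriteIndented = true };
        string json = JsonSerializer.Serialize(objects, options);

        // Ensure the target folder exists, then drop the .json alongside the other datasets.
        Directory.CreateDirectory(datasetFolder);
        File.WriteAllText(Path.Combine(datasetFolder, datasetName + ".json"), json);
    }
}
```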
-
Ported from BHoM/BHoM_Engine#1941
@IsakNaslundBh:
@epignatelli
@IsakNaslundBh:
@epignatelli