What is next? #12
Comments
I think it would be nice if we could separate features from the tests. Features are the things people vote for, and tests are only used to verify vendor support. A feature would typically have more than one test case.

Issues have voting capabilities. They are not ideal for feature voting, because issues are something that you can complete and close. I know there have been discussions on adding voting capabilities to other parts of GitLab, but I don't think that has been done yet. I'm thinking the wiki would be the best place.

Running the test cases locally with your own simulator, or with many different simulators/versions in a CI, and tracking the status is what VUnit does. The problem is how/where we run the commercial tools. GitHub's CI has the ability to let people run CI jobs on their local computers. I'm not sure if GitLab has a solution like that, but it would be a way to distribute the CI tasks to those having the required licenses while still having an automated solution.

Creating a bug report is just a matter of pointing to the failing CI run. Everything needed to recreate the bug is there.

I'm OK with changing the name as suggested.
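For illustration, a minimal VUnit run script for such compliance tests could look like the sketch below. The `tests/` directory layout and the `compliance` library name are assumptions, not this repository's actual structure; the simulator is selected through VUnit's standard `VUNIT_SIMULATOR` environment variable (e.g. `ghdl`, `modelsim`, `rivierapro`).

```python
# run.py -- minimal VUnit run script (sketch; paths and names are hypothetical)
from vunit import VUnit

# Parse command-line arguments; the simulator is picked via the
# VUNIT_SIMULATOR environment variable.
prj = VUnit.from_argv()
prj.add_vhdl_builtins()  # VUnit's VHDL helper libraries (recent VUnit versions)

# One library holding all compliance test cases; the tests/ layout is assumed.
lib = prj.add_library("compliance")
lib.add_source_files("tests/**/*.vhd", allow_empty=True)

# Compile and simulate everything; the exit code reflects pass/fail status,
# which a CI job (or a local user) can record and report.
prj.main()
```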
@LarsAsplund Reporting test errors to vendors is only a side issue - my main goal is to give individual users a means to express and tabulate interest in a feature, and then report it to the vendor. Tabulating it ourselves allows us to quantify interest and promote the feature to the community; reporting it to the vendors gives them a means to believe our numbers - if they are actually keeping any of the reports. Currently, from a user's perspective, a vendor receives a feature request, denies that it is actually a VHDL feature, and then deletes it.
Is tabulating requests from multiple people WRT the same issue something we can automate?
@JimLewis I was hoping that https://github.com/VHDL/Compliance-Tests would have an interface like https://github.com/SymbiFlow/sv-tests
@Nic30 That is ok; however, it misses tabulating the number of users who have noted that the feature does not work. This is important to do.

Vendors claim to be "market driven". However, they have people who are paid to transition the market to SystemVerilog - this is where they make more money. They make claims that their market is happy with VHDL-2008 and has not asked for anything in the new standard. How do you prove this is a bogus claim? How do you help your customers trust you when you claim this is a bogus claim?

In one presentation, a vendor claimed that OSVVM was not a methodology. They claimed there are more SystemVerilog engineers available - even in Europe. Considering that in the European FPGA market 30% use VHDL + OSVVM and only 20% use SystemVerilog + UVM, that is a fairly egregious claim. If we have numbers, we can refute their claims. Without numbers, we lose customers to their continuous FUD.
@JimLewis I sent the sv-tests link to show you the test reports and its GUI, which seems nice to me. The second thing which seems like a good idea to me is a test for each code construct, based on the formal syntax from the language standard. This is good because it tests the tool completely, and passing tests can be seen as some kind of reward. This covers the points you asked for:
This is not related to the VHDL/SV war, or to any vendor interest or claim. (However, I may be a vendor in your eyes, but I am just a PhD student.)
@Nic30 For me it is not a V vs. SV type of thing. How does the community (users and vendors) know whether a language addition is really relevant or not? Simple: provide them with a tally of how many people tested the feature and submitted a bug report for it. If they are not submitting bug reports, then they are not so interested in it.

OTOH, this sort of web-based bug report submission and counting is not a strength in my skill set, so I am hoping to find someone else who is willing to implement it. In trade, of course, I am contributing where I am stronger: the VHDL language and VHDL Verification Libraries. I can also make sure that the VHDL language committee produces use models for all new language features.
@Nic30 OTOH, for a commercial vendor, I expect them to support standards. Some are good. Others are playing a passive-aggressive game of tool support - sometimes making things up, sometimes outright lying.
I'd propose something less verbose:
Yes, as long as we use reactions to issues as the measuring mechanism. I think we can decide to take into account all reactions or some kinds only, and to count them for all the comments in each issue or for the first comment only. I believe issues can be reacted to even if closed. Hence, we can use the open/closed state to track whether we have implemented tests/examples for that feature in this repo, and the reactions to count the demand (a possible automation is sketched below).

However, if we want to track the demand for each feature and vendor, that might be harder to achieve. On the one hand, we would need a separate issue (or a separate comment in the same issue) for each vendor, similar to VHDL/Interfaces#27. On the other hand, we might not be allowed to do it.
This is currently the problem with this repo. The
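As a rough illustration of how the counting could be automated, the sketch below queries the GitHub REST API for all issues (open and closed) and tallies the "+1" reactions on each issue body (the "first comment only" option mentioned above). Only the repository slug is taken from this project; the assumption that each feature maps to one issue is hypothetical, and unauthenticated requests are rate-limited.

```python
# count_votes.py -- tally thumbs-up reactions per issue (sketch).
import requests

REPO = "VHDL/Compliance-Tests"
URL = f"https://api.github.com/repos/{REPO}/issues"

def feature_votes():
    """Return {issue title: +1 reaction count}, for open and closed issues alike."""
    votes = {}
    page = 1
    while True:
        resp = requests.get(
            URL,
            params={"state": "all", "per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        issues = resp.json()
        if not issues:
            break
        for issue in issues:
            if "pull_request" in issue:  # the issues endpoint also lists PRs
                continue
            votes[issue["title"]] = issue.get("reactions", {}).get("+1", 0)
        page += 1
    return votes

if __name__ == "__main__":
    # Print features sorted by demand, highest first.
    for title, count in sorted(feature_votes().items(), key=lambda kv: -kv[1]):
        print(f"{count:4d}  {title}")
```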
This is a very old issue, but I'd like to resurrect the conversation around it, given that there are some outstanding pull requests which help alleviate some of the deficiencies listed previously, specifically:
Adding the VHDL-2019 tests should provide the

I like the table similar to sv-tests, and I understand there may be license issues with posting something like that for commercial simulators, but could the overall test count be posted without issue for the commercial ones, instead of broken out? For example, if we greyed out the test results individually but just said a tool received a score of X/Y, would you be comfortable with that?

I am willing to do more work on making this better and trying to drive better support. So, to reiterate @JimLewis's question: after those pull requests are merged in, what is next in 2023?
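To make the "score of X/Y" idea concrete, here is a tiny sketch: detailed per-test results for a commercial simulator are collapsed into a single aggregate figure before publishing. The result format and test names are invented for illustration.

```python
# Sketch: collapse per-test results into an anonymised "X/Y" score.
from typing import Dict

def summarise(results: Dict[str, bool]) -> str:
    """Return 'passed/total' without revealing which individual tests passed."""
    passed = sum(1 for ok in results.values() if ok)
    return f"{passed}/{len(results)}"

# Example: a commercial tool's detailed results stay private;
# only the aggregate score would be published.
example = {"cond_expr_tc1": True, "interfaces_tc1": False, "gc_tc1": True}
print(summarise(example))  # -> 2/3
```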
@bpadalino merged #19 and #21 and updated #22. I'm unsure about #20, since we might want to discuss what to do in such cases (tools implementing features differently). With regard to #13, I didn't read the latest updates. I'll go through them now.
The
Fair enough. Let's wait until we merge #13. Then, we can move
There are several strategies we could use to work around the issue. For instance, we could have a table with columns G, N, Q, M, R and A (and C, S or X in the future). Then, we would add a large warning saying: "the results in this table are computed from result lists provided by users; we don't run any tests on non-free tools and we don't check the source of the lists provided by users". IMO we should not waste any time on that. We should not play any game based on hiding data/facts for dubious

Also, it's 2023; we have one or two years to do the next revision of the standard. There is still much work to be done to make the LRM and the IEEE libraries open-source friendly. The libraries are open source, but not as friendly as they should be; and the LRM is not open source yet.
The problem, as I see it, is this: if you ask a vendor - for example Mentor/Siemens' Ray Salemi, FPGA Solutions Manager for Mentor's Questa - the question, "Any word on Mentor supporting the new features in VHDL-2019?", the response we get is: "We tend to add features as customers start using them. Do you have any that stand out?"

We need a means to document what "stands out" to the entire user community. I think what you have here is a start for doing this.
My vision is:

- We need a means for the user community to demonstrate their support for features, and to give users confidence that they are not the only one expressing interest in a feature.
- We need a set of test cases that test feature capability and that can be added to by users who find bugs that were not illuminated by the initial tests. That sounds like where this group is headed anyway.
- Next, we need a way for people to express interest in a given test case, demonstrating user support, but with only one vote per user/GitHub account.
- We need scripts for running the tests on different tools.
- We need individual users to be able to run said tests on their local platform.
- We need a means for the user to indicate whether a test has passed or failed on their platform (a possible report format is sketched after this list).
- When a test passes, we need a mechanism so a user can indicate this and we can, at a minimum, add it to our internal tracking.
- When a test case fails, we need a mechanism to indicate it failed, to internally track which tool the test failed for, and a means for an individual user to produce a vendor product support request using their user name and have the information for the report automatically collected. In addition to submitting the issue to tech support, it would also be nice to submit the issue to the vendor's discussion boards to help generate additional support for the feature - or to add to an existing discussion.
- We need some level of tabulated reporting regarding the interest level and support of a particular feature.
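As a hypothetical illustration of the pass/fail reporting point in the list above, a local run could emit a small machine-readable report that a user attaches to a tracking issue (or that a script collects automatically). Every field name and the output file name below are assumptions, not an existing format in this repository.

```python
# report_results.py -- sketch of a user-submitted result report.
import json
import platform
from datetime import datetime, timezone

def make_report(tool: str, tool_version: str, results: dict) -> dict:
    """Bundle local pass/fail results with just enough context to track them."""
    return {
        "tool": tool,                      # e.g. "ghdl", "nvc", or a commercial name
        "tool_version": tool_version,
        "platform": platform.platform(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "results": results,                # {test name: "pass" | "fail"}
    }

if __name__ == "__main__":
    # Example data; test names and versions are invented.
    report = make_report(
        tool="ghdl",
        tool_version="4.0.0",
        results={"vhdl2019_interfaces_tc1": "pass", "vhdl2019_cond_analysis_tc2": "fail"},
    )
    with open("compliance_report.json", "w", encoding="utf-8") as fh:
        json.dump(report, fh, indent=2)
    print("Wrote compliance_report.json -- attach it to the tracking issue.")
```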
My only concern is what a vendor considers to be benchmarking; I always thought it was performance. If it does not include language feature support, then it would be nice to have each tool listed in a matrix, indicating the number of passing and failing tests for a particular feature.
Even if we cannot report against a particular vendor, we can tabulate:

Under support of code, it would also be nice to have an extended listing of why this feature is important to you and/or your project team.
Even without listing which vendor does or does not support a feature, one thing we can clearly demonstrate to both users and the vendors is the user support for the implementation of a feature.
Given this objective, I think we need a sexier name than "compliance tests" - something like VHDL User Alliance Language Support Tests.