[riscv-arch-test] measure execution time of each test #119
Conversation
This is a complicated one 😄 In general, there are 5 supported "test libraries" right now, and each library has its own real-machine run time for executing the complete library. The individual tests vary in complexity and thus in the resulting execution time. The current setup uses the worst-case execution time for all tests in one library; no question, this is not efficient at all. So if you say a test takes 17 minutes, then I think this refers to the execution of all tests from one library rather than a single test.

Also, I would love to have a better approach here. It would be nice if the program being executed could actually terminate the simulation by itself. I was thinking about some memory-mapped component in the testbench that terminates the simulation when the software writes a specific pattern to it.
No. I really mean that `jal-01` takes 17 minutes. That's why I think it is a problem. See https://github.com/umarcor/neorv32/runs/3041314676?check_suite_focus=true#step:6:449. There are ~85 tests, and all of them together take 52 min; excluding the 17 min of `I/jal-01`, the remaining ~84 tests take (52 − 17) min = 2100 s, i.e. about 25 seconds each on average. I found that because time is a serious problem when a container is used. See https://github.com/umarcor/neorv32/actions/runs/1020590959.
The point is that the hardware used for running `I/jal-01` is exactly the same as for the other tests in the `I` library.

Anyway, I propose we merge this, so I can rebase the containers branch on top of it. In the end, this PR is harmless, as it is only meant to provide more information to us.
I think this is something to be evaluated after we are done with the current set of reorganisation PRs. I have a branch on top of #117 for creating a …
It does. That is used by VUnit internally.
Before VHDL-2008, there was no standard procedure for terminating a simulation. Therefore, an assertion or report of severity error is the de facto standard procedure for terminating VHDL-93 testbenches. All simulators are (or should be) aware of that and can handle it. In fact, all of the current tests already terminate this way and return a non-zero exit code.
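For reference, here is a minimal toy testbench (not taken from this repository) contrasting the two termination styles:

```vhdl
entity tb_stop is
end entity tb_stop;

architecture sim of tb_stop is
begin
  process
  begin
    wait for 100 ns; -- placeholder for the actual test sequence

    -- VHDL-93 style: no standard stop procedure exists, so a failing
    -- assertion is the de-facto way to end the run; the simulator
    -- aborts and returns a non-zero exit code:
    --   assert false report "simulation done" severity failure;

    -- VHDL-2008 style: the standard std.env package provides a clean
    -- termination procedure (GHDL needs --std=08 for this):
    std.env.finish;
    wait; -- never reached; keeps the process well-formed
  end process;
end architecture sim;
```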
You are absolutely right, the hardware is always the same. So why is there such a great variety in the run time?! Some tests take 24 s, others take 52 s, and `I/jal-01` even takes 17 minutes. Maybe we should disable the i-cache for the simulations here. At least this would greatly reduce the switching activity - especially for the `jal` test, which obviously does a lot of jumps. 🤔
I will check the thing with the i-cache and then we can merge this.
Ok, so we could implement a mechanism for the CPU to terminate the simulation. Let's discuss/implement this in a follow-up PR/issue.
Disabling the i-cache makes everything slower. I mean, this is obvious somehow... 😄 So it is not an issue with the "amount of switching activity GHDL has to simulate". Anyway, we can elaborate on that in a later issue.
This PR adds `time -v` to the architecture test simulation calls, in order to measure the execution time and resource usage. Most of the tests need 20-60 s, which is reasonable. However, `I/jal-01` needs 17 minutes! @stnolting, is that expected? Might that be caused by some bug?