TestResult should not be part of a set #1

Mark Wilkinson opened this issue:

A test result should be completely agnostic of the rubric of which it is a part. Different assessment tools will assemble different tests, so there's no way for the test to know what rubric it is a member of. The ResultSet membership property is sufficient to manage this piece of information.
The rubric is currently independent of the test result. The test result set may point to the rubric that was run in order to produce a set of test results.

A test result may be returned with other test results in a set (though not necessarily). Each result points to the test specification that produced it, and that's it.

A test result set is not mandatory; it's a convenience to bundle test results without having to repeat the same metadata again and again.
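To make that shape concrete, here is a minimal Turtle sketch. The ex: namespace and every term in it (fromSpecification, member, basedOnRubric, and so on) are hypothetical placeholders rather than the vocabulary's actual names; the point is only the direction of the links: each result points to its test specification, and the optional set is what points to the rubric.

```turtle
@prefix ex:  <https://example.org/vocab#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Each result knows only the test specification that produced it.
ex:result-001 a ex:TestResult ;
    ex:fromSpecification ex:spec-metadata-persistence ;  # hypothetical property
    ex:value "pass" .

ex:result-002 a ex:TestResult ;
    ex:fromSpecification ex:spec-identifier-uniqueness ;
    ex:value "fail" .

# The optional set bundles the results and carries the shared
# context once: the rubric that was run, and when.
ex:set-42 a ex:TestResultSet ;
    ex:member ex:result-001 , ex:result-002 ;
    ex:basedOnRubric ex:rubric-fair-basic ;
    ex:generatedAtTime "2024-03-19T08:44:00Z"^^xsd:dateTime .
```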
This is going to require that we inject additional metadata into the output of a test... which is fine, but... feels wrong.
Perhaps we need to color-code the schema diagram, to show which piece of software is generating which classes/properties. In my mind, we have a test, a "workflow engine" that is executing a set of tests (what we call an "assessment"), and the assessment is based on a rubric (which is presumably independent of the workflow engine, but used by the workflow engine).
In the end, what I think of is the response that I would receive as a user/developer to do something with. In terms of granularity, it is true that you may have a workflow engine that runs individual tests. But from a usability point of view, you can simplify the output by stating "I assessed your resource, and ran 20 tests calling this API and using this tool." If you want to return the granular provenance per test, that is also possible, but it may be repetitive. I think the current modeling supports both: if you don't want to return a result set, then you don't. I agree that you have a test specification, a system that runs it, and the test result. I don't understand the part where you would be injecting additional metadata into the output of a test.
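As a sketch of those two granularities, using the same hypothetical ex: terms as above: with a result set, the shared provenance is stated once on the set; without one, each standalone result has to repeat it.

```turtle
@prefix ex: <https://example.org/vocab#> .

# Bundled output: "I assessed your resource, ran 20 tests with this tool."
# The shared provenance lives on the set, and the results stay lean.
ex:assessment-7 a ex:TestResultSet ;
    ex:performedBy   ex:my-workflow-engine ;   # hypothetical provenance property
    ex:basedOnRubric ex:rubric-fair-basic ;
    ex:member        ex:r1 , ex:r2 .           # ... and 18 more

# Standalone output: no set, so each result carries its own provenance.
ex:r1 a ex:TestResult ;
    ex:fromSpecification ex:spec-identifier-check ;
    ex:performedBy       ex:my-workflow-engine ;
    ex:value "pass" .
```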
I will try to add two examples.
First example added. Also simplified the figure.
Closing this issue until someone complains.