
TestResult should not be part of a set #1

Closed
markwilkinson opened this issue Mar 19, 2024 · 7 comments

Comments

@markwilkinson
Contributor

A test result should be completely agnostic of the rubric of which it is a part. Different assessment tools will assemble different sets of tests, so there is no way for a test to know which rubric it is a member of. The ResultSet membership property is sufficient to manage this piece of information.

@dgarijo
Collaborator

dgarijo commented Mar 19, 2024 via email

@markwilkinson
Contributor Author

This is going to require that we inject additional metadata into the output of a test... which is fine, but... feels wrong.

@markwilkinson
Contributor Author

Perhaps we need to color-code the schema diagram to show which piece of software generates which classes/properties.

In my mind, we have a test, a "workflow engine" that is executing a set of tests (what we call an "assessment"), and the assessment is based on a rubric (which is presumably independent of the workflow engine, but used by the workflow engine).
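The separation of concerns described here can be sketched in code. This is a minimal, hypothetical illustration only: every class and field name below is an assumption for discussion, not a term from the actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the entities discussed above. None of these
# names are taken from the real schema; they only illustrate the shape
# of the separation being proposed.

@dataclass
class Rubric:
    """A curated list of test identifiers, independent of any engine."""
    identifier: str
    test_ids: list[str] = field(default_factory=list)

@dataclass
class TestResult:
    """The output of one test; it knows nothing about rubrics or sets."""
    test_id: str
    outcome: str  # e.g. "pass" / "fail" / "indeterminate"

@dataclass
class ResultSet:
    """Groups the results of one assessment and records which rubric was used."""
    rubric: Rubric
    results: list[TestResult] = field(default_factory=list)

class WorkflowEngine:
    """Executes the tests named by a rubric and assembles a ResultSet."""

    def assess(self, rubric: Rubric) -> ResultSet:
        results = [self.run_test(t) for t in rubric.test_ids]
        return ResultSet(rubric=rubric, results=results)

    def run_test(self, test_id: str) -> TestResult:
        # Placeholder: a real engine would dispatch to the actual test.
        return TestResult(test_id=test_id, outcome="pass")
```

The point of the sketch is that `TestResult` carries no back-link to the rubric; only the `ResultSet` holds that membership, which is the premise of this issue.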

@dgarijo
Collaborator

dgarijo commented Mar 19, 2024

In the end, what I think of is the response that I would receive as a user/developer to do something with.
For example, I run F-UJI, and I get a set of test results. I run FOOPS! and I get a set of test results. I run the evaluator with a rubric (if I understood correctly), and I get a set of test results.

In terms of granularity, it is true that you may have a workflow engine that runs individual tests. But from a usability point of view, you can simplify the output by stating "I assessed your resource, and ran 20 tests calling this API and using this tool". If you want to return the granular provenance per test, that is also possible, but it may be repetitive. I think the current modeling supports both: if you don't want to return a ResultSet, then you don't.

I agree that you have a test specification, a system that runs it and the test result. I don't understand the part where you would be injecting additional metadata into the output of a test.

@dgarijo
Collaborator

dgarijo commented Mar 19, 2024

I will try to add two examples:

  • An example where you get an individual test result
  • An example where two or three test results are returned in a set

Hopefully this will aid the discussion.
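The two proposed examples might look something like the following. This is a hypothetical sketch in JSON-like Python dicts; the property names (`@type`, `test`, `rubric`, `results`, `outcome`) and the `example:` identifiers are placeholders, not terms from the actual vocabulary.

```python
import json

# 1. A standalone, individual test result (no enclosing set).
#    All names here are illustrative, not real schema terms.
individual_result = {
    "@type": "TestResult",
    "test": "example:metadata-is-machine-readable",
    "outcome": "pass",
}

# 2. Several test results grouped in a ResultSet. Only the set
#    points at the rubric; the individual results do not.
result_set = {
    "@type": "ResultSet",
    "rubric": "example:fair-rubric-v1",
    "results": [
        {"@type": "TestResult", "test": "example:has-license", "outcome": "pass"},
        {"@type": "TestResult", "test": "example:has-identifier", "outcome": "fail"},
    ],
}

print(json.dumps(result_set, indent=2))
```

Either shape can be returned on its own, which is the flexibility the current modeling is claimed to support.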

@dgarijo
Collaborator

dgarijo commented Mar 20, 2024

First example added. Also simplified the figure.

@dgarijo
Collaborator

dgarijo commented Oct 8, 2024

Closing this issue until someone complains.
