Fix crash with old resultset.json #824
Conversation
Ensure a different name so merging doesn't just throw it out

`ResultMerger#store_result`:

```ruby
clear_resultset
new_set = resultset
command_name, data = result.to_hash.first
```

it might throw out the result, it might not 😱
* make sure that the time being current is also tested
* adjust the file paths so they actually end up getting merged rather than being ignored
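The "different name" part matters because stored results are keyed by command name; a minimal sketch of making the name unique (the helper and the PID suffix are illustrative assumptions, not the PR's actual code):

```ruby
# Hypothetical sketch: resultsets are keyed by command name ("RSpec",
# "Cucumber", ...), so a result stored under an existing name replaces the
# old entry. A unique suffix keeps the old, bad data around so the merge
# path is always exercised against it.
def unique_command_name(base)
  "#{base}-#{Process.pid}"
end

unique_command_name("RSpec") # e.g. "RSpec-12345"
```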
We do a light check on every result to see if it looks like it conforms to our file format. If not, we ignore it and throw it out. The initial plan was to just catch an error and return a known good result, but sadly, since we don't know what order the results come in, that doesn't really work (as the first could be the "bad" result). We might be able to change this if we always knew we started with a known good result. Checking feels kinda clunky, but I honestly don't know of a better way right now.
```ruby
def matches_current_format?(result)
  # I so wish I could already use pattern matching
  key, data = result.first
```
This whole checking (and the other checks) feels kinda clunky. I didn't know of a better way to handle it right now though :(
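For context, a hedged sketch of what a complete version of such a check might look like (the exact keys and validations are assumptions, not the PR's actual implementation):

```ruby
# Hypothetical sketch: a current-format resultset entry looks like
# { "RSpec" => { "coverage" => { "file.rb" => { "lines" => [...] } }, "timestamp" => ... } },
# while older entries map files directly to line arrays.
def matches_current_format?(result)
  key, data = result.first
  key.is_a?(String) &&
    data.is_a?(Hash) &&
    data["coverage"].is_a?(Hash) &&
    data["coverage"].values.all? { |file| file.is_a?(Hash) && file.key?("lines") }
end

good = { "RSpec" => { "coverage" => { "a.rb" => { "lines" => [1, nil, 0] } }, "timestamp" => 1 } }
old  = { "RSpec" => { "a.rb" => [1, nil, 0] } } # old format: no "lines" nesting
matches_current_format?(good) # => true
matches_current_format?(old)  # => false
```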
@deivid-rodriguez this is ready-ish from my side, but the checking feels pretty clunky. If you manage to get to it, I'd appreciate a look (even if I merge it beforehand, because I wanna get the release out :))
Merging to build upon it and eventually release this evening; if someone has a better idea on how to do this, that'd still be mightily appreciated :)
I'm sorry @PragTob, I couldn't find time to look at this.

@deivid-rodriguez no worries, I also didn't give it a lot of time due to wanting to release today :) If you still find anything here, I'm happy about a comment and I'll fix it up later :D
```diff
@@ -1,5 +1,8 @@
 # frozen_string_literal: true

+# While you're here, handy tip for debugging: try to attach @announce-output
+# @announce-command to the scenarios/features to see what's going on
+
```
😍
I think it would be better if we "auto-healed" the results automatically, without giving any warnings. That said, it doesn't seem like a big deal, but with a warning the user might be confused and think she is doing something wrong when she isn't.
Also, is the issue you mention in the PR body about results being discarded what this PR is fixing?
@deivid-rodriguez thanks for checking in! Yeah, figured that/knew that I knew it from somewhere. Wanted to revisit #687. As for auto-healing, I don't think it's needed. The resultset is from an old run, so we wouldn't want to merge it anyhow. So ignoring it is fine; I wanted the message to make sure that the problem goes away, but maybe I can formulate it better? Also, if we change the format again, auto-healing becomes more and more involved for (imo) no benefit. :)
I read the code a bit, and it seems to me that the only change in the format is a new nesting level with ":lines" or ":branches" keys. The way I would've approached it is to add a ":lines" nesting to old reports and then merge them normally. I don't see a reason why we shouldn't merge them if they're within the "time window" for merging that the user has set.
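A minimal sketch of that suggested auto-heal (the helper name is made up; this is not what the PR does):

```ruby
# Hypothetical: upgrade an old coverage hash, where each file maps
# directly to its lines array, to the current format with a "lines" nesting.
def add_lines_nesting(old_coverage)
  old_coverage.transform_values { |lines| { "lines" => lines } }
end

old = { "lib/foo.rb" => [1, nil, 0], "lib/bar.rb" => [2, 2] }
add_lines_nesting(old)
# => { "lib/foo.rb" => { "lines" => [1, nil, 0] }, "lib/bar.rb" => { "lines" => [2, 2] } }
```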
@deivid-rodriguez imo the time window is only there because we have no good mechanism for saying when a new test run was started (because after rspec is finished, cucumber might run, and we only ever start new processes). However, in practice what we want is to only ever merge reports from the same test run. So, I don't think that that's the right behaviour. You are right that, to achieve it right now, this is the only thing we'd have to do. However, if we changed the format again (which we well might), that'd add different "fixes" which could grow more complex. Plus, I really don't believe that we should merge them, as outlined above. If we could really make sure that we're the first/last test process that's relevant, I'd even be in favor of deleting the cache file; alas, I can't think of how we could determine that.
That's not how I understood auto-merging. The way I understood it is: consider current coverage results accurate during the specified time window, and merge them with subsequent reports during that time window instead of discarding them. I find this useful, for example, for development workflows where you only run single tests or parts of a test suite, because they refer to the part of the program that I'm changing and running the whole thing is too heavy. With auto-merging, I have continuous feedback about how my progress affects global coverage results. Without it, I lose all the other information.
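For reference, the time-window rule under discussion amounts to something like this (a sketch; the 600-second default and the names here are assumptions, not SimpleCov's actual option names):

```ruby
# Hypothetical sketch of the merge-timeout rule: a stored result is only
# considered for merging if its timestamp falls within the window.
def within_merge_window?(result_timestamp, merge_timeout: 600, now: Time.now.to_i)
  result_timestamp >= now - merge_timeout
end

now = 10_000
within_merge_window?(9_700, now: now) # => true  (300 s old, inside the window)
within_merge_window?(9_000, now: now) # => false (1000 s old, outside)
```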
@deivid-rodriguez that's an interesting view of it that I hadn't considered before 👌 I see merging just as a vehicle to be able to combine the results of multiple test suites (rspec, test unit, cucumber...) and parallel test execution. With your use case, might it not be counterproductive? I.e. if you make coverage worse, you won't notice, because it merges with the old results; the result you get would be more like the total of what you covered across all your test runs. Moreover, I think it might lead to other problems: if you also changed the code so that lines move around, different lines may be marked as covered just because their position in the file changed.
The lines covered vs. total lines will change accordingly even if the percentage is similar/the same. Anyways, this is how it currently works, so I assume people are tweaking the available options.
Simplecov shows warnings when this happens, that's good. Probably a good time for refreshing the whole thing :) Anyways, I'm happy with the way you did it too, I just wanted to comment with the way I would've done it. The warnings were a bit annoying in my test suite after the upgrade, because I have many, many different test suites, which all try to merge with global results and print the warning. But nothing that
Thanks for all your recent work by the way, it's awesome!

Nah, thanks for the feedback! 👌 You're right, that's how it currently works. The warning should only show up once though, right? 🤔

Right, if you discard the invalid resultset the first time, there should be no warnings after that. Mmmm, there's maybe a bug somewhere 🤷‍♂️, I'll see if I can reproduce the multi-warning situation.
Yeah, that'd immensely suck if I messed that up... let me check with our own scenario real quick... F*ck... I just reproduced it; we still always seem to write it back into the cache file 😢
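A sketch of the kind of fix this implies (the names and the exact filter here are assumptions): drop non-conforming entries before the merged set is written back, so stale data is discarded once instead of being re-persisted, and re-warned about, on every run:

```ruby
# Hypothetical sketch: keep only entries that look like the current format
# before serializing the merged resultset back to .resultset.json, so an
# old-format entry can't survive into the next run's cache file.
def prune_stale_results(resultset)
  resultset.select { |_command, data| data.is_a?(Hash) && data.key?("coverage") }
end

mixed = {
  "RSpec"    => { "coverage" => { "a.rb" => { "lines" => [1] } }, "timestamp" => 1 },
  "OldSuite" => { "a.rb" => [1, 0] } # old-format entry that keeps triggering the warning
}
prune_stale_results(mixed).keys # => ["RSpec"]
```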
@deivid-rodriguez welp... I'll write a feature, work on it and release beta3, I guess. It only affects people whose old resultset is recent enough relative to their merge timeout, but yeah...
`ResultMerger#store_result`:

it might throw out the result, it might not 😱

Hence, I adjusted the command name to be unique (for now) so that we're always confronted with the old bad data.

It's expected that this fails now, as the fix is a TODO atm. I just wanted to put it here so that I don't just have to update #820 all the time.