Fix oversight in type_use.rs
Closes #2053
marijnh committed Apr 18, 2012
1 parent 73ea690 commit fda7bb6
Showing 1 changed file with 15 additions and 0 deletions.
15 changes: 15 additions & 0 deletions src/rustc/middle/trans/type_use.rs
@@ -180,6 +180,21 @@ fn mark_for_expr(cx: ctx, e: @expr) {
      // after the chosen field
      let base_ty = ty::node_id_to_type(cx.ccx.tcx, base.id);
      type_needs(cx, use_repr, ty::type_autoderef(cx.ccx.tcx, base_ty));

      option::iter(cx.ccx.maps.method_map.find(e.id)) {|mth|
          alt mth {
            typeck::method_static(did) {
              option::iter(cx.ccx.tcx.node_type_substs.find(e.id)) {|ts|
                  vec::iter2(type_uses_for(cx.ccx, did, ts.len()), ts)
                      {|uses, subst| type_needs(cx, uses, subst)}
              }
            }
            typeck::method_param(_, _, param, _) {
              cx.uses[param] |= use_tydesc;
            }
            typeck::method_iface(_, _) {}
          }
      }
    }
    expr_log(_, _, val) {
      node_type_needs(cx, use_tydesc, val.id);
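The added block teaches the type_use pass (which decides what type information each monomorphized instance actually needs) about method-call expressions: a statically resolved method (method_static) propagates the uses of the call's type substitutions, a method reached through a bound on a type parameter (method_param) marks that parameter's tydesc as needed, and iface dispatch (method_iface) needs nothing extra. As a rough illustration of the three call shapes involved, here is a minimal sketch in modern Rust syntax (the 2012 dialect used iface, impl of, and alt); the trait and function names are made up for this example, not taken from issue #2053.

trait Speak {
    fn speak(&self) -> String;
}

struct Dog;

impl Speak for Dog {
    fn speak(&self) -> String { "woof".to_string() }
}

// Roughly the method_param shape: the call goes through a bound on the type
// parameter, so the compiler has to keep type information for T around.
fn describe<T: Speak>(x: T) -> String {
    x.speak()
}

// Roughly the method_iface shape: dynamic dispatch through a trait object,
// which needs no extra per-instantiation type information.
fn describe_dyn(x: &dyn Speak) -> String {
    x.speak()
}

fn main() {
    // Roughly the method_static shape: the call resolves directly to Dog's
    // impl of speak.
    let d = Dog;
    println!("{}", d.speak());

    println!("{}", describe(Dog));
    println!("{}", describe_dyn(&Dog));
}

A run-pass regression test along these lines is roughly what the thread below is asking for.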

14 comments on commit fda7bb6

@brson
Contributor

@brson brson commented on fda7bb6 Apr 18, 2012

Please add tests

@marijnh
Contributor Author

Tests have a cost too. Our test suite already takes a terribly long time to run. Adding another test for every bug, even one that's unlikely to ever surface again, is how it got so bloated.

@nikomatsakis
Contributor

I pretty strongly disagree. Reliability for a compiler is priority #1. If we need to break up our test base into "random issues that are unlikely to reoccur", which do not need to be run constantly but do get run regularly, that's fine, but we should always have a test.

@nikomatsakis
Contributor

Also: we're unlikely to remove that chunk of code, but it's not so unlikely that we might rewrite that whole system (or any system in the compiler). In that case, the weird tricky forgotten cases can easily resurface.

@brson
Contributor

@brson brson commented on fda7bb6 Apr 18, 2012

Clearly our opinions are very different on this subject. From my perspective the issue is not fixed without a test verifying it.

@catamorphism
Contributor

I agree with Brian and Niko. The work of fixing a bug is wasted if any change (or rewrite) could cause it to reoccur. If the size of our test suite becomes a problem, there are things we could do about it (like requiring a reduced test suite for checkins, and running the full test suite only on the buildbots). Also, compiling the compiler is the bottleneck in a build right now, not running the tests (at least on my machine).

@BrendanEich

Across the whole project, Mozilla requires regression tests with fix patches.

We used to not require regression tests with fix patches. That led to a very bad place, and we reformed (sayrer helped a lot); tests are now required as part of code review. The previous mobile Firefox iteration (the XUL front end) went test-free for too long, and it materially hurt that project (not the only cause of failure, but a big one in my view).

/be

@marijnh
Contributor Author

There are a near-infinite number of other potential bugs even in this small module (type_use). The fact that I made this particular mistake doesn't mean the others won't surface in a future change or rewrite. An exhaustive test suite, for a system the size and complexity of our compiler, is an impossible thing. I'm happy to write tests for things that feel fragile or likely to be re-broken, but this dogma of always adding a test for every fixed bug is not something I am comfortable with.

@catamorphism
Contributor

@marijnh - no one is asking for an exhaustive test suite, and I think we probably all understand why that's impossible. In my opinion, if I can reason about the code I'm fixing well enough to fix the bug, I should be able to reason about it well enough to write a test case that differentiates between the old version and the new version. It doesn't seem like that should be a lot of added effort, compared to the effort of isolating bugs.

@dherman
Contributor

It would be fine to specify a subset for smoke tests, if the speed of running the test suite is becoming a problem. Also, we can beef up our test infrastructure -- I'm working with @brson and @graydon on getting more horsepower behind our bots. But having a thorough test suite is definitely necessary.

Dave

@graydon
Contributor

With valgrind turned off, the test-running part of 'make check' runs in 35 seconds of wall time on my machine (8 way i7). The run-pass portion (which is where most regression tests wind up) takes 14s. If you need a smoke test, run that.

Our test suite is not bloated by any measure. It's still on the small side, though it's improving, mostly thanks to @brson's efforts. We still have no idea about coverage, our fuzzer is rarely run, and I regularly manage to ICE rustc. I'd support a "minimum one test per fix" rule, either now or when we transition to mandatory code review (which we will, eventually).

For comparison / reference sake:

suite          # tests    # loc
gcc            ~26,000    ~800,000
spidermonkey    ~3,700    ~400,000
LLVM            ~5,200    ~200,000
clang           ~4,500    ~100,000
ghc             ~3,300    ~110,000
go                ~750     ~66,000
rustc           ~1,100     ~18,000

@pcwalton
Contributor

I definitely sympathize with Marijn in that I do think that blanket "every patch must have a test" rules lead to pain. My personal example is that we couldn't land even style tweaks in the Web Console in Firefox without a test, which led to tests that measured dimensions of UI elements from script and broke on systems with different font sizes and whatnot.

That said, language tests are easy to write and we (and I especially) should be writing more of them. What has saved us so far is that the Rust compiler itself is a giant test case, but clearly we need more than just that. We should definitely strive for 100% function coverage, and preferably 100% branch coverage too.

I think the right way to address the problem of long-running tests is to have an inbound branch that's regularly patrolled by a sheriff and merged. Any developer can feel free to push to inbound without running tests (although not breaking the build would be nice), and you're equally free to back out others' changes if they're broken. Every so often (say, daily), the sheriff merges the latest green revision to master. This would give us the safety of tests without the burden of having to run them all the time to keep master clean.

@BrendanEich

No one should have to write a fragile pixel-measuring test for an aesthetic change. The topic was fixes, i.e. correctness patches. No need to work hard on an airtight definition, and if a fix can't be tested easily without non-determinism or other fragility problems, skip it -- but I would like to learn more about such fixes.

/be

@pcwalton
Contributor

Agreed.
