
Optimize unresolved references in semantic #4169

Closed
overlookmotel opened this issue Jul 10, 2024 · 3 comments · Fixed by #4213
Assignees
Labels
A-semantic Area - Semantic C-performance Category - Solution not expected to change functional behavior, only performance

Comments

@overlookmotel
Contributor

#4162 boosted perf of semantic by turning unresolved_references into a stack.

I think we can potentially do even better. From #4162 (comment):

What I had in mind is that you never call unresolved_references.pop(). Instead, hold on to the hash map and re-use it. That way we avoid generating (and allocating) new hash map when entering a scope most of the time.

unresolved_references is Vec<UnresolvedReferences>, indexed by scope depth.

e.g. with this input:

function foo() {}
function bar() {}
  • Enter top level scope:
    • Depth is 0.
    • Push a UnresolvedReferences hash map to unresolved_references.
    • Current hashmap is unresolved_references[0].
  • Enter scope for foo:
    • Increment depth to 1.
    • Push another UnresolvedReferences hash map to unresolved_references.
    • Current hashmap is unresolved_references[1].
  • Exit scope for foo:
    • Clear current hashmap (unresolved_references[1]).
    • Decrement depth to 0.
    • Do NOT pop from unresolved_references.
    • Current hashmap is now unresolved_references[0].
  • Enter scope for bar:
    • Increment depth to 1.
    • Do NOT push to unresolved_references.
  • Current hashmap is unresolved_references[1] (i.e. reuse the same hash map that was used for foo).
  • Exit scope for bar: same as for foo
  • Exit top level scope: same

I think this would be a further perf boost due to reducing allocations, and reusing allocations which are already warm in the cache.
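The scheme above can be sketched roughly as follows. This is an illustrative stand-in, not oxc's actual code: `Builder`, `enter_scope`, `exit_scope`, and `current` are hypothetical names, and the real implementation would also transfer still-unresolved entries up to the parent scope's map before clearing.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for oxc's type: identifier name -> reference IDs.
type UnresolvedReferences = HashMap<String, Vec<u32>>;

struct Builder {
    // Indexed by scope depth. Never popped; maps are cleared and reused.
    unresolved_references: Vec<UnresolvedReferences>,
    depth: usize,
}

impl Builder {
    fn new() -> Self {
        Self {
            unresolved_references: vec![UnresolvedReferences::new()],
            depth: 0,
        }
    }

    fn enter_scope(&mut self) {
        self.depth += 1;
        // Allocate a new map only the first time this depth is reached;
        // on later visits the existing (cleared) map is reused.
        if self.depth == self.unresolved_references.len() {
            self.unresolved_references.push(UnresolvedReferences::new());
        }
    }

    fn exit_scope(&mut self) {
        // Clear the current map but do NOT pop it, so its allocation
        // (and cache-warm buckets) survive for the next sibling scope.
        self.unresolved_references[self.depth].clear();
        self.depth -= 1;
    }

    fn current(&mut self) -> &mut UnresolvedReferences {
        &mut self.unresolved_references[self.depth]
    }
}

fn main() {
    let mut b = Builder::new();
    b.enter_scope(); // enter scope for `foo`
    b.current().insert("x".to_string(), vec![0]);
    b.exit_scope();
    b.enter_scope(); // enter scope for `bar`: reuses the map allocated for `foo`
    assert!(b.current().is_empty());
    // Two scopes at depth 1 were visited, but only one map was allocated.
    assert_eq!(b.unresolved_references.len(), 2);
}
```

Entering `bar`'s scope performs no allocation at all: the map allocated for `foo` is still sitting at index 1, empty but with its capacity intact.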

cc @lucab (though no obligation on you to do this)

@overlookmotel overlookmotel added A-semantic Area - Semantic C-performance Category - Solution not expected to change functional behavior, only performance labels Jul 10, 2024
@Boshen
Member

Boshen commented Jul 11, 2024

Hashing into UnresolvedReferences remains the slower part of the pipeline. Are there any other improvements that can be made before storing the hash value inline?

(two attached images)

(I'm currently working on the mangler, which involves rebuilding the symbol and scope tree again.)

@overlookmotel
Contributor Author

overlookmotel commented Jul 11, 2024

Are there any other improvements that can be made before storing the hash value inline?

Very likely yes. See point 7 on oxc-project/backlog#32 (I only added it earlier this week).

We should certainly be able to reduce hashing, but I don't know the internals of the hashmap we're using well enough to say the best way to do it. We may not even need to store the whole hash, maybe just the top 7 bits of it (if it's a SwissTable-style hashmap).
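For context on the "top 7 bits" remark: SwissTable-style maps (like hashbrown, which backs Rust's `std::collections::HashMap`) keep a 7-bit fragment of each key's hash in a per-slot control byte for fast SIMD probing, so only that fragment is needed to rule out most non-matching slots. A minimal sketch of extracting that fragment (`control_byte` is an illustrative helper, not hashbrown's API):

```rust
// Extract the top 7 bits of a 64-bit hash, as a SwissTable-style map
// would store them in its per-slot control byte.
fn control_byte(hash: u64) -> u8 {
    ((hash >> 57) & 0x7f) as u8
}

fn main() {
    assert_eq!(control_byte(0), 0);
    assert_eq!(control_byte(u64::MAX), 0x7f);
}
```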

But that's orthogonal to the optimization described in this issue. Both will improve perf independently, and it'd be even better if we can do both.

Boshen pushed a commit that referenced this issue Jul 12, 2024
Closes #4169.

Reduce allocations while building `Semantic` by recycling hash maps used for tracking unresolved references, rather than creating a new one for every scope.
@overlookmotel
Contributor Author

Fixed in #4213.
