Use lru_cache to suppress memory usage of commutation analysis #7072

Closed
wants to merge 1 commit

Conversation

@t-imamichi (Member) commented Sep 28, 2021

Summary

Use lru_cache to reduce the memory usage of commutation analysis; it also simplifies the code a little.
This might be an oversimplification, because it does not take care of qargs.
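As a rough illustration of the idea (not the actual pass code): a minimal sketch where `functools.lru_cache` replaces an unbounded hand-rolled dict cache, so the number of cached commutation results is capped. The `_MATRICES` table and `commute` helper here are hypothetical, keyed only on gate identity, which exhibits exactly the qargs oversimplification mentioned above.

```python
from functools import lru_cache

import numpy as np

# Hypothetical lookup of gate unitaries by name (illustration only).
_MATRICES = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
    "h": np.sqrt(0.5) * np.array([[1, 1], [1, -1]], dtype=complex),
}

@lru_cache(maxsize=4096)  # bounded cache: least-recently-used entries are evicted
def commute(name1: str, name2: str) -> bool:
    """Do two parameter-free single-qubit gates commute as matrices?

    Caveat (the "oversimplification" above): keying only on gate identity
    ignores qargs -- gates acting on disjoint qubits always commute,
    regardless of their matrices.
    """
    m1, m2 = _MATRICES[name1], _MATRICES[name2]
    return np.allclose(m1 @ m2, m2 @ m1)

print(commute("x", "z"))     # False: XZ = -ZX
print(commute("h", "h"))     # True: anything commutes with itself
print(commute.cache_info())  # hits/misses/currsize, capped by maxsize
```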

Details and comments

@t-imamichi changed the title from "Use lru_cache to surppress memory usage of commutation analysis" to "Use lru_cache to suppress memory usage of commutation analysis" on Sep 28, 2021
@jakelishman (Member)
Do you have memory profiles for this change? The current cache only stores a single boolean per key, so I'm surprised it has too large a memory footprint as it is.

I'm also very surprised that DAGOpNode is hashable in the general case. This seems a little dangerous; these nodes store mutable values and allow themselves to be mutated, so by the normal Python rules they shouldn't be hashable.
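For context on those "normal Python rules": a class that defines `__eq__` implicitly has its `__hash__` set to `None`, so mutable, equality-comparable objects are unhashable by default. A generic sketch of the hazard (not Qiskit's actual DAGOpNode):

```python
class Node:
    """A mutable node with value equality, like a simplified DAG op node."""

    def __init__(self, name):
        self.name = name  # mutable state that equality depends on

    def __eq__(self, other):
        return isinstance(other, Node) and self.name == other.name

    # No __hash__ defined: declaring __eq__ sets __hash__ = None, making
    # instances unhashable -- deliberately, since mutating `name` after
    # caching would silently break any dict or lru_cache entry keyed on
    # the node.

cache = {}
node = Node("cx")
try:
    cache[node] = True
except TypeError as err:
    print(err)  # unhashable type: 'Node'
```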

@t-imamichi (Member, Author) commented Sep 28, 2021

Hi @jakelishman, thank you for your comment. @ewinston has the memory-profile data for a large number of qubits.
I saw #6982 after I made this PR. My PR oversimplified the code, so it is not likely to work correctly; I will close it.

@t-imamichi closed this Oct 7, 2021
@t-imamichi deleted the lru branch October 29, 2021 12:55