Use `_apply_channel_` when appropriate #5901

Comments
I created #5902 that improves the situation here a fair amount. Curious to see what more can be done!
Also I'd have to imagine that… Edit: I take it back, or at least I think I do: einsum seems to take the same amount of time no matter what's in the operator matrix.
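(For illustration, a hypothetical micro-benchmark of the kind of check being described; the shapes and the `'ab,b...->a...'` contraction are assumptions for the sketch, not the actual Cirq code path:)

```python
# Hypothetical timing check, illustrating the observation above: einsum's
# cost is driven by tensor shapes, not by the values inside the operator
# matrix, so a one-hot Kraus operator times about the same as a dense one.
import timeit

import numpy as np

n = 10  # qubits; the density-matrix tensor has 2*n axes of dimension 2
rho = np.random.rand(*(2,) * (2 * n)).astype(np.complex64)
dense = np.random.rand(2, 2).astype(np.complex64)
one_hot = np.array([[1, 0], [0, 0]], dtype=np.complex64)

for name, op in [('dense', dense), ('one-hot', one_hot)]:
    t = timeit.timeit(lambda: np.einsum('ab,b...->a...', op, rho), number=100)
    print(f'{name}: {t:.3f}s')  # expect roughly equal times
```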
Thank you for checking einsum performance on sparse vs dense matrices! Your result appears to confirm my concerns here.
For posterity, and since the code is rather tricky: #5905 is also a large boost when the matrix is one-hot, which is the case for many Kraus components such as Reset, ConfusionChannel, and PhaseDamp(1). However the…
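(A hypothetical sketch of the one-hot fast-path idea on a plain d×d density matrix; `apply_one_hot_kraus` is an invented name and this is not the #5905 implementation:)

```python
# Hypothetical sketch: a Kraus operator K = v |r><c| with a single nonzero
# entry maps rho to K rho K^dag = |v|^2 * rho[c, c] * |r><r|, so applying it
# is a single indexed copy rather than two dense matrix multiplications.
import numpy as np

def apply_one_hot_kraus(k: np.ndarray, rho: np.ndarray) -> np.ndarray:
    nonzero = np.argwhere(k)
    assert len(nonzero) == 1, 'not a one-hot matrix'
    r, c = nonzero[0]
    out = np.zeros_like(rho)
    out[r, r] = abs(k[r, c]) ** 2 * rho[c, c]
    return out

# Quick check against the dense route, using Reset's K1 = |0><1|:
k1 = np.array([[0, 1], [0, 0]], dtype=complex)
rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)
assert np.allclose(apply_one_hot_kraus(k1, rho), k1 @ rho @ k1.conj().T)
```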
From cirq sync: This should be done. This would probably involve two steps: …
Confusion matrix and reset channels both can be viewed as starting with a zero DM tensor and then copying (adding) in scaled slices from the original DM tensor. Thus we make a helper function that does this and add `_apply_channel_` optimizations to those gates. Fixes #5901. Starts #5900, though I haven't looked at all gates. Also starts #4579, but there's likely more to do there as well. I didn't add the new test function to the primary test suite because creating superoperators is likely computationally expensive (granted, most if not all gates that use this would be three or fewer qubits, which is still cheap), and the test is not relevant for most gates.
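(A minimal sketch of that idea for a single-qubit reset, assuming Cirq's `ApplyChannelArgs` buffer layout; `SketchReset` and `slice_at` are invented names for illustration, not the actual helper from the PR:)

```python
# Minimal sketch: implement _apply_channel_ for a single-qubit Reset by
# writing scaled slices of the input density-matrix tensor into a zeroed
# output buffer, instead of einsum-ing dense Kraus operators.
import numpy as np

import cirq


class SketchReset(cirq.Gate):
    def _num_qubits_(self) -> int:
        return 1

    def _kraus_(self):
        # K0 = |0><0|, K1 = |0><1|
        return (np.array([[1, 0], [0, 0]], dtype=np.complex64),
                np.array([[0, 1], [0, 0]], dtype=np.complex64))

    def _apply_channel_(self, args: 'cirq.ApplyChannelArgs'):
        # A density-matrix tensor has a "left" and a "right" axis per qubit.
        left = args.left_axes[0]
        right = args.right_axes[0]

        def slice_at(axis_vals):
            idx = [slice(None)] * args.target_tensor.ndim
            for axis, v in axis_vals:
                idx[axis] = v
            return tuple(idx)

        out = args.out_buffer
        out[...] = 0
        # rho' = |0><0| * (rho_00 + rho_11): add the two diagonal slices
        # of the input tensor into the (0, 0) slice of the output.
        out[slice_at([(left, 0), (right, 0)])] = (
            args.target_tensor[slice_at([(left, 0), (right, 0)])]
            + args.target_tensor[slice_at([(left, 1), (right, 1)])])
        return out
```

Keeping the `_kraus_` fallback alongside `_apply_channel_` means code paths that never call `cirq.apply_channel` still work.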
Yeah, let's keep this open until we've convinced ourselves that there are no more channels that should have `_apply_channel_`.
Most (all?) of our channels are specified via a `_kraus_` method that returns a sequence of Kraus operators. This is inefficient for channels such as Reset, whose Kraus operators are sparse and whose action is easy to describe in terms of the changes they make to the elements of the input density matrix. IOW, such channels should be implemented using `_apply_channel_`, not `_kraus_`.

Filing for discussion, identification of the offending channels, and subsequent implementation changes.
Context: Review of #5851 (which adds yet another offending channel).
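(To make the inefficiency concrete, a hypothetical numpy sketch comparing the generic Kraus route with Reset's direct action; nothing here is Cirq code:)

```python
# Hypothetical sketch of what the generic _kraus_ route computes for Reset:
# sum_i K_i rho K_i^dag. Both Kraus operators are one-hot, so the dense
# matrix products mostly multiply by zero.
import numpy as np

k0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
k1 = np.array([[0, 1], [0, 0]], dtype=complex)  # |0><1|
rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)

generic = sum(k @ rho @ k.conj().T for k in (k0, k1))

# Direct description of Reset's action: the output is zero everywhere
# except rho'[0, 0], which collects the trace of rho.
direct = np.zeros_like(rho)
direct[0, 0] = rho[0, 0] + rho[1, 1]

assert np.allclose(generic, direct)
```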