some questions about detection calculation #4
Hi, thanks for your interest! In our experiments, we conduct detection as follows. We assume the watermark developer possesses a ground-truth key w_i. For a candidate image p, we first extract w_s and count the number of matched bits between w_i and w_s to decide whether the image comes from the watermarked model. You can set a threshold for this binary task: a larger threshold lowers the false-positive rate (FPR) but may sacrifice the true-positive rate (TPR). If the number of matched bits exceeds the threshold, we conclude that the image comes from the watermarked model. And sure, I will add the watermark-detection script.
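The thresholded bit-matching test described above can be sketched in a few lines. This is an illustrative sketch, not the repository's actual script; the function names and the 8-bit key are hypothetical.

```python
# Hypothetical sketch of threshold-based watermark detection.
# w_gt: the developer's ground-truth key w_i; w_ext: the watermark
# w_s extracted from a candidate image. Both are equal-length bit lists.

def matched_bits(w_gt, w_ext):
    """Count the positions where the two bit strings agree."""
    assert len(w_gt) == len(w_ext)
    return sum(a == b for a, b in zip(w_gt, w_ext))

def is_watermarked(w_gt, w_ext, threshold):
    """Binary detection: declare the image watermarked if the number
    of matched bits exceeds the threshold. Raising the threshold
    lowers FPR at the possible cost of TPR."""
    return matched_bits(w_gt, w_ext) > threshold

# Example with a hypothetical 8-bit key: 6 of 8 bits match.
w_gt  = [1, 0, 1, 0, 1, 0, 1, 0]
w_ext = [1, 0, 1, 0, 1, 0, 0, 1]
print(matched_bits(w_gt, w_ext))                 # → 6
print(is_watermarked(w_gt, w_ext, threshold=5))  # → True
```

In practice the threshold would be chosen from the key length and the desired FPR, e.g. via the binomial tail probability of random bits matching.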
Thanks for your reply. My remaining question is: if we assume the watermark developer possesses a ground-truth key w_i during detection, then detection and identification seem to be the same task, with no real difference between them.
I agree that your concern is reasonable. The detection and identification tasks are positively related: if a watermark scheme has perfect detection performance (every extracted watermark exactly matches its ground-truth watermark), then identification accuracy is also 100%. But there is a real difference between the two. Detection acts like a binary classification task: the developer embeds a specific watermark into an image (the carrier), and the test measures how closely the retrieved watermark matches that single ground-truth watermark. If the similarity (here, the number of matched bits) exceeds a predefined threshold, the developer concludes the image was watermarked with their strategy. That is, detection compares two watermarks (one-to-one). Identification means the developer holds a pool of watermarks and must determine which one the recovered watermark came from, which is more like a multi-class classification task (one-to-all). Unlike binary detection, you may encounter several distracting watermarks in the pool that look similar to your recovered watermark. For instance, if your recovered watermark is 10101010, then both 10101001 and 01101010 share 6 matched bits with it. In that case it is hard to tell which source the image came from, yet detection still works well if you set the threshold at 6 of 8 bits (3/4).
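The one-to-all identification case, and the tie that makes it harder than detection, can be illustrated with a small sketch. The function names and the 8-bit pool are hypothetical, chosen to reproduce the 10101010 example above.

```python
# Hypothetical sketch of one-to-all identification against a pool
# of watermarks, reproducing the ambiguous example from the thread.

def matched_bits(a, b):
    """Count the positions where two equal-length bit lists agree."""
    return sum(x == y for x, y in zip(a, b))

def identify(w_rec, pool):
    """Return (tied_indices, best_score): the pool entries with the
    highest matched-bit count. More than one index means the
    identification is ambiguous even though detection may succeed."""
    scores = [matched_bits(w_rec, w) for w in pool]
    best = max(scores)
    ties = [i for i, s in enumerate(scores) if s == best]
    return ties, best

w_rec = [1, 0, 1, 0, 1, 0, 1, 0]          # recovered: 10101010
pool = [
    [1, 0, 1, 0, 1, 0, 0, 1],             # 10101001 -> 6 matched bits
    [0, 1, 1, 0, 1, 0, 1, 0],             # 01101010 -> 6 matched bits
]
ties, best = identify(w_rec, pool)
print(ties, best)  # → [0, 1] 6  (a tie: identification is ambiguous)
```

Both pool entries clear a detection threshold of 6/8 bits against their own keys, yet identification cannot break the tie, which is exactly the gap between the two tasks described above.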
I appreciate your excellent work! I have some questions about the detection calculation. First, I can't seem to find the code file for the detection calculation; could you provide it? Second, I don't quite understand the detection calculation itself. For a given image, how do we determine that it comes from our model? The article says to calculate the number of matched bits M_i, but in reality we can only obtain w_s from the image, so we cannot compute M_i. Could you explain the detection calculation in more detail? Thank you, and I look forward to your reply.