Prevent redundant simultaneous requests to remote storage in get.php #601
Conversation
Hi @colinmollenhour,

This has been tested. Before: up to 20 fetches from remote storage, usually all successful but not always.
@sreichel is there a reason for closing it?
Sorry, just hit the wrong button. |
A serious bug was discovered related to this commit. I was able to create as many folders as I wanted inside some folders in the media directory: with a simple bash script running a curl command in a loop, I can create whole trees of folders, so a Denial of Service (DoS) can be achieved this way. Reverting the changes back to vanilla Magento makes the issue go away. I suggest removing this implementation from OpenMage until a solution is found. For more details please check the report here: #1334.
```diff
@@ -47,6 +47,9 @@ class Mage_Core_Model_Resource_File_Storage_File
      */
     protected $_ignoredFiles;
 
+
+    /** @var resource */
+    protected $filePointer;
```
@Flyingmana are we not following the Magento convention that private/protected properties have an underscore prefix?
@colinmollenhour: My personal opinion is to revert this change to the Magento default until there is a final solution for the issue reported in #1334. You provided a fix in #1338, but it is still a temporary one. Once there is a complete, tested solution, you can merge all the changes into the master branch.
The current get.php mechanism for fetching files from remote storage does not fully prevent a stampede of requests for the same resource from triggering multiple fetch operations against the remote storage: the remote fetch happens before the file locking. When the database storage is rewritten to use a third-party extension like Thai_S3, this can consume significantly more bandwidth and write the same file multiple times. A not-uncommon scenario: after updating the watermark parameters, every resized image suddenly has to be regenerated, and there is currently no good way (that I know of) to pre-warm the cache for this, so the remote fetching should be as efficient as possible.
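For illustration, here is a minimal PHP sketch of the flawed ordering. The function `fetchFromRemote()` and the file path are hypothetical stand-ins, not the actual OpenMage API:

```php
<?php

// Stand-in for the remote storage call (e.g. S3); illustrative only.
function fetchFromRemote(string $path): string
{
    return "contents for {$path}";
}

$localPath = sys_get_temp_dir() . '/example.jpg'; // hypothetical path

// Flawed ordering: the existence check and the remote fetch both happen
// before any lock is taken, so N concurrent requests for the same missing
// file can all pass the check and all hit the remote storage.
if (!file_exists($localPath)) {
    $data = fetchFromRemote($localPath); // bandwidth is already spent here

    $fp = fopen($localPath, 'w');
    flock($fp, LOCK_EX); // lock acquired too late to prevent the stampede
    fwrite($fp, $data);
    flock($fp, LOCK_UN);
    fclose($fp);
}
```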
This change reduces race conditions and duplicate fetch operations by locking the file before it is fetched and releasing the lock only after the file is written, so that other processes can read the file written by the first process even if they started before the file existed locally.
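A minimal sketch of the lock-before-fetch pattern, using the same hypothetical names as above (this is not the exact get.php code):

```php
<?php

// Stand-in for the remote storage call (e.g. S3); illustrative only.
function fetchFromRemote(string $path): string
{
    return "contents for {$path}";
}

$localPath = sys_get_temp_dir() . '/example.jpg'; // hypothetical path

// 'c+' mode creates the file if missing but never truncates existing data,
// so waiting processes do not clobber what the first process wrote.
$fp = fopen($localPath, 'c+');
if ($fp !== false && flock($fp, LOCK_EX)) { // blocks until the first writer is done
    // Re-check under the lock: processes queued behind the first fetcher
    // find the file already populated and skip the remote call entirely.
    if (fstat($fp)['size'] === 0) {
        fwrite($fp, fetchFromRemote($localPath));
        fflush($fp);
    }
    flock($fp, LOCK_UN); // release only after the file is fully written
}
if ($fp !== false) {
    fclose($fp);
}
```

With this ordering, twenty simultaneous requests for the same missing image result in a single remote fetch; the other nineteen block on the lock and then serve the locally written copy.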