Better (try 3). #38
Conversation
…-safe logging)
Moved core and station to new common log functions (_info and _err) using new QueueLogger
Core now attempts to restart stations that stop
Core now detects new folders on the fly instead of only at start time (stationfolder option)
Station.py has a lot of changes to utilize the common blocking queue only when needed rather than at every step (may increase performance with many stations)
TODO:
Detect proper vs. unexpected shutdown of stations and restart only unexpected ones
Make new folder detection for stationfolder an option
Added main loop in station to allow remote commands to restart playback after stopping
Added livecreation option to enable/disable watchfolder mode
Added fix for newly created stations not starting
Cleaned up a missed queue unlock instance
Signed-off-by: achbed <github@achbed.org>
Added code in mp3.py to allow for adding a COUNTRY tag to the JSON output
That is fast and good coding! :)
My test channel restarts on each song transition ("could not send the buffer"). Do you see the same behavior?
I am not having that issue. I'm running against an Icecast 2 (latest release) instance on the same server. The multi-station config is running quite well so far: I'm up to ten 128k MP3 stations with no significant performance issues on this version.
The main reason for the change on the queue is that we're not using a typical producer/worker architecture anyway. In that pattern, workers process queue items and report back. We're not doing that: each thread works almost completely independently and uses the queue more as a poor man's lock to avoid conflicts on shared data. Since we want each thread/station to run as efficiently as possible, removing the blocking involved in grabbing queue items should reduce wait times and "hiccups" in providing stream data. The way it's implemented now, we grab a work queue item only when we're changing a piece of data that could be used by multiple threads. We should probably look at using a proper lock mechanism instead at some point.
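The "proper lock mechanism" mentioned above could look something like this minimal sketch: a `threading.Lock` held only while mutating shared state, mirroring the "grab the queue item only when changing shared data" approach. The variable and function names here are illustrative, not the project's actual code.

```python
import threading

# Hypothetical shared state touched by multiple station threads.
station_status = {}
status_lock = threading.Lock()

def set_status(name, value):
    # Hold the lock only for the brief mutation, so stream-serving
    # threads spend almost no time waiting on each other.
    with status_lock:
        station_status[name] = value

def get_status(name):
    with status_lock:
        return station_status.get(name)
```

Compared with pushing and popping queue items as a mutex, this makes the critical section explicit and avoids the overhead of queue bookkeeping on every access.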
Thanks for these explanations. |
Added new QueueLogger class (logging from a queue so we can do thread-safe logging)
Moved core and station to new common log functions (_info and _err) using new QueueLogger
Core now attempts to restart stations that stop unexpectedly
Core now detects new folders on the fly instead of only at start time (stationfolder option)
Station.py has a lot of changes to utilize the common blocking queue (self.q.get(1)) only when needed rather than at every step (may increase performance on installs with many stations)
More error handling when the blocking queue is utilized, to ensure that one error doesn't kill other threads
Renamed station and stations variables within core to make it clearer what they contain
Added main loop in station to allow remote commands to restart playback after stopping
Added livecreation option to enable/disable watchfolder mode
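The queue-based logging described above could be sketched roughly as follows: worker threads enqueue messages and a single consumer thread writes them, so output from many stations is never interleaved. The class and method names mirror the description (_info, _err) but are assumptions; the PR's actual implementation is not shown here.

```python
import queue
import threading

class QueueLogger:
    """Minimal sketch of thread-safe logging through a queue.

    Illustrative only: a stand-in list replaces a real file/handler.
    """

    def __init__(self):
        self.q = queue.Queue()
        self.records = []  # stand-in for a real log destination
        self._thread = threading.Thread(target=self._consume, daemon=True)
        self._thread.start()

    def _consume(self):
        # Single consumer drains the queue, so writes never interleave.
        while True:
            msg = self.q.get()
            if msg is None:  # sentinel: shut the consumer down
                break
            self.records.append(msg)

    def _info(self, msg):
        self.q.put("INFO: %s" % msg)

    def _err(self, msg):
        self.q.put("ERROR: %s" % msg)

    def stop(self):
        self.q.put(None)
        self._thread.join()
```

The standard library's logging.handlers.QueueHandler/QueueListener pair (Python 3.2+) implements this same pattern and may be worth considering instead of a hand-rolled class.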
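The on-the-fly folder detection for the stationfolder option could be done with a simple polling scan like the sketch below; the directory layout and function name are assumptions for illustration, not the PR's actual code.

```python
import os

def detect_new_folders(station_dir, known):
    """Return station subfolders that appeared since the last scan.

    known: set of folder names seen on the previous pass.
    Illustrative polling sketch of the 'stationfolder' behavior.
    """
    current = {
        name for name in os.listdir(station_dir)
        if os.path.isdir(os.path.join(station_dir, name))
    }
    return sorted(current - known)
```

Core's main loop could call this periodically and spin up a new station thread for each returned name, updating the known set afterward.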