
Better (try 3). #38

Merged: 5 commits, Dec 1, 2014
Conversation

achbed (Contributor) commented Dec 1, 2014

Added new QueueLogger class (logging from a queue so we can do thread-safe logging)
Moved core and station to new common log functions (_info and _err) using new QueueLogger
Core now attempts to restart stations that stop unexpectedly
Core now detects new folders on the fly instead of only at start time (stationfolder option)
Station.py has a lot of changes to utilize the common blocking queue ( self.q.get(1) ) only when needed rather than at every step (may increase performance on installs with many stations)
Lots more error handling when blocking queue is utilized to ensure that one error doesn't kill other threads
Renamed station and stations variables within core to be more clear what they contain
Added main loop in station to allow remote commands to restart playback after stopping
Added livecreation option to enable/disable watchfolder mode
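The QueueLogger class itself isn't shown in this conversation, but the idea described — logging from a queue so multiple station threads can log safely — can be sketched with the stdlib `QueueHandler`/`QueueListener` pair. The `_info` helper name is taken from the description above; everything else here is illustrative, not the PR's actual code.

```python
# Sketch of queue-based thread-safe logging, similar in spirit to the
# PR's QueueLogger: worker threads push records onto a queue and a single
# listener thread writes them, so no handler is shared across threads.
import logging
import logging.handlers
import queue
import threading

log_queue = queue.Queue()

# The listener owns the real handler; only its thread touches the stream.
listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
listener.start()

logger = logging.getLogger("station")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

def _info(msg):
    # Common log helper, as described in the PR text.
    logger.info(msg)

def worker(n):
    _info("station %d started" % n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
listener.stop()
```

Because each station thread only does a non-blocking `put` onto the queue, logging never serializes the streaming loops themselves.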

Commits:

…-safe logging)

Moved core and station to new common log functions (_info and _err) using new QueueLogger
Core now attempts to restart stations that stop
Core now detects new folders on the fly instead of only at start time (stationfolder option)
Station.py has a lot of changes to utilize the common blocking queue only when needed rather than at every step (may increase performance with many stations)

TODO:
Detect proper vs unexpected shutdown of stations and restart only unexpected ones
Make new folder detection for stationfolder an option

Added main loop in station to allow remote commands to restart playback after stopping
Added livecreation option to enable/disable watchfolder mode
Added fix for newly created stations not starting
Cleaned up a missed queue unlock instance

Signed-off-by: achbed <github@achbed.org>
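The "restart stations that stop unexpectedly" behavior above can be sketched as a watchdog loop in the core: periodically check each station thread and re-spawn any that has died. The `Station` class and `watchdog` function below are hypothetical illustrations, not the project's actual classes, and they ignore the TODO about distinguishing proper from unexpected shutdowns.

```python
# Hypothetical sketch of a core watchdog that restarts dead station
# threads. A threading.Thread object cannot be restarted once its run()
# has returned, so a fresh instance must be created each time.
import threading
import time

class Station(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True)
        self.station_name = name

    def run(self):
        # A real station streams audio here; this stub exits immediately
        # to simulate an unexpected stop.
        pass

def watchdog(stations, rounds=3, interval=0.1):
    """Check each station thread; replace and restart any dead ones."""
    restarts = 0
    for _ in range(rounds):
        for name, thread in list(stations.items()):
            if not thread.is_alive():
                stations[name] = Station(name)
                stations[name].start()
                restarts += 1
        time.sleep(interval)
    return restarts

stations = {"station1": Station("station1")}
stations["station1"].start()
stations["station1"].join()
print(watchdog(stations))  # at least 1: the dead thread is replaced
```

Implementing the TODO would mean recording an intentional-shutdown flag on the station before it exits, and having the watchdog skip flagged threads.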
achbed (Contributor, Author) commented Dec 1, 2014

Added code in mp3.py to allow for adding a COUNTRY tag to the json output

yomguy (Owner) commented Dec 1, 2014

That is fast and good coding! :)
Thanks for the well detailed commit messages.
I will test this today.

yomguy added a commit that referenced this pull request Dec 1, 2014
@yomguy yomguy merged commit d1913f9 into yomguy:dev Dec 1, 2014
yomguy (Owner) commented Dec 1, 2014

My test channel restarts on each song transition ("could not send the buffer"). Do you see the same behavior?
I'm surprised you no longer use the Queue for each station's mainloop. It is normally needed in a Producer/Worker architecture. Is your multi-station config working well without it?

achbed (Contributor, Author) commented Dec 1, 2014

I am not having that issue. I'm running with an icecast 2 (latest release) instance on the same server.

The multi-station config is running quite well. I'm up to ten 128k MP3 stations so far and seeing no significant performance issues with this version.

achbed (Contributor, Author) commented Dec 1, 2014

The main reason for the change on the queue is that we're not using a typical Producer/Worker architecture anyway. In that pattern, workers take items off the queue, do work on them, and report back. We're not doing that: each station works almost completely independently, and the queue serves more as a poor man's lock to make sure we don't have conflict issues. Since we want each thread/station to run as efficiently as possible, removing the blocking involved in grabbing queue items should reduce wait times and "hiccups" in providing stream data. As implemented now, we grab a work queue item only when we're changing a piece of data that could be used by multiple threads. We should probably look at using a proper lock mechanism instead at some point.
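The two patterns in this comment can be sketched side by side: the queue-as-lock approach (put/get a single token around shared-state updates, which is what `self.q.get(1)` in the description amounts to) versus the "proper lock mechanism" it suggests. All names here are illustrative assumptions, not the project's actual variables.

```python
# Queue-as-lock vs threading.Lock for guarding shared station state.
import queue
import threading

shared = {"next_media": None}  # stand-in for data used by multiple threads

# Poor man's lock, roughly as described: a size-1 queue holding one token.
# q.get() blocks until the token is available; q.put() releases it.
q = queue.Queue(1)
q.put(True)

def update_with_queue(value):
    token = q.get(1)              # acquire (blocks if another thread holds it)
    try:
        shared["next_media"] = value
    finally:
        q.put(token)              # release even if the update raises

# The same guard with a real lock: clearer intent, no token bookkeeping,
# and no risk of forgetting to put the token back.
lock = threading.Lock()

def update_with_lock(value):
    with lock:                    # acquired and released automatically
        shared["next_media"] = value

threads = [threading.Thread(target=update_with_lock, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The `try/finally` in the queue version is exactly the error handling the PR had to add by hand; the `with lock:` form gets it for free.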

yomguy (Owner) commented Dec 4, 2014

Thanks for these explanations.
Removing all unnecessary blocking is fine, of course. But is your approach compatible with Python's GIL limitations?
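On the GIL question: CPython threads cannot execute bytecode in parallel, but the interpreter releases the GIL during blocking calls (socket sends to Icecast, file reads, sleeps), so an I/O-bound multi-station setup still overlaps its waits. A minimal illustration, using `time.sleep` as a stand-in for a blocking socket send:

```python
# Two 0.5 s blocking waits run in ~0.5 s total, not ~1.0 s, because the
# GIL is released while each thread blocks. CPU-bound work would not
# overlap this way.
import threading
import time

def blocking_io():
    time.sleep(0.5)  # stand-in for a blocking send of stream data

start = time.time()
threads = [threading.Thread(target=blocking_io) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print(round(elapsed, 1))
```

So as long as each station spends most of its time blocked on I/O rather than encoding in pure Python, the threaded design stays within what the GIL allows.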
