US.1 Server July Total Downtime

Again this month, nearly 5 hours of total downtime on the US.1 server.

21/07/2016 04:10:41 to 21/07/2016 07:38:41 (3h 28m)
04/07/2016 11:35:41 to 04/07/2016 12:57:41 (1h 22m)

The last outage was only hours ago.
We are in the middle of running an advertising campaign on several platforms, which, needless to say, is expensive. When clients go to our website or Facebook business page and try to access our radio station, all they get is silence and a "server is down" message. That is not very successful publicity for us, and it means we are spending a lot of money for nothing.

We were extremely patient about June's 10 hours of downtime, and about all the earlier months, but this time, since it affected our ad campaign directly, we have no choice but to ask you for a partial refund for the month of July. We hope that everything will be fixed in the future.

Thank you.
 
Firstly, really sorry about this. I'll try and explain what's happening. us1 has a very odd issue that's causing it to randomly crash. Our hardware providers claim it's a software issue, and our software providers (Centovacast) reckon it might be related to a bug in the ices-cc encoder, which they no longer support. Unfortunately there's also no migration path for switching users from the old ices-cc encoder to the new liquidsoap; we can only delete and recreate an account to achieve this, which is impossible to do en masse.

We're also not 100% convinced it's ices-cc causing the problem. We are attempting to compile our own build, but the toolchain is broken, so we're looking into other possible causes while still working on the ices-cc compile even though it might not be the cause of the crash. We have several other servers configured in exactly the same way which don't crash, so we're at a bit of a loss right now as to the cause of the problem, but rest assured we are trying to get to the bottom of it.

In the meantime, apart from apologising profusely, we would like to offer to move your server from us1 (Dallas) to us2 (based in Newark). Your hostname/port would change, so you would need to update any links you have, but your start page URL will remain the same. Let us know if you would like us to do this. We've also applied a credit to your account by way of apology for the issues.
 
Glad to hear that you are working hard on the problem; it seems like a complex one. Thank you for the great support.

As for moving to us2 in Newark, we would appreciate it indeed. Just to make sure, though: only the hostname and port would change,
which means we will not have to re-import all our playlists and songs again, since the change is only server-side and does not affect the Centova version?
 
It is indeed, as it occurs randomly and with no hints in any logs, which just makes it all the more difficult to pinpoint the cause.

Yes, only the hostname and port would change, as we can perform a full account backup and restore which would keep all your settings, playlists, autodj files and reports. I estimate it would take us around 15 minutes (no more than 30) of downtime. Generally around 6:00am GMT is the least busy time overall for stations, and looking at your stats it's similar for your station too, so I would recommend we migrate you then. Let me know if you have a preferred day/time (in GMT) and I'll see if we're available then to do it for you.
 
Great. Sorry for the delay. I've assigned you a slot at 3pm GMT on Friday, which is 11am Eastern Time. We'll send you an email with the new hostname and port once it's complete, although you'll also be able to get these by logging into your control panel exactly the same as before.
 
The migration itself is complete; we're just waiting for the media to copy across, then we'll bring the server back online.

The new hostname is us2.internet-radio.com and the new port is 8443.
 
Very well, we will wait for the media to copy and for the server to come back online.
Once the migration is complete, do we just log back into our account with the same Centova credentials and it will be us2?
 
Files have been copied. It took a little longer than expected, but we're just updating the media library now, then we'll bring the server back online.

Yep, just log in exactly as before.
 
I'm having a few issues with liquidsoap (the encoder). Please bear with me while I try to diagnose the issue.
 
It's back online: https://www.internet-radio.com/station/megatoncafe/

Apologies that it took longer than expected. The file transfer was slower than I anticipated, and there's a bug in Centova's backup system which didn't create the necessary directory to hold Liquidsoap's socket file. I'll be submitting a bug report for that.

If you could check everything is in order that would be great. I've kept the old server on us1 just in case and can revert to that if need be.
 
Thank you very much guys at Internet Radio.
Everything seems perfectly fine, although we are not sure whether the volume equaliser (gain) is working correctly, as some songs are way louder and others are much quieter.
Could it be something to do with the liquidsoap bug? Please check it out, as it is a very important feature; a big difference in volume is painful for listeners.
Thank you guys.
 
The volume gain that equalises the sound is definitely not working. Please fix it as soon as possible, as the difference in volume between songs is extremely bad.
It was working perfectly fine before.
 
Strange. I've compared the liquidsoap settings and they are the same as on the old server.

# Centova Cast integration configuration
centovacast.settings = [
("username", "megatoncafe"),
("nextsong_uri", "/nextsong.php?username=megatoncafe&secret=hidden"),
("nextsong_debug", "0"),
("fallback_playlist", "/usr/local/centovacast/var/vhosts/megatoncafe/var/spool/playlist.txt"),
("account_failsafe_filename", "/usr/local/centovacast/var/vhosts/megatoncafe/var/spool/sounds/fallbackfile.mp3"),
("system_failsafe_filename", "/usr/local/centovacast/var/spool/sounds/station-unavailable/general-fallback.mp3"),
("autodj_startup_silence", "5"),
("genre", ""),
("crossfade", "4"),
("crossfade_fadein", "1"),
("crossfade_fadeout", "1"),
("crossfade_mode", "normal"),
("replay_gain", "1"),
("portbase", "8445"),
("harbor_password", "hidden"),
("live_to_autodj_skip", "0"),
("conservative", "0")
]

replay_gain is enabled, but you are right, it's not actually doing anything. Looking at the liquidsoap logs, I don't see any mention of it. We should be seeing entries like the following:

2016/07/30 01:40:56 [replaygain:3] End of the current overriding.
2016/07/30 01:40:56 [replaygain:3] Overriding amplification: 0.340408.
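
For anyone who wants to check their own log, a rough way to count those entries would be something like the sketch below. This is only illustrative: the log path is a guess and will differ per account, and the pattern simply matches lines like the two shown above.

#!/usr/bin/env python3
# Quick check: count "Overriding amplification" entries in a liquidsoap log.
# The log path below is an illustrative guess and will differ per account.
import re

LOG_PATH = "/usr/local/centovacast/var/vhosts/megatoncafe/var/log/liquidsoap.log"

pattern = re.compile(r"\[replaygain:\d\] Overriding amplification: (-?\d+(?:\.\d+)?)")
gains = []
with open(LOG_PATH, errors="ignore") as fh:
    for line in fh:
        match = pattern.search(line)
        if match:
            gains.append(float(match.group(1)))

if gains:
    print("%d replaygain overrides, average amplification %.3f" % (len(gains), sum(gains) / len(gains)))
else:
    print("No replaygain entries found - replay gain does not appear to be active")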

After reading http://savonet.sourceforge.net/doc-svn/replay_gain.html I thought perhaps it would require each song to be played once before the replaygain could be calculated, and that it would then be applied the next time the song came around. I spun up a test server, looped through a few tracks, and it didn't help. It appears this issue is also affecting other us2 accounts, which we weren't aware of.

I've raised a support ticket with Centova to see if they can shed any light on the issue as it's most odd. I'll get back to you once I get a response.
 
Centova have just got back to me, and there's a bug in their auto-installer which means that on Debian 8 systems it fails to install the mp3gain binary, because the package is only available in Debian 7 and lower. mp3gain is what liquidsoap uses to do the volume normalisation. This isn't a problem though, as I just downloaded the source code, compiled it and installed it without issue.

I stopped and started your liquidsoap instance just in case that's required; it didn't cause any listeners to be kicked, just a track interruption. I'm seeing replaygain working on some smaller auto DJ libraries, although I think it needs to play a track once in order to calculate its amplification level:

Liquidsoap provides a script for extracting the replay gain value from mp3, ogg/vorbis and flac files. It requires the tools mp3gain (resp. vorbisgain and ogginfo, resp. metaflac) for mp3 (resp. ogg/vorbis, resp. flac) files processing, and will affect your files: after the first computation of the replay gain, that information will be stored in the metadata.

http://savonet.sourceforge.net/doc-svn/replay_gain.html

If I'm interpreting this correctly, it will take longer to kick in on larger auto DJ libraries like yours, but once a track gets played a second time it will have the replay gain applied.
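
If you're curious how far through your library it has got, one rough way to gauge it would be to count how many files already carry replay gain information. This is only a sketch and assumes mp3gain's usual behaviour of writing a REPLAYGAIN_TRACK_GAIN entry into an APEv2 tag; the media path is an illustrative guess, not the exact one on the server.

#!/usr/bin/env python3
# Count mp3s that already carry a REPLAYGAIN_TRACK_GAIN tag.
# Assumes mp3gain stores its result in an APEv2 tag; the media path is an illustrative guess.
import os

from mutagen.apev2 import APEv2, APENoHeaderError

MEDIA_DIR = "/usr/local/centovacast/var/vhosts/megatoncafe/var/spool/media"

tagged = untagged = 0
for root, _dirs, files in os.walk(MEDIA_DIR):
    for name in files:
        if not name.lower().endswith(".mp3"):
            continue
        try:
            tags = APEv2(os.path.join(root, name))
            has_gain = "REPLAYGAIN_TRACK_GAIN" in tags
        except APENoHeaderError:
            has_gain = False
        if has_gain:
            tagged += 1
        else:
            untagged += 1

print("%d files already have replay gain info, %d are still waiting for a first play" % (tagged, untagged))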

We'll keep monitoring your logs to ensure everything is in order.
 
Thank you very much. Slowly, it seems to be getting back to normal as each track plays once.
We do have a question: we have hundreds of songs in our auto-DJ playlists, set by weight and not randomly, and they are supposed to play one after another (sequentially) in order.
However, it seems that Centova only chose certain songs to play (a few hundred) and is not playing the rest of them. How can we fix this so that the auto-DJ plays the full library,
and not just certain selected songs?
 
Sorry for the delay. We've been deploying two new servers the last few days.

The mp3gain / Replay Gain support is fully working now. There was actually another issue in play that I had to raise with the main Centova Cast developer, which is now sorted.

I've checked your settings and I see that you've got each playlist set to play sequentially with a weight of 1, so it should play your entire collection. I wrote a small Perl script to check each song in your media directory against all the log files to see if they are mentioned. 523 out of the 1023 songs aren't mentioned (and therefore weren't played) in the last 7 days' worth of logs.
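
For reference, the check was roughly equivalent to the sketch below (written here in Python rather than Perl; the media and log paths are illustrative guesses rather than the exact ones on the server).

#!/usr/bin/env python3
# List media files that never appear in the recent playback logs.
# Both paths are illustrative guesses and will differ per account.
import glob
import os

MEDIA_DIR = "/usr/local/centovacast/var/vhosts/megatoncafe/var/spool/media"
LOG_GLOB = "/usr/local/centovacast/var/vhosts/megatoncafe/var/log/*.log"

# Read all the logs into one string so we can do simple substring checks.
logs = ""
for log_path in glob.glob(LOG_GLOB):
    with open(log_path, errors="ignore") as fh:
        logs += fh.read()

total = 0
unplayed = []
for root, _dirs, files in os.walk(MEDIA_DIR):
    for name in files:
        if not name.lower().endswith(".mp3"):
            continue
        total += 1
        if name not in logs:
            unplayed.append(os.path.join(root, name))

print("%d of %d songs are not mentioned in the logs:" % (len(unplayed), total))
for path in unplayed:
    print(path)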

It could be that those 523 songs haven't been added to a playlist, but it's difficult to tell. You could create a new "Master" playlist and just add all your songs to that, so you can be sure they are all included. This shouldn't be too difficult, and there's no need to have them separated if they all have the same weight anyway, unless you wanted the different genres grouped together for some reason. If it's important that the media be segregated, then perhaps have a look through each playlist and ensure there's no media missing.

I've had a look for other users with similar issues with Centova Cast and unfortunately I can't find any, although I'm happy to raise a support ticket if you're confident there's a bug.
 