
Twitch streaming from a web radio stream
Twitch is a video streaming service, mostly used by gamers, that can also be used for other purposes. As you may already know, I’m a Digital Streaming Specialist at a web radio, radiocittaperta.it. Can Twitch be used by a radio or a web radio? How can a web radio use Twitch’s video functionality?
There are multiple radios and web radios that use Twitch to stream their content; mostly, they use some webcams in the studio and add the radio’s streaming audio to the video. The result looks like the radio shows you can find on TV. On Twitch you can achieve the same result as on TV with fewer expenses: Twitch is free, the streaming software can be free (OBS is free software that does it), and you need to buy one or more cameras and some video acquisition hardware, but you can arrange these things with the budget you have.
If you don’t have a budget at all, and you don’t want to add real video taken from a camera to your web radio audio stream, you can create a digital video with the following steps. The idea is to:
- Use a static image
- Add some dynamic content, for example, some video created directly from the audio source
- Add the streaming audio
You may think that you need software always open on a computer, but there is a better solution: use ffmpeg and let it run in the background. I’ve already written about ffmpeg (here, here, and here) and some of its functions. With this solution, you can run it on a computer also used for something else, on a server, or even on a Raspberry Pi connected to the internet. And this is very cool stuff! Suppose you have:
- twitch key, it looks like live_238476238546_234jhgfuowgsjdhbfwsDFSdgbjsbv
- the audio stream https://my.audio.stream:port/stream
- a static image /somepath/mystaticimage.jpg
then just run this command:
ffmpeg -nostdin -loglevel quiet \
-loop 1 -f image2 -thread_queue_size 256 -i /somepath/mystaticimage.jpg \
-re -thread_queue_size 256 -i https://my.audio.stream:port/stream \
-filter_complex \
"[1:a]showwaves=s=960x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; \
[0:v][outputwave] overlay=0:510:shortest=1 [out]" \
-map '[out]' -map '1:a' \
-vcodec libx264 -r 30 -g 30 \
-preset fast -b:v 3000k -pix_fmt yuv420p \
-c:a copy -f flv -y \
rtmp://live-ber.twitch.tv/app/live_238476238546_234jhgfuowgsjdhbfwsDFSdgbjsbv \
2> /dev/null &
some notes on this command:
- [0:v][outputwave] overlay=0:510:shortest=1 [out] sets where the waveform is drawn; 510 is the vertical position, and you can play with that value to move the waveform up or down.
- [1:a]showwaves=s=960x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; is the waveform generator. It sets the color and how the waveform is drawn; check the ffmpeg documentation for other options.
- -loglevel quiet and 2> /dev/null silence the output, and the final & puts the command in the background.
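The same command can also be generated from a small script, which makes it easy to launch (and relaunch) ffmpeg from a supervising process. This is only a sketch of mine that rebuilds a slightly tidied version of the command above; the function name is an assumption, not part of the original post:

```python
def build_ffmpeg_cmd(image, stream_url, twitch_key):
    """Build the argument list for the waveform-overlay Twitch stream.

    A sketch: it reproduces the command above so that a script can
    launch it, e.g. with subprocess.run(build_ffmpeg_cmd(...)).
    """
    return [
        "ffmpeg", "-nostdin", "-loglevel", "quiet",
        # input 0: the static image, looped forever
        "-loop", "1", "-f", "image2",
        "-thread_queue_size", "256", "-i", image,
        # input 1: the web radio audio stream, read at its native rate
        "-re", "-thread_queue_size", "256", "-i", stream_url,
        # waveform from the audio, overlaid on the image
        "-filter_complex",
        "[1:a]showwaves=s=960x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave];"
        "[0:v][outputwave] overlay=0:510:shortest=1 [out]",
        "-map", "[out]", "-map", "1:a",
        "-vcodec", "libx264", "-r", "30", "-g", "30",
        "-preset", "fast", "-b:v", "3000k", "-pix_fmt", "yuv420p",
        "-c:a", "copy", "-f", "flv", "-y",
        f"rtmp://live-ber.twitch.tv/app/{twitch_key}",
    ]
```

Calling subprocess.run(cmd) in a loop would restart ffmpeg whenever it exits, for example when the audio stream drops.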
And this is the final result (a few seconds of it):

Real time detect silence from audio input in Linux
As a Digital Streaming Specialist at a web radio, the current issue at hand is: “how to understand that the web radio is streaming, when the stream is silent?”
How can this happen? Web radio streaming is a chain of multiple parts, both analog and digital, and it can happen that some of them are silent while others are working as expected. For example, suppose we have a computer working as our streamer; it’s sending the audio input grabbed from a sound card directly to a streaming server. Even if the internet works, the connection is up, the password is known and the host is OK, there is no way to tell whether the grabbed audio is silent or not. This results in an audio stream that is up and running, but not producing audio (or perhaps some white noise can be heard).
If the input cable is detached, if the audio source is turned off, or if the mixer fader is completely faded out instead of being at top volume, all these situations may lead to silent audio going through the streaming chain. The optimal solution is something that monitors the input audio in real time, checks whether there are at least X seconds of silence and, if so, starts some procedures that can be:
- start some playlist so that the audio is no more silent
- send an e-mail and/or an alert to someone
Also, when the audio input is no longer silent (a super smart radio speaker has moved the mixer fader back to the correct position, for example…), this software needs to stop playing the playlist. It must run forever, it must restart if it’s stopped, and it must run at boot time. So let’s start with something that I’ve found: Python has sounddevice, which does what we need.
Sounddevice with Python
I’m not so used to Python, but it seems very easy to understand, easy to use, easy to modify and super powerful. I’ve started from this thread with this code snippet
# Print out realtime audio volume as ascii bars
import sounddevice as sd
import numpy as np

def print_sound(indata, outdata, frames, time, status):
    volume_norm = np.linalg.norm(indata) * 10
    print("|" * int(volume_norm))

with sd.Stream(callback=print_sound):
    sd.sleep(10000)
that shows some bars based on the audio input level. With some modifications (a chain of “if”s), the script now writes to a file when X silent samples are found. “Silent” is defined as a level under the threshold value, th.
#!/usr/bin/env python3
import numpy as np
import sounddevice as sd
import datetime

duration = 10  # in seconds
th = 10
sec = 0
maxNumberOfSilent = 4000
isSilent = True
logfile = open("soundlevel.log", "a")

def audio_callback(indata, frames, time, status):
    global sec
    global isSilent
    global logfile
    dateLog = datetime.datetime.now().strftime("[%Y-%m-%d %H:%M:%S] ")
    volume_norm = np.linalg.norm(indata) * 10
    #print("|" * int(volume_norm))
    #print(volume_norm)
    if volume_norm < th:
        sec += 1
    else:
        sec = 0
    if (sec > maxNumberOfSilent and not isSilent):
        isSilent = True
        logfile.write(dateLog + "Silent for " + str(maxNumberOfSilent) + " samples\n")
        logfile.flush()
        #print("Silent for "+str(maxNumberOfSilent)+" samples")
    elif (sec == 0 and isSilent):
        isSilent = False
        logfile.write(dateLog + "Music\n")
        logfile.flush()
        #print("Music")

stream = sd.InputStream(callback=audio_callback)
with stream:
    while True:
        sd.sleep(duration * 1000)
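The if-chain inside the callback can also be isolated into a small pure function, which makes the silence logic testable without a sound card. This is a sketch of mine, not part of the original script; all names are assumptions:

```python
def update_state(volume_norm, sec, is_silent, th=10, max_silent=4000):
    """One step of the silence state machine used in the callback above.

    Returns (sec, is_silent, event), where event is "silent" when the
    limit of consecutive quiet samples has just been crossed, "music"
    when sound has just come back, and None otherwise.
    """
    if volume_norm < th:
        sec += 1          # one more quiet sample
    else:
        sec = 0           # sound detected, reset the counter
    if sec > max_silent and not is_silent:
        return sec, True, "silent"
    if sec == 0 and is_silent:
        return sec, False, "music"
    return sec, is_silent, None
```

The callback would then just call update_state() and act on the returned event (log, e-mail, start a playlist).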
After some more research, I’ve found a class to send e-mails using a Gmail account:
import smtplib, ssl

class Mail:
    def __init__(self):
        self.port = 465
        self.smtp_server_domain_name = "smtp.gmail.com"
        self.sender_mail = "........"
        self.password = "........"

    def send(self, emails, subject, content):
        ssl_context = ssl.create_default_context()
        service = smtplib.SMTP_SSL(self.smtp_server_domain_name, self.port, context=ssl_context)
        service.login(self.sender_mail, self.password)
        for email in emails:
            result = service.sendmail(self.sender_mail, email, f"Subject: {subject}\n{content}")
        service.quit()

if __name__ == '__main__':
    mails = input("Enter emails: ").split()
    subject = input("Enter subject: ")
    content = input("Enter content: ")
    mail = Mail()
    mail.send(mails, subject, content)
To put everything together, I’ve created a system that sends an e-mail when the sound is silent:
#!/usr/bin/env python3
import numpy as np
import sounddevice as sd
import datetime
import smtplib, ssl
th = 10
sec = 0
maxNumberOfSilent = 10000
isSilent = True
logfile = open("soundlevel.log", "a")
to_addresses = ("myemail@mail.com",)
class Mail:
    def __init__(self):
        self.port = 465
        self.smtp_server_domain_name = "smtp.gmail.com"
        self.sender_mail = "....."
        self.password = "...."

    def send(self, emails, subject, content):
        ssl_context = ssl.create_default_context()
        service = smtplib.SMTP_SSL(self.smtp_server_domain_name, self.port, context=ssl_context)
        service.login(self.sender_mail, self.password)
        for email in emails:
            result = service.sendmail(self.sender_mail, email, f"Subject: {subject}\n{content}")
        service.quit()
mail_client = Mail()
def audio_callback(indata, frames, time, status):
    global sec
    global isSilent
    global logfile
    dateLog = datetime.datetime.now().strftime("[%Y-%m-%d %H:%M:%S] ")
    volume_norm = np.linalg.norm(indata) * 10
    #print("|" * int(volume_norm))
    #print(volume_norm)
    if volume_norm < th:
        sec += 1
    else:
        sec = 0
    if (sec > maxNumberOfSilent and not isSilent):
        isSilent = True
        logfile.write(dateLog + "Silent for " + str(maxNumberOfSilent) + " samples\n")
        logfile.flush()
        mail_client.send(to_addresses, "Audio is silent", dateLog + " audio is silent")
        #print("Silent for "+str(maxNumberOfSilent)+" samples")
    elif (sec == 0 and isSilent):
        isSilent = False
        logfile.write(dateLog + "Music\n")
        logfile.flush()
        mail_client.send(to_addresses, "Audio back to normal", dateLog + " audio back to normal")
        #print("Music")

stream = sd.InputStream(callback=audio_callback)
with stream:
    while True:
        sd.sleep(10 * 1000)
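The remaining requirements (run forever, restart if stopped, run at boot time) can be handled by the operating system. On a Linux box, a systemd unit is one simple way to do it; this is only a sketch, and the script path, unit name, and user are placeholders of mine:

```ini
[Unit]
Description=Web radio silence detector
After=network.target sound.target

[Service]
ExecStart=/usr/bin/python3 /opt/radio/silence_detector.py
WorkingDirectory=/opt/radio
Restart=always
RestartSec=5
User=radio

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/silence-detector.service, it can be enabled at boot with systemctl enable --now silence-detector.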

Podcast player for web radio site
As a hobby, I’m the “software handyman” at a web radio (and also a speaker, to be honest). In my spare time, I try to find solutions and cool stuff for all the digital problems related to this activity. It could be something that needs to be automated or some cool stuff for the website. This time, despite the fact that I’m not a good UI/UX designer, I’ve tried to improve the podcast player on the website. The default web player of WordPress is not that bad, but experimenting is something that I like to do and, after all, no one will be hurt by a bad player experiment on a web radio site self-financed by its speakers. So I’ve seen some cool players with a waveform on them, and I’ve spent time figuring out how they could be included.
First Implementation of a new podcast player
After a first implementation, I found version 1 of this new player not as dynamic as I expected. Moreover, it also required some additional computation and files (a static file representing the waveform peaks).
Second iteration of a podcast player
So I’ve found a new, super dynamic player, in JavaScript. But it gave me some problems related to the “touch event” needed by iPhone users to start the sound. After some months of figuring out how to solve it, I finally found a way to make it work.
Second implementation podcast player
It’s not perfect, it can still be improved, and maybe there are bugs around, but what I want to highlight is that I’ve spent a lot of my free time finding a better solution (“better” meaning “better to me”) and solving issues, problems, edge cases, and code limitations. See this recap video:
I’ve had the chance to do this because there is no money and no human life involved and, basically, because this is a hobby. The possibility of experimenting in a safe environment made the difference, and it also gave me a vision of the other skills and people involved in the development of a software solution: what could be a problem for a UX designer, or what could push for a new solution based also on the time needed to implement it.
My advice is to find space to explore and make mistakes, in your workplace or outside it. If you have time, take a look at the player and the website and, if you have a lot of time and want to listen to an Italian web radio show about technology, space, and similar stuff with a lot of music, my show is called “Katzenjammer”, every Monday starting at 20:00 (Europe/Rome time zone).
There are podcasts as well 😀
Maybe I’ll share some tech details and code in a future post; anyway, feel free to contact me for details or questions.

Managing a web radio streaming provider
If you manage a web radio, if you are an IT consultant, or if you’re just one of the tech guys behind the scenes of such a radio, keep in mind the following suggestions. Some parts of the web integration with your site may result in future problems or a lot of work. Most of these problems are related to third-party services that you may or may not use, so let’s analyze some of them.
Streaming services
If you manage a web radio, you already know that you need a streaming web service in order to let people listen to your audio stream. Unless you select an “all-in-house” solution, where you have a server with your website, your streaming server, and also podcast space, you probably rely on some third-party streaming service. This is also convenient from an economic point of view, because streaming servers can be very cheap, and some of them also come as an una tantum solution (you pay once and only once). This basically means that your website has a URL
https://www.myradiowebsite.com
While your streaming service has a different URL, maybe something like this
https://stream.thirdparty.org:8021/stream
While this is a good solution and it works nicely without any problem, please consider that your streaming URL (https://stream.thirdparty.org:8021/stream) may be used in multiple places: your website is one of them, but also your mobile app, your integrations with other websites, web radio aggregator websites, and so on.
The problem: streaming url
When you use a streaming server provider, this basically means you are coupled to a streaming URL, and if you want to change your streaming server provider, you need to update everything with the new streaming URL: your website, your mobile app, the web radio aggregator websites (e.g. TuneIn). More than this, if you need more listeners, the only way of getting them is to pay your streaming server provider for an upgraded plan.
The solution: a redirect proxy
The solution is to set up a simple redirect that can be easily managed in-house. The redirect gives you a streaming URL that you own, on your own website, something like this:
https://www.myradiowebsite.com/mystreaming --redirect to--> https://stream.thirdparty.org:8021/stream
This can be achieved in different ways: the simplest is probably adding a rule to your .htaccess file (if you have Apache hosting) or a directive to the nginx configuration, if your hosting supports nginx. The redirect syntax is not covered in this post; you can probably find the right one just by looking on the internet. I want to cover a slightly different solution that uses a programmatic approach. It’s a PHP redirect solution, but I know this can be done with any other server-side language. More than a plain redirect, I’ve added the possibility of a random weighted redirect. The idea is that you can set up a redirect that is randomly distributed, so if you have 2 streaming service providers…
server1 = https://my.webradioserver.com:8201/stream
server2 = https://another.server.com/8020/stream
… you can then weight them, for example 50-50, so that a user listens to server1 or server2 with 50% probability each. This gives you the chance to move to another server with a ramp-up, or to add another server to increase the number of possible listeners and assign users randomly. This is my solution:
<?php
header("Expires: Mon, 01 Jan 1970 00:00:00 GMT");
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Cache-Control: no-store, no-cache, must-revalidate");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");

function getRandomWeightedElement(array $weightedValues) {
    $rand = mt_rand(1, (int) array_sum($weightedValues));
    foreach ($weightedValues as $key => $value) {
        $rand -= $value;
        if ($rand <= 0) {
            return $key;
        }
    }
}

$serverSTreaming1 = 'https://my.webradioserver.com:8201/stream';
$serverSTreaming2 = 'https://another.server.com/8020/stream';
$wheights = array($serverSTreaming1 => 0, $serverSTreaming2 => 100);

header("Location: " . getRandomWeightedElement($wheights));
die();
The line
$wheights = array($serverSTreaming1=>0, $serverSTreaming2=>100);
does all the magic: this setup moves all the traffic to $serverSTreaming2, but you can of course configure it as 50-50 as well, for example with
$wheights = array($serverSTreaming1=>50, $serverSTreaming2=>50);
This solution can easily be extended with more streaming providers and different weights.
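For reference, the same weighted pick can be sketched in a few lines of Python with random.choices from the standard library; the function name is mine, not from the original post:

```python
import random

def pick_stream(weights):
    """Pick a streaming URL at random, honoring integer weights.

    Equivalent in spirit to the PHP getRandomWeightedElement() above.
    `weights` maps URL -> weight, e.g. {url1: 50, url2: 50}.
    """
    urls = list(weights)
    return random.choices(urls, weights=[weights[u] for u in urls], k=1)[0]
```

With weights {server1: 0, server2: 100} every listener is sent to server2; with 50/50 the traffic is split evenly.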
Takeaways
With a redirect solution, you can easily change the streaming provider for all your clients (your website, your mobile application, third-party aggregator websites, etc.), because they use your source URL and you only change the redirect destination.
You can also add multiple destinations and weight them, as in my example.

How to convert video for Playstation 3 / PS3
I know that downloading and converting video is an old-style procedure that nowadays seems useless and time-consuming. But my PlayStation 3 still works like a charm, and I can watch a movie while sitting on the couch, using the controller as a remote: quite better than a laptop on my legs connected to a streaming website.
A not-so-comfortable position for watching movies
The problem is that the PS3 (PlayStation 3) only plays certain types of files, and very often you need to convert downloaded files to a suitable format. After googling and looking around, I’ve finally found my perfect settings. First of all, you need ffmpeg to convert the file. ffmpeg is a super powerful tool that can do a lot of stuff with video and audio. We’ll use it to convert the input file into a format that the PS3 can play. Suppose you have a file named
INPUT_FILE_NAME.mkv
(.mkv is a format that the PS3 cannot play). Then, to convert it, just use this command:
ffmpeg -y -i "INPUT_FILE_NAME.mkv" -vf scale=1024:-1 -c:v libx264 -pix_fmt nv12 -acodec aac -b:a 192k -ac 2 -ar 44100 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -f mp4 "OUTPUT_FILE_NAME.mp4"
If you hate dubbing (I know there are a lot of movie lovers who hate dubbing; I’m not one of them, to be honest), you can also burn the subtitles that are (often) inside the downloaded .mkv file directly into the output mp4. Unfortunately, burned-in subtitles cannot be turned off afterwards, but it’s a good trade-off.
In this case, use this command
ffmpeg -y -i "INPUT_FILE_NAME.mkv" -vf "scale=1024:-1,subtitles=INPUT_FILE_NAME.mkv:si=0" -c:v libx264 -pix_fmt nv12 -acodec aac -b:a 192k -ac 2 -ar 44100 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -f mp4 "OUTPUT_FILE_NAME.mp4" > /dev/null 2>&1 < /dev/null &
where I’ve just added the part
subtitles=INPUT_FILE_NAME.mkv:si=0
to the video filter chain (note that it goes into the same -vf as the scale filter: if you pass two separate -vf options, ffmpeg only keeps the last one). It basically tells ffmpeg to “get the subtitles from the input file, take the first subtitle track, number 0, and render them into the output file”. You can change the file name and, as long as ffmpeg can read the input file, you’ll have a file usable by your PS3. Another option I’ve added is the > /dev/null 2>&1 < /dev/null &
at the end of the command so that it’ll run in background.
The first part, > /dev/null 2>&1,
basically tells the shell to send all of ffmpeg’s output (including errors) to nowhere.
The second part < /dev/null
tells ffmpeg that there is no interactive input (yes ffmpeg is an interactive tool!)…
… and the last part &
means “run in background“!
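To avoid retyping the long command, the whole conversion can be wrapped in a small helper that builds the argument list, with or without burned-in subtitles. This is only a sketch of mine (function and parameter names are assumptions); the resulting list could be passed to subprocess.run to launch ffmpeg:

```python
def ps3_convert_cmd(infile, outfile, burn_subtitles=False, sub_track=0):
    """Build the ffmpeg argument list for a PS3-playable mp4 (a sketch).

    When burn_subtitles=True, the subtitles filter is appended to the
    same -vf chain as the scale filter, since ffmpeg keeps only one -vf.
    """
    vf = "scale=1024:-1"
    if burn_subtitles:
        vf += f",subtitles={infile}:si={sub_track}"
    return [
        "ffmpeg", "-y", "-i", infile,
        "-vf", vf,
        "-c:v", "libx264", "-pix_fmt", "nv12",
        "-acodec", "aac", "-b:a", "192k", "-ac", "2", "-ar", "44100",
        "-af", "aresample=async=1:min_hard_comp=0.100000:first_pts=0",
        "-f", "mp4", outfile,
    ]
```

For example, ps3_convert_cmd("movie.mkv", "movie.mp4", burn_subtitles=True) builds the subtitled variant of the command above.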

EZSTREAM for streaming
ezstream is a command line source client that can stream audio, with or without reencoding, to an Icecast or SHOUTcast server.
In its basic mode of operation, it streams media files or data from standard input without reencoding, and thus requires very little CPU. It can also use various external decoders and encoders to reencode from one format to another, and then stream the result to an Icecast server. I’m using it to stream a list of mp3 files to a web radio server. My use case is:
I want to stream the podcasts that I host in a folder, with some auto-update mechanism, so that if a new podcast is added, it gets played (sooner or later).
ezstream can be used for streaming with a configuration file and it can be launched with:
ezstream -c mystream.xml
where mystream.xml is the configuration file
EZSTREAM configuration
ezstream uses an XML file for configuration. Mine is very simple: I’ve used a modified version of the example that ezstream itself provides. Here’s the configuration I’m using:
<format>MP3</format>
<filename>/myfolder/playlist.txt</filename>
<stream_once>0</stream_once>
format: the audio file format
filename: the name and path of a text file that contains the path of every single podcast, one per line
stream_once: zero (0) streams the list and then restarts it, one (1) streams the list once
In the file there are other configurations: <url>, <sourcepassword>, <svrinfobitrate> but I think these are trivial.
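For context, a complete minimal configuration might look like the sketch below; the server URL, mount point, and password are placeholders of mine, so check the example file shipped with your version of ezstream for the exact element names:

```xml
<ezstream>
    <url>http://my.icecast.example:8000/mystream</url>
    <sourcepassword>hackme</sourcepassword>
    <format>MP3</format>
    <filename>/myfolder/playlist.txt</filename>
    <stream_once>0</stream_once>
    <svrinfobitrate>128</svrinfobitrate>
</ezstream>
```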
The playlist
ezstream provides a very useful feature to shuffle a list of files. It’s documented, so you can check it with the -h flag, and you can run it with:
ezstream -s playlist.txt
this command takes the playlist.txt input file, shuffles it, and sends the result to standard output, i.e. it writes the shuffled list to the console. You can redirect stdout to a file to get a shuffled playlist. But, to be honest, I need something similar… but different. I want a shuffled version of the list, but every X podcasts I want to insert two sounds, two musical cuts. So I’ve used a scripting language to generate the playlist. Unfortunately for you, I’ve used PHP, but let’s just analyze the logic:
<?php
$baseFolder = "/music/";
$cutArray = array("jingle.mp3", "spot.mp3");
$playlistFile = "/myfolder/playlist.txt";
$everySong = 3;
$allfile = array();

if (file_exists($playlistFile)) {
    unlink($playlistFile);
}

foreach (glob($baseFolder . '*.mp3', GLOB_NOSORT) as $file) {
    $tmp = basename($file);
    if (strpos($tmp, 'news-automatiche') === false) {
        $allfile[] = $baseFolder . $tmp;
    }
}

shuffle($allfile);

$f = fopen($playlistFile, "w+");
foreach ($allfile as $index => $file) {
    if ($index % $everySong == 0) {
        foreach ($cutArray as $sng) {
            fwrite($f, $sng . PHP_EOL);
        }
    }
    fwrite($f, $file . PHP_EOL);
}
fclose($f);
I check if the playlist file exists and, if so, I delete it. Then I scan the folder $baseFolder for mp3 files and store them in an array (with full path).
The array is shuffled with shuffle($allfile).
Then I write the playlist file line by line and, using % (the modulo operation), every 3 podcast files (defined by the variable $everySong), I write the two musical cut sounds.
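The same logic can be sketched in Python (all names are mine), which also makes it easy to verify where the cuts are placed:

```python
import random

def build_playlist(podcasts, cuts, every=3):
    """Shuffle the podcast list and insert the cut sounds before every
    `every`-th file, mirroring the PHP script above (a sketch)."""
    files = list(podcasts)
    random.shuffle(files)
    lines = []
    for i, f in enumerate(files):
        if i % every == 0:
            lines.extend(cuts)   # jingle + spot before this block
        lines.append(f)
    return lines
```

Writing "\n".join(build_playlist(...)) to playlist.txt would produce the same kind of file the PHP script generates.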
Shuffle and update playlist
How can I reload the playlist with the newly added podcasts? I’ve written a bash script that I run with cron:
Every 12 hours, the cron job runs the PHP script to write a new playlist file. This generated playlist file includes all the podcasts, including the new ones. Then, using ezstream’s SIGHUP signal mechanism, I can reload this new playlist.
#!/bin/bash
SERVICE="ezstream"
COMMAND="pidof $SERVICE"
EZSTREAMPID=$(eval $COMMAND)
# Randomize playlist and insert sounds
php createPlaylist.php
# Rereads the playlist file
kill -HUP $EZSTREAMPID
The main point here is kill -HUP $EZSTREAMPID: this command sends the HUP signal to the running ezstream. Upon receiving it, ezstream:
- Rereads the playlist file
- Checks where the currently playing podcast file is in the new playlist
- If the currently playing file exists in the new list, the next song will be the one below it; otherwise, the next song will be the first of the playlist file
Conclusion
The need for an Icecast source client that can stream podcast files with an update mechanism is solved with ezstream and a few lines of code, easy to read and easy to configure for your needs.

From audio to video with ffmpeg
Video content is something that social media and websites use widely. Video content is also more engaging and gets better reactions than plain audio. Working at a web radio, I’m always looking for solutions that can engage and attract people, so my idea was to turn audio content into video content.
But how can you create video content from audio content?
ffmpeg has a huge collection of functions; one of these is showwaves, which can generate a video from an audio file. The video simply shows the waveform of the audio. It’s good, but for me this is just a starting point.
ffmpeg -i input_audio.mp3 -filter_complex \
"[0:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]" \
output.mp4
I’m using here filter_complex because I want a chain of filters, and this is only one of the steps. Let me explain the parameters:
[0:a] means: from input 0, take the audio
showwaves=s=1280x100 is the filter, showwaves, and the output size, 1280x100
colors=Red is the color of the waveform
mode=cline is the type of the waveform, a filled line
rate=25 is the frame rate
scale=sqrt is the amplitude scale; sqrt is a good choice
[outputwave] is a label for the result; we will use it later in the chain
But what I want is a video where the waveform is only an overlay layer on top of it. The base video could be a pre-created video used for all the audio content: a slideshow, something created graphically, a static photo, or a simple animation. So let’s suppose that this video exists. But how can this video fit the audio length, which can be variable?
How can a pre-created video fit the length of an audio file with variable duration?
My idea is to loop the pre-created video for the length of the audio and overlay on it the waveform created from the audio file!
And I want this in a single ffmpeg command! And so…
ffmpeg -stream_loop -1 -i video.mp4 \
-i input_audio.mp3 -filter_complex \
"[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; \
[0:v][outputwave] overlay=0:main_h-overlay_h:shortest=1 [out]" \
-map '[out]' -map '1:a' -c:a copy -y \
output.mp4
-stream_loop -1 -i video.mp4 loops the input video, video.mp4, indefinitely*
-i input_audio.mp3 … [outputwave]: we already covered this above.
[0:v][outputwave] overlay=0:main_h-overlay_h:shortest=1 [out] takes the video of the first input, [0:v], and the waveform video, [outputwave], as inputs for the overlay filter, and places the overlay at the bottom with 0:main_h-overlay_h. The magic happens with shortest=1, which means: do this overlay for the length of the shorter input. *The looped video is infinitely long but the waveform is not, and this means: for the length of the waveform!
-map '[out]' -map '1:a' -c:a copy -y maps the overlay result as the output video and copies the audio from the input audio file.
This is a frame of the final video:


Listen to your microphone with your computer speakers: Audio Monitor (for Mac)
How can you listen to your microphone through your speakers? How can you direct, or redirect, microphone input to a different audio device?
On your Mac, there is a freeware app you can download called Audio Monitor. It’s built on the MTCoreAudio framework, a Cocoa-flavored Objective-C wrapper around the Hardware Abstraction Layer (HAL) of Apple’s CoreAudio library.
As you can see here, you can send audio from your microphone to your speakers (as you can imagine, this also means there’ll be a lot of echo), but another thing you can do is send audio from your microphone to a different output device. In conjunction with a setup like the one described in Audio virtual device for Mac, you can enable many configurations, like sending your microphone audio and music to another listener connected with you via Skype.