It is just a pain in the ass if you cannot get any work done because you constantly have to check whether a webcam image has changed. Therefore I wrote a shell script that downloads the webcam images and puts them together as a video.
What the script does is the following:
- download an image from a URL
- check if the image is the same as the last one (do not store duplicates)
- create a video from all downloaded images
Now let’s give some credit where it is due:
Downloading the image can be done with either wget or curl. However, the file must be renamed to avoid overwriting: Curl for grabbing webcam pics in succession?
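The renaming idea in isolation looks like this (a minimal sketch; the actual download line is commented out since it needs a live URL):

```shell
# Build a unique, sortable file name from the current time so that
# successive downloads never overwrite each other.
STAMP=$(date +"%y%m%d-%H%M%S")
echo "saving as $STAMP.jpg"
# curl -s -o "$STAMP.jpg" "$URL"    # or: wget -q -O "$STAMP.jpg" "$URL"
```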
Basically there are three ways to create a movie from images. Two are described in Making movies from image files using ffmpeg/mencoder. The third did not work for me, but this may be because I use Ubuntu Gnome; on a KDE desktop it might work. That solution, however, only works with a desktop and cannot be run on a server or a NAS.
The periodic execution of the script I solved with a cron job.
I put the code into two functions. The first takes four arguments:
- The URL from where to download the image
- The path where the image is to be stored
- The name of the video file (without file extension)
- The file name of the downloaded image
downloadAndCreateMovie(){
    URL=$1
    STORE_PATH=$2
    PREFIX=$3
    FILENAME=$4
    if [ ! -d "$STORE_PATH" ]
    then
        mkdir "$STORE_PATH"
    fi
    cd "$STORE_PATH"
    # the most recently modified image is the last one we kept
    LASTFILE=`ls -Atr *.jpg | tail -1`
    # curl -O $URL
    wget -q "$URL"
    # the first field of the md5sum output is the checksum
    MD5LAST=`md5sum -b "$LASTFILE" | sed 's: :\n:g' | head -1`
    MD5CUR=`md5sum -b "$FILENAME" | sed 's: :\n:g' | head -1`
    if test "$MD5LAST" != "$MD5CUR"
    then
        # keep the new image under a timestamped name and rebuild the video
        cp -v "$FILENAME" `date +"%y%m%d-%H%M%S"`.jpg
        images2movie "$PREFIX"
    fi
    rm "$FILENAME"
}
First we make sure that the directory where we store the images exists. Then we figure out the file name of the latest downloaded image. We can then download the new image with curl or wget. To compare the two files (the latest and the just downloaded one) we compute their md5 checksums and compare them. If they do not match, we rename the downloaded file by giving it a timestamp as its file name and then call the second function to create the video.
This function has some drawbacks:
- The name of the downloaded image could be derived from the URL instead of being passed as an argument
- If the function is called on an empty folder there is no last file, and md5sum is invoked on an empty file name
- There is no error handling on the download; I guess it will just time out and there will be no new image
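A sketch of how these points could be hardened (the URL is the example one used below; wget's -T and -t flags set a timeout in seconds and a retry count, and the actual download line is commented out):

```shell
URL=http://www.kronplatz.com/_webcam/70/webcam7.jpg
FILENAME=${URL##*/}                 # derive the file name from the URL
# suppress the error when the folder holds no images yet
LASTFILE=$(ls -Atr *.jpg 2>/dev/null | tail -1)
if [ -z "$LASTFILE" ]
then
    echo "empty folder: keep the first download unconditionally"
fi
# wget -q -T 30 -t 2 "$URL"         # 30 s timeout, two attempts
```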
Now the second function creates the video from the image files. It takes the name of the video file as an argument. I chose the ffmpeg approach since it worked better for me and might also work on a NAS.
images2movie(){
    MOVIE_NAME=${1:-movie}
    DURATION=3
    mkdir movie
    FILECOUNTER=`printf "%05d" 0`
    # iterate only over the images, not over the movie directory itself
    for file in *.jpg
    do
        # copy each image DURATION times so it shows for DURATION seconds at 1 fps
        for (( i = 0; i < $DURATION; i++ ))
        do
            cp "$file" movie/$FILECOUNTER.jpg
            FILECOUNTER=`expr $FILECOUNTER + 1`
            FILECOUNTER=`printf "%05d" $FILECOUNTER`
        done
    done
    rm -f "$MOVIE_NAME.mp4"
    ffmpeg -r 1 -b 1800 -s svga -i movie/%05d.jpg "$MOVIE_NAME.mp4"
    rm -rf movie
}
The video is created with a frame rate of one frame per second. If you want to display the same image over several seconds you have to input it several times. The variable DURATION defines how many times. The images are copied into the temporary video directory. The file name is changed to consecutive numbering with five digits starting with 0. The tricky part is the counting: The file name is a string so you have to evaluate the expression of adding 1 to the numeric file name and then reformatting it as a five digit string.
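The increment-and-pad step in isolation looks like this:

```shell
# Increment a zero-padded counter: expr parses the padded string as a
# decimal number, printf pads the result back to five digits.
COUNTER=`printf "%05d" 0`
for i in 1 2 3
do
    COUNTER=`expr $COUNTER + 1`
    COUNTER=`printf "%05d" $COUNTER`
done
echo $COUNTER    # prints 00003
```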
The next thing is to create the video from the prepared images. The parameter -r specifies the frame rate in frames per second. The parameter -b defines the bitrate. The parameter -s defines the output frame size (svga corresponds to 800x600); you will want to select a size that matches the format of the images for best quality. The parameter -i defines the input using the same printf-style pattern as the file names. The last parameter is the output file name.
Basically that's it. The call would look like this:
STORE_PATH=~/Pictures/kronplatz/Marchner
URL=http://www.kronplatz.com/_webcam/70/webcam7.jpg
downloadAndCreateMovie $URL $STORE_PATH Marchner webcam7.jpg
To install this as a cron job, issue
crontab -e
on the console and add this line
*/10 * * * * /home/andi/bin/downloadImage.sh > /dev/null 2>&1
You will want to adjust the path to your script. This line executes the script every ten minutes and discards all output. If the image at the URL is replaced regularly, it is a good idea to choose the same interval here.
When you close the editor, the temporary file is saved and installed as the cron job.
The script can be downloaded:
downloadImage.sh
It might be necessary to set the PATH variable in the crontab. Note that PATH=$PATH:$HOME/bin does not work there, since crontab does not expand variables; you will want to specify the full path.
If you run into problems you can redirect the log output to a file by replacing /dev/null with the path to that file.
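For example (the log path here is just an illustration):

```
*/10 * * * * /home/andi/bin/downloadImage.sh >> /home/andi/downloadImage.log 2>&1
```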
Using ffmpeg as shown produces some block artefacts. I found an alternative in dvd-slideshow, which can be installed with
sudo apt-get install dvd-slideshow
Then replace the ffmpeg line with the following:
dvd-slideshow -n $MOVIE_NAME -o . -f $SLIDESHOW
$SLIDESHOW holds the name of an input file that you have to generate in the for loop that iterates over all *.jpg files in the directory:
echo $file':1' >> $SLIDESHOW
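Put together, the generation step could look like this (a sketch; the helper name build_slideshow is made up, and the :1 suffix shows each image for one second):

```shell
# Write one "image:duration" line per *.jpg into the slideshow file.
build_slideshow(){
    SLIDESHOW=$1
    rm -f "$SLIDESHOW"
    for file in *.jpg
    do
        echo "$file:1" >> "$SLIDESHOW"   # display each image for one second
    done
}
```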
This will generate a *.vob file of considerable size. You can add the -flv flag to reduce the size by a factor of 10, but this will introduce some block artefacts again.