Augmenting Audio/Video Recordings with Text Overlays¶
This page extends the description of audio/video recording with support for rendering text overlays.
Pipeline¶
The following diagram is a simplified illustration of a minimal GStreamer pipeline that can be used to generate videos with text overlays from RSB events. The vorbisenc, theoraenc and oggmux elements can be replaced with different encoders and container multiplexers as required.
rsbaudiosrc ------------------> vorbisenc ---+
                                             v
rsbvideosrc --------+                     oggmux ---> filesink
                    v                        ^
             subtitleoverlay ---> theoraenc -+
                    ^
rsbtextsrc ---------+
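A pipeline of this shape can be described in gst-launch syntax. The sketch below only assembles and prints such a description; it is an illustration, not taken from the project, and it assumes that the rsbaudiosrc, rsbvideosrc and rsbtextsrc elements accept a scope property (check the actual plugin before use).

```shell
# Assemble a gst-launch-style description of the diagrammed pipeline.
# Assumptions: the rsb* elements exist on this system and take a "scope"
# property; the scopes themselves are placeholders.
PIPELINE='
  oggmux name=mux ! filesink location=out.ogv
  rsbaudiosrc scope=/audio/mic1 ! audioconvert ! vorbisenc ! mux.
  rsbvideosrc scope=/video/camera1 ! videoconvert
    ! subtitleoverlay name=overlay ! theoraenc ! mux.
  rsbtextsrc scope=/controller/states ! overlay.
'
# Print the command that would run the pipeline (not executed here).
echo "gst-launch-1.0 $PIPELINE"
```

The oggmux element is given a name so that both encoder branches can link to it, and subtitleoverlay is named so the text source can attach to its subtitle pad.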
Script¶
The RSB GStreamer Integration project includes the script source:trunk/scripts/record_subtitles.sh in the source:trunk/scripts subdirectory. This script is sufficient for cases in which audio data, video data and a single text overlay should be rendered into a video container file. The script is used as follows:
record_subtitles.sh OUTPUTFILE VIDEOSCOPE ( AUDIOSCOPE | _ ) ( TEXTSCOPE | _ )

The placeholder _ can be used to indicate that a particular component should not be recorded.
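The effect of the _ placeholder can be sketched as follows. The function name is purely illustrative and is not taken from record_subtitles.sh; it just shows one plausible way such a script can turn each positional argument into either a scope or nothing.

```shell
# Hypothetical helper: emit the scope argument for a component, or nothing
# when the placeholder "_" disables that component.
component_scope() {
  if [ "$1" = "_" ]; then
    return 0            # component disabled: contribute no arguments
  fi
  printf '%s\n' "$1"
}

component_scope /audio/mic1   # prints the audio scope
component_scope _             # prints nothing: audio is skipped
```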
Example¶
Assuming audio data, video data and some sort of textual controller state messages have been recorded into a logfile named state_transitions.tide, this example demonstrates overlaying the video with the textual controller state messages.
- Start the recording and overlay rendering pipeline:
$ ./record_subtitles.sh state_visualization.ogv /video/camera1 /audio/mic1 /controller/states
This part of the example assumes an RSB configuration that enables the Spread transport for a Spread daemon listening on port 4803 of localhost.
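Such a configuration could look roughly like the fragment below, e.g. in an rsb.conf file. This is a sketch; the exact section and option names should be verified against the RSB configuration documentation for the version in use.

```ini
# Assumed RSB configuration fragment enabling the Spread transport
# for a daemon on localhost:4803.
[transport.spread]
enabled = 1
host    = localhost
port    = 4803
```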
- Replay audio, video and text data from a suitable log file:
$ bag-play -r as-fast-as-possible state_transitions.tide 'spread://localhost:4803'