Saturday, May 24, 2014

Thread-safe Singleton with C++11

C++11 makes it easier to write a thread-safe singleton.

Here is an example. The class definition of the singleton looks as follows:
#include <memory>
#include <mutex>

class CSingleton
{
public:
    virtual ~CSingleton() = default;
    static CSingleton& GetInstance();

private:
    CSingleton() = default;
    CSingleton(const CSingleton& src) = delete;
    CSingleton& operator=(const CSingleton& rhs) = delete;

    static std::unique_ptr<CSingleton> m_instance;
    static std::once_flag m_onceFlag;
};

The implementation of the GetInstance() method is very easy using C++11 std::call_once() and a lambda:

std::unique_ptr<CSingleton> CSingleton::m_instance;
std::once_flag CSingleton::m_onceFlag;

CSingleton& CSingleton::GetInstance()
{
    std::call_once(m_onceFlag,
        [] {
            m_instance.reset(new CSingleton);
        });
    return *m_instance;
}
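
Note that C++11 also guarantees thread-safe initialization of function-local statics ("magic statics"), so the same singleton can be written even more compactly without call_once. A minimal sketch of this alternative:

class CSingleton
{
public:
    static CSingleton& GetInstance()
    {
        // C++11 runs this initialization exactly once, even when
        // several threads call GetInstance() concurrently.
        static CSingleton instance;
        return instance;
    }

private:
    CSingleton() = default;
    CSingleton(const CSingleton&) = delete;
    CSingleton& operator=(const CSingleton&) = delete;
};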

Wednesday, May 14, 2014

Bluetooth audio Support using Gstreamer


The focus during the Bluetooth audio service development was to fix all the limitations of Bluetooth ALSA and PlugZ and to provide a flexible infrastructure that can be used for all Bluetooth audio related profiles. The following requirements were identified during the design:

• Treat mono and high quality profiles as equals
With the Service Discovery Protocol (SDP) it is possible to retrieve the list of supported profiles from any remote Bluetooth device. Together with information about the audio stream, it is possible to select the correct profile automatically and do the needed conversion transparently for the user.

• Integrate with all multimedia frameworks
Choosing ALSA as the basic multimedia framework is not the best choice. ALSA is actually pretty bad when it comes to virtual sound cards, and that is exactly what Bluetooth audio is: there is no audio hardware directly attached to the system, and all headsets, headphones or speakers are connected via an invisible radio link.
The GStreamer and PulseAudio [7] frameworks are much better when it comes to handling virtual audio devices, so there is no need to treat them as second-class citizens.

• Low latency and high performance
In cases where the host has to handle all audio data processing, it should be done in the most efficient way, and data copying should be avoided at all costs. This increases performance and at the same time results in good latency. In addition, it reduces power consumption.

• Full integration with D-Bus
Provide a full D-Bus interface for control and notifications. It should allow creating, configuring and controlling audio connections, and integrate with the Audio/Video Remote Control Profile (AVRCP) for handling keys and displays on remote devices.

GStreamer support

With GStreamer the possibilities become much more flexible. The GStreamer framework allows a lot of configuration, since everything can be abstracted into elements or containers and a pipeline can then be constructed out of them. The GStreamer plugin that provides access to the Bluetooth audio services consists of multiple elements that can be combined in various ways. Figure 4 shows the details of these elements.
# gst-inspect bluetooth
Plugin Details:
Name: bluetooth
Description: Bluetooth plugin library
Filename: libgstbluetooth.so
Version: 3.30
License: LGPL
Source module: bluez-utils
Binary package: BlueZ
Origin URL: http://www.bluez.org/
rtpmp3pay: RTP packet payloader
a2dpsink: Bluetooth A2DP sink
avdtpsink: Bluetooth AVDTP sink
mp3parse: Bluetooth MP3 parser
mp3dec: Bluetooth MP3 decoder
mp3enc: Bluetooth MP3 encoder
bluetooth: mp3: mp3
7 features:
+-- 6 elements
+-- 1 types

GStreamer plugin
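
As a rough sketch of how such a pipeline might be constructed from application code, the following program parses a local MP3 file and hands it to a2dpsink. This is only an illustration: the file name, the remote device address, and the a2dpsink "device" property are assumptions to be checked against the plugin documentation of your BlueZ version.

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Hypothetical pipeline: stream an MP3 file to an A2DP headset.
     * "device" (the remote BD address) is an assumed property name. */
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=sample.mp3 ! mp3parse ! "
        "a2dpsink device=00:11:22:33:44:55", &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until the stream ends or an error is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}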

Sunday, May 11, 2014

In-Memory Video using OpenGL

Introduction

A common question asked by beginning VL programmers is, "How do I display video using OpenGL?"
It is not surprising that people ask this question. There are few code examples in existence which demonstrate the use of VL and OpenGL in the same program. To compound matters, the subject is also not mentioned in the IRIS Media Libraries Programming Guide, which was written around IRIS GL-based code examples.

Setting up OpenGL

The biggest problem that one encounters when trying to set up video display through OpenGL is that the video display is slow - much slower, in fact, than that produced by the IRIS GL-based code samples.
But video display through OpenGL does not have to be slow if you perform the proper setup work. Here's the checklist:
  • When you create an OpenGL window, you will do one of two things:
    1. If you are a pure X programmer, you call glXGetConfig() and glXChooseVisual() to create an X window suitable for OpenGL rendering.
    2. If you are a Motif programmer, you call GLwCreateMDrawingArea() to create an OpenGL widget.

      Either way, you eventually call glXCreateContext() to associate an OpenGL context with the window. The call might look like this:

      context = glXCreateContext(XtDisplay(widget), vi, 0, GL_TRUE);

      It is very important that the last parameter is GL_TRUE. This specifies that pixel rendering should be done directly to the hardware, rather than through the X server.
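
      For the pure-X path, visual selection plus context creation might look like the sketch below (dpy is your X display connection; the attribute list is an assumption, so adjust it to your framebuffer needs):

      /* Pick an RGBA-capable visual and create a direct-rendering context. */
      static int attribs[] = { GLX_RGBA,
                               GLX_RED_SIZE, 8,
                               GLX_GREEN_SIZE, 8,
                               GLX_BLUE_SIZE, 8,
                               None };
      XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
      GLXContext context = glXCreateContext(dpy, vi, 0, GL_TRUE); /* direct */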
  • You need to turn off some OpenGL features that can slow down pixel transfers. Here is a piece of code to do it:
/*
 * The original version of this code was developed by Allen Akin
 * (akin@sgi.com).  It has been modified for this example.
 */
#include <string.h>        /* strstr() */
#include <X11/Intrinsic.h> /* XtDisplay(), Boolean, TRUE */
#include <X11/Xutil.h>     /* XVisualInfo, XGetVisualInfo() */
#include <GL/gl.h>

void setupGL()
{
    /*
     * Is there a 24-bit visual? If not, we want to dither RGB.
     */
    Boolean doDither = TRUE;
    /* getDeviceWidget() is application code returning the render widget. */
    Display * dpy    = XtDisplay(getDeviceWidget());
    XVisualInfo vinfo;
    XVisualInfo *viList;
    int nitems;

    vinfo.depth = 24;
    viList = XGetVisualInfo(dpy, VisualDepthMask, &vinfo, &nitems);
    if (viList) {
        XFree(viList);
        doDither = FALSE;
    }

    /*
     * Disable stuff that's likely to slow down glDrawPixels.
     * (Omit as much of this as possible, when you know in advance
     * that the OpenGL state will already be set correctly.)
     */
    glDisable(GL_ALPHA_TEST);
    glDisable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    if (!doDither) {
        glDisable(GL_DITHER);
    }
    glDisable(GL_FOG);
    glDisable(GL_LIGHTING);
    glDisable(GL_LOGIC_OP);
    glDisable(GL_STENCIL_TEST);
    glDisable(GL_TEXTURE_1D);
    glDisable(GL_TEXTURE_2D);
    glPixelTransferi(GL_MAP_COLOR, GL_FALSE);
    glPixelTransferi(GL_RED_SCALE, 1);
    glPixelTransferi(GL_RED_BIAS, 0);
    glPixelTransferi(GL_GREEN_SCALE, 1);
    glPixelTransferi(GL_GREEN_BIAS, 0);
    glPixelTransferi(GL_BLUE_SCALE, 1);
    glPixelTransferi(GL_BLUE_BIAS, 0);
    glPixelTransferi(GL_ALPHA_SCALE, 1);
    glPixelTransferi(GL_ALPHA_BIAS, 0);

    /*
     * Disable extensions that could slow down glDrawPixels.
     */
    const GLubyte* extString = glGetString(GL_EXTENSIONS);

    if (extString != NULL) {
       if (strstr((char*) extString, "GL_EXT_convolution") != NULL) {
           glDisable(GL_CONVOLUTION_1D_EXT);
           glDisable(GL_CONVOLUTION_2D_EXT);
           glDisable(GL_SEPARABLE_2D_EXT);
       }
       if (strstr((char*) extString, "GL_EXT_histogram") != NULL) {
           glDisable(GL_HISTOGRAM_EXT);
           glDisable(GL_MINMAX_EXT);
       }
       if (strstr((char*) extString, "GL_EXT_texture3D") != NULL) {
           glDisable(GL_TEXTURE_3D_EXT);
       }
    }
}
  • Video that you get from VL is arranged in top-to-bottom orientation, whereas OpenGL works in bottom-to-top orientation. So, before you call glDrawPixels(), do this:

    glRasterPos2i(originX, originY);
    glPixelZoom(1.0, -1.0);

    originX and originY should be the upper left corner of the region you want to draw, in OpenGL coordinates (origin in lower left). The glPixelZoom() call tells OpenGL to flip the pixels on the way to the display. On the Indy, this is a highly optimized operation.
  • If your video device produces standard OpenGL ordered pixels (VL_PACKING_ABGR_8: ev1, sirius, ev3, divo), draw the pixels using the native OpenGL pixel format:

    glDrawPixels(sizeW, sizeH, GL_RGBA, GL_UNSIGNED_BYTE, dataPtr);
  • If your video device produces IRIS GL ordered pixels (VL_PACKING_RGBA_8: vino), draw the pixels using SGI's ABGR extension to OpenGL. On the Indy at least, this means the pixels from VL will go directly to the screen. Like this:

    glDrawPixels(sizeW, sizeH, GL_ABGR_EXT, GL_UNSIGNED_BYTE, dataPtr);
  • If you do not plan to do anything with the pixels from VL except display them on the screen, there is a VL optimization you can perform to maximize video throughput. Perform the following call after vlCreateBuffer() and before vlRegisterBuffer():

    vlBufferAdvise(buf, VL_BUFFER_ADVISE_NOACCESS);
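
Putting the checklist together, a per-frame display routine might look like the following sketch (names such as dataPtr, sizeW and sizeH are assumptions; the pixel format matches the IRIS GL ordered case above):

/* Draw one top-to-bottom video frame through OpenGL. Assumes the
 * window's GLX context is current and setupGL() has already run. */
void drawFrame(const void *dataPtr, int sizeW, int sizeH,
               int originX, int originY)
{
    glRasterPos2i(originX, originY);  /* upper left corner of the video */
    glPixelZoom(1.0, -1.0);           /* flip top-to-bottom on the fly  */
    glDrawPixels(sizeW, sizeH, GL_ABGR_EXT, GL_UNSIGNED_BYTE, dataPtr);
}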
These steps are sufficient to ensure that video displays quickly. But there are further subtleties you must pay attention to in order to ensure reliable transfers. These fine points, such as setting up VL events and the proper way to respond to certain VL events, are discussed in detail elsewhere in this guide.

Displaying 8-bit Video using OpenGL

The above discussion assumes that you are displaying 24-bit video (VL packing VL_PACKING_RGB_8) to the screen. Some devices, such as vino, support 8-bit video-to-memory transfers (VL packing VL_PACKING_RGB_332_P).
The following method for displaying an 8-bit video stream using OpenGL is provided by Nelson Bolyard:
The trick for efficiently displaying 8-bit vino BGR233 images is quite involved and, as far as I know, entirely undocumented. Credit for the code below goes to Terry Crane. It involves using OpenGL's built-in "pixel mapping", which is another form of color mapping, separate and distinct from (and in addition to) the X server's color mapping. Indy and Indigo2 XL graphics have hardware acceleration for this "pixel mapping" that translates dithered BGR233 into RGBA.
To use it, you first setup OpenGL's state machine with this code:
static void
FastUByteCItoRGBAPixelMap()
{
    GLint i;
    GLfloat constantAlpha = 1.0;
    GLfloat map[256];

    /* Force alpha to 1.0 during the transfer. */
    glPixelTransferf(GL_ALPHA_SCALE, 0.0);
    glPixelTransferf(GL_ALPHA_BIAS,  1.0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* Define accelerated BGR233-to-RGBA pixel maps: red is bits 0-2,
       green is bits 3-5, and blue is bits 6-7 of each index byte. */
    for (i = 0; i < 256; i++)
        map[i] = (i & 0x07) / 7.0;
    glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 256, map);
    for (i = 0; i < 256; i++)
        map[i] = ((i & 0x38) >> 3) / 7.0;
    glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 256, map);
    for (i = 0; i < 256; i++)
        map[i] = ((i & 0xc0) >> 6) / 3.0;
    glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 256, map);

    glPixelMapfv(GL_PIXEL_MAP_I_TO_A, 1, &constantAlpha);

    glPixelTransferi(GL_INDEX_SHIFT, 0);
    glPixelTransferi(GL_INDEX_OFFSET, 0);
    glPixelTransferi(GL_MAP_COLOR, GL_TRUE);
    glDisable(GL_DITHER);
}
Then you invoke glDrawPixels() with the format GL_COLOR_INDEX.
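
For instance, drawing one 8-bit field might look like this sketch (data8, sizeW, sizeH, originX and originY are assumed from the surrounding application):

    FastUByteCItoRGBAPixelMap();
    glRasterPos2i(originX, originY);
    glPixelZoom(1.0, -1.0);   /* video is top-to-bottom */
    glDrawPixels(sizeW, sizeH, GL_COLOR_INDEX, GL_UNSIGNED_BYTE, data8);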

Displaying Interlaced and/or YCrCb Video on O2 using OpenGL

There are two OpenGL extensions that facilitate the display of video on O2: the interlace extension (GL_SGIX_interlace) and the YCrCb extension (GL_SGIX_ycrcb). The first changes the semantics of calls to glDrawPixels (and related pixel and texture operations) so that video fields can be drawn directly to the graphics display. The second extends the set of pixel data formats to include YCrCb 4:2:2 (and 4:4:4).
GL_SGIX_interlace is (almost) completely implemented on O2. The extension defines an OpenGL state parameter, GL_INTERLACE_SGIX, which is GL_FALSE by default. When the parameter is GL_TRUE, glDrawPixels() will draw "every other line" on the screen. Roughly speaking, if your source buffer contains 100 lines, glDrawPixels with interlace enabled will draw line 1 of your buffer at line 1 of the screen, line 2 of your buffer on line 3 of the screen, ..., and line 100 of your buffer on line 199 of the screen. This extension interacts with glRasterPos* and glPixelZoom, which are commonly used to place, scale and/or re-orient video images. Note that when the zoom is not 1.0 (or -1.0), using glRasterPos* to correctly position the second field is tricky. Fortunately, using glBitmap to move the raster x and y positions works fine.
The following code draws 2 top-to-bottom NTSC fields with specified x and y zooms.
    GLdouble zoomx, zoomy;
    GLint width, height;   // size of frame
    GLint sHeight;         // height of the drawable, in pixels
    void *f1, *f2;         // fields (f1 is odd)

    glEnable(GL_INTERLACE_SGIX);
    glViewport(0, 0, width, height);
    glOrtho(0, width, 0, height, -1, 1);
    glPixelZoom(zoomx, -1.0 * (GLdouble)zoomy);

    // The first field is offset down one line via glBitmap's raster move.
    glRasterPos2f(0.0, (GLfloat)sHeight);
    glBitmap(0, 0, 0, 0, 0, -1, NULL);
    glDrawPixels(width, height/2, GL_YCRCB_422_SGIX, GL_UNSIGNED_BYTE, f1);

    // The second field starts at the top line.
    glRasterPos2f(0.0, (GLfloat)sHeight);
    glDrawPixels(width, height/2, GL_YCRCB_422_SGIX, GL_UNSIGNED_BYTE, f2);

Besides O2, both RealityEngine and InfiniteReality support this extension. Unfortunately, Impact does not.
GL_SGIX_ycrcb is only partially implemented on O2. It is supported for output only, which should be enough if all you want to do is display video on the screen. You can use it with glDrawPixels, but not glReadPixels. There is a further restriction in that the extension is only supported for some drawables: it is supported on windows and pixel buffers (GLXPbufferSGIX) but not on pixmaps (GLXPixmap). According to Terry Crane, because it is only partially implemented, you won't actually find GL_SGIX_ycrcb in the extension string on O2. According to Terry, the best way to determine whether the extension is available is to check whether the current OpenGL renderer is O2's graphics chip, called CRIME. Sadly, to determine whether it is available on a given drawable, the only method is to attempt to use it and see if the call to glDrawPixels fails (presumably with GL_INVALID_ENUM).
Here's some code that checks for the presence of both extensions.
    glXMakeCurrent(dpy, window, ctxt);

    const char *str = (const char *)glGetString( GL_EXTENSIONS );
    int has_lace  = (strstr(str, "GL_SGIX_interlace") != NULL);
    int has_ycrcb = (strstr(str, "GL_SGIX_ycrcb") != NULL);

    // If the GL_SGIX_ycrcb extension string isn't there,
    // check the renderer instead.
    str = (const char *)glGetString( GL_RENDERER );
    if (strstr(str, "CRIME"))
        has_ycrcb = 1;
Here's something else you might want to know about the YCrCb extension and its interaction with glPixelZoom: if either of the factors you pass to glPixelZoom results in a non-integral zoom, then drawing YCrCb data with glDrawPixels will be very slow on O2 (because the zoom is done in software).

Special thanks to Michael Portuesi, Nelson Bolyard, Eric Bloch, Robert Tray and Terry Crane for some of the OpenGL information.

Thursday, May 1, 2014

MPEG-2 Program Stream Muxing/Demuxing/Timecode

MPEG-2 Program Stream Muxing


ffmpeg -genpts 1 -i ES_Video.m2v -i ES_Audio.mp2 -vcodec copy -acodec copy -f vob output.mpg

Note: In order to mux multiple audio tracks into the same file (legacy -newaudio syntax of older ffmpeg versions):
ffmpeg -genpts 1 -i ES_Video.m2v -i ES_Audio1.mp2 -i ES_Audio2.mp2 -vcodec copy -acodec copy -f vob output.mpg -newaudio

Note: In order to remux a PS file with multiple audio tracks:
ffmpeg -i input.mpg -vcodec copy -acodec copy -f vob output.mpg -acodec copy -newaudio


MPEG-2 Program Stream Demuxing

ffmpeg -i input.mpg -vcodec copy -f mpeg2video ES_Video.m2v -acodec copy -f mp2 ES_Audio.mp2

Note: This also works for files containing multiple audio tracks:
ffmpeg -i input.mpg -vcodec copy -f mpeg2video ES_Video.m2v -acodec copy -f mp2 ES_Audio1.mp2 -acodec copy -f mp2 ES_Audio2.mp2


MPEG-2 Start Timecode

ffmpeg -i <input_file> -timecode_frame_start <start_timecode> -vcodec mpeg2video -an output.m2v

Note: The start timecode is given as a number of frames. For example, a start timecode of 00:01:00:00 at 25 fps corresponds to -timecode_frame_start 1500.