Thursday, July 24, 2014

Decoding the AC-3 Bitstream


The AC-3 algorithm uses perceptual coding for compression. It divides the audio spectrum of each
channel into narrow frequency bands based on the frequency selectivity of human hearing.
These frequency bands are analyzed, and more bits are allocated to the most audible
signals. Encoding is done in the frequency domain, using a 512-point MDCT (or two
256-point transforms for transient signals) with 50% overlap.

The steps described in the previous sections for decoding the AC-3 bitstream are shown as
pseudo code below.

    Detect AC-3 frame sync
    Error check (CRC)
    Unpack BSI (Bit Stream Information) data
    For audio block 0 to 5
        Unpack fixed data (coupling, bit allocation & other info)
        For channel 1 to No. of coded channels
            Unpack exponents
            For band 1 to No. of bands
                Compute bit allocation
                Unpack mantissas
                Scale mantissas
            Decouple mantissas
            Denormalize mantissas by exponents
            Compute partial inverse transform
            Downmix to appropriate output channels
        For channel 1 to No. of output channels
            Window / overlap-add with delay buffer
            Store samples in PCM output buffer
            Copy downmix buffer values to delay buffer
Figure 4. Pseudo Code for the Decoder
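The window/overlap-add step at the end of the pseudo code can be sketched in C++. This is an illustrative half-block overlap-add only; the function name, window values, and block length are made up for the sketch and are not the actual AC-3 window tables:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the final window/overlap-add stage: the windowed first half of
// the current block is summed with the delay buffer (the stored second half
// of the previous block) to produce PCM samples, and the windowed second
// half is saved for the next block.
std::vector<float> overlapAdd(const std::vector<float>& block,   // 2n samples
                              std::vector<float>& delay,         // n samples
                              const std::vector<float>& window)  // 2n values
{
    const std::size_t n = delay.size();
    std::vector<float> pcm(n);
    for (std::size_t i = 0; i < n; ++i) {
        // windowed first half of this block + stored tail of the previous one
        pcm[i] = block[i] * window[i] + delay[i];
        // store the windowed second half for the next call
        delay[i] = block[n + i] * window[n + i];
    }
    return pcm;
}
```

In a real decoder this runs once per channel per audio block, with the delay buffer persisting across blocks.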

Monday, July 21, 2014

Google Says Smaller Image Files Will Speed Up Web



An interesting update from Google in the Web optimization space!

1. Google's WebP image format promises to shrink the size of Web photos and graphics files by about 35%.

2. In order to boost load times for websites, Google announced last month that it has converted most of YouTube's thumbnail images to WebP, improving the site's load time by 10%. Google says that alone has saved users a cumulative 140,000 hours each day.

3. A huge saving, considering that images are responsible for nearly two-thirds of the size of an average website, a figure that grew by more than 30% last year, according to the HTTP Archive.

4. WebP also brings animation support to Chrome.

Reference and more updates:
http://money.cnn.com/2014/07/21/technology/innovationnation/google-webp/index.html?hpt=hp_t2
http://www.cnet.com/news/google-speeds-webp-image-format-brings-animation-support-to-chrome/




Friday, July 18, 2014

Gaps in the deployment of media application and services


Abstract: The explosive growth of media applications and services has blurred the distinction between producers and consumers of media today. End users play both roles at different times, as there are many opportunities and potentially significant rewards for successful media applications and seamless services. In fact, end users are producing media at a higher rate than they consume it: Facebook reports more than 300 million photos uploaded every day, and YouTube claims 100 hours of video are uploaded every minute. Some interesting facts on online video trends and Internet marketing statistics are:

• Viewers of online video will hit 500 million

• Over half the population and 70% of internet users will watch

• Mobile video will reach over 100 million viewers

• Smartphone video views will exceed 75 million

Other interesting stats 

•  ~75% of home sellers will list with an agent that does video, but only 12% of agents have a YouTube account. 

• YouTube has over 1 billion monthly unique visitors, with over 4 billion hours of video watched

• 35% of YouTube traffic comes from the US 

• YouTube traffic from mobile devices tripled in 2012-2013

While there is great appetite for online content and the services and applications it generates, there are a number of challenges, or gaps, in their deployment.

Most important gaps are: 


1. Uninterrupted multimedia support across devices: New devices are constantly becoming available, and users are increasingly likely to access applications and services from multiple devices. Considering mobile devices alone, the number of mobile connected devices will exceed the world's population in 2013, and by 2017 there will be 1.4 devices per capita worldwide. As the number and types of devices connected in the home increase, they create an environment with products from a mix of manufacturers, where there are more opportunities for device-to-device interaction. Given this trend, it is imperative that companies address seamless interoperability and support across devices in their products, or risk being relegated to irrelevance.


2. Start-up latency: Many studies have concluded that video stalling in mobile networks has the largest impact on user engagement and quality of experience across all types of video content. While consumers will settle for low-resolution video on smaller screens, they are usually very dissatisfied with stalls. We believe that video stalling is the most significant quality measure of the video viewing experience. Traditionally, stalling was treated by increasing the bandwidth (kbps) allotted to the video transaction. However, as networks get more congested and video content becomes more prevalent, it is harder to prevent the conditions that cause stalling. It is also interesting to explore whether the actual bandwidth is allocated appropriately, and whether parameters other than the allocated bandwidth have any effect on video stalling. 80% of users quit a video within 10 seconds if they see more than 3 stalls 12.
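The relationship between allotted bandwidth and stalling can be illustrated with a toy buffer model (our own sketch; the function and its numbers are illustrative and not drawn from any cited study):

```cpp
// Toy playback-buffer model: each simulated second the buffer gains
// downloadKbps / videoKbps seconds of media and plays out one second.
// When the buffer runs dry, playback stalls until rebufferSec of media
// has been queued again. Returns the number of stall events.
int countStalls(double videoKbps, double downloadKbps,
                double durationSec, double rebufferSec)
{
    double bufferSec = rebufferSec;   // start playback after initial buffering
    bool stalled = false;
    int stalls = 0;
    for (double t = 0; t < durationSec; t += 1.0) {
        bufferSec += downloadKbps / videoKbps;   // media seconds downloaded
        if (!stalled) {
            bufferSec -= 1.0;                    // one second played out
            if (bufferSec <= 0.0) {
                bufferSec = 0.0;
                stalled = true;                  // buffer empty: stall
                ++stalls;
            }
        } else if (bufferSec >= rebufferSec) {
            stalled = false;                     // enough buffered: resume
        }
    }
    return stalls;
}
```

With a 1 Mbps stream and only 500 kbps of throughput the model stalls repeatedly, while 1.2 Mbps of throughput never stalls, which is why simply raising the allotted bandwidth was the traditional fix.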


3. Distributed across locations: A consequence of the abundance of media services, choices, and activities is that content is becoming highly distributed across a large number of locations. It is important for end users to be able to find and access the content they require, or simply desire, without undue effort. The idea is to reduce the friction in content discovery and delivery so the end user stays engaged. Without this mechanism in place, the end user may be forced to exit the application and use another application or service to find what they need. Allowing users to centrally manage and authorize access to multiple online services, as well as access content on personal storage in the home, is one key way of breaking down content silos. The good news is that this is an area that is highly visible and valuable to end users, so there is potential for significant value for companies that are able to provide good solutions for distributed content.


4. Flawless media experience through analytics: Reporting and analytics provide an opportunity to understand how a multimedia application or service is actually being used in practice. In order to improve or optimize anything, it must be measured, so having some data collection as part of the application or service is an important ingredient. Using data collected with adherence to appropriate privacy policies can yield significant insight that would be difficult to predict, while still protecting the privacy of individuals. This type of anonymous data can help identify which devices are popular, which use cases are popular, how many devices a typical user has, and which features users use most or not at all, to mention just a few. The data can also help operators better understand the environments in which their products and services operate. In order to keep their products competitive and grow in new areas, companies need to take advantage of the insights that reporting and analytics can offer.


5. Video classification and adaptive QoP/QoS: With the development of heterogeneous networks and video coding standards, multi-resolution video applications over networks have become important, and it is critical to ensure network service quality for time-sensitive video services. Networks with high available bandwidth are good candidates for delivering video because, through LTE/5G, delivery quality based on quality-of-service (QoS) settings can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. What a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. QoP is a user-based definition of quality that measures the ability to satisfy human needs. Experiments have used QoP to evaluate the overall subjective quality of multimedia presentations at the receiver end, where QoP has been defined as two perceptual abilities: information assimilation and level of enjoyment 3. Considering whether a user wants to absorb information or simply enjoy the game when he/she picks up a sports broadcast, QoP can be treated as two metrics: information (QoPIM) and entertainment (QoPEnt).


6. Subjective video quality on available devices: Subjectively perceived video quality is a critical factor when adopting new video applications. When video is used in mobile networks, the most important requirements relate to low bitrates, frame rates, and the screen size of the mobile device. Time-varying quality has a definite impact on human subjective judgments of quality, and this impact is a function of the frequency of significant distortion changes and of the differences in quality between segments. Humans seemingly prefer longer freezes over shorter ones; this is not terribly surprising, since choppy video playback is not pleasing at all. However, what is surprising about the frame-freeze distortion is that humans appear to be far more forgiving of lost segments than they are of choppy quality. This has interesting implications for those supplying real-time video delivery. It is also prudent to note that while choppy playback is the worst offender, lost segments start to matter relative to small reductions in choppiness. Further, this preference depends on the content being displayed. It would be interesting to study whether the same results hold when viewing sports, where a viewer may prefer choppy playback over missing out on the footage 6.


The skyrocketing growth of media applications and services in recent years shows no signs of slowing. Cisco and Ericsson predict video traffic will account for about 70 percent of all consumer internet traffic by 2017, up about 60 percent from 2012. For mobile, video accounted for more than half of mobile data traffic in 2012 and will grow to three fourths of all mobile data by 2017 7,8. There is no surprise in so much expansion in the use of media: it helps increase engagement and conveys the story much more efficiently. Today, as devices such as tablet computers, ultra-light laptops, and smartphones make watching video ever easier and more accessible, viewing sets records every time comScore or others measure it. As is often said, a picture is worth a thousand words, and according to Dr. James McQuivey of Forrester Research, a minute of video is worth 1.8 million words 9.


Media applications and services today are expected to provide a rich user experience that includes various combinations of multimedia (image, video, audio) capture, media browsing, searching, sharing, and viewing high-quality multimedia across a range of devices. Even for applications and services where audio, images, and video are not the primary focus, they often play an important role, since media can enhance the overall experience dramatically. Given the importance of media, companies are finding new and inventive ways to introduce media application features in their products to increase the value to their customers. For example, a leading manufacturer of wireless routers has been increasingly embedding media servers across product lines to allow users to share devices throughout the home. In doing so, the router can become the media hub at home and play a much more visible role for the customer. People are also spending more and more time online, using a wide range of value-added services to access commercial content as well as view and share user-generated content with friends and family. It is easy for people to get involved in an increasing number of online services, as the different social groups of which they are members may use different services. The growth in the availability of connected devices capable of a high-quality media experience is also providing more outlets for end users to enjoy content whenever and wherever they want. Operators have an opportunity to offer significant value to users of their products by providing integration with the different complementary services people want to access. For example, some media applications allow a user to link multiple online service accounts, so the application can present and display media from any of the services, along with locally stored content, in a seamless experience.


There are many opportunities and potentially significant rewards for successful media applications and seamless services. However, it is a dynamic, fast-paced environment that is far from simple and presents a set of challenges for deploying a successful solution. Let's discuss some of the gaps in more detail and provide key recommendations.


Conclusion: Audio and video traffic on the internet is growing at an exponential rate. In fact, more than 50% of all mobile traffic is video, and there are over 6,300 stations streaming audio content to users. All this takes up vast amounts of bandwidth and is congesting networks for mobile operators, who are also losing revenue to OTT services. To keep pace with the explosive growth of video traffic on mobile devices, any HD solution should focus on optimizing all video content. Addressing these gaps helps reduce costs, as operators can defer network expansion while delivering quality of experience to the user and meeting the demand of increasing video downloads.

References:

9. http://www.youtube.com/watch?v=EoaezGdKS5s


Monday, July 14, 2014

MPEG-DASH : Performance of Low Latency Live Streaming using DASH

HTTP streaming is a recent topic in multimedia communications with ongoing standardization activities, especially the MPEG-DASH standard, which covers on-demand and live services. One of the main issues in deploying live services is reducing the overall latency; low or very low latency streaming is still a challenge.

HTTP streaming is the new approach for streaming video over the Internet, for both live and on-demand cases. However, current approaches, in particular those using DASH, are not deployed for low-latency live services. In this paper, we proposed to use Amendment 1 of DASH in combination with Gradual Decoding Refresh encoding and to deliver media in fragments as small as a single frame. We measured the overhead introduced by the GDR encoding and the associated fragmentation, and showed that, especially for high-definition content, an overhead in the order of 13% can be acceptable. We also described an implementation of a streaming system comprising a DASH live encoder and segment generator, a DASH-aware web server, and a DASH client. With this system, we validated the approach for very low latency streaming in local networks, with latency as low as 240 ms. In future work, we plan to examine how such a low-latency system behaves in real content delivery networks, and to further exploit the combined use of GDR and chunk encoding to enable fetching segments not from their start, reducing the initial delay and enabling faster switching.

In this paper, the authors push the use of DASH to its limits with regard to latency, down to fragments containing only one frame, and evaluate the overhead introduced by that approach and the combination of: low-latency video coding techniques, in particular Gradual Decoding Refresh; low-latency HTTP streaming, in particular using chunked transfer encoding; and the associated ISOBMFF packaging.


They experiment with DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, with an encoding and packaging overhead in the order of 13% for HD sequences, thus validating the feasibility of very low latency DASH live streaming in local networks.
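As a back-of-the-envelope sketch (our own arithmetic, not the paper's measured breakdown), the latency floor with one-frame fragments can be modeled as a few frame-times of pipeline delay plus network transfer:

```cpp
// Rough latency-floor estimate for one-frame DASH fragments (an illustrative
// model only): each pipeline stage (e.g. capture, encode, package, decode,
// display) adds about one frame duration, plus network transfer time.
double minLatencyMs(double fps, int pipelineStages, double networkMs)
{
    const double frameMs = 1000.0 / fps;       // duration of one frame in ms
    return pipelineStages * frameMs + networkMs;
}
```

At 25 fps, five 40 ms stages plus 40 ms of transfer already total 240 ms, the same order as the paper's measured end-to-end latency.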

Reference: http://biblio.telecom-paristech.fr/cgi-bin/download.cgi?id=14719

Wednesday, July 2, 2014

Design Patterns C++

Singleton Pattern
  1. Singleton is a creational design pattern.
  2. A design pattern to provide one and only instance of an object.
  3. Make the constructors of the class private.
  4. Store the object created privately.
  5. Provide access to get the instance through a public method.
  6. Can be extended to create a pool of objects.
Program:

#include <iostream>
using namespace std;

// Singleton class
class MySingleton {
public:
    static MySingleton* GetInstance();

private:
    // private constructor
    MySingleton();

    // the single instance, stored privately
    static MySingleton* iInstance;
};

MySingleton* MySingleton::iInstance = NULL;

MySingleton::MySingleton()
{
    cout << "In constructor ..." << endl;
}

MySingleton* MySingleton::GetInstance()
{
    if ( iInstance == NULL ) {
        iInstance = new MySingleton();
    }
    return iInstance;
}

int main()
{
    MySingleton* obj;
    obj = MySingleton::GetInstance();
    return 0;
}

OUTPUT:
In constructor ... (displayed only once)
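As a side note, the lazy-initialization check above is not thread-safe if two threads call GetInstance() at the same time. Since C++11, a function-local static (often called a Meyers singleton) gives the same single-instance guarantee with thread-safe initialization; this is a sketch of that variant (class name is ours, not from the post):

```cpp
#include <iostream>

class MySingleton2 {
public:
    // C++11 guarantees this local static is initialized exactly once,
    // even if several threads call GetInstance() concurrently.
    static MySingleton2& GetInstance()
    {
        static MySingleton2 instance;    // constructed on first use
        return instance;
    }

    MySingleton2(const MySingleton2&) = delete;             // no copies
    MySingleton2& operator=(const MySingleton2&) = delete;

private:
    MySingleton2() { std::cout << "In constructor ..." << std::endl; }
};
```

Every call to MySingleton2::GetInstance() returns a reference to the same object, with no explicit NULL check and no heap allocation.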


Factory Pattern
  1. Factory pattern is a creational design pattern.
  2. Idea of the factory patterns is to localize the object creation code.
  3. This prevents disturbing the entire system for a new type introduction.
  4. Typically when a new type is introduced in the system, change is at one place only where the object is created to decide which constructor to use.
  5. Simplest of the factory is introduction of a static method in the base class itself, which creates the required object based on the type.
  6. Other variant is Abstract Factory.
  7. Concrete classes are isolated.
  8. Client need not even know which class is implementing its need.
  9. Static factory - Sample Program
Example:

#include <iostream>
#include <string>

using namespace std;

// Abstract Base Class
class Shape {
public:
    virtual void Draw() = 0;

    // Static method to create objects.
    // Change is required only in this function to create a new object type.
    static Shape* Create(string type);
};

class Circle : public Shape {
public:
    void Draw() { cout << "I am circle" << endl; }
    friend class Shape;
};

class Square : public Shape {
public:
    void Draw() { cout << "I am square" << endl; }
    friend class Shape;
};

Shape* Shape::Create(string type) {
    if ( type == "circle" ) return new Circle();
    if ( type == "square" ) return new Square();
    return NULL;
}

int main()
{
    // Give me a circle
    Shape* obj1 = Shape::Create("circle");

    // Give me a square
    Shape* obj2 = Shape::Create("square");

    obj1->Draw();
    obj2->Draw();

    delete obj1;
    delete obj2;
    return 0;
}


OUTPUT:
I am circle
I am square
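The Abstract Factory variant mentioned in point 6 can be sketched as follows. Here the factory itself is an interface, so a whole family of related objects can be swapped by choosing a different concrete factory (the class names are illustrative, not from the post):

```cpp
#include <memory>
#include <string>

// Product interface and two concrete products
struct Widget {
    virtual ~Widget() {}
    virtual std::string Name() = 0;
};
struct DarkButton : Widget {
    std::string Name() { return "dark button"; }
};
struct LightButton : Widget {
    std::string Name() { return "light button"; }
};

// Abstract factory: client code depends only on this interface
struct WidgetFactory {
    virtual ~WidgetFactory() {}
    virtual std::unique_ptr<Widget> MakeButton() = 0;
};

// Concrete factories, one per product family
struct DarkFactory : WidgetFactory {
    std::unique_ptr<Widget> MakeButton() {
        return std::unique_ptr<Widget>(new DarkButton());
    }
};
struct LightFactory : WidgetFactory {
    std::unique_ptr<Widget> MakeButton() {
        return std::unique_ptr<Widget>(new LightButton());
    }
};
```

A client written against WidgetFactory can be handed a DarkFactory or a LightFactory, and the concrete product classes stay completely isolated from it.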

What is Observer Pattern?
  1. Observer pattern is a behavioral design pattern.
  2. Observer pattern is used to solve the problem of notifying multiple objects of a change to keep them in sync like the Model-View-Controller (MVC) concept.
  3. Useful for event management kind of scenarios.
  4. Two classes are involved.
  5. The Observable class is where the actual data change is occurring. It has information about all the interested objects that need to be notified.
  6. The Observer is typically an abstract class providing the interface to which concrete classes interested in the events need to be compliant to.
Sample Program

#include <iostream>
#include <set>

using namespace std;

// ---------------- Observer interface -----------------
class MyObserver {
    public:
        virtual ~MyObserver() {}
        virtual void Notify() = 0;
};

// ---------------- Observable object -------------------
class MyObservable {
        static MyObservable* instance;
        set<MyObserver*> observers;
        MyObservable() { };
    public:
       static MyObservable* GetInstance();
       void AddObserver(MyObserver& o);
       void RemoveObserver(MyObserver& o);
       void NotifyObservers();
       void Trigger();
};

MyObservable* MyObservable::instance = NULL;

MyObservable* MyObservable::GetInstance()
{
    if ( instance == NULL ) {
       instance = new MyObservable();
    }

    return instance;
}

void MyObservable::AddObserver(MyObserver& o)
{
    observers.insert(&o);
}

void MyObservable::RemoveObserver(MyObserver& o)
{
    observers.erase(&o);
}

void MyObservable::NotifyObservers()
{
    set<MyObserver*>::iterator itr;
    for ( itr = observers.begin();
          itr != observers.end(); itr++ )
    (*itr)->Notify();
}

// TEST METHOD TO TRIGGER
// IN THE REAL SCENARIO THIS IS NOT REQUIRED
void MyObservable::Trigger()
{
    NotifyObservers();
}

// ------ Concrete class interested in notifications ---
class MyClass : public MyObserver {

        MyObservable* observable;

    public:
       MyClass() {
          observable = MyObservable::GetInstance();
          observable->AddObserver(*this);
       }

       ~MyClass() {
          observable->RemoveObserver(*this);
       }

       void Notify() {
            cout << "Received a change event" << endl;
       }
};

int main()
{
    MyObservable* observable = MyObservable::GetInstance();
    MyClass obj;                  // registers itself in its constructor
    observable->Trigger();
    return 0;
}

OUTPUT: 
Received a change event