Published: June 27, 2008 at 12:24 PM GMT
Last Updated: July 1, 2008 at 12:24 PM GMT
Consumers have demonstrated a preference for three basic types of online video experiences over the past few months: Video Snacking, Download-to-Own and Online Television. Each of these three consumer behaviors has a specific value chain associated with it. Video Snacks are hard to directly monetize. Download-to-Own files are hard to protect. But, Online Television is, for all intents and purposes, television using the public Internet as the distribution network. And people who have popular content are enjoying excellent financial results from making that content available online.
You can find examples of Internet Television at hulu.com, abc.com, nbc.com, cbs.com and fox.com. In fact, almost every major television network offers some kind of online viewing experience for its most popular shows. This raises the question: "What does quality online video look like?" Should it look like Standard Definition Television? Should it look like HDTV? Should it have to meet "broadcast quality" standards as a benchmark?
We have come to the point in the transition from network to networked television where setting some minimum requirements for the online viewing experience would be helpful. I'd like to assemble a group of video professionals, compile a list of requirements and set up some independent testing groups to play video watchdog for the industry. And, I'd like you to help me get it done!
To start the dialog, here are my suggestions for the subjective attributes of quality online video:
1) The video has to start very quickly (within about a second of pressing the play button).
2) Continuous, full-motion video that looks sharp at full screen.
3) A colorspace that matches or exceeds broadcast NTSC television.
4) Stereo audio with a dynamic range that exceeds broadcast standards.
5) No buffering after the initial picture comes on, no exceptions.
6) No dropouts, pixelated frames or other on-screen artifacts.
To achieve these subjective goals, we will have to create a set of test criteria that take several things into consideration:
1) Encoding: the art and science of mastering video files and making them available for distribution.
2) The player software.
3) The topology of the distribution network.
4) Speed of the user's broadband connection.
5) The quality of the user's broadband connection.
6) The quality of the user's computer.
With all of these variables, it is very difficult to maintain video quality from video publisher to consumer (no matter how you define quality), mostly because there are so many components along the signal path that video publishers don't control. But let's press on.
If we were to start thinking about measuring the quality of an online video viewing experience, here are a few things we might measure:
1) Start Time: As measured by the average time it takes for video to begin playing.
2) Quantity of Impairments: As measured by the number of impairments over a given length of time.
3) Average Length of Impairments: As measured by the average duration of stalls or buffering.
4) Wait Time on Seek: As measured by the average duration of buffering or stalls before the video begins to play from the seek points.
5) Wait Time on Ad Break / Return: As measured by the average delay duration when programming cuts to an ad, or when an ad ends and returns to regular programming.
6) Video Quality Delivered: As measured by average video bit rate delivered.
7) Link Efficiency: As measured by the percentage of a user's bandwidth consumed.
8) Encroachment Test: Tiered scoring of the above tests as additional viewers move onto the network.
The list above isn't complete, but it's a start.
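To make the list above concrete, here is a minimal sketch of how a testing group might score a single viewing session against some of the proposed metrics. The session fields, the `score_session` helper and the sample numbers are all hypothetical illustrations, not an established industry tool:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PlaybackSession:
    """Event log for one viewing session (all times in seconds)."""
    start_delay: float                  # time from pressing play to first frame
    stall_durations: list = field(default_factory=list)  # each buffering pause
    duration: float = 0.0               # total length of video watched
    avg_bitrate_kbps: float = 0.0       # average video bitrate delivered
    link_kbps: float = 0.0              # user's measured downstream bandwidth

def score_session(s: PlaybackSession) -> dict:
    """Compute the proposed quality metrics for one session."""
    minutes = max(s.duration / 60.0, 1e-9)
    return {
        "start_time_s": s.start_delay,                       # metric 1
        "impairments_per_min": len(s.stall_durations) / minutes,  # metric 2
        "avg_impairment_s": (mean(s.stall_durations)         # metric 3
                             if s.stall_durations else 0.0),
        "delivered_kbps": s.avg_bitrate_kbps,                # metric 6
        "link_efficiency_pct": (100.0 * s.avg_bitrate_kbps / s.link_kbps
                                if s.link_kbps else 0.0),    # metric 7
    }

# A 10-minute session with two stalls on a 5 Mbps cable link.
session = PlaybackSession(start_delay=0.8, stall_durations=[2.1, 1.4],
                          duration=600, avg_bitrate_kbps=1200, link_kbps=5000)
print(score_session(session))
```

Averaging these scores across many sessions and network tiers would give each distributor a comparable report card.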
We also need to set standards that adjust for the type of broadband environment in which the video will be consumed. For example: ADSL at 768 kbps down and 384 kbps up, or a cable modem at 5 Mbps down by 768 kbps up. Unless you take the network environment into consideration, the standards will be hard to achieve. We will have to "handicap" our standards to the limits of each network.
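One way to "handicap" the standards is to judge a stream only against the bitrate its network tier can realistically sustain. The tier table, the 80% usable-headroom figure and the function names below are assumptions for illustration:

```python
# Hypothetical downstream capacity per connection type, in kbps.
TIERS_KBPS = {
    "adsl_768": 768,    # ADSL: 768 kbps down / 384 kbps up
    "cable_5m": 5000,   # cable modem: 5 Mbps down / 768 kbps up
}

HEADROOM = 0.80  # assume at most 80% of the link is usable for video

def max_sustainable_bitrate(tier: str) -> float:
    """Highest video bitrate (kbps) a tier can carry without constant buffering."""
    return TIERS_KBPS[tier] * HEADROOM

def meets_standard(tier: str, stream_kbps: float) -> bool:
    """A handicapped pass/fail: is this stream deliverable on this tier?"""
    return stream_kbps <= max_sustainable_bitrate(tier)

print(meets_standard("adsl_768", 1200))  # a 1200 kbps stream won't fit in 768 kbps ADSL
print(meets_standard("cable_5m", 1200))  # the same stream fits comfortably on cable
```

The same encoded stream passes on one tier and fails on another, which is exactly why the standards must be scored per network environment.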
So here's the pitch. Online video is coming into its own. People are watching and, as an industry, we need to define a quality experience the same way that the broadcast networks do. We need to create testing environments and set standards of quality that each distributor can strive to achieve. I think it's a job for everyone who wants to be involved. To join my ad hoc group, email me at firstname.lastname@example.org. It's time.
Shelly Palmer is Managing Director of Advanced Media Ventures Group LLC and the author of Television Disrupted: The Transition from Network to Networked TV (2006, Focal Press). Shelly is also President of the National Academy of Television Arts & Sciences, NY (the organization that bestows the coveted Emmy® Awards). He is the Vice-Chairman of the National Academy of Media Arts & Sciences, an organization dedicated to education and leadership in the areas of technology, media and entertainment. Palmer also oversees the Advanced Media Technology Emmy® Awards, which honor outstanding achievements in the science and technology of advanced media. You can read Shelly's blog here. Shelly can be reached at email@example.com