The Future of Television – Industry Overview

Brad Dick

Predicting the future is always dangerous, especially in the magazine business. Unlike in radio or TV, in print your predictions live forever. And if you’re later proven wrong, believe me, someone will remind you. Despite the dangers, I felt the benefits outweighed the risks, and because Broadcast Engineering has a history of providing leadership in the adoption of new technology, we just couldn’t shirk that duty.

To help you better understand the upcoming changes, we’ve assembled 10 well-respected industry experts to look towards the future. Each was asked to address one technological area and discuss what changes might take place in the next 10 years. In addition, each was asked to provide guidance to engineers and managers on how to implement the technology that will best position their facilities for tomorrow.

The combined thoughts of these technical gurus will provide a solid base for making decisions on the steps you need to begin taking now. You’ll learn where the pitfalls are and how to avoid them. You’ll learn how to build on the digital platform you may already have in place. Finally, you’ll gain insight into some important, yet little-known, trends that may help you gain an advantage over the competition.

Your guidebook to the coming digital changes is just ahead. Let’s compare notes in 2010. One thing I can guarantee is that the winners then will be those who implemented digital technology today.

Brad Dick, Editor

Production for digital & HD

Digital acquisition
Digital cinematography takes center stage
By Laurence J. Thorpe

The past decade has witnessed a dramatic increase in the pace of professional video camera development, both in terms of the complexity of the technologies being used and the rapidly broadening range of applications. With the pace still accelerating, the next 10 years will see even more astounding advances in video cameras. Furthermore, marketplace forces will spawn innovations that will continue to blur the lines between professional and consumer.

Applications

Broadcasters must search for cameras and camcorders tailored to specific applications. Today, there are application tiers within newsgathering, electronic field production, and in studio and OB production. No searching of technical brochures will yield the requisite wisdom in tailoring the applicability of the new products. That requires practical exploration. The old practice of broadcasters creating “user requirement documents” to help push the state-of-the-art in camera development should return.

The biggest challenge of the next decade will be the management of the dual 16:9 and 4:3 aspect ratios. While 4:3 will continue for quite some time in the analog domain, widescreen will invariably dominate the broadcasting and production landscape in 10 years.

There are no simple answers to this image content dilemma as program material increasingly crosses between the two image formats. Practical experience is of paramount importance in seeking what might work for primetime shows, news, talk shows, drama, special events and commercials. It can have substantial impact on what cameras might progressively incorporate by way of operational aid. Broadcasters and program producers alike should become aggressively involved in spurring their shooters and creative personnel to grapple with this today, or it will never go away.

The next decade will see a sharp acceleration in the use of digital cinematography. This will, in turn, spur more focused developments to make products specifically tailored to the needs of the cinematographer. Perpetuation of the film vs. video debate will be an anachronism. Those producing television shows should be opening a vigorous dialog with camera and lens manufacturers. Producers will choose between inevitably enhanced film stocks and all that is emerging in digital acquisition, but theirs will be a pragmatic choice based upon their scripts, storyboards and their individual creative and aesthetic aspirations, and, of course, their budgets.

By 2010, earlier ENG history will have repeated itself in primetime television production: It will be very largely electronic. Technological prowess and lowering cost will make this inevitable. The arrival of the 24p system will speed the transition in the major studios.

Optics

The golden age of the 2/3-inch format has arrived. Now a de facto standard for both SD and HD cameras, this small image format will dominate the scene for the next decade. The cost of lenses, however, will continue to be a major preoccupation, especially as HDTV proliferates. This will keep pressure on the optics manufacturers to produce more cost-effective designs. This will only intensify as competitive pressures usher in an era of HD ENG (certainly something to be anticipated in 10 years’ time). Lens manufacturers urgently need a closer collaboration with broadcasters and producers to better define the necessary compromises in performance, facilities and costs.

As digital cinematography takes a firmer hold during the next few years and a sizeable high-end production marketplace begins to develop, manufacturers will open a parallel track and turn their design efforts to directly meet the needs of primetime production and of movie making. A second, higher tier of 2/3-inch HD lenses will be needed to address this higher-end marketplace. Program producers and directors of production need to become actively engaged in an ongoing dialog with the optical and television camera manufacturers to help shape the direction digital cinematography will take.

Imagers

There have been striking developments in competition and broadening applications in the fifteen years that the CCD has dominated the professional camera scene. Ten years hence, we can expect as yet undreamed of technological advances, including new alternative sensor technologies.

Broadcasters, producers, videographers and cinematographers have increasingly fallen out of step with the technical pace of CCD imager developments. They still cling to anachronistic methods of assessing and technically evaluating contemporary digital cameras. They need to recognize that continuing to focus on the horsepower race in horizontal limiting resolution tells nothing of the advances in the other crucially important dimensions of overall picture quality. Learning about these and how to evaluate each in the context of taxing real-world scenes, as well as new test charts, is a must for the future. This direct end-user feedback is becoming much more important in an emerging digital era where image quality will matter more than ever. Acquisition picture quality within an MPEG world cries out for re-examination by the end-users.

DSP

This will be the fastest-moving of all the camera technologies. Propelled by the most powerful of technologies – digital processing – and aided by Moore’s Law in its VLSI implementation, it will deliver astounding advances. We cross the millennium in a 12-bit A/D era for HDTV and SDTV alike. The new century will see an inevitable competitive march to higher bit depths.

In 10 years, DSP will have endowed broadcast and production cameras with unheard-of intelligence. Real-time analysis of picture content will facilitate automated adjustments to a variety of picture attributes critical to the capture of high-quality images.
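As a rough illustration of the kind of in-camera intelligence this implies, the short Python sketch below analyzes a frame’s luminance histogram and nudges master gain toward a target exposure. It is a hypothetical example only, not any manufacturer’s algorithm; the 12-bit range and the target and step values are assumptions for illustration.

    # Hypothetical sketch of DSP-style auto-adjustment: analyze a frame's
    # average luminance and nudge master gain toward a target level.
    # Frame values are assumed to be 12-bit luminance samples (0-4095).

    def suggest_gain(frame, target_mean=2048, current_gain=1.0, step=0.05):
        """Return an adjusted gain factor based on average picture level."""
        mean_level = sum(frame) / len(frame)
        if mean_level < target_mean * 0.95:      # scene too dark
            return current_gain * (1 + step)
        if mean_level > target_mean * 1.05:      # scene too bright
            return current_gain * (1 - step)
        return current_gain                      # within tolerance

    # Example: a dim test frame pushes the gain up slightly.
    dim_frame = [900] * 1000
    print(suggest_gain(dim_frame))               # -> 1.05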

It is this DSP capability that will also see real-time digital imaging move definitively beyond the capabilities of motion picture imaging by 2010.

Systemization

Digital cameras are becoming more powerful both in their operational dexterity and in their integration into increasingly sophisticated production systems. This will continue. A new and broadening preoccupation with metadata will spawn system innovations that today are unimagined. A great deal of energy has been expended on a “textbook” approach to metadata instead of a much-needed step-by-step exploration. Practical experimentation by broadcasters and program producers, in collaboration with professional camera manufacturers, is urgently needed to optimize the practical implementation of metadata. This needs to start now rather than waiting for an overly thick and daunting standardization document that might well be ignored.

On an implementation level, the present industry aversion to embracing the fiber optic link between a camera head and its CCU is understandable, but hardly helpful. Triax is ubiquitous and extensively installed. Any transition will be painful, but needs to be carefully measured against all that will surely happen during the next decade. While eminently practical and low-cost, triax is an aging analog technology. All of the associated technical travails of the triax communication link are removed with the broadband digital capabilities of today’s fiber links, and new capabilities are enabled. Camera links can potentially carry far more data than their present channels of component video, audio and control data. Broadcasters and producers should be aggressively exploring with the manufacturers new ways of maximizing this powerful two-way link, especially in OB applications. Combined with metadata, this holds high promise for radically more sophisticated system installations.

Laurence J. Thorpe is vice president of acquisition systems, Sony Electronics’ Broadcast and Professional Company, Park Ridge, NJ.

The future of HD production
Post houses must rely on the efficiency of digital
By Chuck Spaulding

Today, the long-awaited arrival of HD production and the looming shift to data-based production drive the engines of change affecting the post-production industry. These developments are more significant and pose greater challenges than anything the industry has faced in decades. Until now, new technologies have been evolutionary in nature. The migration from 3/4-inch production to Betacam SP to DigiBeta, for example, was incremental; it represented improvements to existing 601 technology. High definition, because it involves a change in the standards upon which all of the production technologies and methodologies are based, is a revolutionary step. It marks a clear break from the past.

As they prepare to make this break, it would be wise for post houses to re-evaluate their role in the industry; to re-examine the way they have done business in the past; and to question whether that way will continue to serve them well in a future shaped by high definition and data.

Among the questions worth considering are: “If you were going to build a facility today, would you build it the same way you did 15 years ago?” and “What kind of facility can best prosper in a world where HD and data production are not the exception but the norm?”

Before attempting to answer those questions, it’s worth examining the state of the industry today. It’s no secret that at many facilities profit margins are slim, perhaps in single digits. In fact, judging by the recent number of bankruptcies and closures, it’s clear that more than a few facilities are not making a profit at all.

One reason for poor profits is that facilities today have no way of differentiating themselves from one another. A facility might spend $5 million on its infrastructure. Yet, on any given day, an artist could resign, set up shop down the street and compete on par with his former employer. The new “boutique” could very well produce images about as good, and about as fast, as the old “factory” at a fraction of the investment.

In most industries, bigger is better. General Motors can build cars faster and more cheaply than a guy working in his garage. It enjoys economies of scale. But in the post industry, being bigger can be a detriment. As a post house grows bigger, it does not necessarily work faster or better; it just has more rooms. A larger facility has higher overhead than a smaller house without gaining any significant competitive advantage. This model inevitably drives profits down.

Another drag on profits is the rigid structure of most post houses. A facility with, again, $5 million in infrastructure might be set up to handle 10 projects simultaneously. If it finds itself with more than 10 projects in-house, it has a production bottleneck. If it has fewer than 10, it has empty rooms. Such a facility cannot easily scale up to meet rising demand, or scale back to lower overhead when demand sags. It may wind up selling its services at a discount simply to keep its rooms full.

What does all this have to do with high definition? Everything. Because in the world of HD, the pressures on post house profitability grow larger.

First, HD production requires post-production facilities to undergo an extensive recapitalization. High definition is a discontinuous innovation – few of the tools employed in SD production can be applied to the new standard. Post facilities therefore have to buy a whole new set of tools, and those tools cost significantly more than the ones they replace.

With profits already low, it’s not clear where the money to pay for all this expensive new gear will come from. Clients have already indicated that they are willing, at best, to share the costs associated with the transition to high definition.

Moreover, HD production is intensely demanding of technological and artistic resources. In this respect, HD production more closely resembles film production than SD video production. Imagine, for example, color correcting a beauty shot for a car commercial. Working in standard definition, you might notice a muddy reflection in a hubcap. Viewing the same shot in high definition, you would see that the “muddy reflection” is in reality the film crew. Working in a data environment, you’d be able to tell that the director’s fly was open. Transferring that same spot to SD video, you might do 15 or 20 dirt fixes. Transferring it to high definition, you might need to do 100 fixes as every tiny flaw is magnified.

HD production requires more work on more costly gear. Because HD images are many times larger than SD images, it also takes longer to transfer film to high definition, longer to manipulate images, longer to edit and longer to move HD files around a facility. Add to that the client’s reluctance to pay more, and it’s hard to see how post-production facilities can come out ahead financially. The solution to this dilemma, I believe, is for post-production facilities to move away from a production model that hasn’t worked all that well even for SD production, and toward one suited to the specific demands of high definition – that is, one that takes advantage of the strengths of data-based technologies.

An effective model for HD production, one that enables facilities to operate profitably, must be one that allows facilities to enjoy economies of scale. The bigger a facility grows and the more it invests in infrastructure, the more efficient it becomes. It is able to turn out images faster and better than smaller rivals. This model would also allow new resources to be quickly and easily added to meet rising demand, or overhead to be reduced when demand slackens. This would give facilities the ability to control their costs. This model would also be easily reconfigurable so that, for example, a facility involved in color correcting commercials by day could be working on film mastering and digital restoration at night. This would allow facilities to go after new revenue streams rather than engaging in ferocious competition with one another for a bigger piece of a small pie.

Is such a new production model a pipe dream? No. What is needed to make this new model a reality is simply to take full advantage of existing data technologies, develop a small number of new technologies and, primarily, apply a new way of thinking about how post production should work.

Today, most facilities consist of a number of rooms, each centering on one very expensive system that empowers one artist to do one thing. These rooms are serially connected to one another. In contrast, the new data-based model would feature a collaborative work environment, with many more, but far less expensive, workstations empowering many more, but by-and-large less highly paid, artists. These workstations would be connected in parallel so that all would have simultaneous access to data. Under this model, new resources could be easily added at relatively small cost, with each new resource adding to the overall efficiency of the “factory” environment. Bigger would be better. In addition, it would no longer be easy for a single artist to strike out on his own and compete with his former boss. A facility’s resources could also be easily repurposed to differing kinds of projects. This model not only provides a means for facilities to manage the enormous data storage and manipulation demands inherent in high definition and data production, it allows them to do so profitably.
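As a minimal sketch of the parallel, data-based model described above, assuming nothing more than a shared asset index that every workstation can read and update, the following Python fragment is illustrative only; the class and field names are invented, not any vendor’s product.

    # Illustrative sketch of a shared asset index that parallel workstations
    # query and update concurrently, instead of passing tapes room to room.
    import threading

    class AssetIndex:
        def __init__(self):
            self._lock = threading.Lock()
            self._assets = {}                    # asset_id -> metadata dict

        def register(self, asset_id, metadata):
            with self._lock:
                self._assets[asset_id] = metadata

        def find(self, **criteria):
            with self._lock:
                return [a for a, m in self._assets.items()
                        if all(m.get(k) == v for k, v in criteria.items())]

    index = AssetIndex()
    index.register("spot_042", {"project": "car_commercial", "status": "color"})
    index.register("reel_007", {"project": "restoration", "status": "scan"})
    print(index.find(project="car_commercial"))  # every workstation sees the same view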

The technological hurdles to implementing such a model are few. A far bigger challenge involves changing a mindset ingrained through decades of working in the 601 production model. It’s not easy to teach an old dog new tricks, but for facilities that want to survive and prosper in the future world of high definition and data, it’s one trick they would do well to master.

Chuck Spaulding is president of Postware.

Metadata in the new millennium
The growth of metadata will drive television
By Chuck Fuller

Few would dispute that over the next 10 years, television is poised to undergo dramatic transformations in capture, production, delivery and consumption. While many technologies, infrastructure changes and consumer paradigms will affect these transformations, it is the advent of metadata (data about the content) that will have the biggest impact and enable the deepest, most central changes in the medium. Metadata acts like a card catalog in a library, unlocking the wealth of information in digital media content. Metadata extraction, application and exploitation are the keys to leveraging all of the other changes on the horizon. Without metadata, TV will remain TV, regardless of changes in the plumbing.

At the end of this transformation, which will be complete and stabilized by the year 2010, we will see content capture and production processes that are completely digital. Distribution will also be completely digital and will consist of a hybrid of airwave, cable, satellite and Internet delivery mechanisms that are the endgame of convergence. Consumer devices in the home will exhibit dramatic increases in fidelity, flexibility and friendliness. Such statements are perhaps obvious; however, what is not obvious are the specific changes, mechanisms and features making up this future television environment. A close inspection of the definition, role and capabilities of metadata in this converged world is the key to providing insight into those specifics.

Metadata explained

Metadata falls into two major categories: automatic information extraction and manually added information. Automatic information extraction is index information derived automatically by signal analysis algorithms, such as keyframe storyboards, CC-text/Teletext extraction, speech recognition, speaker identification, face recognition, optical character recognition, etc. Manually added information is value-added information applied by humans as part of the production process, including e-commerce tags, hyperlinks, editorial descriptions, keywords drawn from controlled vocabularies, rights management information, version management information, etc.
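One way to picture the two categories is as a single record that carries both machine-derived and human-added fields. The Python sketch below is a hypothetical illustration of that structure, not a schema drawn from any actual standard; all field names are assumptions.

    # Hypothetical metadata record combining the two categories described above:
    # automatically extracted index data and manually added production data.
    from dataclasses import dataclass, field

    @dataclass
    class AutomaticMetadata:
        keyframes: list = field(default_factory=list)   # storyboard thumbnails
        cc_text: str = ""                               # CC-text/Teletext extraction
        speakers: list = field(default_factory=list)    # speaker identification results
        recognized_faces: list = field(default_factory=list)

    @dataclass
    class ManualMetadata:
        description: str = ""                           # editorial description
        keywords: list = field(default_factory=list)    # controlled-vocabulary terms
        rights: str = ""                                # rights management information
        hyperlinks: list = field(default_factory=list)  # e-commerce / web tags

    @dataclass
    class ClipMetadata:
        clip_id: str
        automatic: AutomaticMetadata
        manual: ManualMetadata

    record = ClipMetadata(
        clip_id="news_0427",
        automatic=AutomaticMetadata(cc_text="...the mayor announced today..."),
        manual=ManualMetadata(keywords=["politics", "city hall"]),
    )
    print(record.manual.keywords)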

Automatic metadata extraction processes will continue to advance and will include automated narrative segmentation, content summarization, object recognition and hierarchical derivative-content mapping. Value-added information will benefit from improved metadata standards, system-level awareness and use of metadata, as well as enhanced metadata packaging and transport mechanisms such as MPEG-4 and MPEG-7.

Metadata will become central to the production and value chain of video, from the camera to the consumer. This trend is already occurring, and can be seen in the feature sets of digital cameras, the industry focus on digital media asset management systems, and the emergence of searchable video on the Web. As the decade progresses, metadata extraction will move closer to the lens, and metadata exploitation will move closer to the consumer.

Metadata increases the value of content throughout its life cycle, lowers the cost of production and archive maintenance, and significantly enhances the end-user experience by providing a “pull” metaphor with deep personalization. While the information elite will embrace this kind of personalized pointcasting and active engagement, broadcasting itself will continue to dominate the infrastructure. However, intelligent agents that can perform the personalized “pull” on behalf of the consumer, filtering through thousands of broadcast channels to create a surfable number of personalized channels, will enhance the viewing experience – a change not possible without end-to-end metadata.

The future of television

Advances in metadata technology and the proliferation of its use in the content life cycle will drive the future of television. The key areas to watch:

Metadata extraction moves closer to the lens: Digital cameras today capture a variety of information about the capture process, including ‘takes’, time-stamps, GPS information and annotations entered by the camera operator. In the future, full-featured metadata extraction processing (including keyframes, speech, personality identification, etc.) will be part of the output of the camera, packaged in an MPEG-7 transport, and fed into the production process.

Metadata use enhances the production process: Broadcasters will need to manage an ever-increasing number of information sources, distribution channels and competitors, while at the same time dramatically reducing production costs. In this environment, being able to automatically access, edit and air digital content is mandatory. Metadata, with its ability to help journalists and producers immediately locate and manage content, holds the key to broadcasting efficiently in the 21st century.

Metadata drives the archive: Content that is forked off to an archive (during and after production) is well prepared for the application of library-science techniques that enable long-term storage and retrieval. Assets that are not carefully indexed with metadata will be lost in the petabytes of archived content. Ready access to archived content will enable new revenue opportunities.

Metadata provides deep personalization: Metadata offers fine-grained search and browse on demand to active information consumers, enabling efficient and accurate pull of content. Metadata also provides the necessary handles for agent-based filtering that can dynamically learn consumer preferences and interests. Intelligent agents thus maintain the passive viewing experience, but with far more targeted and interesting content. Advertising and commerce opportunities will exploit content associations and personal profiles, all within the privacy of the set-top box.

Video-on-demand is realized at the fringes of the distribution infrastructure: Technological advancements in mass storage will continue unabated during the decade, producing multi-terabyte storage devices at commodity prices. The digital VCR will converge with the set-top box, housing multiple channel tuners and intelligent agent software. Thousands of hours of high-fidelity preferred content will be constantly available for searching and surfing. Metadata will stream down to the set-top along with the digital media, where it will be analyzed and compared to interest profiles. Conditional-access content will be encrypted and stored locally, where it can be unlocked instantaneously. VOD will become a reality, not with massive head-end servers and fiber-to-the-home, but with highly distributed, personalized storage devices. Head-end video servers, coupled with IP-multicast mechanisms to alleviate bandwidth bottlenecks, will only be needed for live content distribution.
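A toy Python sketch of the agent-based filtering described above, matching clip metadata against a viewer interest profile inside the set-top box, might look like the following. The keywords, weights and threshold are invented purely for illustration.

    # Toy sketch of an intelligent agent filtering broadcast metadata against
    # a viewer's interest profile. Scoring is invented purely for illustration.

    def score_clip(clip_keywords, interest_profile):
        """Sum the viewer's interest weights for keywords the clip carries."""
        return sum(interest_profile.get(k, 0) for k in clip_keywords)

    profile = {"baseball": 0.9, "weather": 0.2, "cooking": 0.7}

    incoming = [
        ("evening_news", ["weather", "politics"]),
        ("cooking_show", ["cooking", "travel"]),
        ("ballgame",     ["baseball", "live"]),
    ]

    # Keep only clips that clear a relevance threshold, best matches first.
    personal_channel = sorted(
        (c for c in incoming if score_clip(c[1], profile) > 0.5),
        key=lambda c: score_clip(c[1], profile), reverse=True)
    print([name for name, _ in personal_channel])   # -> ['ballgame', 'cooking_show']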

Regardless of the specifics you envision for the future of television, the industry changes during the next decade will surely astound us. Excepting the inventions of television and radio themselves, metadata promises to enable features and capabilities that are unrivaled in the history of the industry.

Chuck Fuller is a co-founder of Virage Inc.

Tomorrow’s newsroom
Everything you need is available today
By David Schleifer

Almost everything you need to build the newsroom of the future is available today. The problem lies in getting it all to work together. Simply making it all work together is the first challenge of integration: just ensuring that there are industry-standard formats that can deliver compatibility between acquisition, editing and playback has been a struggle. The second and perhaps more important challenge is to deliver the kind of integration that truly gives broadcasters choices and delivers new functionality and flexibility in the process, or workflow.

Improving newsroom workflow

Integration directly affects workflow, and workflow determines how quickly, cost-effectively and efficiently the job is done. Some elements of the job the newsroom is challenged with today are very different from the challenges of a few years ago, while others have remained constant. A constant is the need to process information, to add value to pictures and sound, and to do it in the most cost-effective manner possible. A new factor is the need to squeeze more efficiency out of the process while leveraging the same information to many promising, and in some cases unproven, new outlets like the Internet or HDTV. In facing these new challenges, stations need to re-examine workflow. It would be simple if you could put an Internet publishing button on a tape deck, let several people edit from the same tape at the same time, or automatically track metadata from the field through playback and archiving. Unfortunately, this requires several new areas of integration.

The process of creating news easily breaks down into four general areas: acquisition, production, playback and archiving. All of these areas must be linked together. The material acquired must easily flow to the production systems. While newsroom computer systems already manage this by bringing in wire services or allowing remote dialup for reporters, the process of managing pictures and sound is not as efficient. DVCPRO and MPEG allow for compatibility, but that is just the first step. In the production cycle, systems must reduce the number of errors by eliminating retyping of information and other manual steps. Moving a video ID from a rundown into a nonlinear editor, and then onto the playback server is a clear example of this. New tools to manage the data online are also needed. Just being nonlinear is not enough; you need to find, sift, sort and use your material easily and quickly. Can you find out what clips have been used in a story, what alternate clips you might have from an interview to refresh your piece, how many stories use a particular sound bite, and can your editors all have access to the information and the media? These are important tools in today’s competitive news environment.
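The kinds of questions listed above map naturally onto simple queries against a media database. The sketch below, using an in-memory SQLite table as a stand-in, is hypothetical and does not reflect any newsroom system’s actual interface.

    # Hypothetical sketch of the newsroom queries described above, using an
    # in-memory SQLite table standing in for a media asset database.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE clip_usage (
                    clip_id TEXT, story_id TEXT, description TEXT)""")
    db.executemany("INSERT INTO clip_usage VALUES (?, ?, ?)", [
        ("clip_101", "story_mayor",  "mayor press conference, wide shot"),
        ("clip_102", "story_mayor",  "mayor press conference, cutaway"),
        ("clip_101", "story_budget", "mayor press conference, wide shot"),
    ])

    # Which clips have been used in a given story?
    print(db.execute("SELECT clip_id FROM clip_usage WHERE story_id = ?",
                     ("story_mayor",)).fetchall())

    # How many stories use a particular clip or sound bite?
    print(db.execute("SELECT COUNT(DISTINCT story_id) FROM clip_usage WHERE clip_id = ?",
                     ("clip_101",)).fetchone()[0])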

In addition, integration must consider an Internet strategy. Users often say that the Internet is critical to their success, but that it must be free – free in the sense that it does not overly burden their efforts to get shows on the air. News production systems will have to meet that challenge. Systems will need to make it easy to repurpose and deliver both new and old content to new channels. Over time, the process of delivering news to the Internet and to the antenna will merge into one process.

David Schleifer is manager of Broadcast Field Marketing for Avid Technology.

The digital infrastructure

Broadcast’s digital tomorrow
Analog is dying – long live digital.
By John Luff

As the last decade of the century comes to a close, it is clear that we face a growing swell of change in the technology that provides the infrastructure of our business. The changes in the next decade will be just as astounding as those we have seen in the 1990s. The last decade has brought video servers, practical high-quality CCD cameras, flat-screen displays, digital transmission to the home, practical HDTV production tools and a host of other stunning developments.

In the past we have often thought of single-purpose devices and how to connect them in a string to make the path between the camera and receiver transparent. With analog technology and full-bandwidth digital systems, we strive to get a perfect rendition of the signal to the other end of the pipe. As compressed signals become a dominant theme, as indeed they already have in VTRs and many transmission systems, we need to think more clearly about what we are trying to achieve and how best to apply the available bits in an efficient network. In planning NTSC systems, I placed myself at the signal source looking towards the destination, ultimately the home. Today I find it more illuminating to look backwards from the final destination, back into the system, and think about what signals need to be delivered to the device in order for it to do its job. For instance, an ATSC encoder needs video, AC-3 audio, PSIP, closed-captioning data and, potentially, other streams to assemble a useful broadcast.
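Working backwards from the destination in this way amounts to checking that every required input stream is present before the broadcast can be assembled. The fragment below is an illustrative sketch of that check, not the ATSC specification; the stream names are assumptions.

    # Illustrative sketch of "looking backwards from the destination":
    # verify that every stream the encoder needs is actually being fed.
    REQUIRED_INPUTS = {"video", "ac3_audio", "psip", "closed_captions"}

    def missing_inputs(available_streams):
        """Return the set of required encoder inputs not currently supplied."""
        return REQUIRED_INPUTS - set(available_streams)

    feeds = {"video", "ac3_audio", "closed_captions"}
    gap = missing_inputs(feeds)
    if gap:
        print("Cannot assemble broadcast, missing:", sorted(gap))   # -> ['psip']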

There is a convergence occurring between the underlying technologies in consumer electronics and professional broadcast electronics. Those of us old enough to remember what “HL” stood for on the side of a portable camera realize that consumer camcorders now produce pictures and sound superior to those from the state-of-the-art broadcast camera of scarcely 20 years ago. Now we see DV consumer recorders from many manufacturers that share concepts and even actual hardware (codecs) with serious professional camcorders. Compression hardware that began in teleconferencing has migrated upscale to broadcast, and is now moving downscale to the Internet and other uses. As the bandwidth of wired paths to the home increases, the importance of ubiquitously available compression will continue to grow.

New engineers

In viewing the history of broadcast engineers, we might say we were the “fix-its” in a facility, without whom nothing kept running for long. However, today’s electronics are highly reliable and increasingly repairable only at the subassembly level. Moreover, we once administered a complex web of interconnected pieces and made hundreds of adjustments using waveform monitors and vectorscopes. We needed to adjust NTSC signal timing to a precision of better than 5ns – not a simple task. Now digital switchers time themselves, and signal equalization is automatic up to the point where a digital signal simply disappears. Most importantly, we are now becoming network managers of complex local area networks of interconnected systems controlled and monitored by computers.

In the next 10 years, we will see the death of analog systems, the death of the ubiquitous linear media video recorder in production and air operations, and the rise of HDTV to the role of predominant medium. Consumer electronics will continue to penetrate the edges of our universe, and may well show up in ways that eliminate the blurry lines separating professional and consumer electronics today. Test equipment will broaden to allow tests on compressed streams, enabling meaningful qualitative judgements to be made.

I do not want to ignore the most important change in our industry. The linear push model of entertainment television has begun to crumble. Our role as technologists is to provide a conduit. As programs change in form, as in news delivered 24 hours a day via the Web or other low bit-rate pipes, we must change our view of what technology must provide to our colleagues who build the business case that supports our enterprise. We have a legitimate role as technical evangelists to explain what technology can do and to use our creative energy to facilitate their dreams. When Michael Fuchs began the experiment that became HBO, few understood how fundamentally satellite transmission would revise our technical world. Digital technology gives us new tools to craft incredible systems for serving the “public interest, convenience and necessity.”

John Luff is president of Synergistic Technologies, Canonsburg, PA.

Automating test and measurement
Electronic “eyes” will oversee your signal.
By Mark Everett

Estimating where this industry will be in the next 10 years is a tall order, and reducing the subject to test and measurement is even harder. Furthermore, to limit the scope of this article, I have purposely avoided the inclusion of MPEG and ATSC testing – both of which are important, and both of which may be evaluated in their compressed state and in their reconstructed state.

One thing is fairly certain: the traditional XY-deflected CRT display will be a thing for museums. We probably will have devices with a similar appearance, but they will most likely be VGA tubes or high-resolution, high-speed flat panels with progressively scanned displays rather than the continuous-line CRT display of the last half century. The other primary change to test and measurement will be the implementation of devices that discover, repair and report deviations from established requirements. Video production and transmission facilities will continue to have very high standards for signal quality, and they will have all sorts of difficulties finding and funding personnel who are capable of recognizing and solving signal quality problems.

Evaluation

The types of devices used will work in the originating digital format (the production digital format – AES/EBU, 601, 1080i and the like), rather than in the analog origination format or the transmission format (AC-3, MPEG, ATSC, etc.). Monitoring nodes or probes will be placed in every originating path, and these devices will report to a central monitoring, alarm, repair and reporting station. Products that are now available can scan, evaluate and modify live video in real time on a pixel-by-pixel basis. Data can be collected against any number of time clocks (GPS, timecode, etc.) and can include metadata identification or similar embedded IDs to accurately tie the logged data to specific material. A fair amount of work is in progress now to lay the framework for embedding such information in original video material.

The next challenge is to develop well-defined signal profiles to assist in judging video levels, color intensities, hues, audio levels, equalization and similar attributes. These profiles will have to be adaptable to sports, rock concerts, movies, cartoons, news, situation comedies – the list goes on. Imagine a movie such as “Dick Tracy,” with its dramatic use of specific saturated colors. It represents just one example of the kind of profile to which such a system must adapt.

Profiling

So, with that part of the concept understood, the next step is to apply the profile to material in real time and to report any variances to a master control point – machine, not human. The master control point will, based on other guiding profiles and response parameters, apply corrections to the offending signal. Histograms will be developed and corrections will be issued based on continual input from the sensor. Certain signal parameters are easy, but many content issues – a frozen frame when the video should be moving, or film dirt and scratches – are more challenging to capture. Algorithms can be developed and data can be evaluated against those and many other requirements.

Repair

The next function is to repair the fault. Here again, the profiles and histograms are used to develop the adjustment parameters needed to correct it. It is possible that not all faults can be repaired, but even missing picture information can be reconstructed based on the same prediction concepts used in MPEG. Fault repair is a necessary step because algorithms can respond more quickly and accurately than operators can, and many faults at the pixel level will probably be missed by operators altogether. Certain functions, such as scratch repair, noise elimination and color limiting, are common today and can also be included in the type of real-time corrections applied to all video.

Reporting

A fourth function will be logging and reporting all of these faults and the corresponding actions, based on time stamps, sources, scene information, extracted metadata and other available information. From this information, engineers and managers will be able to review and evaluate the actions taken, regularly review the processed video and apply what they learn to improve the algorithms. A natural extension of this logging and reporting is at least some recorded history and presentation of the fault and the applied correction. Expect to see high-speed Internet connections available for real-time expert intervention – let the chief engineer view it, check it and adjust it at home via his PC. While some of this functionality is present today, the higher-speed interconnections and processors of the future will dissolve the limitations of the current approaches.
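Pulling the evaluation, correction and reporting steps together, a minimal sketch of one such monitoring probe might look like the following. The profile limits and attribute names are hypothetical, chosen only to show the loop.

    # Minimal sketch of the evaluate-correct-report loop described above.
    # Profile limits and attribute names are hypothetical examples.
    import time

    PROFILE = {"video_level": (0.0, 0.7), "audio_level_dbfs": (-40.0, -10.0)}

    def check_and_correct(measurements, log):
        corrected = dict(measurements)
        for attribute, (low, high) in PROFILE.items():
            value = measurements[attribute]
            if not (low <= value <= high):
                corrected[attribute] = min(max(value, low), high)   # clamp to profile
                log.append({"time": time.time(), "attribute": attribute,
                            "measured": value, "corrected_to": corrected[attribute]})
        return corrected

    fault_log = []
    frame_stats = {"video_level": 0.85, "audio_level_dbfs": -22.0}
    print(check_and_correct(frame_stats, fault_log))
    print(fault_log)    # engineers review the logged fault and correction later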

The future of test and measurement is guided by a few truths. We will have fewer qualified technical operators and engineers available to perform routine functions. Digital and compressed digital formats will all but eliminate transmission and storage-based signal degradation. Humans will continue to disrupt the best efforts of automatic controls. The wrong program material will be selected for presentation. Key equipment will fail at the most inopportune time. All of these factors lead to the necessity of automated evaluation, reporting and correction – a necessity that future test and measurement equipment will have to address.

Mark Everett is vice president, Advanced Technology, for Videotek, Inc.

CONSUMER ISSUES

Digital and HDTV in 2010
Will HD ever fly?
By David Mercer

How far away is 2010? In the Internet era, some would argue that it might as well be 2100 if we are trying to predict what the world will be like. The pace of development in consumer technologies has accelerated so much in recent years that the crystal ball can seem more hazy than ever.

Our own research suggests that consumers are dissatisfied with what television has to offer today, but that they believe strongly that it will improve in the years to come. When asked how their time allocation will change across a selection of around 20 common household activities, consumers pick “watching television” as the activity they expect to see the steepest decline in the coming years. At the same time, they are looking forward to some of the technical innovations that will improve the television experience: digital television, flat-panel displays and DVD all attract high levels of interest.

Nobody disputes the idea that digital would make much better use of that scarce resource we know as broadcast spectrum. What is debatable, however, is how we move from where we are today to where we want to be. Unfortunately for the U.S., as well as most other countries, the received wisdom has rested on the idea that analog television can be migrated to digital television simply by introducing the new technology and expecting consumers to upgrade. Real-life experience is beginning to demonstrate that it’s not that simple.

No plan

The U.S. DTV transition plan has been developed on the premise that terrestrial networks will gradually introduce HD programming and transmissions into their regular schedules. However, this is not obligatory within the FCC guidelines, and broadcasters are divided on the best way to proceed. Those that have started HD transmissions are effectively delivering to a nonexistent audience, and are therefore unable to begin recouping costs. Others remain convinced that their digital capacity is best used for several SD channels and/or data services.

But the big mistake for the digital terrestrial TV strategy has been to ignore the competition. DBS satellite service providers such as DirecTV and Echostar have already persuaded 11 percent of homes to go digital. Digital cable services are now in around 4 percent of homes. By 2005 we expect 55 percent of U.S. households to have digital TV via either satellite or cable.

Satellite operators have introduced HDTV, and cable is being forced, albeit slowly, to follow suit. The U.S. is one of the few places in the world where we expect a significant market for large-screen HDTV to develop. The evidence suggests that one million consumers each year already spend an average of $2000 on TV sets with 40-inch to 70-inch displays. If these consumers are willing to spend this much on TV sets relying on today’s NTSC standard, they would surely spend similar amounts, if not more, on HDTV sets. In fact, the arrival of HD should boost demand for TV sets of this size, as there are probably potential customers who have delayed buying a set because of the relatively poor quality offered by rear-projection display technology relying on NTSC.

Market size

So, unlike some analysts who maintain that high definition has no future at all, we believe there is potential for a niche market to develop. The only question is how big that niche will be. Today around 7 percent of U.S. homes own a TV with a display 40 inches or larger. High definition could conceivably increase this level to perhaps 15 or 20 percent. But if it is going to reach more people than this, we believe that HD sets of 40 inches or more will need to cost less than $1000. While new display technologies will continue to emerge, there is, as yet, nothing on the horizon that makes this conceivable. Remember that the average price of a TV set today is $340. HDTV must compete in this context. The set-top box will remain dominant for many years because the easiest way for consumers to access new services (and for service providers to build an audience) is to upgrade existing TV sets. The last thing consumers want is to spend thousands of dollars on a new TV set only to have it become obsolete within a couple of years when the next new service is launched. The TV set will continue on its path towards becoming essentially a “dumb monitor” to which one or more devices are attached.

The problems associated with ATSC have been well discussed, and the pressure to revise the standard is likely to prove overwhelming. While ATSC’s technical weaknesses clearly need to be resolved, we also believe the opportunity should be taken to step back from the whole approach to the digital transition and reconsider the strategy. Terrestrial broadcasting operates in a highly competitive environment, and its capacity limitations mean that it can never offer direct competition with either satellite or cable delivery. In the not-too-distant future a fourth medium will begin to make an impact – Internet delivery of video services. Broadband Internet is not a viable competitor today, but technological enhancements are rapid and it will not be long before it starts to compete with today’s television market.

New features

Finally, consumers will soon begin to see a wide range of enhancements to their TV services, ranging from content-related features such as background information and viewer-direction, to full-fledged Internet-type services like e-mail and Internet surfing.

Something to watch out for in particular is local storage: STBs with built-in hard disk drives will make a major impact on how television is perceived. The progress of interactive television will be fragmented, since it will be supported by a range of different STBs and service providers, but its future is now assured, if only because today’s TV companies know they must move in this direction to survive. In 10 years’ time, it is likely that:

- A high proportion of households will have access to SD digital signals from satellite and cable operators;
- High definition will have a niche position within this market;
- A growing number of viewers will also be using Internet-based subscription video services;
- Many households will still own TV sets that are dependent on terrestrial analog signals, and a significant number of households – probably 20 percent or more – will still be entirely dependent on analog over-the-air TV; and
- Interactive TV will be widespread in various forms.

One thing is certain: television will have to adapt to an Internet world or face a period of continued decline.

David Mercer is a senior analyst for Strategy Analytics, Newton, MA.

Tomorrow’s television displays
What will your screen look like?
By Joe Kane

After years of anticipation, HD imaging is upon us. Though we have had a seemingly endless amount of time to ramp up on HD display devices, only a few companies have come forward with products that do a good job of displaying HD images.

The accommodation of higher scan rates is a new science relative to mass-market TV sets. Most manufacturers have been slow to learn the requirements of video in order to accommodate high definition, including the right colors of red, green and blue. In practice they are doing little more than adapting existing NTSC technology. There is little recognition of the fact that if a set will properly accommodate the retrace time of 1080i, covering 720p is essentially free.

Displaying true HD images requires resolution, color and luminance capability. These parameters trade off against one another unless you are willing to pay a lot of money. Light output goes down as resolution goes up, or as we approach the correct colors of red, green and blue in a CRT. How low does the light output get? Fifteen foot-lamberts for one brand of 27-inch professional HD monitor with the wrong colors, and five foot-lamberts for another with the right colors. If a professional tries to increase the light output, he will find most of the detail missing. One could argue that a 27-inch monitor is not large enough; certainly a five- or six-foot-wide image is needed to do justice to true HD video.

Direct-view CRT technology is limited in size, resolution and light output. Fixed-array light valves are limited in resolution and in contrast in the blacks. For the time being, real picture quality is only available from a nine-inch, electromagnetically focused CRT projector, of which there are three brands. You can expect 10 foot-lamberts from a high-quality five-foot screen with these projectors. Light output will go down as image area is increased. At least one of those three brands will probably show a true 1920×1080, but only at a reduced light output. By the way, rear-screen technology has improved enough to compete successfully with the best front screens.

If displaying HDTV is expensive, and probably will continue to be for the next 10 years, what chance is there of HDTV becoming attractive to the average consumer, let alone the broadcaster? The probability is great if set manufacturers change the way they build sets. They are being forced in that direction by fixed-array display devices, but have yet to make the connection with CRT technology. Any signal received by a fixed-array display must be converted to the display’s native resolution before it can be shown. If applied to CRT displays, this approach could produce far better-looking pictures at about the same price as today’s TV set.

CRT displays can produce as much as 40 percent more light output when driven with a progressive signal as opposed to an interlaced signal. Larger sets will require more than 480 active lines to produce the best possible picture. A 13-inch set might be good at 480 lines while a 27-inch set might require 600 lines. If the converters needed for the fixed array displays were attached to CRT displays, the set itself could do the best job its size would allow. A larger set would do a better job, creating a step-up market with greater rewards in picture quality, partially because of HDTV as a source. More important, these sets become universal – saleable anywhere in the world.

The future of all imaging is in displays running at their own best rate. High-quality scan rate converters are beginning to appear at costs that would fit into consumer-priced TV sets. While the highest-quality sets will continue to be expensive, consumers can expect to see new, lower-cost options soon.

Joe Kane is CEO of Joe Kane Productions, Hollywood.

MAKING MONEY WITH DIGITAL

TV’s business model
Can TV survive the digital revolution?
By Stephen Turpin, Jr.

Monumental decisions are being made about the future of the television industry, not the least of which is how the current business model should evolve to meet the changing environment. With fierce competition from cable, the Internet eroding station viewership and the continuing issues regarding programming costs and availability, broadcasters are beginning to realize their future will rely on a very different type of business approach. But as we head into the next century, what will that new approach be? More importantly, what are the elements that will influence it?

It is evident that the TV business model of the future will move towards narrowcasting. We have become a more segmented society in terms of lifestyles and values, and this has resulted in television having to meet the entertainment needs of a variety of segmented audiences. Consequently, broadcasters will migrate to narrowcasting as we enter the new century, and with this significant trend will come a whole new set of challenges.

Casting about

Multicasting will create opportunities for additional revenue streams by leveraging digital transmission capabilities. This new technology will increase the amount of data that can be transmitted by allowing the current single channel to be split into four new ones. As a result, broadcasters will be able to transmit four times the programming on one signal. With 10 stations in any given market, SDTV can potentially create 40 new channels.

One of the fundamental advantages of multiple channel offerings is the ability to air prepaid block programming while at the same time airing traditional programming. Because most local programming takes place during non-prime-time hours, stations with additional channels to fill during the day will be able to generate up to four streams of revenue in the same time segment. Ultimately, this will help to cover the additional costs of going digital.

Additionally, multicast programming can play an important role in filling the increasing demand for narrow or focused programming for specific audiences. A good example might be a local PBS affiliate narrowcasting programming into schools. At any given hour “Bill Nye the Science Guy,” “Nova,” a foreign language lesson and a history documentary could all be broadcast simultaneously to reach specific school-age groups.

Because multicast programming fits well into the trend toward narrowcasting, it has the potential to be a strong revenue source, and positively serve the marketplace. Of course all of this is academic if stations do not begin to make the commitment both financially and operationally to transition to digital. Therein lies the next challenge.

Going digital

It’s apparent the federally mandated conversion to digital TV by the year 2006 will have a significant financial and operational impact on large and small stations alike. With conversion costs estimated today at $5 million, a station worth $20 million faces some real challenges. To survive the transition, the first and most important step will be to implement a thorough evaluation of the station’s current business and financial situation. Because the transition period may take up to five years (or more), and because conversion requires significant up-front capital with an unpredictable return on investment, broadcasters must start preparing for the inevitable negative impact on near-term cash flow.

Leveraging significant advertising revenue opportunities like the Salt Lake City Winter Olympics and the 2000 presidential election is one way to help bridge the conversion costs. Stations should also be trimming excess operating expenses to free up additional cash flow and monitoring any non-essential capital expenditures. In addition to helping improve cash flow, these efforts will have a positive effect on stations’ ability to finance conversion costs.

The positive economic environment and its impact on near-term advertising revenue will also play an important role in offsetting conversion costs. It is projected that during the next two years TV ad revenue will continue to increase, with a record $44.5 billion being spent in 2001. On an inflation-adjusted basis, expenditures in 2001 should total almost $38 billion, or eight percent above the 1998 level.

Out of the box

Although this short-term revenue picture is encouraging, it’s imperative that broadcasters consider other new ways of generating future profits, especially from non-broadcast revenue sources. One strong option is datacasting. Even without all the answers in place, datacasting looks to hold great potential for stations. Broadcasters could capitalize on this growing trend by selling unused spectrum during off-peak hours, or even by providing complete end-to-end data delivery solutions.

While these are exciting and challenging times for our industry, we have the opportunity not just to survive but to thrive in what will continue to be a competitive marketplace. To do so we must adapt to, prepare for and utilize the digital revolution, and embrace a new business model that may be narrower in its focus, but broad in its return.

Stephen Turpin, Jr. is vice president of the Communications/Entertainment Group of The CIT Group/Equipment Financing.

The Internet future
Is the Internet an opportunity or an opponent?
By Michael Wellings

Not so long ago most broadcasters had very little knowledge, understanding, or even interest in a network of computers. After all, the World Wide Web has only been in existence for eight years. In 1991, there were only 700,000 or so registered Internet “hosts.” Now, the number is 56 million – a figure double that of two years ago. Much has changed in broadcast technology during those same eight years. We have witnessed the development of the multilevel, multiresolution MPEG-2 specification, and the implementation of digital broadcasting and production in various formats. Digital techniques are reshaping the landscape as we watch our fundamentally analog world being converted to ones and zeros. What does this mean for the future as true convergence overtakes the entertainment/media delivery system? As more and more content is digitized, analog transmission systems will become digital and, most likely, connect to a future version of the Internet.

With digital technology heating up acquisition and transmission, it follows that the Internet will become more interesting as a potential pipeline for media delivery and archival storage (see www.washington.edu/researchtv for an example of this trend). Since we are shooting, recording and editing digital, why not deliver in digital format directly to the end-user, converting to analog only at the endpoint? DTV and IBOC-FM (in-band, on-channel digital FM) are the first steps in this direction. In television the revolution will begin in earnest when broadcasters realize that they are originating 20Mb/s of multiplexed data, not just video. As the Internet grows in speed and reach, and television and radio stations become true data broadcasters, an Internet presence becomes all the more important.

The first “cyber-station” went online at INTEROP ’94 in Las Vegas (www.zdevents.com/interop), followed shortly thereafter by FM stations KUGS, WXYC, and WJHK. Today, broadcasters are among the most connected businesses in the country, with more radio and television stations going online every day. E-mail, websites and streaming media are common themes spread across the industry. Although few stations would wish to return to the glory days of the teletype and of film cans delivered by mail, few believe that Internet delivery will replace the high-powered transmitters we have learned to love.

The commodity Internet struggles to keep up with demand. At the time the Internet was commercialized in 1995, its backbone pathways ran at a mere 45Mb/s. Large enterprises were typically connected at T-1 rate (DS-1, or 1.544Mb/s). Most were crawling along at the snail’s pace of 56Kb/s, and home users connected at 28.8Kb/s or worse. Remember waiting 30 minutes for that movie trailer to download, only to watch it in a 1-inch by 2-inch window on your computer screen? The low data bandwidth available to the consumer results in disappointingly poor, postage-stamp-sized images – and only after seemingly infinite download intervals. But the picture is improving. Major network service providers are making huge investments in their Internet backbones. For example, AT&T just announced that, due to burgeoning demand, it is accelerating its backbone upgrade from 2.4Gb/s to 10Gb/s in six months. Major enterprises routinely connect at DS-3 rates (45Mb/s), or in rare cases even faster. A small but growing number of home users can take advantage of DSL, which offers hundreds of kilobits per second, or cable modems (tens of megabits per second). There are now well-known companies that base their entire product line on network streaming media tools (Real Networks, for example).

Even these changes leave us with an Internet that is woefully inadequate for handling serious, high-quality video. Technology can only take us so far when the basic network creeps along at such a dismal rate. This will not always be the case. Quickly and quietly, with few outside the network developer community realizing it, the network bandwidth is growing by orders of magnitude, and this could mean big changes for the broadcast industry. Internet2 is a reality (see www.Internet2.org). Internet2 sites can connect to this high-speed backbone at OC12 (622Mb/s) rates today and OC48 (2.4Gb/s) connections are expected sometime next year, with development pushing this out to OC192 (10Gb/s).

Internet2 member institutions such as the University of Washington in Seattle are in the business of pushing network development along by stressing OC12 and OC48 routers, Gigabit switches, and Gigabit Ethernet interfaces. In October of this year, UW, in support of the ResearchTV consortium and in partnership with Sony Corporation, demonstrated that it is possible to transmit HDTV (at 1080i rate) over Internet2 using Internet protocol (IP) and Gigabit Ethernet interfaces (see www.washington.edu/hdtv for history and full detail).

The October demonstration was transmitted at two data rates: 40Mb/s MPEG-2 DVB-ASI, and Sony HDCAM at 143Mb/s embedded in an SDTI transport data stream of over 200Mb/s. This traffic was carried over Internet2 between Stanford University and the University of Washington. Demonstrations such as these are important in that they stress the high-speed Abilene Internet2 backbone (see www.abilene.iu.edu/index.cgi for the network map and data traffic information), pushing along the hardware and software development that will bring the reality of Gigabit Ethernet to the “commodity Internet.”

The ability to transmit SDTI-rate HDTV and the somewhat slower DVB-ASI via an IP-based network has some future implications for video transmission methods. Satellite modems and digital microwave links are no longer the biggest or fastest data pipelines. As Internet data rates ramp up, IP over Gigabit Ethernet could become the best, and most reliable, long-distance form of video transmission. The convergence of broadcasting and computing could well make IP transmission a viable alternative (or addition) to the suite of data pathways into our homes. Most of us have experienced the uglier side of video-on-demand as we have stood in line to demand the latest video, only to find that the rental store is fresh out. Imagine true video-on-demand as the two-hour HD movie we ordered is delivered at MPEG-2 rate over Gigabit Ethernet in less than three minutes. Just enough time to make the sandwiches. Those old SD films would show up on the home media server almost as fast as they were selected. While scaling video server and Internet capacity to achieve this vision will take a few years, the potential for using Internet transport technology in other aspects of the broadcast industry is right around the corner.
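The three-minute figure follows from simple arithmetic. Assuming the roughly 20Mb/s multiplexed program stream mentioned earlier and an idealized Gigabit Ethernet link, a quick back-of-envelope check looks like this:

    # Back-of-envelope check of the delivery-time claim, assuming a 20 Mb/s
    # MPEG-2 program stream and an idealized 1 Gb/s Gigabit Ethernet link.
    movie_seconds = 2 * 60 * 60              # two-hour movie
    stream_rate_mbps = 20                    # assumed MPEG-2 multiplex rate
    link_rate_mbps = 1000                    # Gigabit Ethernet, ignoring overhead

    movie_size_mb = movie_seconds * stream_rate_mbps    # 144,000 megabits
    transfer_seconds = movie_size_mb / link_rate_mbps   # 144 seconds
    print(transfer_seconds / 60)             # -> 2.4 minutes, under three minutes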

What does the future really hold for broadcasting? Will high-powered transmitters go the way of the dinosaur, becoming fodder for wild tales told to the young? I won’t go so far as to predict the demise of electromagnetic waves as the preferred method of television delivery. RF has its place, even in the future of high-speed networking. There will always be those who are out of reach of optical fiber or coax, but the Internet is already being extended via broadcast and other wireless technologies.

The focus of broadcasters may well expand to include the Internet as a primary form of communication as the penetration of digital video overtakes analog. Television and radio networks are well positioned to implement new services to follow the higher network speeds that will surely come. Internet delivery of HDTV and SDTV on demand and multicast, Web-based online archival storage of historical footage, multiple-market presence for major market stations, Internet-delivered contribution and live production feeds are some of the possibilities. Broadcasters could become narrowcasters – targeting specific content to those who wish it. Some of these services are being developed today by those institutions that are part of Internet2.

In the future, RF transmission could well be reserved for mobile applications (communication/computing devices in cars, trains, airplanes, hovercraft, spaceships, and pockets). Users in fixed locations will utilize a suite of connected pathways (both wired and wireless) to access data at speeds geared to the application. Our mail, video, news, music and personal communication will come into our lives via the least expensive and/or the fastest means. Hardware will seamlessly switch connection methods depending upon what is available. When we must leave the fiber jack in the wall and become mobile, we’ll suffer through the slower RF forms of communication, never to be isolated from the information that drives our day.

What is the future of broadcasting? However it turns out, broadcasting and high-speed networking will certainly be linked. As it matures, data networking will become the giant of content delivery and acquisition. It is still not too late to become the first broadcast-quality, Internet-delivered TV station.

Terence E. Gray, PhD, served as the network data reference for this article. Michael Wellings is the chief engineer for Video and Television Technologies at the University of Washington.
