The great AC/DC debate
Which is the better way to go: AC or DC? Ask three power engineers this question and you will likely get three different answers.
Of course, everyone views the AC/DC debate in the context of their own power engineering culture or operating experience. But their responses generally break down into two main camps:
Telco DC power engineers have had years of experience with lead-acid batteries. Batteries are reliable for long outages up to eight hours or more and, in fact, are always on so that service is never interrupted. Consequently, they rely very little on backup engine generators that may not always start when needed. Moreover, delivering DC power involves only one main conversion step from AC to DC, which minimizes potential points of failure.
AC power engineers believe that the combination of relatively reliable AC conversion equipment, which has a minimal battery reserve of only 15 minutes and is backed by standby engine-generator systems, affords a level of availability that is acceptable to most users. Furthermore, using batteries with long reserve times for large loads involves a big up-front investment, requires special handling and demands extensive maintenance to ensure reliable performance.
There is not one right answer. Classical telephone central office (CO) environments will continue to be served by DC power systems. Enterprise data centers will remain the domain of AC power systems. The arguments get interesting, however, as telecommunications and information technology converge. In these scenarios, critical loads in a single facility likely will require -48 VDC power and one or more commercial AC voltages. Here again, one solution does not fit all.
The new public network
Telecom facilities that can deliver multimedia voice, data and video are no longer confined to the CO. The new public network comprises facilities that house multimedia equipment and that are located close to customers with connections to backbone networks, as required.
The result is new types of buildings designed specifically for the new public network, or a refurbishing of existing commercial structures for telecom applications. The industry refers to these new facilities by various names such as Internet data centers, co-location or tele-hosting sites, and carrier hotels. Whatever the nomenclature, each type of facility serves the needs of different service providers. Like the CO, these facilities are groomed for space, power, environmental control and security. It is perhaps the power aspects that are the most challenging for the operators and represent a significant portion of the total facility investment.
According to Nicholas Osifchin of International Power Strategies, there are three categories of new public network equipment.
Category I comprises large buildings that comply with the same new equipment building standards that apply to telephone equipment buildings. These buildings, primarily data centers, are designed mainly for Internet servers and data storage systems. They are equipped with protected power systems that can range up to 40 megawatts; many require their own substation. Moreover, these are large sites ranging from 50,000 sq. ft. to more than 100,000 sq. ft. Category I sites deliver predominantly AC and a lesser amount of DC power with AC UPS, gen-sets, DC power plants with battery backup, and power subsystem monitoring and management systems. Target customers for this type of facility include enterprises, international ISPs, interexchange carriers (IXCs), and global service providers. Exodus Communications is the largest independent operator in this category.
Category II companies are co-location sites (or tele-hosting sites) that provide facilities and support services comparable to, but less extensive than, those of Category I companies. These sites are also located close to incumbent local exchange carriers’ (ILECs’) COs and IXCs’ points of presence. They serve a more diverse market of medium- and large-sized competitive local exchange carriers (CLECs), application service providers and ISPs. Typical sites range from 10,000 sq. ft. to 30,000 sq. ft. The balance between AC and DC differs from site to site, depending on the mix of customers. DC is increasingly predominant and is supplied by the facility operator. Switch and Data and colo.com are examples of companies in this category.
Category III comprises carrier hotels, which are mainly a real estate play. These companies serve a diverse group of small to medium-sized ISPs and niche local service providers. Carrier hotels provide basic co-location facilities and services that include protected power equipment, modem pools, voice and data switches, transmission equipment and in-building connectivity to ILEC and CLEC local loops. The Dallas ComCenter is an example of this type of facility provider.
The growth of these facilities has been driven by a steady outsourcing trend among the new carriers and, at the same time, an ongoing shakeout among the facility providers that will result in fewer but larger hosting companies serving these carriers.
Category I sites predominantly use AC much along the lines of the classical corporate data center. Power is delivered from AC uninterruptible power supplies (UPSs) through power distribution units (PDUs) to the load with generator backup for the UPSs, lighting and heating, ventilation and air conditioning (HVAC) units (Figure 1). DC power required for carrier transmission equipment is supplied by a separate DC power plant with batteries. The DC component is relatively small in the data center model, perhaps comprising 5% to 15% of the total power, depending on the size of the site.
A few distinguishing features are associated with these very large data centers. Power density is increasing with each generation of server technology, packing more processing power into smaller spaces. This plays into the user’s desire to get more productivity out of leased real estate. Some data centers are running 100 watts/sq. ft. to 200 watts/sq. ft. This means, however, that in addition to the power draw increasing, the need to cool this equipment intensifies. So HVAC operation is critical and must be constant.
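The scale of the cooling problem follows directly from those density figures. A rough sizing sketch, using the floor areas and watts-per-square-foot ranges cited above (the specific site area chosen here is illustrative, not from any operator quoted in this article):

```python
# Rough sizing sketch with illustrative numbers. At these densities,
# nearly every watt delivered to the servers becomes heat that the
# HVAC plant must remove.

def site_power_kw(area_sqft: float, density_w_per_sqft: float) -> float:
    """Total IT load in kilowatts for a given floor area and power density."""
    return area_sqft * density_w_per_sqft / 1000.0

area = 50_000  # sq. ft. -- the low end of a Category I site (assumed example)
for density in (100, 200):  # watts/sq. ft. range cited above
    it_load = site_power_kw(area, density)
    # Cooling load is roughly equal to the IT load, since the electrical
    # power dissipates as heat in the equipment room.
    print(f"{density} W/sq.ft. over {area:,} sq.ft. -> "
          f"{it_load:,.0f} kW IT load, ~{it_load:,.0f} kW of heat to remove")
```

Even the low end of that range puts a 50,000 sq. ft. floor at 5 megawatts of heat rejection, which is why HVAC operation must be constant.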
It is this latter point that keeps data center operations managers awake at night. They know that if the commercial AC fails and the backup generators then fail to start, their customers’ revenue-producing servers will fail from overheating – even though the servers themselves may still be running off battery backup. Either way, the customer is out of service and maybe even out of business.
Category II sites generally have a higher proportion of DC power requirements because of a greater mix of telecom carriers such as CLECs along with smaller ISPs and ASPs. Accordingly, many of these sites are designed with DC power to serve a larger part of the load. In the case of Switch and Data, about 80% of its customer load is served by -48 VDC even though the company caters to a mix of CLECs, ISPs and ASPs (Figure 2). The company points out that a lot of new IT equipment can run on DC. This works to Switch and Data’s advantage, allowing it to offer high reliability as a selling point.
Moreover, the DC power system can be configured in a highly distributed fashion to serve a variety of accounts. Switch and Data will sell DC power to its customers in increments of no less than 20-amp distribution feeds. The company points out that distributing DC throughout a site is not as economical as with AC. For example, at 10 kilowatts on a dollar per watt basis, DC is more expensive than AC.
But when compared on the basis of dollar per watt per minute of backup, DC costs come down substantially. “Plus we can charge a premium for DC feeds,” says James Lavin, Switch and Data founder and chairman. “Power is a significant percentage of our revenues.” Switch and Data designs its sites for 85 watts/sq. ft. and handles cooling on a modular basis using a distributed HVAC approach so that lightly used units can be shut off until needed.
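The two cost metrics in that comparison can rank AC and DC differently. A minimal sketch of the arithmetic, using entirely hypothetical capital costs and reserve times (the article gives no actual prices; only the 10-kilowatt comparison point and the 15-minute AC reserve figure appear elsewhere in the text):

```python
# Hypothetical cost figures chosen for illustration only. The point:
# a DC plant with a long battery reserve can look expensive per watt
# yet cheap per watt-minute of backup.

def cost_per_watt(capex_dollars: float, watts: float) -> float:
    """Up-front cost normalized to protected load."""
    return capex_dollars / watts

def cost_per_watt_minute(capex_dollars: float, watts: float,
                         reserve_minutes: float) -> float:
    """Up-front cost normalized to protected load AND battery reserve time."""
    return capex_dollars / (watts * reserve_minutes)

load_w = 10_000  # the 10 kW comparison point mentioned above

# Assumed: AC UPS with a 15-minute reserve, DC plant with a 4-hour reserve.
ac_capex, ac_reserve = 15_000, 15    # $1.50/W, short battery (hypothetical)
dc_capex, dc_reserve = 30_000, 240   # $3.00/W, long battery (hypothetical)

print(f"AC: ${cost_per_watt(ac_capex, load_w):.2f}/W, "
      f"${cost_per_watt_minute(ac_capex, load_w, ac_reserve):.4f}/W-min")
print(f"DC: ${cost_per_watt(dc_capex, load_w):.2f}/W, "
      f"${cost_per_watt_minute(dc_capex, load_w, dc_reserve):.4f}/W-min")
```

With these assumed numbers, DC costs twice as much per watt but an eighth as much per watt-minute of backup, which is the shape of the trade-off Lavin describes.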
A blended approach
Liebert, in a paper presented at the September 2000 Intelec conference, advocated a combined AC/DC architecture. Liebert’s concept is to maximize the best attributes of AC- and DC-predominant architectures while increasing overall system reliability and lowering upfront and operating costs.
To do this, Liebert has coined the term “hybrid distributed redundant power system” (Figure 3). This hybrid system delivers AC to key computing gear from AC UPSs and PDUs that are backed up by standby generators. The UPSs have a small battery plant with about a 15-minute reserve, enough time for the generators to start. Any requirement for DC loads is served from “battery-less” DC rectifier plants that act like DC PDUs. The key here is that the DC plants are not equipped with large battery strings that are expensive, need heavy floor support and require a lot of maintenance.
Rather, all the battery backup resides at the UPS, which is small in comparison to a DC battery installation. A greater reliance, however, is placed on the performance of the standby generators and the automatic transfer switches when a commercial AC power outage occurs. Such hybrid AC/DC configurations are really intended for large data centers in which AC still dominates the power requirements. Liebert calculates that overall system reliability from such a hybrid configuration is on par with the highly reliable DC-only approach that has been used in COs for decades, but at a lower overall capital investment.
The evolution to the new public network is already affecting power in ways AC and DC power engineers had never anticipated. These changes have significant implications for power equipment vendors and hosting facility providers.
Count uptime, not reliability. As much as we debate the merits of 99.9%, 99.99%, or 99.999% reliability, what really counts in the customer’s mind is: “How long can I keep my equipment running?” Reliability calculations are only that – calculations. In large-scale networks involving multiple sites, there are just too many variables and intangibles to make a reliability calculation stick. When it comes to making performance guarantees, hosting companies must keep it simple, and their vendors must provide appropriate support. Switch and Data, for example, guarantees four hours of redundant backup and two hours of non-redundant backup if the utility AC fails.
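One way to see why availability percentages are hard to relate to real operations is to convert them into expected downtime per year, a standard translation of the "nines":

```python
# Convert an availability percentage into expected annual downtime.
# 525,600 minutes per year (non-leap year).

MINUTES_PER_YEAR = 365 * 24 * 60

def annual_downtime_minutes(availability_pct: float) -> float:
    """Expected minutes of downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100.0)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% availability -> "
          f"{annual_downtime_minutes(pct):.1f} min/yr of downtime")
```

Three nines allows roughly 8.8 hours of downtime a year; five nines allows about 5.3 minutes. A customer guarantee expressed in hours of backup, as Switch and Data's is, is far easier to verify than either figure.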
Address system solutions. Power equipment vendors must ensure they fully understand their hosting customers’ system requirements for a given site. One hosting manager at a long-distance carrier lamented that his DC power equipment vendors “needed three times the field force and twice the time” to commission the large DC plants they were installing. Basic installation steps such as torquing lugs and tightening crimps properly can make a difference between uninterrupted service and an outage even with sophisticated power conversion equipment.
Sell price/performance. New public network hosting companies are installing DC and AC UPS power systems in accordance with their assessment of cost, risk and quality of service. Many hosting companies like to adopt a “template” approach to developing their sites in different locations. Level 3 Communications calls it a “cookie-cutter” approach; Genuity refers to it as the “build unit.” This way, these carriers hedge against provisioning too much power ahead of demand.
This provisioning methodology minimizes capital expenditure and disruptions to customers. In reality, many hosting companies have found it is difficult to predict the mix of customers that will show up in the various locales and what their individual AC and DC power requirements will be. So the onus is on power system vendors to deliver flexible solutions that meet customer expectations for high availability and capital conservation.
The debate will continue about the relative cost and performance trade-offs between AC and DC powering methods. To make an informed decision about a given application, hosting companies need to know the comparative AC vs. DC data such as the cost per watt and the cost per watt per minute of backup. In the end, power equipment vendors must offer hosting companies cost-effective solutions while helping them manage the risks for their customers.
John M. Celentano is President of Skyline Marketing Group, Owings Mills, Md. He can be reached at firstname.lastname@example.org.
COPYRIGHT 2001 PRIMEDIA Business Magazines & Media Inc. All rights reserved.