The nuts and bolts of ODBMS – part 2: object database management systems

Barry Wetmore

A look at how object databases work, including some real-life applications

If you’re still trying to wrap your mind around the idea of an object database, try this: Think of an object database as a big bucket of tennis balls.

A relational database would store the rubber for the tennis ball in one part, the fuzz in another part and the adhesive somewhere else. Players would have to put a tennis ball together each time they wanted to serve. In this instance, a relational database would significantly slow down performance and turn a tennis tournament into a year-long event. However, in an object database, the tennis balls are stored exactly as you see them, in complete units ready to be used.

In today’s fast-paced telecommunications world, the tennis balls might represent enhanced services. When designing enhanced messaging that incorporates features such as interactive voice response delivered over wireline and wireless networks and bundled with cellular data (e-mail/fax via cellular), the service provider must carefully consider provisioning, surveillance and billing implications. The tools needed to set up these kinds of services – rapid prototyping, distributed data access and predictable performance – all benefit from object languages and object databases because, like our tennis players, service providers won’t have to stop each time to assemble all the necessary pieces.

How It Works

Pure object database management systems (ODBMSs) provide traditional database functionality (persistence, distribution, integrity, concurrency and recovery) but represent information models based on object models rather than relational models. ODBMSs typically provide permanent, immutable object identifiers to guarantee data integrity over the entire application life cycle.
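
To make the idea of a permanent object identifier concrete, here is a minimal C++ sketch; the Subscriber class, the ObjectStore wrapper and its store/fetch calls are hypothetical illustrations, not any vendor's API. The point is that the identifier handed out at creation never changes, even as the object's state does.

    // Minimal sketch: each persistent object receives a permanent, immutable
    // identifier when it is first stored, and callers hold that identifier
    // rather than a value-based key that could change.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>

    using ObjectId = std::uint64_t;            // permanent; never reused or reassigned

    struct Subscriber {                        // an ordinary application object
        std::string name;
        std::string phoneNumber;
    };

    class ObjectStore {                        // stand-in for the ODBMS
        std::map<ObjectId, Subscriber> objects_;
        ObjectId nextId_ = 1;
    public:
        ObjectId store(const Subscriber& s) {  // identity is fixed at creation time
            ObjectId oid = nextId_++;
            objects_[oid] = s;
            return oid;
        }
        Subscriber& fetch(ObjectId oid) { return objects_.at(oid); }
    };

    int main() {
        ObjectStore db;
        ObjectId oid = db.store({"Pat", "555-0100"});
        db.fetch(oid).phoneNumber = "555-0199";    // state changes; identity does not
        std::cout << "object " << oid << " -> " << db.fetch(oid).phoneNumber << "\n";
    }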

Pure ODBMSs generally provide transparent distributed database capabilities (transparent object migration and distributed interactions) and can include advanced database management system (DBMS) functionality such as work group support, continuous availability and integral event notification. These systems are best suited for distributed data management applications.

Often, users who ultimately require a pure ODBMS first attempt projects with a relational DBMS (RDBMS), only to find the relational model too restrictive. In other cases, users migrate to pure ODBMSs from hard-to-maintain, internally developed database managers and flat file systems.

Scalability is one of the most desired attributes of information systems, and the term has a wide variety of meanings: portability across platforms; distribution of data across the network for load balancing, performance optimization and locality of reference; and scaling across time without major discontinuities for basic tasks such as schema modification. Of all these measures, the most significant is graceful, cost-effective scaling of transaction rates as the number of users and the size of the databases grow.

In the realm of object data distribution, a distributed database should appear to a user as a local database. The distribution of data should be completely transparent to the application developer and, ultimately, the user.
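
As a rough illustration of that transparency, the following C++ sketch (with an invented Federation class; no particular product is implied) routes object lookups to whichever database holds the object, so application code never mentions a location.

    // Sketch of location-transparent access: the application asks for an object
    // by identifier and neither knows nor cares which database holds it.
    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <string>
    #include <utility>

    struct Service { std::string name; };

    class Database {                               // one physical database
        std::map<int, Service> objects_;
    public:
        void put(int oid, Service s) { objects_[oid] = std::move(s); }
        bool has(int oid) const { return objects_.count(oid) != 0; }
        const Service& get(int oid) const { return objects_.at(oid); }
    };

    class Federation {                             // presents many databases as one
        std::map<std::string, Database> sites_;
    public:
        Database& site(const std::string& host) { return sites_[host]; }
        const Service& get(int oid) const {        // location resolved internally
            for (const auto& entry : sites_)
                if (entry.second.has(oid)) return entry.second.get(oid);
            throw std::out_of_range("unknown object");
        }
    };

    int main() {
        Federation fed;
        fed.site("east.example").put(1, {"call waiting"});
        fed.site("west.example").put(2, {"voice mail"});
        std::cout << fed.get(2).name << "\n";      // application code names no location
    }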

A key determinant of performance is the level of concurrency supported by the ODBMS. An important goal of any DBMS (relational or object-oriented) is maximizing concurrent usage while minimizing contention (and waiting) for resources. Non-blocking processes – which are key to maximizing operator revenues – are generally considered non-negotiable. In RDBMSs, row-level locking is preferred to minimize contention. Leading RDBMSs from companies such as Oracle lock at the row level, which is the relational analog of object-level locking.

The typical alternative to object-level locking is page-level locking. ODBMSs that lock at the page level often suffer from “false waits,” in which two users working on different data collide, forcing one to wait simply because the data elements happen to reside on the same page. False waits not only cause undesirable lock waits but can also produce false deadlocks, a condition sometimes referred to as “blocking.” Either way, false waits waste system resources and irritate end users.

Object-level locking is just one technique that can be used to improve concurrency. Others include nested transactions, which provide support for concurrent multiwindowed applications, and user-defined locking, which enables a programmer to customize the lock mode, maximizing concurrency in particularly demanding environments.
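
The difference between the two granularities can be shown with a small C++ sketch; the lock managers below are deliberately simplified stand-ins (no queuing, no deadlock detection), not a description of any product's internals.

    // Simplified sketch of lock granularity: with page-level locks, two users
    // updating different objects that share a page collide (a "false wait");
    // with object-level locks, both proceed.
    #include <iostream>
    #include <set>

    constexpr int kObjectsPerPage = 128;

    class PageLockManager {                        // locks whole pages
        std::set<int> lockedPages_;
    public:
        bool tryLock(int objectId) {
            return lockedPages_.insert(objectId / kObjectsPerPage).second;
        }
    };

    class ObjectLockManager {                      // locks individual objects
        std::set<int> lockedObjects_;
    public:
        bool tryLock(int objectId) {
            return lockedObjects_.insert(objectId).second;
        }
    };

    int main() {
        PageLockManager pages;
        ObjectLockManager objects;
        std::cout << std::boolalpha;
        // Objects 10 and 11 are different data but live on the same page.
        std::cout << "page-level:   A=" << pages.tryLock(10)
                  << " B=" << pages.tryLock(11) << "\n";   // B blocked: a false wait
        std::cout << "object-level: A=" << objects.tryLock(10)
                  << " B=" << objects.tryLock(11) << "\n"; // both succeed
    }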

The Client-Server Arena

There are three major ways to implement a database in a client-server model, each with distinctive strengths and weaknesses (Figure 1 omitted).

In the first model, RDBMSs employ a server-centric design: the client performs no DBMS functions and sends only SQL statements to the server for processing. Although this model is appropriate for host-based computing, the server-centric approach is now considered outdated, given client workstations equipped with many megabytes of low-cost memory and the need to conserve network bandwidth for revenue-related payloads.

Performing database processing on the server has two key disadvantages in the modern client-server environment. First, it requires network input and output for all database interactions, even for rereading frequently referenced, or cached, data.

Second, it fails to exploit the power of today’s client machines, forcing the processing to be completed on costlier servers. In many ways, the server-centric RDBMS implementation of the client-server model is a holdover from the days of host-based computing.

In the second model, ODBMSs generally place more processing responsibility on the client. Some ODBMSs implement a client-centric model, in which the server is reduced to an unintelligent page server, incapable of performing any function other than sending requested pages to clients. Although this approach exploits client processing power, it also causes excessive network traffic: because the page server often cannot perform basic data manipulation functions such as applying restrictions to the data, all of the data in a database may have to be transferred from the server to the client for query processing.

In a stand-alone system, the client and server reside on the same machine. For client-server systems, however, the ODBMS must take steps to minimize network traffic and avoid overloading client applications with potentially irrelevant data.

The third, more balanced approach divides DBMS functionality between client and server. Balanced systems perform some DBMS functions, such as transaction management and session management, on the client machine and others, such as lock management and logging, on the server. The balanced design minimizes the number of network transfers by caching on both the client and the server, so frequently referenced data can be reread without any network traffic. In addition, the ability to perform server-based query processing ensures that only requested data is transmitted from server to client, again minimizing network traffic and maximizing performance.
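
A short C++ sketch of the server-side half of that design follows; the Circuit type and serverQuery function are hypothetical, and the "network" here is just a function call, but it shows why shipping only the qualified objects keeps traffic down.

    // Sketch of server-based query processing in the balanced model: the server
    // applies the restriction, so only the qualifying objects cross the network.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Circuit { int id; std::string status; };

    // Stand-in for the server half of the DBMS: it holds the data and runs the query.
    std::vector<Circuit> serverQuery(const std::vector<Circuit>& all,
                                     const std::string& status) {
        std::vector<Circuit> qualified;
        for (const auto& c : all)
            if (c.status == status) qualified.push_back(c);   // filtered at the server
        return qualified;                                      // only these are shipped
    }

    int main() {
        std::vector<Circuit> database = {{1, "ok"}, {2, "alarm"}, {3, "ok"}, {4, "alarm"}};
        for (const auto& c : serverQuery(database, "alarm"))   // client gets 2 of 4 objects
            std::cout << "alarm on circuit " << c.id << "\n";
    }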

Overlooked Areas

Disk space management and I/O are frequently overlooked areas of DBMS technology. When they are not working properly, the effects can be felt throughout an entire organization.

In the vast majority of commercial applications, DBMS performance is more dependent on I/O than on any other computing resource. The cost of I/O outweighs other costs by several orders of magnitude.

One of the most common problems with DBMSs is “saw-toothed” performance: performance that slowly degrades as the database fragments. Systems that allow the database to fragment cannot sustain performance because they lack two important capabilities: on-line reclamation of deleted space and on-line reorganization of data.

Benchmarks typically fail to detect saw-toothed performance because they are not run long enough to exercise a system’s anti-fragmentation techniques – or expose the lack thereof.

Although I/O often dominates database performance, certain applications that repeatedly reread cached data can be sensitive to central processing unit (CPU) usage. A “warm traversal” operates on data that has already been cached locally for reuse by the application; a “cold traversal” occurs when the data must first be pulled from the database server to the client.
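
The distinction is easy to see in code. The C++ sketch below uses an invented client-side cache (not a real ODBMS client library): the first access to an object is cold and costs a server fetch, while repeat accesses are warm and cost only CPU.

    // Sketch of cold vs. warm traversals with a client-side object cache.
    #include <iostream>
    #include <map>
    #include <string>

    struct PortRecord { std::string state; };

    class ClientCache {
        std::map<int, PortRecord> cache_;
        int serverFetches_ = 0;
        PortRecord fetchFromServer(int id) {       // simulates a network round trip
            ++serverFetches_;
            return PortRecord{"port " + std::to_string(id) + " in-service"};
        }
    public:
        const PortRecord& get(int id) {
            auto it = cache_.find(id);
            if (it == cache_.end())                // cold traversal: miss, go to the server
                it = cache_.emplace(id, fetchFromServer(id)).first;
            return it->second;                     // warm traversal: local, CPU cost only
        }
        int serverFetches() const { return serverFetches_; }
    };

    int main() {
        ClientCache cache;
        cache.get(42);                             // cold
        cache.get(42);                             // warm
        cache.get(42);                             // warm
        std::cout << "server fetches: " << cache.serverFetches() << "\n";   // prints 1
    }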

Some ODBMS vendors have overzealously reduced CPU usage on warm traversals, but at great cost to product functionality, distribution, heterogeneity and data integrity. Although the temptation exists to minimize warm traversal CPU usage regardless of real-world application requirements, warm traversal performance is not a quest unto itself but one of many requirements that must be balanced against the comprehensive needs of the ODBMS application.

Users should set aggressive warm traversal performance thresholds that must be surpassed, but they also need to focus on the other factors contributing to ODBMS performance. Be aware that some vendors and many benchmarks misleadingly emphasize warm traversal performance alone, an approach that can leave customers with an inaccurate view of real-world performance.

Telecom Applications

The following list of applications will help you better understand how ODBMSs are being used in the telecom industry:

Remote digital terminals. RDTs serve highly concurrent environments such as digital loop carrier systems. Supervisory systems and legacy operations support systems must access a large number of objects at the same time. For this type of application, the ODBMS must provide object-level locking for optimal performance, eliminating the needless concurrency conflicts that characterize some object database implementations.

Transparent data distribution allows multiple supervisory systems to exchange objects among groups of RDTs across the network as they need updated information. Additionally, an ODBMS that operates 24 hours a day, seven days a week, with on-line addition of data volumes and on-line compaction to reduce overhead, can be incorporated into the architecture to provide sustained performance and continuous availability.

Asynchronous transfer mode switch management. Like RDTs, ATM switches represent highly concurrent environments. As many as 200,000 to 400,000 active objects can be in use by 200 to 1,000 execution threads within a switch at any one time. An ODBMS with object-level locking can provide the level of concurrency required for this type of application and can be used for both intra- and inter-switch management, handling a single switch as well as a group of switches.

ODBMSs are ideally suited for broadband transmission applications because they can reduce overhead and provide sustained performance. Equally important is their ability to support continuous availability, enabling dynamic switch configuration updates to on-line systems.

Service control points, service nodes and intelligent peripherals. Telecom providers are facing a services dilemma. On one side, customers are calling for rapid creation and deployment of new services such as customized routing, personal numbers and voice-activated custom local area signaling services. On the other side, the cost of implementing an Advanced Intelligent Network to support these services is prohibitive.

Telephone companies are discovering they don’t have to wait for end-to-end digital switching and SS7 signaling facilities to reap the rewards offered by intelligent network applications. Carriers can achieve very similar functionality by using existing switches at a fraction of the cost through intelligent peripheral/service node technology, which uses low-cost, Unix-based platforms.

ODBMSs are being used to create intelligent peripheral and adjunct-type products in which services can be implemented quickly and with minimal additions to network infrastructure. They serve as the main databases for service control points, service nodes and intelligent peripherals because these systems require high-speed, complex associative lookup and data access in a distributed environment.

Some services databases are read-only, some are read-mostly/write-sometimes and others are read/write. In all cases, though, services databases demand a high level of concurrency because they handle hundreds of calls each second and require quick remote access to the data without contention.

ODBMSs with object-level granularity provide maximum performance in a highly concurrent distributed environment. These databases minimize network traffic by balancing computer resources between the client and server – processing queries at the server, where they are closest to the data – and returning only the qualified objects to the client.

Head office collector. As telcos add new services – cellular, ATM, personal numbers, voice messaging and calling cards – and build advanced networks, the mundane tasks of accounting and billing present new and complex challenges. Customers are demanding new billing services such as fraud protection, credit limit analysis, daily billing information, message detail recording and consolidated billing.

In addition, new requirements such as TR-1343 automatic message accounting (AMA) call for very complex data structures and real-time, on-line billing modifications. These, in turn, are creating the need for a new technology for real-time data acquisition and billing systems.

ODBMSs are helping AMA become a reality by enabling real-time data filtering, scoping and discrimination in support of AMA data server, processing and management system functionality. The AMA data model is specified in object-oriented terms, making its implementation a perfect fit for an object database.
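
As a rough illustration of that filtering step, here is a C++ sketch; the CallRecord fields and the billing rule are invented for the example and are not taken from the AMA specification.

    // Sketch of real-time filtering and scoping of AMA-style call records:
    // records are screened as they arrive so only the records in scope move on
    // to downstream billing processing.
    #include <iostream>
    #include <string>
    #include <vector>

    struct CallRecord {                       // invented fields, for illustration only
        std::string callingNumber;
        std::string calledNumber;
        int durationSeconds;
    };

    // Keep only the records that fall within the billing scope.
    std::vector<CallRecord> filterForBilling(const std::vector<CallRecord>& incoming,
                                             int minDurationSeconds) {
        std::vector<CallRecord> billable;
        for (const auto& r : incoming)
            if (r.durationSeconds >= minDurationSeconds) billable.push_back(r);
        return billable;
    }

    int main() {
        std::vector<CallRecord> batch = {
            {"913-555-0100", "816-555-0101", 4},       // below the billing threshold
            {"913-555-0102", "816-555-0103", 125}};
        for (const auto& r : filterForBilling(batch, 30))
            std::cout << r.callingNumber << " -> " << r.calledNumber
                      << " (" << r.durationSeconds << "s)\n";
    }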

Operations support systems. Telcos are burdened with a huge investment in legacy operations support systems (OSSs) that for years have provided the centralized alarm, configuration, performance, accounting and security management functions for older technology network elements. However, these systems are unable to directly manage the RDTs, mobile telephone switches, ATM switches, intelligent peripherals and other new technology constantly being added to the network. These providers need a cost-effective way to leverage the investment represented by legacy OSSs while incrementally adding more sophisticated network elements.

ODBMSs can augment existing OSS architectures by mediating the exchange between the older command structures and the new technology network elements. They provide a way to take the information held in legacy OSSs in its native data model and convert it to a model appropriate for managing these new technology network elements.

Customer network management. A recent trend among large businesses has been to lease lines from their local or interexchange carrier for their private voice and data networks. At the same time, these companies are demanding control over those networks while outsourcing only the networking infrastructure. They want to generate and track their own service order requests, reconfigure and provision their leased lines, and get reports on billing, network traffic and transport parameters without calling the phone company to request the changes or reports.

Customer network management gives businesses both insight into and control over their strategic networking facilities. Versant Object Technology, for example, is providing the technology required for a major interexchange carrier to deliver customer network management to its ATM network customers. The system gives customers control over a network employing different types of ATM switches, customer premises interfaces and routers from different vendors.

Obtaining billing, current configuration, or fault and performance information can be challenging in such a heterogeneous environment, where each device requires a specific protocol for communications and control. ODBMSs can hide the details of the communications protocols and OSS languages used to control and monitor these devices.

RELATED ARTICLE: Customer care revamped

Sprint Corp. has successfully taken on AT&T to gain a piece of the long-distance market, and its flexible billing systems have helped it compete. But the company has had difficulties with its acquisition process for business services: when a Sprint sales representative approached a prospective business customer, the customer faced a host of forms and often a two- or three-week delay before service could begin.

Sprint created a new program, the Customer Information Systems Extension (CISX), which covers Sprint’s commercial customer voice products and contains an extensive database of product options.

CISX, an object-oriented program, separates products and data from application logic and the presentation layer. The product repository resides in an object database management system, which allows Sprint to easily modify the system to reflect new products and programs.

“Object technology made a lot of sense. Sprint is frequently on the bleeding edge of technology. We are not afraid to take risks with emerging technologies if it means a competitive edge,” says Michael Rapken, director of customer acquisition and management for Sprint’s Network Service Delivery.

Barry Wetmore is Director of Telecommunications at Versant Object Technology, Menlo Park, Calif. Part 3 of this four-part series on object-oriented technology, appearing in the May 20 issue, will examine some of the best implementations of these applications.

COPYRIGHT 1996 PRIMEDIA Business Magazines & Media Inc. All rights reserved.
