Rogue Web services: risks and success strategies

Matthew Fuchs

Like the hero of a Greek tragedy, Web services' most compelling advantages are simultaneously its most serious dangers. Web services have passed the initial hype cycle. The convergence of industry support, ease of use, and the desire for cost-effective integration and service-oriented architecture (SOA) solutions has made Web services a popular choice for architects, developers, and integration analysts, with numerous projects underway. Web services technologies are making inroads within organizations in much the same way Web site technologies proliferated. However, the benefits of loose coupling, decentralized development, and support for heterogeneity, which enable rapid grassroots development of Web services with flexible, agile architectures, introduce a multitude of new issues organizations must address to keep the negatives from outweighing the positives. Security, reliability, and performance are all critical issues that must be actively managed in a Web services environment. This article looks at "rogue Web services," already a growing concern in IT, particularly for organizations that have not applied top-down governance to their usage.

Rogue Web Services

A rogue Web service (RWS) is a Web service that's out of control. It might be perfectly benign but unsanctioned by IT. Or it might be intentionally malicious, either attacking your systems or squatting on and consuming your resources. It might even be an officially sanctioned service that unintentionally starts hammering other Web services due to a coding bug. Of course, even the most benign rogue service could turn up in that last category at any time; almost by definition, it hasn't gone through the same QA or testing as production code.

Perhaps the most compelling reason RWS threaten to become a significant danger is the ease with which they can be created. Although veterans of earlier large-scale distributed technologies, such as DCOM and CORBA, frequently disparage Web services, those earlier systems were very complex, requiring a fair amount of knowledge and programming skill to deliver a functional application. In addition, distributed object technologies were never able to break out of their silos. The prime differentiators between those systems and Web services are the ease with which a Web service can be constructed to perform fairly sophisticated tasks, and the loosely coupled nature of Web services technologies. These significantly lower the barriers to entry, both in the technical know-how needed to build a Web service and in the time required to get a new service initiated or integrated with an existing one. And that significantly increases the number of people capable of building an RWS.

Unsanctioned internal Web services, particularly clients, but servers as well, can arise on any computer accessible through HTTP. It takes relatively little programming skill to run a Perl or Python script in a command shell that listens for requests on a particular port, does some additional processing, and returns the results. From there, it's also possible to create a Web service client that creates messages for a variety of Web services and coordinates the results. These are often called "composite" Web services, but despite becoming a buzzword they are scarcely more difficult to build than ordinary ones, especially if you don't worry about making them safe.
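To make the point concrete, here is a hypothetical sketch of just how little code such an unsanctioned listener requires, using nothing but the Python standard library. The port, handler name, and echo-style "processing" are all illustrative assumptions, not a real service.

```python
# A minimal, hypothetical rogue listener: accepts SOAP-style POSTs on a port
# and answers them. A real script might call internal systems instead of echoing.
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_soap(body):
    # Crude "processing": wrap a reply in a response envelope.
    return ('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
            f'<soap:Body><EchoResponse>{len(body)} bytes received</EchoResponse>'
            '</soap:Body></soap:Envelope>')

class RogueHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        reply = handle_soap(self.rfile.read(length).decode("utf-8", "replace"))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml; charset=utf-8")
        self.end_headers()
        self.wfile.write(reply.encode("utf-8"))

# To run the listener (blocks forever, on any open port):
#   HTTPServer(("", 8080), RogueHandler).serve_forever()
```

Roughly twenty lines, no special libraries, no deployment process: this is the scale of effort the article is describing.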

The primary means of describing a Web service, WSDL, is a fairly easy-to-read interface definition language. Unlike with CORBA or ASN.1, where stub generation required specialized tooling and expertise, an astute programmer can easily generate a stub from a WSDL description, and generators for common programming languages are widely available. Even where a generator is not available, a message itself is often self-explanatory; a new message can be "cloned" from an old one just by replacing bits and pieces.
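The "cloning" technique can be sketched in a few lines. The captured envelope, the `GetQuote` operation, and the `symbol` field below are hypothetical examples of a message an observer might have intercepted; the point is that no WSDL tooling is needed at all.

```python
import re

# A hypothetical message captured on the wire.
captured = ('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
            '<soap:Body><GetQuote><symbol>IBM</symbol></GetQuote></soap:Body>'
            '</soap:Envelope>')

def clone_message(template, **fields):
    # Swap out selected element contents, leaving everything else untouched.
    msg = template
    for tag, value in fields.items():
        msg = re.sub(f"<{tag}>.*?</{tag}>", f"<{tag}>{value}</{tag}>", msg)
    return msg

new_msg = clone_message(captured, symbol="MSFT")
```

Textual substitution on an observed message is all it takes to produce a new, valid request.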

The barrier is even lower when Web services are easily integrated into the latest versions of popular desktop software, such as Word macros, Excel spreadsheets, and PowerPoint slides. There is explicit Web service support in MS Office 2003, but it is possible to access Web services through macros and extensions in earlier versions. Given an RPC-style service, a stub needs only a URL, a function name, and a list of parameter names, types, and values to create a SOAP message. For simple return values, little is needed beyond simple pattern matching to retrieve the answer. A PowerPoint slide set containing a Web services call, made publicly available, could generate a request every time a particular slide is viewed.
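A sketch of that recipe, assembling an RPC-style SOAP message from exactly those ingredients and retrieving a result by pattern matching. The namespace, operation, and parameter names are hypothetical.

```python
import re

def build_rpc_call(ns, func, params):
    # params: list of (name, xsd_type, value) tuples -- illustrative names only.
    args = "".join(f'<{n} xsi:type="xsd:{t}">{v}</{n}>' for n, t, v in params)
    return ('<soap:Envelope'
            ' xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"'
            ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
            ' xmlns:xsd="http://www.w3.org/2001/XMLSchema">'
            f'<soap:Body><m:{func} xmlns:m="{ns}">{args}</m:{func}>'
            '</soap:Body></soap:Envelope>')

def extract_result(response_xml, tag="return"):
    # Simple pattern matching is enough to pull out a scalar return value.
    m = re.search(f"<{tag}[^>]*>(.*?)</{tag}>", response_xml)
    return m.group(1) if m else None

msg = build_rpc_call("urn:example:quotes", "GetQuote",
                     [("symbol", "string", "IBM")])
```

This is the entire "stub": a string template plus a regular expression, well within reach of an Office macro.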

Once a Web services message is prepared, it moves along one of the most ubiquitous and familiar protocols ever created: HTTP. Many programming languages already have libraries to create HTTP messages, but it is easy to create an HTTP message by hand and send it along a socket. From a programming perspective, it is a simple request/response requiring very little code.
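Composing the HTTP request by hand and pushing it down a socket is as simple as the text suggests; the sketch below assumes a generic host, path, and SOAP body.

```python
import socket

def soap_http_request(host, path, body, action=""):
    # Compose the raw HTTP POST by hand -- no HTTP library required.
    payload = body.encode("utf-8")
    return ((f"POST {path} HTTP/1.1\r\n"
             f"Host: {host}\r\n"
             "Content-Type: text/xml; charset=utf-8\r\n"
             f'SOAPAction: "{action}"\r\n'
             f"Content-Length: {len(payload)}\r\n"
             "Connection: close\r\n"
             "\r\n").encode("ascii") + payload)

def send_raw(host, port, request):
    # A few lines of socket code stand in for an entire middleware stack.
    with socket.create_connection((host, port)) as s:
        s.sendall(request)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```

The request/response round trip that CORBA wrapped in an ORB is here a dozen lines of portable code.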

So we see that Web services lower the barriers to entry for the construction of distributed applications for legitimate developers and users as well as for illegitimate ones.

Risks Associated with Rogue Web Services

Rogue Web services traffic is more difficult to protect against than random traffic because much of the danger lies in information targeted at the application level, which cannot be filtered at the IP level the way traditional firewalls operate. It is quite possible for rogue traffic to originate behind the firewall, from people in your own IT shop or even from end users. Also, RWS cannot be identified just by source and destination IP; the message may be coming from an RWS at a partner location, so it's important to cut off just the aberrant user, not the entire site. The destination host may expose any number of Web services, distinguished by information not accessible at the IP level or even at the Web server proxy level. While the server may recognize the URL, the actual identity of the operation being invoked is in the contents of the message, requiring a level of filtering capable of looking at application-level information.
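The gap between IP-level and application-level filtering can be illustrated with a small sketch: two requests to the identical URL can invoke entirely different operations, and only inspection of the SOAP Body reveals which. The operation names below are hypothetical.

```python
import re

def soap_operation(http_body):
    # The first element inside the SOAP Body names the operation being invoked;
    # nothing at the IP or URL level carries this information.
    m = re.search(r"<(?:\w+:)?Body>\s*<(?:\w+:)?([A-Za-z_]\w*)", http_body)
    return m.group(1) if m else None
```

A firewall that sees only addresses and ports treats a harmless balance query and a destructive administrative call as the same traffic.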

RWS, even of the most benign sort, represent a threat to a company's ability to control its own destiny. Even setting aside, for the moment, the worst possible abuses, unknown Web services can create a considerable drain on network resources. Allowing unimpeded grassroots development of Web services without any centralized attempt at standardization can lead to significant duplication as well as many avoidable mismatches among Web services. While the flexibility of the Web services SOA makes it much easier to deal with independently developed Web services, a small investment in shared design can go a long way toward avoiding extra work in the long run. Therefore, it is important for an organization to control the set of technologies used.

As many Web services are a thin layer over existing applications, once access to a Web service spreads beyond the approved users, the damage can be as bad as any other kind of intrusion. The intruder can have the same kind of impact as anyone who has logged into your system. As more functionality becomes accessible through Web services, such as management and provisioning, there won't be much that can't be done using Web services. Worse yet, if your security credentials, such as a private key, are stolen, then it is not just your internal systems that are compromised, but your expanded Web services environment as well, including fee-based services.

Success Strategies

Every organization is different. The most successful strategies depend not only on the technologies that are being used but also on the people and organizations involved. Organizationally, many IT groups deal with the rogue service issue through top-down governance, usually by an architecture and standards body. These groups define the ground rules for how services are created, what standards should be followed, and the rules that are required for corporate and industry compliance. In other organizations, governance of Web services is enforced by the CISO or associated security group. In still other organizations, it may be defined and enforced by the IT operations group. More often than not, all of these groups are somehow involved in defining the minimum security, monitoring, and management requirements for WS development, deployment, and management.

Many tools exist for detection, enforcement, and management of the XML Web service environment. A variety of sniffer tools are available for detecting XML and SOAP traffic, many of them free. Using simple rules, you can determine whether the traffic is unsanctioned and fire off the necessary alert. Firewalls and other proxies can also be configured to perform content inspection, although they may lack sophisticated rule sets and the performance for more demanding environments. UDDI directories and other service directories can be used to store sanctioned Web services to help ease management. A newer class of product, XML Firewalls and Web Services Management (WSM) platforms, can be used to address the security, monitoring, and management of services. These products are typically noninvasive and help detect and address RWS while providing a management framework and set of tools to enforce top-down governance requirements. Many analysts agree that a fully integrated XML firewall and WSM solution provides, among many other benefits, the best solution for enforcement and ongoing administration of RWS.
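The "simple rules" such a sniffer applies might look like the following hypothetical sketch: first classify traffic as SOAP, then check the endpoint against a sanctioned list (of the kind a UDDI directory might hold). The paths and rule strings are illustrative assumptions.

```python
import re

# Hypothetical sanctioned endpoints, e.g. mirrored from a service directory.
SANCTIONED_PATHS = {"/services/quotes", "/services/orders"}

def inspect(path, http_body):
    # Rule 1: is this SOAP traffic at all?
    if not re.search(r"<(?:\w+:)?Envelope\b", http_body):
        return "ignore: not SOAP"
    # Rule 2: is the endpoint on the sanctioned list?
    if path not in SANCTIONED_PATHS:
        return f"alert: unsanctioned SOAP traffic to {path}"
    return "ok: sanctioned"
```

Real products add authentication, schema validation, and rate controls on top, but the detection core is exactly this kind of content-aware rule.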

Nevertheless, an important part of the value of Web services is lost in a regime that is too strictly maintained. Not all Web services are made equal, and infrastructures that don't appropriately distinguish among the varying requirements will veer unacceptably in one direction or another. An effective regime will be able to distinguish between core and periphery, where the core represents the bottom tiers of a client/server architecture and the periphery represents Web service clients. Another important distinction is between services that cause dynamic updates to information or consume significant resources (such as money), and those that don't and may be simply informational. Rather than taking an overly restrictive stance, tools can be used to create policies that adaptively manage Web services traffic, so that important systems are accessible only from approved clients while others can be accessed in a more relaxed fashion, with content filters at the periphery to inspect outgoing information.
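The two distinctions above (core versus periphery, updating versus informational) can be combined into a small policy table, sketched here with hypothetical tier names and rule labels; an actual WSM product would express the same idea in its own policy language.

```python
# Hypothetical tiered policy: default-deny anything not explicitly listed.
POLICY = {
    ("core", "update"):      "approved-clients-only",
    ("core", "read"):        "authenticated",
    ("periphery", "update"): "authenticated",
    ("periphery", "read"):   "open-with-outbound-content-filter",
}

def access_rule(tier, op_class):
    # Unknown combinations fall through to deny rather than to open access.
    return POLICY.get((tier, op_class), "deny")
```

The table makes the trade-off explicit: the core stays locked down, while informational services at the periphery remain open enough to preserve the grassroots value of Web services.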

The proliferation of RWS is not necessarily a bad sign. In fact, it might be taken as an indication of the benefits Web services provide organizations today. However, there are associated risks when Web services traffic is not appropriately controlled. A combination of proper procedures and controls, together with the appropriate technologies, can enable any organization to realize the full value of Web services while minimizing the security and cost risks.

Dr. Matthew Fuchs is a member of the technical staff at Westbridge Technology. Previously, he was chief scientist for XML Technologies at Commerce One, and pioneered the theory and practice of using domain-specific languages in XML and SGML for distributed applications and agent-oriented communication over the Internet. At Commerce One he developed a variety of XML technologies, including SOX, the first implemented, publicly available, object-oriented Schema language and parser for XML.

COPYRIGHT 2004 Sys-Con Publications, Inc.
