Cadastral survey accuracy standards
Craig, Belle A
Introduction
This paper will examine the impacts of changing technology on the accuracy of cadastral boundary surveys, and how those changes can affect the actual terms in which accuracy is expressed. It also explores the changing roles and responsibilities of the cadastral surveyor and the private sector survey community in defining, developing, and managing survey data for inclusion in a National Integrated Land System.
The rapid rate at which advances in technology are occurring has outpaced the ability of many individuals and organizations to react quickly and appropriately to change. It will take years to fully understand the impact of emerging technology on the land surveying profession. The fields of land surveying and mapping have benefited from innovations in personal computers, total station instruments, and global positioning system (GPS) equipment, to name just a few. New tools for rapid acquisition of measured data are continually being developed and refined, and the Internet has provided the means to share such data with people worldwide.
Most of the public land in the U.S. is in the western states and in Alaska. Many socio-economic changes have occurred in the western U.S. that challenge the multiple-use philosophy of government agencies managing federal lands. Recreation and tourism on public lands have increased and often collided with the interests of ranchers and miners. Increased awareness of the environment and the sensitivity of natural ecosystems to external influences have changed land management policy on federal lands. Federal land management agencies have turned to technology, specifically Geographic Information Systems (GIS), as an aid to make complex management decisions about federal lands.
The Public Land Survey System (PLSS) is the basis of land tenure in 30 of the 50 states in the U.S. The boundaries of public land, often legally defined by the PLSS, are a federal land management agency’s first line of evidence of the existence of a federal interest in the lands they are tasked with managing. In response to the GIS needs of land managers, the Bureau of Land Management, Cadastral Survey has diversified its mission. Charged with establishing, marking, and maintaining boundaries of public lands, Cadastral Survey is participating in the development, design, and implementation of a National Integrated Land System (NILS).
The increased need for better tools to manage complex issues in a GIS environment has spawned the need for Cadastral Survey to develop and manage a Geographic Coordinate Data Base (GCDB) (http://www.blm.gov/nhp/what/lands/title/cadastre.htm). This database will serve as a spatial representation of the Public Land Survey System in a GIS environment, and it will be actively managed by cadastral surveyors. The database is still incomplete in some states and, currently, the accuracy of the GCDB data is only of map quality. The database has been designed to allow for ongoing improvements to the accuracy of the spatial data with repeated inclusion of modern survey data. The GCDB will be the foundation of the National Integrated Land System.
When conducting surveys of federal lands, cadastral surveyors reference boundary surveys to the National Spatial Reference System (NSRS). This system is a network of Continuously Operating Reference Stations (CORS) and horizontal and vertical control stations maintained by the National Geodetic Survey (NGS). Geodetically referenced survey data are used to define the link between the Public Land Survey System and natural resources managed by federal land management agencies. Inclusion of accurately measured geographic data will serve to further refine the overall accuracy of the Geographic Coordinate Data Base, as well as provide accurate project control for future boundary surveys of public lands.
Current Cadastral Survey accuracy standards are inadequate and need to be changed to reflect the way modern field surveys are conducted and to be consistent with the Federal Geographic Data Committee (FGDC) standards for spatial data to facilitate data sharing (FGDC 1998). This paper will address the responsibility of Cadastral Survey to redefine accuracy standards for control surveys, which reference boundary surveys to the NSRS, and the accuracy standards for boundary surveys of federal lands.
Cadastral Survey Limits of Closure
Traditionally, the various editions of the Manuals of Surveying Instructions have defined survey accuracy in terms best expressed by precision ratios. This method for evaluating survey accuracy is well known and published in many textbooks. Generally speaking, the allowable limits of closure for surveys of federal lands are derived from the summation of all of the latitudes and departures along the surveyed lines of a closed traverse. The purpose of establishing limits of closure is two-fold: the most important reason is to make sure all surveys meet a standard for accuracy, and doing so also allows for the orderly establishment of a uniform Public Land Survey System, which is, by definition, a rectangular system of survey. Standards must be established and met to maintain the general rectangularity of the system. Table 1 provides a summary of limits of closure documented in the Manuals of Surveying Instructions.
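The closure computation described in the manuals can be illustrated with a minimal sketch. The example below, using invented bearings and distances rather than any actual survey record, sums the latitudes and departures of a closed traverse and reports the linear misclosure and the resulting precision ratio.

```python
import math

# Hypothetical closed traverse: (azimuth in decimal degrees, distance in feet).
# Values are illustrative only, not taken from any survey record.
courses = [
    (0.0125, 2640.10),    # roughly north, about one mile
    (89.9958, 2639.80),   # roughly east
    (180.0042, 2640.35),  # roughly south
    (270.0100, 2639.70),  # roughly west
]

sum_lat = sum(d * math.cos(math.radians(az)) for az, d in courses)   # latitudes (northings)
sum_dep = sum(d * math.sin(math.radians(az)) for az, d in courses)   # departures (eastings)
perimeter = sum(d for _, d in courses)

misclosure = math.hypot(sum_lat, sum_dep)      # linear misclosure of the loop
precision_ratio = perimeter / misclosure       # reported as 1:precision_ratio

print(f"linear misclosure = {misclosure:.2f} ft")
print(f"precision ratio   = 1:{precision_ratio:,.0f}")
```

A perfect set of measurements would close exactly; the ratio only states how large the loop misclosure is relative to the distance surveyed.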
This look at historical instructions demonstrates that cadastral survey has gradually increased the expectation of the accuracy of field surveys and also changed the manner in which a standard for accuracy is expressed. The changes were based on refinements made in survey instruments and field survey techniques (The Manual of Survey Instructions, 1947):
In reference to accuracy of surveys … The question relates to the matter of the dependability of the record direction and lengths of lines as currently returned, or the reliance that can be placed on those values. To what extent can those values be incorporated safely into other surveys that presume to set up definite standards of accuracy, or mapping purposes of various classes. … This is a test that bears directly on the improved technique, which is now practiced in the making of public land surveys (3-234:238).
It should be noted that the standards of accuracy documented in manuals of survey instruction were created primarily for application to original and completion surveys. The General Land Office did not officially allow for dependent resurveys or retracements of original surveys of the PLSS until the passage of the Resurvey Act of March 3, 1909 (35 Stat. 845) as amended June 25, 1910 (36 Stat. 884: 43 U.S.C. sec. 772).
The Manual of Survey Instructions, 1947 refined a standard of accuracy that offered varying “classes of surveys.” These classes treated factors such as the difficulty of terrain and the value of the land being surveyed as legitimate considerations in determining the expected standard of accuracy. The issue of land values is still very important. A hundred years ago the value of public land in mountainous terrain, in a wilderness, may have been considered negligible. Many Americans currently feel that the value of our public land that has remained a wilderness is priceless. Consideration of land value and terrain should continue to play a role in the development and application of any new cadastral survey standards. The difference in land values in urban and rural areas should also be taken into consideration when analyzing errors associated with boundary surveys (3-234:238-9).
Direct and Indirect Measurements
Measurements are made to determine unknown quantities. All measurements contain error. There are two methods, direct and indirect, by which measured quantities are determined. Direct measurements are made by applying an instrument directly to the unknown quantity and observing its value, such as measuring the distance between two points with a tape. Surveyors make indirect measurements when, for example, they measure angles and distances directly to a point in order to compute the station's coordinates. From these “directly obtained” coordinates, other angles and distances may then be derived indirectly by computation.
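As a minimal sketch of the indirect case, the example below (with invented observations) coordinates two points by radial angle-and-distance observations from a single setup and then derives, by inverse computation, the bearing and distance between them; that derived line was never measured directly.

```python
import math

def radial_point(occ_n, occ_e, azimuth_deg, distance):
    """Coordinates of a point observed by azimuth and distance from an occupied station."""
    az = math.radians(azimuth_deg)
    return occ_n + distance * math.cos(az), occ_e + distance * math.sin(az)

def inverse(n1, e1, n2, e2):
    """Derived (indirect) azimuth and distance between two coordinated points."""
    dn, de = n2 - n1, e2 - e1
    az = math.degrees(math.atan2(de, dn)) % 360.0
    return az, math.hypot(dn, de)

# Illustrative observations from one instrument setup at local coordinates (5000, 5000).
p1 = radial_point(5000.0, 5000.0, 45.0, 300.00)    # direct observation to point 1
p2 = radial_point(5000.0, 5000.0, 120.0, 450.00)   # direct observation to point 2

az12, d12 = inverse(*p1, *p2)                      # line 1-2 is obtained by computation only
print(f"derived line 1-2: azimuth {az12:.4f} deg, distance {d12:.2f}")
```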
Public domain surveyors have historically used direct methods of measurement; they measured boundary lines by staying as close as possible to the true line. Most surveyed lines were measured directly with a chain. Off-line traverse methods only increased the amount of chaining needed to measure a line. Indirect methods of survey, such as triangulation, were used to make measurements across canyons or other obstacles and were generally an exception as a primary method of survey.
Today the most common method of survey measurement is indirect. With the introduction of electronic distance meters (EDMs), chaining as the dominant method of measuring distances became obsolete. Programmable calculators have simplified the task of survey calculations, and surveyors use combinations of field methods such as traversing and radial survey techniques to measure lines. The Global Positioning System (GPS) is by its very design an indirect method of determining a measured quantity. Conceptually similar to a conventional resection, the GPS method enables surveyors to determine the unknown horizontal position of a station from measurements made to GPS satellites whose positions are precisely known.
Both the direct and the indirect method of measuring surveyed lines have their advantages and disadvantages. For example, if the intent of a survey is to post and mark the lines of a timber sale, then direct measurement along the lines to be posted is the best method. However, the direct survey method is often more susceptible to the propagation of errors, such as in the transfer of azimuth, and to a general lack of redundant measurements. In the case of indirect measurements the surveyor can minimize the number of instrument occupations made during the course of the survey. This, unlike direct measurement methods, allows the surveyor to minimize the propagation of error during the course of a survey. Indirect survey methods do, however, require more redundant measurements to validate survey data. Varying terrain and vegetation influence the choice of survey method and equipment, particularly if the goal is to optimize survey efficiency in the field. This notwithstanding, it is at the surveyors' discretion whether they employ indirect or direct methods of survey and which combination of survey instruments they use to complete the survey.
It should be noted that the current Manual of Surveying Instructions, 1973 was issued prior to the common use of the electronic distance meter. Many significant advances in survey technology have occurred in the last two decades. New technology has not only changed the field methods used and the ways survey lines are measured; refinements in instrumentation and the use of GPS have also provided the means to make more accurate survey measurements. Today instrument specifications for accuracy are better, and when the equipment is used properly, it will produce more reliable results. In order to further develop the concepts of accuracy of modern surveys, we need to examine how measured survey data are reduced and evaluated.
Methods of Survey Data Reduction
To understand the impact of new technology on the survey profession, we need to look at the evolution of reducing survey data in the public land survey system. This aspect of surveying has been radically redefined by the personal computer.
The PLSS is a unique survey system, continental in size and rectangular by definition. The PLSS has many unique characteristics, as described in “Geodetic Aspects of Land Boundaries in the PLSS Datum in a Cadastral Computation System” (Wahl et al. 1992):
Many boundaries and most elements of the Public Land Survey System are defined in a geodetic sense, for example lines of constant true bearing, latitudinal arcs, meridians, long, straight lines, parallel and other equidistant lines (p. 1). … Straight lines on the ground are lines of constantly changing bearing (p. 4). … Lines of constant bearing in the PLSS datum will be “curved” on the ground (p. 5).
These “curved” lines would include state boundaries, standard parallels, township exteriors, section lines, subdivisional lines of sections, and many grant and reservation lines. There are exceptions where boundaries are not lines of constantly changing bearing or curved, which might include portions of grant or reservation boundaries and some portions of state boundary lines.
In the current world of surveying there are two widely varying computational methods in common use. The first method is a simple plane survey computation performed on a local orthogonal coordinate system. Another method in use for control survey applications utilizes geodetic systems with spherical or ellipsoidal coordinate systems (latitudes and longitudes). A common variation of geodetic computations is the use of any number of coordinate projections or grids (Wahl et al. 1992, p.1).
Among the computational methods commonly available in existing software, the most prevalent is plane computation based on local orthogonal coordinate systems. Yet it is the responsibility of cadastral surveyors to lay out lines that require what is best described as a geodetic computational system. The methods that surveyors are to follow to achieve intended results are described in the current Manual of Survey Instructions, 1973:
Details of the plan and its methods go beyond the scope of textbooks on surveying. The applications to large-scale area requires an understanding of the stellar and solar methods for making observations to determine the true meridian, the treatment of the convergence of the meridians, the running of true parallels of latitude, and the conversion of the direction of lines so that at any point the angular value will be referred to the true north at that place (pp 1-3).
According to Wahl et al. (1992):
It is generally understood by surveyors that the use of simple plane methods while convenient, is not necessarily suitable for large-scale surveys. Why this is so is not usually so well understood. Plane survey computations have become associated with almost all boundary and construction surveying while geodetic methods are most often associated with control surveying, mapping, and route survey. However there are many large-scale surveys where this distinction between computational systems cannot be maintained. In many large-scale surveys it becomes necessary to deal with some geodetic aspects of the survey (p. 1). … A good example of a survey system with significant geodetic components is one used throughout the western United States, the Public Land Survey System (PLSS)…during the course of the first original surveys… it became apparent that the term “rectangular” is a generality that cannot be effectively maintained over a large survey extent (p. 2).
Various editions of the Manuals of Survey Instructions tell the surveyor how to interpret certain mathematical results obtained when surveying on a sphere or ellipse using a simple plane orthogonal coordinate system. Among them is the “apparent misclosure” due to the convergency of meridians. Addressing this issue, Wahl et al. (1992) wrote: “A theoretically perfect survey will appear to misclose in the PLSS datum” (p. 6).
The common availability of personal and handheld computers has allowed cadastral surveyors to move beyond the simple plane orthogonal coordinate system to spherical and ellipsoidal coordinate systems that reflect the true geodetic nature of the Public Land Survey System. Computers became available to cadastral surveyors before adequate computer software did. Working with the University of Maine, BLM's Cadastral Survey developed the Cadastral Measurement Management (CMM) software. This software was developed specifically for dependent resurveys, but it can be used for original surveys as well. At the time of its development there were no commercially available software systems that fully met the computational needs of the cadastral surveyor in the field. This software has allowed the cadastral field surveyor to work using a spherical or ellipsoidal coordinate system.
When using a continental or global coordinate system, such as latitude and longitude, the surveyor is able to spatially relate his boundary survey to the rest of the world. The premise of directly and accurately relating different types of spatial data, through the use of common coordinate systems, is a fundamental principle of a Geographic Information System. With the widespread development and use of GIS, many software vendors now employ geodetic coordinate systems. Global Positioning Systems and the software to reduce GPS data also use global coordinate systems.
The development of the CMM software made another important tool available to the cadastral field surveyor: the ability to analyze survey measurement data using the statistical method of least squares. At the time CMM was developed, other software vendors also employed this tool for data and error analysis, but in the field of surveying the focus of those applications was on control surveys, not large-scale boundary surveys. Commercially available software still does not include capabilities for the specialized computations related to dependent resurveys of the PLSS, although it soon will.
Although least squares analysis is a relatively new tool for the cadastral surveyor, this method of error analysis was developed in the eighteenth century. The first published article on the subject, entitled “Méthode des moindres quarrés,” was written by Adrien-Marie Legendre in 1805 (http://www.history.mcs.st.andrews.ac.uk/history/Mathematicians/Legendre.html). Originally developed for analyzing celestial observations, the method was first investigated by Pierre-Simon Laplace, who laid its foundation in 1774 (http://www.history.mcs.st.andrews.ac.uk/history/Mathematicians/Laplace.html). Carl Gauss used the method extensively as a student at the University of Göttingen in 1794 and is credited with its development (http://www.history.mcs.st.andrews.ac.uk/history/Mathematicians/Gauss.html). Concurrently with the work of Laplace, Legendre, and Gauss, the first public land surveys, covering parts of Ohio, were made by the Geographer of the United States in compliance with the Ordinance of May 20, 1785. Using a least squares method for the analysis of survey measurements would have been impractical at that time. The Cadastral Survey did not apply adjustments to raw measured data in the past because most adjustments were biased. It has taken over two hundred years for the least squares method to be fully appreciated, but with new technology, using least squares for the adjustment and analysis of measured data has become common practice.
Least Squares Analysis vs. Precision Ratios
All measurements contain errors, and all references in this discussion refer to random error. The treatment of systematic error and blunders is excluded from this discussion. There is a recognized distinction between the terms accuracy and precision. Precision measures the degree of consistency between measurements and quantifies the size of the discrepancies. Accuracy is the absolute nearness of a measurement to the true value of a measured quantity. For the sake of this discussion the term accuracy will refer to relative, not absolute, accuracy because, as reported by Wolf and Ghilani (1997, 1.3:2):
* No measurement is exact;
* Every measurement contains errors;
* The true value of a measurement is never known; and
* The exact sizes of the errors present are always unknown.
Past survey manuals expressed survey quality standards in the form of a closure precision ratio. Using a precision ratio to evaluate survey error has a well defined place in determining the relative precision of past surveys. It is a well understood principle that, during the course of a dependent resurvey, past survey measurements are judged against the limit of closure or standard in place at the time of the original survey. It is because of this that surveyors need to continue to evaluate resurvey data and calculate precision ratios, or loop closures, for their work. The role of the surveyor is not to improve upon the work of historic surveys but to generally evaluate the quality of those surveys made in good faith.
Using precision ratios to evaluate survey work performed today is mandated by the current Manual of Surveying Instructions, 1973. This method of quantifying error makes no attempt to identify measurement mistakes, nor does it impart any information as to the positional error associated with any particular corner point of a survey or dependent resurvey. Precision ratios serve only to imply the general quality of the relative precision of a closed traverse. The loop closure has minimum redundancy and does not evaluate scale or rotational errors. The professional surveyor who is tasked with evaluating the accuracy of his work can easily find tools better suited to produce unbiased results.
Numerous general methods are available to disclose error in survey measurements. For instance, the three angles measured in a plane triangle must sum to 180 degrees, the sum of the angles measured around the horizon at any point must equal 360 degrees, and the sum of the latitudes and departures must equal zero for a closed traverse that begins and ends at the same point. Each of these conditions involves one redundant measurement. In the case of the three angles of a plane triangle, if only two angles were measured, angle A and angle B, the third angle, C, could be computed as C = 180° − A − B. The actual measurement of the third angle is redundant but allows the surveyor to assess the errors in the measurements made. The total angular error could be distributed by adjusting the angles and forcing the sum of the angles of the triangle to equal 180 degrees. This adjustment of the measured data would result in statistically improved precision. There are many different ways to adjust survey measurement data; some are more arbitrary than others.
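A worked version of the plane-triangle case is sketched below with invented angle values: the redundant third measurement exposes the angular misclosure, which is then distributed equally so the angles sum to exactly 180 degrees. This is a simple, somewhat arbitrary adjustment rather than least squares, but it shows how a redundant measurement is used.

```python
# Illustrative measured angles of a plane triangle, in decimal degrees (invented values).
measured = {"A": 59.9986, "B": 60.0022, "C": 60.0007}

misclosure = sum(measured.values()) - 180.0      # revealed only by the redundant third angle
correction = -misclosure / len(measured)         # distribute the misclosure equally

adjusted = {name: value + correction for name, value in measured.items()}

print(f"angular misclosure: {misclosure * 3600:.1f} seconds of arc")
print("adjusted angles:", {k: round(v, 4) for k, v in adjusted.items()})
print("check sum:", round(sum(adjusted.values()), 6))   # exactly 180.0
```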
In surveying, redundant measurements are very important. Prudent surveyors check the magnitude of the error of their work by making redundant measurements. These extra measurements allow the surveyor to assess errors and accept or reject measurements. They also make valid adjustment of survey measurements possible. The more a measurement is validated by additional direct or indirect measurements, the greater the likelihood of the measurement approaching the true value of the measured line. While the process of adjusting a plane triangle is relatively simple, the process becomes much more complex when analyzing large survey networks. Adjustments correct measured values so they are consistent throughout the network. Many methods for adjusting data have been developed, but the least squares method has significant advantages over all of them.
Least squares adjustment is based on the mathematical theory of probability and the condition that the sum of the squares of the errors, multiplied by their respective weights, is minimized. The least squares adjustment is the most rigorous of adjustments, yet it is applied with greater ease than other adjustments because it is not biased. Least squares enables rigorous post-adjustment analysis of survey data and can be used to perform pre-survey planning. These data-processing functions are greatly improved because least squares computes the set of residual errors that has the highest probability of occurring.
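To make the minimization condition concrete, the sketch below solves a tiny differential-leveling network by weighted least squares: the solution is the one that minimizes the sum of the weighted squared residuals. The network, the observed height differences, and the weights are all invented for illustration.

```python
import numpy as np

# Tiny leveling network (all values invented). Benchmark "A" is held fixed;
# the heights of "B" and "C" are the unknowns.
fixed = {"A": 100.000}                       # metres
unknowns = ["B", "C"]

# Observations: (from, to, observed height difference in m, sigma in m).
observations = [
    ("A", "B",  2.345, 0.004),
    ("B", "C", -1.130, 0.003),
    ("A", "C",  1.208, 0.005),               # redundant observation closing the loop
]

A = np.zeros((len(observations), len(unknowns)))
b = np.zeros(len(observations))
w = np.zeros(len(observations))

for i, (frm, to, dh, sigma) in enumerate(observations):
    rhs = dh
    for station, sign in ((to, 1.0), (frm, -1.0)):
        if station in fixed:
            rhs -= sign * fixed[station]     # known heights move to the right-hand side
        else:
            A[i, unknowns.index(station)] = sign
    b[i] = rhs
    w[i] = 1.0 / sigma**2                    # weight = inverse of the variance

W = np.diag(w)
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)   # minimizes the sum of w * v**2
v = A @ x - b                                   # residuals

for name, height in zip(unknowns, x):
    print(f"adjusted height of {name}: {height:.4f} m")
print("residuals (m):", np.round(v, 4))
```

The same normal-equation machinery extends to two-dimensional networks of angles, distances, and GPS vectors; the weights are what let dissimilar observation types be combined in one solution.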
The most important aspect of using least squares is that surveyors can analyze all types of survey measurements simultaneously. This could include horizontal and slope distances, vertical and horizontal angles, azimuths, vertical and horizontal control coordinates, and GPS baseline observations. Least squares adjustments also allow for the application of “relative weights” to properly reflect the expected reliability of different measurement types. An example would be weighting a line measured with a tape differently than one measured with GPS.
Least squares analysis has the advantage that after an adjustment has been finished, a complete statistical analysis can be made from the results. Based on the sizes and distribution errors, various tests can be conducted to determine if a survey meets acceptable tolerances or whether measurements must be repeated. If blunders exist in the data, these can be detected and eliminated. Least squares analysis enables precisions for the adjusted quantities to be determined easily, and these precisions can be expressed in terms of error ellipses for clear and lucid depiction (Wolf and Ghilani 1997, 1.7:9).
When computing loop closures of a closed traverse, precision ratios can only imply the general magnitude of the error. The “clear and lucid depiction of precision expressed as error ellipses” has radically changed the way a surveyor can look at survey error. Using least squares adjustments surveyors can express error in terms of positional tolerance of a single point, the relative error of all of the points in a network, or the range of precision within a large network.
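As an illustration of the error-ellipse form of reporting, the sketch below converts a 2x2 coordinate covariance matrix for a single adjusted point (an invented matrix, standing in for least squares output) into the semi-axes and orientation of its 95 percent confidence error ellipse, using the chi-square scale factor for two degrees of freedom.

```python
import numpy as np

# Invented post-adjustment covariance of a point's (north, east) coordinates, in m^2.
cov = np.array([[4.0e-4, 1.5e-4],
                [1.5e-4, 2.5e-4]])

# Eigen-decomposition gives the squared one-sigma semi-axes and their directions.
eigvals, eigvecs = np.linalg.eigh(cov)

k95 = np.sqrt(5.991)                               # chi-square (2 dof) at 95% is 5.991
semi_minor, semi_major = k95 * np.sqrt(eigvals)    # eigh returns eigenvalues ascending

# Azimuth of the major axis, measured from north toward east.
major_dir = eigvecs[:, 1]
azimuth = np.degrees(np.arctan2(major_dir[1], major_dir[0])) % 180.0

print(f"95% error ellipse: a = {semi_major*100:.1f} cm, "
      f"b = {semi_minor*100:.1f} cm, azimuth = {azimuth:.1f} deg")
```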
Like all new alternatives to long-standing practice, least squares adjustment of data has its detractors. This notwithstanding, the common availability of computers has made the use of least squares practical, and it achieves the same results regardless of the user. Even the most basic knowledge of statistical methods for data analysis will greatly aid field surveyors and prevent the misapplication of this data adjustment. Least squares can assist in identifying mistakes or blunders in survey work, but removing them is critical to maintaining the overall integrity of the work: the adjustment is applied only to the random error, which is generally small in magnitude, and no adjustment is final until all blunders are removed. As Hamming (1986, 25.1, p. 431) cautions: Probably the major fault with least squares is that a single very wrong measurement will greatly distort the results because in the squaring process large residuals play the dominant part; one gross error 10 times larger than most of the others will have the same effect on the sum of the squares as 100 of the others. Great care should be exercised before blindly applying any result (as is so often done); at least look at the residuals, either by eye or by some suitable program, to see if one or possibly a few measurements are wildly off.
The least squares method of analysis of survey measurements is now commonly used in all aspects of surveying. Every cadastral surveyor who surveys the boundaries of public land has computer hardware and software available to perform least squares analysis and adjustment of survey data. Real-time Kinematic (RTK) GPS makes use of this process in the field to resolve baseline measurements on the fly. Root mean square error is evaluated in the RTK GPS survey data logging device in the field. Statistical methods of data analysis are also used in many natural resources related professions. When the various data from different sources are combined in a GIS, one of the first questions that comes to mind is: How accurate are the data? How closely does the virtual picture of reality mimic the real world or actual conditions in the field?
Requirements of a Cadastral Standard
Before we get too involved in a discussion of the applicability of a given standard we would like to define what we want the standard to do. For our discussion we will distinguish between a “standard” and a “specification.” Simply put, a standard attempts to define the quality of the work in a way that is ideally independent of the equipment or technology in use. A specification describes how to achieve a certain standard with a given set of tools, equipment or technologies.
We believe that any new cadastral standard should be technology neutral. The standard should be developed with the idea that it can be applied to new technology; the first test, however, is that the standard can be applied to current technology. This goal seems to fit the way most standard-making exercises are conducted, including the FGDC standards formulation. Another goal is that a new standard should be inclusive, i.e., it should not exclude major technologies that are currently considered acceptable. At the same time it should not permit the quality of the work to decrease from current standards. A standard should also be understandable and useable rather than confusing, ambiguous or difficult to apply.
What someone wants from a standard depends on how the data are used. For example, for mapping and GIS purposes the primary concern may be the positional accuracy of the corner locations, whereas for a boundary survey relative location accuracy from the adjacent parcel monumentation is critical and often of highest concern. However there are other survey aspects that are often overlooked, and this seems to be the case in all the standards we reviewed. Apart from the survey network used, procedures for how monuments are set need to be checked and evaluated. Another critical element is the actual stability of the monument itself. If a monumentation procedure only assures placement of the monument to the 3-cm level, there are diminishing returns in evaluating the survey network at the 2-mm level. It is also clear that if the monument is subject to soil movement, frost heave, or man-made disturbances of a few centimeters, its use for future work is affected. Accuracy standards perhaps need additional elements to describe these factors.
Twenty years ago positioning accuracy needs were minimal; not even the U.S. Geological Survey (USGS) used cadastral data on a regular basis to depict land boundaries on their mapping products. At the typical USGS map scale of 1:24,000, 40 feet of accuracy was sufficient to conform to the National Map Accuracy Standards. The PLSS monuments found on the ground were the primary fiducial marks that related the cadastral survey to the map. Recently, and particularly over the past five years, we have seen a radical shift in the demand for and use of accurate spatial representations of cadastral surveys. Since about 1985 Cadastral Survey has begun to require geodetic ties to ongoing surveys. With the advent of CMM, surveys are performed on a geodetic basis, while also integrating GPS (both static and RTK) in the new surveys. As a result of these changes BLM surveys are linked to other geospatial data directly instead of through their map depiction.
The modern BLM Cadastral Survey is a specialized subset of the traditional control survey. While it is useful (if not necessary) to advise users of the spatial accuracy of its products, the Cadastral Survey also has the ability to perform surveys to meet a particular spatial need. What was once only incidental to the survey process is now an integral part of the execution of public land surveys, and these surveys produce spatial data as one of their primary outputs.
Application Modes of a Standard
A standard can be applied as a design tool, a requirement, and an evaluation tool. Used as a design tool, a standard will enable us to assess what equipment and methods we need to use on a particular project in order to achieve the standard. This application is part of planning for new work. If viewed as a requirement, a standard is applied during the duration of the project to ensure that the work complies with stipulated quality requirements. And lastly a standard can be applied as an evaluation tool to work of any source and vintage in order to “classify” the work so that various users can make best and proper use of the data from that source for varying purposes.
There are other terms that are sometimes used to describe these modes. An a priori method is one that is applied before the work is done, and it usually defines specific procedures and equipment to use. An a posteriori method is a standard or process that can be applied after the work has been done. This method is based on specific analysis of the survey data.
We may need to reiterate here why we want a standard in the first place. We are primarily involved in the original or subsequent location of land boundaries. The purpose of any standard is to assure a product or process meets a particular level of quality. The products that a cadastral survey produces are monuments and lines established on the ground, a written public record of the measurements, and evidence and reasoning behind any survey decision in the form of plats and field notes.
There has been an increasing demand for a variety of additional products that derive from a spatial depiction of the survey lines and parcels.
There are three different types of spatial data that Cadastral Survey collects: boundary data in the form of bearings and distances between points; data that tie boundary surveys to the National Spatial Reference System (NSRS); and historical record data collected for inclusion in BLM's Geographic Coordinate Data Base (GCDB). Each of these types of data has very different expectations of accuracy, and as such should be classified differently. The GCDB data accuracy will not be considered in this discussion, but it is recognized as being dependent on the accuracy of cadastral surveys and NSRS control data from recent surveys.
In the early days of the Public Land Survey System the Act of February 11, 1805, was passed. This law declared that the original survey and its monuments are as if they “were without error” in the eyes of the law. This codification of a common law concept forgave a myriad of sins but also allowed the surveys to be completed expeditiously. Does this mean that there is no need for accuracy standards for Cadastral Surveys? We say no, even though the same forces are in play today as then. First, we are predominantly not involved in performing original surveys; today our primary role is as retracement surveyors. Whether doing original or resurvey work there has to be a compromise between scientific perfection, which usually takes longer and costs more, and low-accuracy work and methods that take much less time to produce. It must be recognized that low-quality work affects the actual use of the boundaries down the road.
Current technology allows us to approach accuracy with high scientific precision for little or no incremental cost over a merely adequate accuracy. The advantages of doing so for current and future generations of boundary survey users abound. There will always be a compromise between accuracy and practicality, but the compromise is much closer to the ideal than ever before in history.
If one were to prioritize a list of the components of a cadastral survey, one would probably place good monumentation at the top of the list. Next would be the act of recording good descriptions of the monuments and the measurements relating to them. In the case of restored corners, documenting the decisions that were made about what and how the point was established is important. Following that would be the description and measurement of accessories to the monuments, and next would be measurements between a monument and the surrounding monuments of the same survey, describing the lines of the survey. Last would be the tie relating the survey to the National Spatial Reference System (NSRS).
This prioritization reflects our traditional and natural hierarchy of importance. In part this relates to the sanctity of the original monument, but a good portion of our ranking derives from the concepts of error propagation. For example, it was originally assumed that short-line distance measurement was more accurate than long-distance measurement, and that ties to accessories in the immediate vicinity of a corner would be more trustworthy than those from the nearest corner perhaps a half mile or a mile away. Similarly, measurements to nearby corners of the same survey were assumed to be more reliable than ties to distant control points. However, in the current technological climate, the difference between these levels has shrunk if not almost disappeared. In fact, it may be possible that the measurement in a cadastral survey to a bearing tree, taken to the nearest degree and rounded to the nearest link or half link in distance, is less reliable or accurate than that to the nearest other monuments of the survey, and even on occasion than its coordinates relative to the NSRS.
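A rough worked example, using assumed values, suggests why the traditional record tie to an accessory may now be the weakest link: a bearing recorded only to the nearest degree and a distance rounded to the nearest half link imply a positional uncertainty that modern ties to other monuments, or to the NSRS, can easily better.

```python
import math

# Assumed record tie from a corner to a bearing tree (illustrative values only).
distance_links = 50              # 50 links (half a chain), about 10.06 m
link_m = 0.201168                # one link in metres
distance_m = distance_links * link_m

bearing_rounding_deg = 0.5       # bearing to nearest degree -> up to +/- 0.5 degree
distance_rounding_links = 0.25   # distance to nearest half link -> up to +/- 0.25 link

# Approximate worst-case lateral and longitudinal errors of the recorded tie.
lateral_m = distance_m * math.radians(bearing_rounding_deg)
longitudinal_m = distance_rounding_links * link_m

total_m = math.hypot(lateral_m, longitudinal_m)
print(f"tie uncertainty ~ {total_m*100:.0f} cm "
      f"(lateral {lateral_m*100:.0f} cm, longitudinal {longitudinal_m*100:.0f} cm)")
```

Under these assumptions the record tie is good to roughly a decimetre, which is comparable to or poorer than a carefully measured modern tie.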
The monuments and lines of a survey or resurvey are the primary physical manifestation of the survey on the ground. If the physical evidence cannot be found because of inaccurate measurement, then it is of no use. If the monuments and lines become obliterated, then any method used to restore them will only be as reliable as the measurements left behind in the record or on the ground. A sloppy bearing to an accessory will lead to ambiguity and confusion at the very least, and it may lead to restoring the corner in the wrong place. The same can be said for the procedures for restoring lost corners. If the record being used is inaccurate, then the procedure suffers.
The reason for going into this discussion is that accuracy (to the degree that it can be achieved economically) assures more stable boundaries. The boundaries, when in need of rehabilitation, can be restored in almost the same place. The stability that derives from good measurements has obvious real economic value. If the survey is related through standard procedures to the NSRS there is yet another level of stability added to the others we are familiar with; it is like an additional layer on the onion of information which may someday be evidence that contributes to the stability of boundaries.
Having accurate measurements is a good thing, and reasonable accuracy is now economical. If a monument is destroyed, it can reasonably be restored from its accessories to within a small spatial tolerance, generally less than the size of a monument cap. If its accessories are lost, then proportionate methods will restore it to nearly the same location. In addition, spatial representation in GIS systems will be relatively accurate, such that decisions about the locations of improvements and resources on the land will not be subject to costly errors and assumptions.
Possibly the most quoted book on land surveying in this century, Mulford’s Boundaries and Landmarks, addresses some of these concerns about defining accuracy. The point we wish to make is well stated by surveying educator and author Ben Buckner in his article on accuracy standards published in the Professional Surveyor magazine (1997), and it refers to the “Mulford effect.” Commenting on Mulford’s view that, “It is far more important to have faulty measurements on the place where the line exists, than an accurate measurement where the line does not exist at all,” Buckner wrote:
I don’t think Mulford intended for surveyors to disregard measurement accuracy. Yet, I have heard many surveyors quote this…and scoff at the idea of correcting for systematic errors or doing any kind of measurement analysis other than proportionate measurement. My own perspective on the subject, and what I would like to think Mulford would say now if he knew how many surveyors have misused his earlier statement, is that it is important to first locate the corner from [an] analysis of all relevant evidence bearing on its original position, applying common law rules and principles and, after the corner is thus located and monumented, to perform accurate measurements between the monuments, to analyze the measurement uncertainty, and to make appropriate and theoretically correct statements about this uncertainty.
In this statement, the use of measurements in the first phase of restoring the corner is implicit. If measurements are cited in a description or on a plat, they are part of the evidence. Where monuments are “called for,” the case law dictates that measurements are secondary or informative, but they must be considered nevertheless. Therefore, analysis of their precision and accuracy becomes involved in the process of analyzing the evidence. Furthermore, when all other evidence of the corner is lost, measurements rise to the status of “controlling.” Thus, the importance of accuracy and error control, both in the original measurements and in retraced measurements, cannot be denied.
Professional surveyors cannot ignore measurement accuracy and analysis of measurement uncertainty for three reasons. The first is explained in the previous paragraph: from a practical and legal standpoint, measurements are part of the evidence. The second is more philosophical. Measurements embody the very meaning of surveying. Ignoring measurement accuracy and analysis is tantamount to a doctor ignoring medicine or a lawyer ignoring rules of evidence. Third, accuracy in measurement helps preserve the evidence for future generations. This may be the most important reason, since it affects both the public and the profession. It leaves the survey in better shape than before, to everybody's benefit. It is simply the professional and the “right” thing to do (Buckner 1997).
Our own corollary to Mulford's famous quote is: an inaccurate measurement, even if on the correct line, is a source of unending mischief. The best of all worlds is an accurate measurement on the correct line.
Examples of mischief abound: the confusion that surrounds an obscured monument with conflicting measurements from its accessories, or a lost monument for which the measurements from the nearest corners are now conflicting or inaccurate. Coordinates and boundaries incorrectly depicted on maps and in GIS systems, and decisions made upon incorrect restorations based upon defective measurements, do not aid in providing certain and permanent boundaries, unless all monuments last forever and are well and properly described and known to all adjoiners, a situation that seems to seldom exist for long.
The National Spatial Data Infrastructure
The National Spatial Data Infrastructure (NSDI) was created as a result of an Executive Order 12906 by President Clinton in 1994. The reasons that prompted the creation of NSDI have been spelled out in “Coordinating Geographic Data Acquisition and Access: The National Spatial Data Infrastructure” (http://www.fgdc.gov/publications/documents/geninfo/execord.html), as follows:
Geographic information is critical to promote economic development, improve our stewardship of natural resources, and protect the environment. Modern technology now permits improved acquisition, distribution, and utilization of geographic (or geospatial) data and mapping. The National Performance Review has recommended that the executive branch develop, in cooperation with state, local, and tribal governments, and the private sector, a coordinated National Spatial Data Infrastructure to support public and private sector applications of geospatial data in such areas as transportation, community development, agriculture, emergency response, environmental management, and information technology.
Section 1. Definitions
“National Spatial Data Infrastructure” (NSDI) means the technology, policies, standards, and human resources necessary to acquire, process, store, distribute, and improve utilization of geospatial data.
“Geospatial data” means information that identifies the geographic location and characteristics of natural or constructed features and boundaries on the earth. This information may be derived from, among other things, remote sensing, mapping, and surveying technologies. Statistical data may be included in this definition at the discretion of the collecting agency.
Section 2. Executive Branch Leadership for Development of the Coordinated National Spatial Data Infrastructure
The Federal Geographic Data Committee (“FGDC”), established by the Office of Management and Budget (“OMB”) Circular No. A-16 (“Coordination of Surveying, Mapping, and Related Spatial Data Activities”) and chaired by the Secretary of the Department of the Interior (“Secretary”) or the Secretary’s designee, shall coordinate the Federal Government’s development of the NSDI. (Clinton 1994)
This executive order defines the charter of NSDI and that charter includes the task of defining the types and quality of spatial data that will be used in Geographic Information Systems (GIS). As a result of this executive order, the FGDC has been charged with the responsibility to develop spatial data standards. Draft proposals of standards and final standards are available for public comment on the FGDC web site [http://www.fgdc.gov/standards/status/swgstat.html]. The purpose of the FGDC standards is to define a method to adequately report and define the positional accuracy of geospatial data. Many state governments and agencies have adopted these standards. A number of technical boards of registration for Professional Land Surveyors have adopted them through the legislative process.
Geospatial Positioning Accuracy Standard
Only a portion of the standards developed by FGDC apply to Cadastral Survey; it was the Geodetic Subcommittee which developed spatial accuracy standards for surveying. The surveying standards are entitled Geospatial Positioning Accuracy Standards, Part 1: Reporting Methodology and Geospatial Positioning Accuracy Standards, Part 2: Standards for Geodetic Networks [http://www.fgdc.gov/standards/status/sub1_2.html].
The draft FGDC Geodetic Subcommittee standard describes a general scheme of classification that is based on reporting coordinate data, with associated positional tolerances, specifically the relative error circle reported at 95 percent confidence (see appendix A). The FGDC national standard for spatial data accuracy ensures flexibility by omitting threshold values that data must achieve; instead spatial data can be described as falling within an expected bandwidth or range of accuracies. This flexibility is well suited to the variety of methods and instruments used by cadastral surveyors. Agencies are encouraged by FGDC to establish pass-fail criteria for acquisition of spatial data by contractors. Developing pass-fail criteria would require careful consideration of many factors, and the criteria would need to be broad and inclusive rather than exclusive. They would also need to be independent of the methods used in making measurements, field conditions, or the survey instruments used.
The current draft of the FGDC standard describes two sets of values to be reported: “Network Accuracy” and “Local Accuracy.” Local accuracy is also referred to as relative accuracy in some sources. The results are reported in ranges of accuracy. The values are defined thus:
Network Accuracy of a control point is a value that represents the uncertainty in the coordinates of the control point with respect to the geodetic datum at the 95-percent confidence level. For NSRS network accuracy classification, the datum is considered to be best expressed by the geodetic values at the Continuously Operating Reference Stations (CORS) supported by NGS. By this definition, the local and network accuracy values at CORS sites are considered to be infinitesimal, i.e., to approach zero.
Local Accuracy of a control point is a value that represents the uncertainty in the coordinates of the control point relative to the coordinates of other directly connected, adjacent control points at the 95-percent confidence level. The reported local accuracy is an approximate average of the individual local accuracy values between this control point and other observed control points used to establish the coordinates of the control point.
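The draft does not spell out the computation behind these values, but a plausible sketch of a local-accuracy figure for a pair of adjacent, adjusted points is shown below: the relative covariance between the points is formed from their individual covariances and their cross-covariance (all matrices invented here), and the 95 percent relative error ellipse is reported. How that ellipse is reduced to a single circular value is one of the details the draft leaves open, as discussed later in this section.

```python
import numpy as np

def ellipse_95(cov):
    """Semi-axes (metres, major first) of the 95% error ellipse for a 2x2 covariance."""
    eigvals = np.linalg.eigvalsh(cov)             # ascending order
    return np.sqrt(5.991 * eigvals[::-1])         # chi-square (2 dof) at 95% = 5.991

# Invented post-adjustment covariance blocks for two connected control points (m^2).
cov_a  = np.array([[2.0e-4, 0.2e-4], [0.2e-4, 1.5e-4]])
cov_b  = np.array([[3.0e-4, 0.5e-4], [0.5e-4, 2.0e-4]])
cov_ab = np.array([[1.0e-4, 0.1e-4], [0.1e-4, 0.8e-4]])   # cross-covariance A-B

# Covariance of the coordinate differences between A and B (the relative covariance).
cov_rel = cov_a + cov_b - cov_ab - cov_ab.T

major, minor = ellipse_95(cov_rel)
print(f"relative 95% ellipse A-B: a = {major*100:.1f} cm, b = {minor*100:.1f} cm")
```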
The standard that is probably best suited to Cadastral Survey boundary surveys is a statistical method of analysis referred to as local accuracy. The draft FGDC standards for geodetic networks (FGDC 1998, Part 2, Section 2.2, pp. 2-4) contend that:
By supporting both local accuracy and network accuracy, the diverse requirements of NSRS users can be met. Local accuracy is best adapted to check relations between nearby control points. For example, a surveyor checking closure between two NSRS points is mostly interested in local accuracy (or in the case of the cadastral surveyor, a local control point relative to other survey points along a traverse or in a network of RTK GPS baselines). On the other hand, someone constructing a Geographic or Land Information System (GIS/LIS) will often need some type of positional tolerance associated with a set of coordinates. Network accuracy measures how well coordinates approach an ideal, error-free datum.
The current draft of the FGDC standard does not define the specific statistical methods used to derive local accuracy or the relative error ellipses on which it is based. It is important to note that the local relative error ellipse is not the same thing as the network or project error ellipse. A more complete technical discussion of the local error can be found in Appendix A of the 1996 Canadian Standard reproduced here in appendix A (Geomatics Canada; http://www.geod.nrcan.gc.ca/index_e/products_e/publications_e/Accuracy_Standards.pdf).
The Geomatics Canada standards document closely parallels the FGDC standard in most respects, and its appendices also correlate with earlier drafts of those for the FGDC standard. Information about the computation of the FGDC values is still lacking, as online comments from reviewers on the most recent (4/2003) FGDC standard indicate. The FGDC has indicated that some of these specifics may be included in the final draft or placed in additional referenced documents when finalized.
Other Standards
Over the past 15 years there have been numerous attempts to create new standards to reflect both current accuracy needs and new technology. Various approaches have been tried, including loop closures, theoretical uncertainty, positional tolerance, and other mixed standards which have evolved towards the FGDC type of a standard. One example is the standard published by the American Land Title Association (ALTA; http://www.acsm.net/alta.html), which follows an error propagation type model. The 1999 ALTA Standard defines positional uncertainty and positional tolerance thus:
“Positional Uncertainty” is the uncertainty in location, due to random errors in measurement, of any physical point on a property survey, based on the 95 percent confidence level. “Positional Tolerance” is the maximum acceptable amount of positional uncertainty for any physical point on a property survey relative to any other physical point on the survey, including lead-in courses.
The standard, which is expressed as 20 mm plus 50 ppm (parts per million), is based on controlling error propagation. The ppm component can be expressed as 1:20,000. The positional error is a function of distance from a given point; the larger the distance from the point, the larger the positional error allowed. The ALTA standards are brief but seem to rely extensively on the surveyor's judgment rather than on a defined a posteriori analysis. For example, one of the few paragraphs in the standard is:
The surveyor should, to the extent necessary to achieve the standards contained herein, compensate or correct for systematic errors, including those associated with instrument calibration. The surveyor shall use appropriate error propagation and other measurement design theory to select the proper instruments, field procedures, geometric layouts and computational procedures to control and adjust random errors in order to achieve the allowable positional tolerance or required traverse closure.
And later, under Computation of Positional Uncertainty:
The positional uncertainty of any physical point on a survey, whether the location of that point was established using GPS or conventional surveying methods, may be computed using a minimally constrained, correctly weighted least squares adjustment of the points on the survey.
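For concreteness, the 20 mm plus 50 ppm form described above can be evaluated for a few separations. The helper below is simply one reading of that expression, not text from the standard itself, and the sample distances are arbitrary.

```python
def alta_positional_tolerance_m(separation_m):
    """Allowable positional tolerance: 20 mm plus 50 parts per million of the separation."""
    return 0.020 + 50e-6 * separation_m

for d in (100.0, 1000.0, 1609.344):            # 100 m, 1 km, one mile
    tol = alta_positional_tolerance_m(d)
    print(f"separation {d:8.1f} m -> allowable tolerance {tol*1000:5.1f} mm")
```

Because the allowance grows with distance, the standard in effect bounds how fast error may propagate away from any given point.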
It appears that there are different and sometimes multiple approaches to applying the ALTA standard. Many other standards of this type allow the user to meet a choice of criteria, or to meet the standard by a priori evaluation of methods or by a posteriori evaluation of the results. As a result they are relatively easy to apply, but may produce less rigorous results. The Canadian Standard discussed above makes this commentary on loop closure and error propagation based standards:
Precision measures are relatively simple to compute and are often used to estimate accuracy. They provide useful estimates of accuracy only if the data are unaffected by biases due to blunders or uncorrected systematic effects. Without some assurances that such errors do not exist, a precision measure provides information that is of limited use. ….For instance, a horizontal position may have been determined using the most precise GPS measurements and processing techniques, but if the positioned point is misidentified as one that is actually ten meters away, the precise position for the wrong point is of little use. While the precision measures may indicate that a precision of ten centimeters has been achieved, the bias introduced by misidentifying the point limits its accuracy to ten meters.
The FGDC format, like the Canadian example quoted, replaces ratio formats with a combination of network and local accuracy. We think this is an appropriate solution but cannot be completely sure what practical difficulties we may encounter implementing them until we are able to perform testing. There is currently not much software available that computes local error based on the FGDC draft approach. We are therefore concerned that while the draft FGDC standard may be appropriate for a number of our needs, it may have some weaknesses in ease of use. To quote the Canadian Standard document again:
Local accuracy indicates how accurately a point is positioned with respect to other adjacent points in the network. Based upon computed relative accuracies, local accuracy provides practical information for users conducting local surveys between control monuments of known position. Local accuracy is dependent upon the positioning method used to establish a point. If very precise instruments and techniques are used, the relative and local accuracies related to the point will be very good. … While a point may have good local accuracy it may not necessarily have good network accuracy, and vice versa. Different positioning applications will have varying objectives that emphasize either network or local accuracy, or have specific requirements for both types of accuracy.
Here is where one logical test of applying the FGDC standard raises questions. For a directly observed traverse network constrained to static GPS, it is certainly possible to develop a program to compute the network and local error values; however, at present we do not have software that does so. Another potential issue relating to the usability of the FGDC standard applies to the use of RTK techniques. If the RTK procedure is used to obtain positions, possibly with error data, and check shots, then while we can say something about the relative accuracy of the coordinates from the RTK base, we will not be able to perform direct a posteriori statistical analysis of either the network or local accuracy values as defined by FGDC. This is particularly the case when the surveyor does not collect RTK GPS baseline data to analyze them in a network. A complete implementation of the FGDC standard, as we understand it, would require collection of baseline information and subsequent vector analysis of the data with least squares before the coordinates are included in the project network and evaluated for local error. The question then becomes, “Does this impose an unworkable burden on current practice and procedures?”
We have two initial concerns regarding the application of the FGDC local accuracy standard: 1) availability of software that will compute and report the standard, and 2) use of RTK techniques that do not easily allow for network evaluation. Where the network accuracy meets the local accuracy standard, a partial solution may be available for the first issue. Assuming that a properly computed network or local (project) network error value will always represent the upper limit of the local error values, these values represent the worst-case scenario for the local error. However, this assumption still requires that the data be computed in a network. One apparent solution to the RTK issue is to collect baseline information and analyze the vector data in the project network using least squares. Another solution would be to use a priori error analysis, analogous to adding up the error values that contribute to the overall coordinate error of the RTK position. Another approach would be to define and test a specified set of procedures that are known to meet the FGDC standard.
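One way to read the a priori alternative is as a simple error budget: the contributing uncertainties for an RTK-derived coordinate are combined in quadrature and scaled to 95 percent confidence. The component values below are assumptions chosen only to show the arithmetic; they are not published specifications for any receiver or network.

```python
import math

# Assumed one-sigma horizontal error components for an RTK-derived position, in metres.
components = {
    "base station network accuracy": 0.015,
    "RTK vector (assumed 10 mm + 1 ppm at 2 km)": 0.012,
    "antenna centering and height, base": 0.003,
    "antenna centering, rover": 0.005,
}

sigma = math.sqrt(sum(v**2 for v in components.values()))   # combine in quadrature
circle_95 = 2.4477 * sigma   # one-sigma to 95% circular, assuming near-circular error

print(f"combined one-sigma horizontal error: {sigma*1000:.1f} mm")
print(f"a priori 95% error circle radius:    {circle_95*1000:.1f} mm")
```

Such a budget could be compared against a stated threshold before fieldwork, in place of the a posteriori network analysis the draft standard seems to anticipate.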
From the discussion here it is clear that the network accuracy component of the draft FGDC standard can be used to define the positional accuracy of boundary surveys of federal lands and of geodetic reference or control ties to the NSRS. Network accuracy should be applied when describing data that reference boundary surveys to the NSRS, such as when geodetic control measurements are made to PLSS survey monuments or cadastral project control monuments. The local accuracy component of the draft FGDC standard seems more applicable to boundary surveys; however, the primary concern with this method is the lack of tools to compute the error values so that the standard can be applied.
Conclusions
The Cadastral Survey of BLM has historically taken full advantage of new technology to improve the efficiency of survey crews in the field. We must continue to recognize that accuracy has value, and the ability to define and describe the accuracy of our current and future products is essential to a variety of applications for which the data may be used.
Precision ratios, which have been used in the past, serve only to imply the general quality of a closed traverse. This method of evaluating error does not serve to identify measurement mistakes or impart any information as to the positional accuracy of any point along a traverse. Until recently, adjustments to survey measurements have not been used in cadastral survey. Rather, direct methods of measurement, such as prolongation of line and chaining on true line to survey boundary lines, together with precision ratios, have been the preferred field methods. Indirect methods of measurement with GPS are, however, becoming more and more common, and statistical methods are proving to be the best for evaluating the accuracy of these measurements.
The most rigorous of data adjustments, and the easiest to apply without bias, is least squares. Computers and BLM's Cadastral Measurement Management software based on least squares have made the application of adjustments to large-scale networks extremely practical.
The most constant of issues is one of responsibility. It is the responsibility of federal agencies and the Cadastral Survey to define and describe threshold values and pass-fail criteria for accuracy of modern cadastral surveys performed to locate and protect federal interest lands.
The authors feel that for cadastral applications, a dual or “mixed” standard may be appropriate. The FGDC-defined network accuracy standard is suitable for classifying the spatial products of a cadastral survey, but local accuracy may be too difficult to compute or apply to boundary surveys at this time. Further work needs to be done to evaluate the possibilities of obtaining tools that will compute the local accuracy components of the proposed FGDC standard. Until then, consideration should be given to a standard with options that would look more like the ALTA standard (at least as an alternative form of the local accuracy portion of the FGDC standard) and that would complement positional tolerance of a point, such as an error ellipse at 95 percent confidence. What is needed is an inclusive accuracy standard that reflects modern survey practices with regard for the needs of the cadastral survey professional and public.