Follow this strategy for better thermal design

Ephraim M. Sparrow

This methodology can help maximize creative and innovative design within the context of currently available tools.

Design is the crowning achievement of the engineering profession. The broad availability of software packages has drastically altered the design process to the extent that it might appear that individual creativity and innovation are no longer prerequisites.

This article argues the opposite – that creativity and innovation play a major role in design – and supports that opinion by presenting a logic-based, eleven-step thermal design strategy. The steps in this design strategy are:

1. Unambiguous identification of the desired result(s).

2. Specification of the desired accuracy.

3. Identification of the needed knowledge base.

4. A first pass at unearthing and assembling the knowledge base.

5. Identification of unavailabilities in the needed knowledge and steps to redress these deficiencies.

6. The use of highly simplified models.

7. Scoping.

8. Setting of upper and lower bounds.

9. Introduction of logic-guided intuitive assumptions.

10. Completion of the final model and its implementation.

11. Retrospective examination of results.

Present-day thermal design

There are towering differences between the field of thermal design as practiced when Kern published “Process Heat Transfer” a half century ago and thermal design today. The two obvious differences are “the computer” and the availability of an enormously enlarged database.

Computers are involved in the design process at two levels. At one level, software is available, based on internally stored correlations and the use of device-scale dimensions, that provides overall (in contrast to local) solutions for a wide range of heat-transfer problems with very little intervention on the part of the designer. At another level (often identified as scientific computing), discretized differential equations obtained by means of finite-difference, finite-volume, or finite-element representations of conservation laws are solved numerically to obtain both local and overall results, as needed.

With regard to databases, the flood of information, provided primarily by academics and published in an ever-increasing number of journals and conference proceedings, is made accessible by computer-based search engines that respond to properly chosen keywords.

Design professionals are generally a rather trusting lot, and they are often willing to rely on software written by others. They seem willing to consider software writers to be more intelligent and/or better informed than the working design engineer. This hardly encourages creativity, innovation, and self-reliance.

In view of the powerful computer-based tools available for thermal design, is there a need for individual creativity and innovation in today’s design environment? The answer is an unequivocal yes, for two reasons:

the need for incisive yet simple models of the participating physical processes; and

the unavailability of all of the knowledge needed to solve any given problem.

It is widely recognized that the geometry and/or operating conditions of an actual system rarely coincide with geometries and operating conditions for which information is available. To counter this difficulty, modeling is employed. Modeling requires deep understanding of the physical phenomena occurring within the system geometry in response to the operating conditions. It is in the execution of modeling that individual creativity and innovation are required.

One might think that the availability of powerful workstations and supercomputers would obviate the need for modeling. Depending on the capacity of the computer, device geometries and operating conditions can be described in various degrees of detail. However, the computer program may demand unavailable, unrealistic, or irrelevant detail. For instance, the program may demand local information about the heat-transfer and fluid-flow events that occur in the ambient atmosphere surrounding a device, yet this information is generally unavailable. Specifying in great detail the inputs to a finite-difference, finite-volume, or finite-element computer program too often leads to results of very limited applicability. These considerations suggest the adoption of modeling as a forerunner to the use of numerical methods.

Modeling is also motivated by the insufficiency of available databases. Despite the outpouring of published papers during the past several decades, a particular sought-after piece of information needed for design has a high probability of not being available. One reason for this is that many journal articles are written by academic researchers, who tend to idealize or sanitize problems so extensively as to significantly limit the applicability of their results to real-world devices and applications. A related factor is that the results of academic research are too often presented in user-unfriendly formats.

Textbooks, often regarded as prime database repositories by young engineers, are quite narrow relative to actual design-data needs. Although present-day textbooks convey more-accurate data than their predecessors, the topical coverage of current texts is not significantly different from that of the third edition of McAdams’ “Heat Transmission” published in 1954.

Several heat-transfer-focused handbooks are available and are renewed from time to time. These are the most-concentrated sources of heat-transfer knowledge. I have had about a 50% success rate when seeking design-oriented information from these sources.

Database searches on the Internet often provide a difficult-to-handle deluge, which can be controlled by a more-focused selection of keywords. Such searches are the best mode for accessing journal articles, especially recently published ones. Thus far, I have had only moderate success in extracting needed design information from Internet database searches.

Although the present-day tools available for thermal design are drastically different from those of Kern’s day, the need for creativity and innovation is common to both eras. The objective of this article is to suggest strategies and methodologies for maximizing creative and innovative design within the context of the currently available tools.

The design strategy

The approach offered here is both structured and loose. The structure is the succession of specific steps to be followed. The looseness is found in the execution of several of the steps. Intuition thrives on looseness. If logic is applied in an attempt to explain an intuitive idea, the idea may wilt. Intuition plays a substantial role in the approach offered here.

This strategy takes a broad view of what is being designed. It applies not only to the design of devices, but also to the design of experiments intended to evaluate performance and to obtain significant basic data.

The eleven steps of the design strategy are as follows.

Identify the desired results

The first step is to establish, with a high degree of specificity, the result or results that are being sought.

For example, suppose a new component is being considered for incorporation into a product. Is it sufficient to determine the percentage change in some operating characteristic of the product due to the new component, or must the absolute change be determined? It is likely that a significantly different experiment would have to be designed for the latter than for the former. Determining percentage changes would likely require less instrumentation than determining absolute changes.

Very often, results for several individual characteristics may be desired. To facilitate a “lean” design, a hierarchical listing based on the true importance of the various characteristics is appropriate, and the experimental design can then be focused on determining the quantity at the top of the list.

In the design of devices and/or components, strict attention must be given to the main function of the device and to the identification of the specific measurable quantity that characterizes the performance of that function. A test apparatus for evaluating function performance should focus on obtaining definitive results for the characterizing parameter.

When the goal of a study is to provide information for use by others, one must carefully consider whether the result is presented in a user-friendly manner. For instance, suppose that the goal of a convective-heat-transfer study were to supply heat-transfer coefficients for publication and for subsequent use by readers of a journal, and the measured heat-transfer coefficients were packaged in the usual way via a Nusselt number. However, in seeking a correlation between the Nusselt number results and the Reynolds number, the investigators found that the tightest correlation was achieved when the characteristic dimension in the Reynolds number was taken to be the thermal boundary-layer thickness, which had been measured along with heat-transfer rates, temperature differences, etc. This correlation was published without being accompanied by quantitative information about the thermal boundary-layer thickness, thereby rendering the correlation user-unfriendly.

Resource availability considerations may require compromise in the selection of the most-sought-after result. The absence of a critical instrument is often a factor in the selection process.

Specify the desired accuracy

The required accuracy for the end product plays a major role in the tools, methodology, and effort that are required to carry out the design process. The model of the physical processes that occur is highly influenced by the targeted accuracy.

For example, consider the design of an air-to-air heat exchanger. In conventional design practice, the convective-heat-transfer coefficients of the airstreams are assumed to be known constants all along their flowpaths. On those rare occasions when variations of the heat-transfer coefficients are considered, the variations are provided as known inputs to the design methodology, usually in the form of algebraic equations. For these cases, the design model is either algebraic in form or, at most, at the mathematical level of a first-order, ordinary differential equation that is expeditiously solved by simple software.

On the other hand, if the flow passages are geometrically very different from those for which information is available in the literature and if high accuracy in the predicted performance of the heat exchanger is required, a model is needed that actually predicts the heat-transfer coefficients from scratch. Such a model involves numerically solving the partial differential equations for mass, momentum, and energy conservation. If low accuracy would suffice, the simpler model of the preceding paragraph could be used, thereby demonstrating the decisive role that the accuracy specification plays in model selection.
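The simpler, algebraic end of this spectrum can be made concrete. The following is a minimal sketch (not taken from the article; all numbers are hypothetical) of an effectiveness-NTU rating of a counterflow exchanger in which the overall coefficient U is assumed constant along the flow path:

```python
import math

def counterflow_effectiveness(ntu: float, c_ratio: float) -> float:
    """Effectiveness of a counterflow exchanger; c_ratio = Cmin/Cmax."""
    if abs(c_ratio - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

# Hypothetical inputs: constant overall coefficient and fixed capacity rates
U = 35.0                       # W/m^2-K, assumed uniform along the flow path
A = 12.0                       # m^2 of heat-transfer area
C_min, C_max = 500.0, 650.0    # W/K capacity rates of the two airstreams

ntu = U * A / C_min
eff = counterflow_effectiveness(ntu, C_min / C_max)
q = eff * C_min * (80.0 - 20.0)  # inlet temperatures of 80 C and 20 C
print(f"NTU = {ntu:.2f}, effectiveness = {eff:.3f}, duty = {q/1000:.1f} kW")
```

The entire model is algebraic, which is exactly why it suffices only when the assumed-constant coefficients are defensible for the geometry at hand.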

Accuracy standards not only influence the degree of resolution, input requirements, and computational methodologies for model-based design, but have an equally important influence on experiment-based design. In particular, the choice of instrumentation (precision, resolution, physical size, number of sensors, etc.), the isolation of the test section (insulation, minimally conducting support structure, guard heating, etc.), control of thermal boundary conditions (heater configuration, end effects, etc.), and attainment of steady state all must be considered when designing an experiment to achieve a desired level of accuracy.

Identify the needed knowledge base

This step focuses on determining the information needed to formulate and implement a mathematical or numerical design model. Less and somewhat different information is usually needed for an experimental model.

In this design strategy, the task of identifying the needed knowledge is separated from the task of obtaining that knowledge. Thus, here in Step 3, the heat-transfer coefficient for turbulent flow in a round pipe may be identified as one of many pieces of needed knowledge, but actual numerical values of the coefficient or the relationships from which the coefficient values can be calculated will not be sought until later.

The needed knowledge is first classified into one of two categories, or “wish lists”:

the quantities for which information is needed, for example, kinematic viscosity, heat-transfer coefficient, and freestream velocity; and

the relationships that connect the key parameters of the problem, such as the relationship between the Nusselt number and the Reynolds number.

Identifying the components of the needed knowledge base may appear to be rather straightforward. For example, if convective heat transfer is involved, it takes little imagination to realize that convective-heat-transfer coefficients are needed. It then follows that thermophysical properties such as thermal conductivity, Prandtl number, kinematic viscosity, and viscosity (or density) will probably be needed. However, instead of merely listing general categorical items such as “heat-transfer coefficient,” it is useful to be more specific – for instance, “heat-transfer coefficients for two spheres situated inline in a forced-convection freestream flow.”

Uncertainties will intrude into the identification process. For example, in the presence of a low-velocity, forced-convection flow, natural convection may intrude to create a flow regime designated as “mixed convection.” If there is an air gap between vertical walls, is it “conduction across the air gap” or “natural convection across the air gap”? If a rectangular duct is internally lined with insulation to dampen flow-induced sound, will the presence of the insulation affect the air velocity in the duct and thereby the convective heat-transfer coefficient? If questions such as these arise, they should be carefully noted and retained.
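For questions like the mixed-convection one, a quick screening calculation can help decide what to note and retain. One common criterion (an illustrative choice here, not one prescribed by the article) compares buoyancy to inertia via the Richardson number Gr/Re^2; the property values and thresholds below are assumptions:

```python
import math

g = 9.81             # m/s^2
beta = 1.0 / 300.0   # 1/K, ideal-gas air near 300 K
nu = 1.6e-5          # m^2/s, approximate kinematic viscosity of air
L = 0.1              # m, characteristic length
dT = 20.0            # K, surface-to-ambient temperature difference
U = 0.3              # m/s, low freestream velocity -- the suspect case

Gr = g * beta * dT * L**3 / nu**2   # Grashof number
Re = U * L / nu                     # Reynolds number
Ri = Gr / Re**2                     # Richardson number

# Commonly quoted (approximate) regime thresholds
if Ri < 0.1:
    regime = "forced convection dominates"
elif Ri > 10.0:
    regime = "natural convection dominates"
else:
    regime = "mixed convection -- model both mechanisms"
print(f"Gr = {Gr:.3g}, Re = {Re:.3g}, Gr/Re^2 = {Ri:.2f}: {regime}")
```

A flagged “mixed” outcome is precisely the kind of uncertainty the article says should be carefully noted and retained for later steps.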

Begin assembling the knowledge base

Identifying the needed knowledge in Step 3 helps to define and focus the task of information collection. Some information sources have already been suggested earlier in the article. Now, a more thorough appraisal of information collection will be made.

Most technical knowledge is contained in written documents. Unfortunately, written technical documents are not noted for clarity of presentation.

All too often, journal articles convey sanitized presentations of problems that themselves have already been sanitized. Such publications either omit or gloss over information that might have great value in design. Intuitive conjectures rarely appear in written documents, and, in general, little or no stimulation of the reader’s creative and innovative juices is provided.

Written documents appear to be directed to “insiders,” that is, to persons who already know a great deal about the subject. Many journal articles are written by academics who are building on (i.e., closing gaps, refining, extending) prior journal articles also written by academics. This process builds up a chain of papers that evolve a fantasy-like reality, which may be quite different from nature’s reality. A case in point is natural convection in porous media – the many permutations and combinations of this problem are recorded in hundreds of papers.

Reports written by industrial practitioners are also internally directed. These reports tend to be authored by people who are unskilled in the reportorial arts. A great deal of important information may be omitted, either unknowingly or purposely.

Another shortcoming of written documents is that they do not provide a ready channel through which readers can get additional information to fill gaps caused by incompletenesses in the written presentation. Poster sessions encourage give-and-take exchanges between the author and reader, but once the session closes, the opportunity for direct author/reader interaction vanishes along with the information conveyed by the poster.

In my experience, the most useful source of needed knowledge is consultants. As used here, the word “consultant” includes coworkers, teachers, professional consultants, or any person with special knowledge in the technical field of interest. The advantages offered by consultants include:

focused expertise;

the opportunity to question, get answers, and question further;

less-sanitized information transfer than written documents;

the opportunity to learn which models, protocols, etc., were unsuccessful and why; and

access to private or proprietary information.

It is doubtful whether any other source would be able to provide such a broad range of information and advice as that provided by “consultants.”

Consultation interactions may be initiated in various ways. Personal networks are a very effective means of identifying possible consultants. A more-structured approach is to use a consultant registry that provides professional biographies on numerous experts and the ability to search using expertise-based keywords. Alternatively, the Web sites of many university engineering departments include a listing of the faculty, along with detailed descriptions of their special interests and areas of expertise in which they may consult.

All other potential sources of the needed knowledge identified in Step 3 appear to be in the form of written documents. These are classified below (some have been discussed earlier), with the order reflecting my appraisal of their usefulness:

topical review articles;

specialized handbooks;

output of computerized literature searches;

patent literature;

internal reports and proprietary information;

journal articles;

conference papers; and textbooks.

A boon to both designers and researchers is the increasing number of review articles. Each such article focuses on a specific topic and seeks to set forth the current state of knowledge on that topic. Almost exclusively, the chosen topics fall in the category of thermal science rather than thermal engineering. Nevertheless, since the nonintuitive aspects of thermal engineering flow from thermal science, these reviews do benefit thermal engineers.

The longest-running review vehicle in the field of heat transfer is “Advances in Heat Transfer,” which was first published in 1964. In recent years, more-specialized review vehicles have appeared. The most recent (1997) of these is “Advances in Numerical Heat Transfer.”

In earlier years, handbooks tended to be widely encompassing but superficial. There were handbooks that attempted to provide comprehensive coverage of such broad topics as mechanical or chemical engineering. Within the past quarter century, more-focused, more-in-depth incarnations have appeared, and these have altered the image of the handbook. The “Handbook of Heat Transfer” first appeared in 1973. Later editions have followed, as have more-specialized derivatives, such as the “Handbook of Single-Phase Convective Heat Transfer” and the “Handbook of Numerical Heat Transfer.” Despite their designation as handbooks, these volumes have been written and edited by academics. They are competent repositories of basic (i.e., engineering-science-type) knowledge, but thermal design is not featured.

The handbook that purports to represent thermal design and engineering is the “Heat Exchanger Design Handbook” [which was reviewed in CEP’s August 1999 issue, p. 91 – Editor]. This handbook is written in significant part by heat exchanger professionals whose collective knowledge of design is widely acknowledged. Certainly, it contains deeper knowledge of a wider range of heat exchanger geometries than does any other source. On the other hand, there is very little exposition of the design process.

Although not labeled as a handbook, the recently published book “Process Heat Transfer,” by Hewitt, Shires, and Bott, contains extensive design-oriented information as well as some design methodology conveyed via worked-out examples. “Heat Exchangers – Selection, Rating, and Thermal Design,” by Kakac and Liu, is of the same stripe but on a lesser scale. If categorized on the basis of conveying applicable design-oriented information, Webb’s “Principles of Enhanced Heat Transfer” merits handbook status in addition to its engineering-science focus on phenomenological understanding.

The patent literature provides extensive information relevant to qualitative product development. That literature is a showcase of device-level conceptual design. On the other hand, the purposeful vagueness of patent descriptions makes it difficult to extract quantitative information from the patent literature.

Internal reports and proprietary information can be of great value to those having access to them. Unfortunately, internal reports are too often written for insiders who are themselves highly knowledgeable about the subject at hand. A consultant’s private information, when made available to clients, can be a valuable source of the needed knowledge.

As discussed earlier, journal articles, conference papers, and textbooks are mostly academic products and are likely to be highly sanitized.

Identify unavailabilities and address deficiencies

The differences between the knowledge needed for the design, as identified in Step 3, and the knowledge obtained from such sources as those set forth in Step 4, define the provisional incompleteness in the available knowledge base. I say “provisional” because there may be steps that can be taken to find further relevant information.

For instance, suppose that the selected information sources had not included consultants. Then, an obvious approach to gaining additional information would be to engage the services of consultants whose expertise is relevant to the just-identified gaps in the knowledge base.

If a computerized literature search had been mounted during Step 4, reconsideration of the keywords used for that search is appropriate. In addition, if the search for information was carried out with the thought of obtaining directly applicable data to match the expected operating conditions (e.g., a prescribed temperature variation) and no information was found to match that condition, results should now be sought for related but not congruent operating conditions. The strict geometrical configuration for which information is being sought should be relaxed to accommodate related geometries. If a needed thermophysical property value is not found, the impact of estimating the needed value on the desired result should be assessed.

With these modifications and with the addition of more information sources, the search for the needed knowledge is repeated. After this is accomplished, the new, enlarged knowledge base is compared with the knowledge base needed to solve the problem. It is reasonable to expect that this second round of search would diminish the gaps between the available knowledge and the needed knowledge. At this point, careful consideration has to be given to strategies for circumventing any remaining gaps.

Use highly simplified models

Various strategies may be used to help assess whether it is necessary to fill all remaining knowledge gaps.

One of the simplest strategies is sensitivity analysis. One parameter whose value is unknown as a consequence of a knowledge gap is selected for study. Trial values of that parameter are input to any somewhat descriptive, simple model of the problem, and the results are analyzed. If the results are insensitive to the selected trial values, then the search for more information about that parameter may be put on hold.

A sensitivity analysis can also be used to judge whether the results will be significantly influenced by the use of various relationships among the participating variables and parameters. For instance, the heat-transfer coefficients for forced-convection crossflow about a circular cylinder can be expressed by any of several different algebraic relationships. Although it is relatively easy to quantify the differences among the predictions, a sensitivity study is needed to demonstrate how the results will be affected.

To further aid in dealing with the gaps in the needed knowledge base, it is helpful to identify which physical processes are dominant and which are of secondary importance with respect to the desired result. Possible assessment pathways – either experiment or analysis/computation – have to be considered. At this stage, it may be more time- and cost-effective to pursue the analytical/computational option. The need then arises to formulate a model that is as simple as possible, but yet retains the essential features of the participating phenomena. The development of such a model requires both ingenuity and creativity.

Even without high creativity, some progress can be made along these lines. For example, consider an air-to-air heat exchanger that consists of a stack of flat rectangular ducts with each duct positioned at right angles to both its upper and lower neighbors. This is a crossflow configuration. Assume that the walls between the ducts are made of a membrane that, ideally, is permeable so as to permit water vapor to be transferred between the air streams flowing within adjacent ducts, while totally inhibiting the passage of all other gases.

In its practical manifestation (in contrast with the ideal just described), the membrane permits air to pass through it at a rate proportional to the local cross-stream difference between the pressures in the ducts. The passage of air through the duct wall would change the mass flow rates of the air in the two ducts. A model that accounts for the change in the mass flow rates would be quite complex. To simplify the model, the streamwise variations in the mass flow rates can be neglected. This allows the calculation of the pressure variations from well-established information for flow in an impermeable-walled, flat rectangular duct. Using this information, the local cross-stream pressure difference can be calculated and the air leakage evaluated.

Suppose that the desired result, the percentage leakage, were calculated from the simplified model to be no greater than 5%. For most engineering purposes, such a leakage would be considered of small practical significance. Therefore, for such outcomes, there would be very little motivation to develop a more-complex model that takes account of the variations in the mass flow rates.
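A sketch of that simplified leakage estimate: neglect streamwise mass-flow changes, take a linear pressure drop along each duct (one duct running in x, its crossflow neighbor in y), and integrate a leakage flux assumed proportional to the local cross-stream pressure difference over the membrane. The permeance, pressures, and flow rate are all hypothetical:

```python
N = 50                 # grid points per side (midpoint rule)
perm = 1.5e-5          # kg/(s-m^2-Pa), assumed membrane permeance
p_in_1 = 400.0         # Pa gauge at inlet of duct 1 (drops to 0 at exit)
p_in_2 = 350.0         # Pa gauge at inlet of duct 2 (crossflow direction)
area = 1.0             # m^2 of shared membrane
m_dot = 0.05           # kg/s nominal airflow per duct

leak = 0.0
for i in range(N):
    x = (i + 0.5) / N
    p1 = p_in_1 * (1.0 - x)            # linear drop along x in duct 1
    for j in range(N):
        y = (j + 0.5) / N
        p2 = p_in_2 * (1.0 - y)        # linear drop along y in duct 2
        leak += perm * abs(p1 - p2) * area / N**2

print(f"estimated leakage: {leak*1e3:.2f} g/s "
      f"= {100*leak/m_dot:.1f}% of nominal flow")
```

With these assumed numbers the leakage comes out under the 5% threshold mentioned above, which is exactly the kind of outcome that removes the motivation for a more-complex model.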

In the general realm of model simplification, two generic approaches have great utility. One is the so-called quasi-steady model, which treats a transient process as a succession of instantaneous steady states. The second is the reduction in the number of spatial dimensions in which the variations of the dependent variables are permitted to occur. The omission of a coordinate may be accomplished by mere fiat, but it is more satisfying to integrate over the coordinate to be omitted so that the dependent variables may be regarded as integral averages over that coordinate.
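The quasi-steady idea can be shown in miniature: treat the transient cooling of a small solid as a succession of instantaneous steady convection states (a lumped-capacitance march of the energy balance m c dT/dt = -h A (T - T_inf)). The part dimensions and properties below are illustrative assumptions:

```python
import math

h, A = 25.0, 0.01        # W/m^2-K convective coefficient, m^2 surface area
m, c = 0.1, 900.0        # kg mass, J/kg-K specific heat (small metal part)
T, T_inf = 150.0, 25.0   # C initial and ambient temperatures
dt, t = 1.0, 0.0         # s time step and elapsed time

# March forward, applying the steady convection law at each instant
while T - T_inf > 5.0:   # cool to within 5 K of ambient
    T += -h * A * (T - T_inf) / (m * c) * dt
    t += dt

tau = m * c / (h * A)                      # lumped time constant
t_exact = tau * math.log((150.0 - 25.0) / 5.0)   # closed-form check
print(f"quasi-steady march: {t:.0f} s; analytical solution: {t_exact:.0f} s")
```

The close agreement with the closed-form exponential is the point: the succession-of-steady-states model captures the transient without solving anything more than the steady convection law at each step.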

Scoping

In this scoping step, a model is created in which all processes and inputs are selected to favor the attainment of the desired result. If even under these most-affirmative conditions, the desired values of the result cannot be reached, then it may be concluded that the sought-after design is not achievable.

For example, when searching for an upper bound for heat-transfer performance, the thermal conductivity of structural members would be taken to be the largest among those for the various acceptable materials. Likewise, thermal contact resistances would be neglected. If heat losses were an issue, the lowest reasonable natural-convection heat-transfer coefficients would be selected. For an analysis involving a cylinder in crossflow, the highest heat-transfer coefficients among those provided by the numerous correlation equations would be used. The counterflow configuration would be used to model all participating heat exchangers. Clearly, this approach provides an upper-bound prediction.

Set upper and lower bounds

The upper-bounding procedure of Step 7 may be complemented by a lower-bounding procedure, in which all processes and inputs are selected in order to obtain a lower numerical bound for the desired result. The attainment of both upper and lower bounds is especially useful when the bounds deviate from each other by an amount that is less than the desired accuracy. In that event, the problem may be regarded as solved, and either the upper or lower bound may be used for the final solution.

If the deviation between the bounds is only moderately larger than the needed accuracy, it is appropriate to reconsider and refine the input selections to the upper-bounding and lower-bounding procedures. For example, the properties corresponding to those materials most likely to be used would be selected. If this tightening of the bounds reduces the deviation between the bounds to less than the needed accuracy, then the problem can be considered solved.

If the deviation between the bounds cannot rationally be reduced so that it falls within the desired accuracy, the bounding procedure will have to be regarded as fruitless.
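The bounding logic of Steps 7 and 8 reduces to a simple acceptance test: if the upper- and lower-bound predictions differ by less than the accuracy target, either bound (or their average) serves as the answer. The duty values and tolerance below are hypothetical:

```python
def bounds_resolve(q_upper: float, q_lower: float, rel_tol: float):
    """Return (resolved, estimate); the estimate is meaningful only
    when the relative spread is within the accuracy target."""
    spread = (q_upper - q_lower) / q_upper
    return spread <= rel_tol, 0.5 * (q_upper + q_lower)

# Upper bound: best materials, no contact resistance, counterflow model.
# Lower bound: worst acceptable materials, full contact resistance.
resolved, q_est = bounds_resolve(q_upper=14.8, q_lower=13.9, rel_tol=0.10)
print(f"resolved: {resolved}, estimate: {q_est:.2f} kW")
```

When the test fails even after the bounds are tightened, the procedure is abandoned, as the article notes, and the intuition-guided refinements of the next step come into play.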

Introduce logic-guided intuitive assumptions

The preceding steps were intended to provide insights into the sensitivity of the desired result(s) to the various elements of the model, including the processes selected for modeling and the input data required to implement the solution. Up to now, none of the steps has called for the use of the intuitions (i.e., organized experiences) of the participants. Now, it is appropriate to make use of such intuitive knowledge.

For instance, it is well known that the breakdown of laminar flow in pipes and ducts does not always occur at the Reynolds numbers cited in the literature. In situations where the flow upstream of a duct is free of bends, pulsations, and area changes, and if the general environment is relatively free of disturbances, laminar flow may persist to Reynolds numbers higher than expected. A similar situation may be encountered in external flows.

The issue of thermal contact resistance and its incorporation into a model is very much a matter of judgment. Even when it has been decided to take account of contact resistance, judgments have to be made about the pressure applied to the contacting surfaces. Under normal circumstances, measurements of the contact pressure are not made.

Short of performing multidimensional calculations, the decision to either include or exclude edge or end effects is a matter of judgment. Still another area where experience counts is the estimation of the downstream duration of fluid-flow disturbances.

Clearly, human experience and intuition can play a significant role in model refinement. In general, commercial software lacks intuition-based modalities.

Complete and implement the final model

On the basis of Steps 1-9, it is expected that a final model will be formulated based on knowledge consistent with the desired accuracy of the sought-after results. Undoubtedly, the model will have to be solved numerically. Various software programs are available to do this. However, the spatial dimensionality of the software must be consistent with that adopted in the model, as must the accuracy of the computations with the accuracy requirements of the desired results.

Graphical presentation can make it easier to diagnose the validity of the solution. The plotted quantities should be selected, initially, to enhance the diagnosis function rather than to optimize the presentation of results for users and for ultimate publication. Tabular results are not needed at this time unless they are required to clarify any unusual patterns in the graphical presentation.

Retrospective examination of results

A careful examination of the numerical results is a necessary last step in the modeling procedure. The graphical presentations suggested in Step 10 can be used to examine trends. The model-predicted trends have to be tested against expectations. For this type of test, only the gross trends are relevant, since it is doubtful that expectations can deal with local phenomena.

It is especially useful to have available from the numerical solutions certain limiting cases which, if not exactly congruent with, are close to cases that may be found in either the experimental or analytical literature. Although successful comparisons of the model-based results for limiting cases cannot be regarded as a full verification of the model, they add considerable credence.

This article is based on the Kern Lecture that the author delivered when he received the AIChE Heat Transfer and Energy Conversion Division’s 1998 Donald Q. Kern Award.

E. M. SPARROW is professor in charge of the Applied Heat Transfer Projects Laboratory at the Univ. of Minnesota, Minneapolis, MN (Phone: (612) 625-5502; Fax: (612) 624-1398; E-mail: esparrow@tc.umn.edu). He has been involved in the field of heat transfer for almost five decades, and his career has encompassed industrial practice, university teaching and research, and industrial consultation. He has taught over 10,000 students, guided 75 doctoral and 135 masters theses to completion, consulted for hundreds of companies, and published approximately 600 papers. He has been accorded all the available teaching awards at the Univ. of Minnesota, as well as national teaching awards. He received the Jakob Award for the Science of Heat Transfer and the Kern Award for the Art of Heat Transfer. In 1986, he was elected to the National Academy of Engineering. He received BS and MS degrees from MIT and MS and PhD degrees from Harvard, all in mechanical engineering. He is a fellow of ASME.

Copyright American Institute of Chemical Engineers Feb 2000

