Science and Environment

The power of analogies

STAR SCIENCE - Raymond R. Tan, Ph.D.

“… some of the greatest advances in science have come about because some clever person spotted an analogy between a subject that was already understood, and another still mysterious subject.”  — Richard Dawkins, in “The Blind Watchmaker”

Imagine a hiker lost in the middle of a rainforest, trying to find his way to safety without the benefit of any navigation aids. If he can negotiate his way downhill to the river, he can then simply make his way along the bank until he finds his campsite and his companions; however, the dense forest canopy obstructs his view, and he has no real sense of the overall lay of the land. Instead, he knows the topography only of his immediate vicinity. At a loss for any other options, he decides on a simple and seemingly reasonable strategy: walk downhill at all times. Will our unfortunate traveler make his way out of the forest? Clearly, if the ground was nearly flat, and sloping steadily downhill toward the riverbank (and assuming further that the slope was detectable by the human eye), the “walk downhill” strategy would inevitably lead our hiker to safety. With a little bit of imagination, it should also soon become clear that if the ground was more rugged and undulating, the hiker could just as easily find himself trapped in a pocket of depressed land, still in the middle of the rainforest, and miles away from the river.

This simple thought experiment captures a difficulty common to optimization problems in science and engineering. Replace the hiker and his "walk downhill" rule with an algorithm, and replace the forest with a multidimensional mathematical landscape, and the essence of the problem becomes clear. The quantity to be minimized, which plays the role of elevation, is technically referred to as an objective function: a mathematical formula expressing some relevant measure of performance (for example, monetary cost or environmental impact) as a function of decision variables that can be manipulated. The optimization task is to specify values of the decision variables that give the lowest value of the chosen performance index, known as the global optimum. It is analogous to our hiker searching for the spatial coordinates that give the lowest possible elevation, in this case the water level of the river. His "walk downhill" rule is his search algorithm, which may be referred to as a greedy strategy: it simply goes for immediate gains based on local knowledge. As we have just seen, a greedy search heuristic easily fails when faced with a rugged landscape, and may lead our hiker into a nearby pocket of low-lying land, a local optimum.
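The hiker's predicament can be played out in a few lines of code. Below is a minimal sketch of a greedy "walk downhill" search on a one-dimensional rugged landscape; the landscape formula and step size are illustrative inventions, not anything from the column itself. Started in different places, the same greedy rule halts in different pockets.

```python
import math

def elevation(x):
    """An illustrative rugged 'landscape': a gentle bowl centered at
    x = 0 with superimposed waves that create pockets (local optima)."""
    return 0.1 * x ** 2 + math.sin(3 * x)

def walk_downhill(x, step=0.01):
    """Greedy search: repeatedly take the small step (left or right)
    that lowers elevation; stop when neither direction goes downhill."""
    while True:
        here = elevation(x)
        left, right = elevation(x - step), elevation(x + step)
        if left < here and left <= right:
            x -= step
        elif right < here:
            x += step
        else:
            return x  # trapped: no downhill step exists

print(walk_downhill(4.0))  # wanders into a shallow pocket near x = 3.6
print(walk_downhill(0.0))  # happens to reach the deepest pocket near x = -0.5
```

Started at x = 4.0, the walker halts in a pocket well above the landscape's true low point; only a lucky starting position rescues the greedy rule.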

The ruggedness of a mathematical landscape depends on the nature of the algebraic expressions that relate the decision variables to the objective function. Linear functions are "flat" and hence slope unerringly in one direction; these are easy to navigate. On the other hand, nonlinear functions exhibit all sorts of humps, pockets and other forms of curvature, and thus may have multiple local optima waiting to trap a poorly conceived algorithm. Obviously one would prefer to deal with linear problems, but here lies a dilemma. Optimization involves a sequence of processes, the first of which is abstraction, or translating a real system into a mathematical representation. This step always involves some degree of simplification and approximation, which we will revisit shortly. Once a model has been developed, it is solved for an optimum; and once the optimum is found, the results from the model are interpreted and utilized in the real world, always bearing in mind the approximations that were necessary to build the model in the first place. The problem is that the abstraction and model solution steps are closely linked. Oversimplification yields a model which is easily solved, but the model itself may fail to give an adequately faithful representation of reality, and thus may not be useful at all. At the other extreme, a model may be a high-fidelity mathematical image of the real-world problem, but may prove difficult or even impossible to solve. In this case, it may prove just as useless as the oversimplified one. Somewhere between the two extremes lies the workable compromise between accuracy of representation and computational tractability.
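The difference between the two kinds of terrain is easy to check numerically. The sketch below (both formulas are invented for illustration) scans a fine grid and counts interior points that sit lower than both of their neighbors; the linear slope has none, while the nonlinear landscape has several pockets, only one of which is the deepest.

```python
import math

def linear(x):
    return 2.0 - 0.5 * x  # slopes unerringly in one direction

def rugged(x):
    return 0.1 * x ** 2 + math.sin(3 * x)  # humps and pockets

def count_local_minima(f, lo=-5.0, hi=5.0, n=2001):
    """Sample f on a fine grid and count points lower than both neighbors."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    return sum(1 for i in range(1, n - 1)
               if ys[i] < ys[i - 1] and ys[i] < ys[i + 1])

print(count_local_minima(linear))  # 0: nowhere for a greedy search to get stuck
print(count_local_minima(rugged))  # several separate pockets
```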

In many engineering applications, models that incorporate multifaceted aspects of design, such as cost, shape, weight and so on, are more often nonlinear than linear. How are such models solved, when we have already seen that greedy search algorithms can fail on them? Metaheuristic or stochastic algorithms provide one strategy for finding very good (or nearly optimal) solutions to many difficult optimization problems. Interestingly, many of these have drawn on analogies from naturally occurring physical or biological phenomena. For example, simulated annealing was developed by a group of researchers working in an IBM lab in the early 1980s, primarily to optimize the design of electronic devices. It is based on a mathematical analog of the behavior of atoms as a metallic substance is gradually cooled down (or annealed) after heating in a furnace. Instructing our lost hiker to "walk downhill most of the time, but walk uphill every now and then" would be a rough implementation of simulated annealing for that particular navigation problem. Other algorithms are patterned after natural selection (such as genetic and evolutionary algorithms), the social behavior of animals (such as ant colony and particle swarm optimization) and the adaptive behavior of immune systems (aptly referred to as artificial immune systems). Even now, researchers all over the world are working to improve these existing algorithms in order to solve complex engineering design or planning problems more effectively. And still others are undoubtedly dreaming up techniques as yet unheard of, drawing on the power of imagination, creativity and analogy to find better ways of doing things.
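The "walk downhill most of the time, but walk uphill every now and then" rule can be sketched directly. The toy implementation below is not the IBM team's original formulation; the landscape, the move size and the cooling schedule are all illustrative choices. Uphill moves are accepted with a probability that shrinks as a "temperature" parameter is gradually lowered, letting the walker climb out of shallow pockets early on and settle into a deep one later.

```python
import math
import random

def elevation(x):
    """An illustrative rugged landscape: a gentle bowl with waves."""
    return 0.1 * x ** 2 + math.sin(3 * x)

def simulated_annealing(x, temp=2.0, cooling=0.999, steps=20000, seed=0):
    """Walk downhill most of the time, but occasionally accept an uphill
    move, with probability exp(-delta / temp) that fades as temp cools."""
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)  # propose a nearby move
        delta = elevation(candidate) - elevation(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate  # downhill always accepted; uphill sometimes
        if elevation(x) < elevation(best):
            best = x  # remember the lowest point visited so far
        temp *= cooling  # gradually 'anneal'
    return best

print(simulated_annealing(4.0))
```

Unlike the greedy walker, a run started at x = 4.0 is no longer doomed by its starting pocket; the occasional uphill moves typically carry it into a much deeper one.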

* * *

Raymond R. Tan is a full professor of chemical engineering and university fellow at De La Salle University. His main research interests are process systems engineering (PSE), life cycle assessment (LCA) and pinch analysis. Tan received his BS and MS in chemical engineering and Ph.D. in mechanical engineering from De La Salle University, and is the author of about 50 articles in ISI-indexed journals in the fields of chemical, environmental and energy engineering. He is a member of the editorial board of the journal Clean Technologies and Environmental Policy, and co-editor of the forthcoming book Recent Advances in Sustainable Process Design and Optimization. Tan is also the recipient of multiple awards from the Philippine National Academy of Science and Technology and the National Research Council of the Philippines. E-mail at [email protected].
