ALGORITHM

 

An algorithm is any well-defined procedure for solving a given class of problems. Ideally, when applied to any particular problem in that class, the algorithm would yield a full solution. Nonetheless, it makes sense to speak of algorithms that yield only partial solutions or yield solutions only some of the time. Such algorithms are sometimes called “rules of thumb” or “heuristics.”

Algorithms have been around throughout recorded history. The ancient Hindus, Greeks, Babylonians, and Chinese all had algorithms for doing arithmetic computations. The term “algorithm” itself derives from the Latinized name of the ninth-century mathematician al-Khwārizmī, and its modern spelling was influenced by the Greek word for number (i.e., arithmos).

Algorithms are typically constructed on a case-by-case basis, being adapted to the problem at hand. Nonetheless, the possibility of a universal algorithm that could in principle resolve all problems has been a recurrent theme over the last millennium. The theologian Raymond Lully (c. 1232–c. 1315), in his Ars Magna, proposed to reduce all rational discussion to mechanical manipulations of symbolic notation and combinatorial diagrams. The philosopher Leibniz (1646–1716) argued that Lully’s project was overreaching but had merit when conceived more narrowly.

The idea of a universal algorithm did not take hold, however, until technology had advanced sufficiently to mechanize it. The Cambridge mathematician Charles Babbage (1791–1871) conceived and designed the first machine, his Analytical Engine, that could in principle resolve all well-defined arithmetic problems. Nevertheless, he was unable to build a working prototype. Over a century later another Cambridge mathematician, Alan Turing (1912–54), laid the theoretical foundations for effectively implementing a universal algorithm.

Turing proposed a very simple conceptual device involving a tape with a movable reader that could mark and erase letters on the tape. Turing showed that all algorithms could be mapped onto the tape (as data) and then run by a universal algorithm already inscribed on the tape. This machine, known as a universal Turing machine, became the basis for the modern theory of computation (known as recursion theory) and inspired the modern digital computer.
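A minimal sketch in Python conveys the idea (the rule table, state names, and tape encoding below are illustrative assumptions for exposition, not Turing’s own notation). A finite table of transitions drives a head that reads and writes symbols on a tape; this particular table inverts a binary string and then halts on the first blank cell:

# A minimal, illustrative Turing machine. A finite table of transitions
# drives a head that reads and writes symbols on an unbounded tape.

def run(tape, rules, state="start", blank="_"):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# (state, symbol read) -> (symbol written, head move, next state)
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", rules))                 # prints "01001_"

A universal machine, in Turing’s sense, is a single fixed rule table that can take any such transition table, suitably encoded on its tape as data, and simulate it.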

Turing’s universal algorithm fell short of Lully’s vision of an algorithm that could resolve all problems. It is not so much a universal problem solver as an empty box capable of housing and implementing whatever algorithms are placed into it. Turing thereby introduced into the theory of computing a thoroughly Cartesian distinction between hardware and software: hardware is the mechanical device (i.e., the empty box) that houses and implements the software (i.e., the algorithms) running on it.

Turing himself was fascinated by how the distinction between software and hardware might illuminate immortality and the soul. If personal identity is identified with software, humans would enjoy a kind of immortality: even though hardware can be destroyed, software resides in a realm of mathematical abstraction and is thus immune to destruction.

It is a deep and much disputed question whether the essence of what constitutes the human person is at base computational and therefore an emergent property of algorithms, or whether it fundamentally transcends the capacity of algorithms.


William A. Dembski


ALGORITHMIC COMPLEXITY

 

Algorithmic complexity measures the computational resources needed to solve computational problems. Computational resources are measured in terms of either time (i.e., the number of elementary computational steps required, expressed as a function of the size of the problem input) or space (i.e., the amount of memory required, usually measured in bits or bytes) or some combination of the two. If computational devices had unlimited memory and could perform calculations instantaneously, algorithmic complexity would be a nonissue. All real-world computers, however, have limited memory and perform calculations at fixed rates. The more time and space are required to run an algorithm, the greater its algorithmic complexity.
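A short sketch in Python makes the time dimension concrete (the function names and the step-counting convention are assumptions of this illustration, not a standard library). Both routines search a sorted list of n items, but the number of elementary steps grows in proportion to n for the first and to log₂ n for the second:

# Illustrative sketch: counting elementary steps (list probes) to contrast
# the time complexity of two ways of searching a sorted list of n items.

def linear_search_steps(items, target):
    """Scan left to right: steps grow in proportion to n."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            return True, steps
    return False, steps

def binary_search_steps(items, target):
    """Repeatedly halve the search interval: steps grow like log2(n)."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1
        if items[mid] == target:
            return True, steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

data = list(range(1_000_000))                # a million sorted integers
print(linear_search_steps(data, 999_999))    # (True, 1000000)
print(binary_search_steps(data, 999_999))    # (True, 20)

On a million items the same problem thus costs a million probes by one method but only twenty by the other, which is why algorithmic complexity is reported as a rate of growth rather than a raw count.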


William A. Dembski


BOUNDARY CONDITIONS

 

Physical laws are characterized by their mathematical form, the values of universal constants, and the contingencies to which the laws apply—known as boundary conditions. For instance, Newton’s law of universal gravitation is an inverse square law (its mathematical form), employs the gravitational constant (a universal constant), and applies to certain boundary conditions (like the positions and momenta of the planets at a given time). Boundary conditions, because of their inherent contingency, hamper the physicist’s search for a theory of everything. Also, when the mathematical form of physical laws is nonlinear, as in chaotic systems, slight changes in boundary conditions can lead to enormous changes downstream.
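Newton’s example can be displayed symbolically. In the law of universal gravitation,

\[
F = G\,\frac{m_1 m_2}{r^2},
\]

the inverse square dependence on the separation r is the mathematical form, G ≈ 6.674 × 10⁻¹¹ N m² kg⁻² is the universal constant, and the particular masses, positions, and momenta at a given time are the boundary conditions to which the law is applied.

The sensitivity of nonlinear systems to their boundary conditions can also be made concrete. The following sketch in Python uses the logistic map, a textbook chaotic system chosen here purely for illustration (an assumption of this example, not something the entry discusses): two trajectories whose initial conditions differ by one part in ten billion become macroscopically different within about fifty steps.

# Illustrative sketch: the logistic map x -> 4x(1 - x) is a simple nonlinear
# system in which trajectories launched from nearly identical boundary
# (initial) conditions diverge exponentially.

def trajectory(x0, steps):
    """Iterate the logistic map 'steps' times from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)    # boundary condition perturbed by 1e-10
print(abs(a - b))                  # of order 1 after 50 steps, not 1e-10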


William A. Dembski


DISSIPATIVE STRUCTURES

 

Dissipative structures are nonequilibrium thermodynamic systems that generate order spontaneously by exchanging energy with their external environments. Dissipative structures include physical processes (e.g., whirlpools and Bénard convection cells), chemical reactions (e.g., the Belousov–Zhabotinsky reaction), and biological systems (e.g., cells). Ilya Prigogine, whose research on dissipative structures has been seminal, found that these structures, when far from equilibrium, can transform small-scale irregularities into large-scale patterns. The most intriguing application of Prigogine’s ideas is to the origin of life and to biology generally. It is an open question whether the complexity and specificity inherent in biological systems can be reduced to the thermodynamics of dissipative structures.


William A. Dembski

 

 
TELEOLOGICAL ARGUMENT

 

According to the teleological argument, the order and complexity exhibited by the world are properly attributed to a purposive cause rather than a blind, undirected process. Historically, in looking for evidence of purpose, the argument has focused on the world as a whole, its laws, and structures within the world (notably life). The teleological argument has two recent incarnations. One employs the Anthropic Principle and focuses on the fine-tuning or “just-so” aspects of the physical universe required for human observers. The other constitutes a revival of design-theoretic reasoning in biology and is known under the rubric Intelligent Design.


William A. Dembski