The belief that mathematics is the surest path to the truth about the universe because the latter is at bottom mathematical has been very influential in Western thought. The idea of the universe as a gigantic computer, and the belief that everything including conscious experience is information that is either itself digital or can be digitised without loss, is but a recent manifestation of Pythagoreanism.
The question of the relationship between mathematics and reality has become increasingly urgent over the last century as physics, led by ever more abstruse calculations, makes spectacular advances. These have enabled extraordinarily precise predictions about every aspect of the material world, from the very small (Higgs bosons, etc.) to the very large (gravitational waves, etc.), and, through the technology physics underpins, a massive amplification of our agency. This has persuaded some thinkers that mathematical physics has, or will have, the last word on many, perhaps most, aspects of the world in which we live.
This is disconcerting because of the increasing divergence between the world as physicists describe it and the world as we experience it. The quantum mechanics underpinning electronic communication systems — including the laptop on which I am writing this, and the email by which I will send it to the Editor of Philosophy Now — is entirely at odds with anything that could remotely be described as common sense. For many, the incomparable predictive and technological powers of physics have furnished incontrovertible evidence in support of its claim to grasp some, perhaps the, fundamental truths about the world: that it produces not merely models of reality but reveals reality itself.
And since the power of physics lies in its employment of mathematics, some have found it difficult to resist the Pythagorean conclusion that reality is mathematical. One of the most familiar examples of this 'unreasonable effectiveness' of mathematics is also one of the most striking: the numerical coincidence Isaac Newton noted between the speeds of falling bodies on Earth, the parabolic pathways taken by thrown rocks, and the elliptical orbits of the planets led to a mathematical law with universal application.
An even more spectacular example, cited by Wigner, was the importation of matrix algebra into quantum mechanics.
This has proved extraordinarily powerful in predicting what is going on at the sub-atomic level. Matrix algebra was originally invoked in response to the observation that some rules of computation Werner Heisenberg was using to understand quantum results were formally identical with the rules of computation using matrices that had been established in the nineteenth century. There is clearly more to mathematics in physics than a convenient notational system.
Should we therefore conclude, along with some physicists, metaphysicians, and philosophers of science, that mathematics does not merely offer the most effective ways of modelling the universe, but that it is the most faithful portrait of what the universe really is like? Or, even more radically, accept the Platonic claim that mathematical objects (even 'non-real' items like the square root of -1) are real entities? According to this view, mathematics is not merely the best guide to reality, it is reality.
The easiest way to see what is wrong with this extreme mathematical realism is to examine actual examples of mathematical physics. Consider the most famous of all mathematical accounts of the world: E = mc². As with any law, it describes a mathematical relationship between the values of variables (energy, mass) that in the context of the equation have no properties other than quantity. More generally, and technically, physical laws are about the co-variance of quantitative parameters. The world of physical laws — which enables predictions of quantities — is a world of quantities without qualities.
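The point about "quantities without qualities" can be made concrete: the law relates two numbers and says nothing about what the mass is a mass *of*. A minimal sketch, with an arbitrary example mass:

```python
# E = m c^2 relates pure quantities: the same number comes out whether
# the gram is a gram of gold or a gram of water. The example mass is
# an arbitrary illustrative choice.
C = 299_792_458.0  # speed of light, m/s

def rest_energy(mass_kg):
    """Energy (in joules) equivalent to a rest mass, E = m c^2."""
    return mass_kg * C ** 2

print(rest_energy(0.001))  # one gram of anything: ~9.0e13 J
```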
This is not an accidental oversight. A less accurate model can often be recovered from a more accurate one in an appropriate limit: relativistic mechanics reduces to Newtonian mechanics at speeds much less than the speed of light, and quantum mechanics reduces to classical physics when the quantum numbers are high. For example, the de Broglie wavelength of a tennis ball is insignificantly small, so classical physics is a good approximation in that case.
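The tennis-ball claim is easy to check numerically. A minimal sketch of the de Broglie wavelength λ = h / (m·v), with illustrative masses and speeds:

```python
# Compare the de Broglie wavelength of a tennis ball with that of an
# electron, to show why classical physics suffices for macroscopic
# objects. The masses and speeds below are illustrative choices.
PLANCK_H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Return the de Broglie wavelength lambda = h / (m * v), in metres."""
    return PLANCK_H / (mass_kg * speed_m_s)

tennis_ball = de_broglie_wavelength(0.057, 50.0)   # ~57 g ball at 50 m/s
electron = de_broglie_wavelength(9.109e-31, 1e6)   # electron at 10^6 m/s

print(f"tennis ball: {tennis_ball:.2e} m")  # ~2e-34 m, far below atomic scales
print(f"electron:    {electron:.2e} m")     # ~7e-10 m, atomic scale
```

The ball's wavelength is some twenty orders of magnitude smaller than an atomic nucleus, which is why its quantum behaviour is unobservable.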
It is common to use idealized models in physics to simplify things.
Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics serve as a basis for making mathematical models of real situations. Many real situations are very complex and thus are modelled approximately on a computer: a model that is computationally feasible is built from the basic laws, or from approximate models derived from the basic laws.
In engineering, physics models are often made using mathematical methods such as finite element analysis. Different mathematical models use different geometries, which are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use non-Euclidean geometries. Simple models such as maps and diagrams have been used since prehistoric times.
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations. A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables.
Variables may be of many types: real or integer numbers, boolean values, or strings, for example. The actual model is the set of functions that describe the relations between the different variables.
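The "set of variables plus set of equations" view can be sketched with a small example: a damped harmonic oscillator whose state variables (x, v) are related by equations of motion, integrated here with explicit Euler steps. The spring constant, damping coefficient, and step size are illustrative choices.

```python
# State variables: displacement x and velocity v.
# Equations: m*a = -k*x - c*v, advanced by explicit Euler steps.
# Parameter values are illustrative, not taken from any real system.

def step(x, v, k=4.0, c=0.5, m=1.0, dt=0.01):
    """One Euler update of the model's equations."""
    a = (-k * x - c * v) / m   # the equation relating the variables
    return x + v * dt, v + a * dt

x, v = 1.0, 0.0  # initial state: displaced one unit, at rest
for _ in range(1000):
    x, v = step(x, v)

print(x, v)  # after 10 simulated seconds the oscillation has largely decayed
```

The "model" here is exactly the function `step`: everything else is just a choice of initial state and parameter values.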
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables; exogenous variables are sometimes known as parameters or constants.
The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables.
Furthermore, the output variables are dependent on the state of the system (represented by the state variables). Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user.
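A minimal sketch of these pieces together: a decision variable, an objective function, and a constraint. The prices, costs, and capacity below are made-up illustrative numbers.

```python
# Choose a production quantity q (decision variable) to maximise
# profit (the objective function) subject to a capacity limit
# (the constraint). All numbers are illustrative assumptions.

PRICE, UNIT_COST, FIXED_COST, CAPACITY = 12.0, 5.0, 40.0, 100

def profit(q):
    """Objective function: the 'index of performance' for this user."""
    return PRICE * q - (UNIT_COST * q + FIXED_COST)

# Exhaustive search over the feasible set 0..CAPACITY.
best_q = max(range(CAPACITY + 1), key=profit)
print(best_q, profit(best_q))  # -> 100 660.0
```

Because profit is increasing in q here, the optimum sits at the constraint boundary; a different objective (say, with a quadratic cost term) would move it inside the feasible set.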
Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved computationally as the number increases. For example, economists often apply linear algebra when using input-output models.
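As a sketch of the input-output case: in a Leontief model with technical-coefficient matrix A (where A[i][j] is the amount of sector-i output needed to produce one unit of sector-j output), the total output x satisfies x = Ax + d for final demand d, i.e. (I − A)x = d. The coefficients and demands below are made up for illustration; for a 2×2 system the inverse can be written by hand.

```python
# Two-sector Leontief input-output model: solve (I - A) x = d.
# All coefficients and demand figures are illustrative assumptions.

A = [[0.2, 0.3],
     [0.1, 0.4]]
d = [100.0, 50.0]  # final demand for each sector's output

# Build M = I - A.
M = [[1 - A[0][0], -A[0][1]],
     [-A[1][0], 1 - A[1][1]]]

# Closed-form 2x2 solve via Cramer's rule.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
x = [(M[1][1] * d[0] - M[0][1] * d[1]) / det,
     (M[0][0] * d[1] - M[1][0] * d[0]) / det]

print(x)  # total output each sector must produce to meet final demand
```

For realistic economies with many sectors, the same computation is a single call to a linear solver rather than a hand-written inverse.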
Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier: if you have used the information correctly, the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function. But we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model. In black-box models one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions.
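The two unknown parameters of the drug model C(t) = C0·exp(−k·t) can be estimated from measurements. A minimal sketch using a log-linear least-squares fit; the concentration data below are synthetic, for illustration only:

```python
# Estimate the decay rate k and initial amount C0 from noisy
# measurements of C(t) = C0 * exp(-k * t). Taking logs gives
# ln C = ln C0 - k*t, an ordinary linear regression of ln C on t.
# The data points are synthetic illustrative values.
import math

times = [0.0, 1.0, 2.0, 3.0, 4.0]   # hours after the dose
conc = [10.0, 6.1, 3.6, 2.2, 1.4]   # measured concentration

ys = [math.log(c) for c in conc]
n = len(times)
t_mean = sum(times) / n
y_mean = sum(ys) / n
slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) \
        / sum((t - t_mean) ** 2 for t in times)

k = -slope                               # estimated decay rate, per hour
c0 = math.exp(y_mean - slope * t_mean)   # estimated initial amount

print(f"k = {k:.3f} per hour, C0 = {c0:.2f}")
```

Knowing the functional form (the white-box part) reduces the estimation problem to two numbers; a black-box approach would have to discover the exponential shape as well.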
Using a priori information we could end up, for example, with a set of functions that could probably describe the system adequately. If there is no a priori information, we would try to use functions as general as possible, to cover all different models. An often-used approach for black-box models is neural networks, which usually do not make assumptions about incoming data.
Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification [3], can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise.
The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque. Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis. An example of when such an approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads.
After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
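One standard way to encode such a subjective prior is a Beta distribution over the heads probability, which updates in closed form after each observed flip. A minimal sketch, where the prior parameters (here favouring heads, as if the bend were judged to do so) are illustrative assumptions:

```python
# Beta-Binomial update for the bent coin. A Beta(alpha, beta) prior
# encodes the experimenter's subjective belief about P(heads); after
# observing flips, the posterior is Beta(alpha + heads, beta + tails).
# The prior Beta(4, 2) below is an illustrative subjective choice.

def posterior_predictive_heads(alpha, beta, heads_seen, tails_seen):
    """Probability the next flip is heads, i.e. the posterior mean."""
    a = alpha + heads_seen
    b = beta + tails_seen
    return a / (a + b)

# One flip observed, and it came up heads.
p = posterior_predictive_heads(4.0, 2.0, heads_seen=1, tails_seen=0)
print(f"P(next flip heads) = {p:.3f}")  # (4+1)/(4+2+1) = 5/7 = 0.714
```

A single flip moves the estimate only slightly away from the prior mean (4/6 ≈ 0.667), which is exactly the intended behaviour: with so little data, the subjective information dominates.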
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size.
Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximate model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Any model which is not a pure white box contains some parameters that can be used to fit the model to the system it is intended to describe.