I.   Prologue/The Nature of the Question

It would be silly for me to do any type of critical analysis of data science for the simple reason that at this point in my life I cannot do the math. “To those who do not know mathematics it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature … If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in” (Feynman, 1965, p. 58). It is sad that mathematicians involved in the defeasible reasoning of areas such as statistics and data science have lost a sense of beauty for the unique language game they play. Among mathematicians seeking indefeasible reasoning, estimates are that as many as two-thirds are Platonists who believe mathematical entities exist in a perfect and timeless realm discovered through their mathematical perceptions (Holt, 2012, p. 171). Of the remaining one-third, some, such as the logician Bertrand Russell, would argue, “I fear that, to a mind of sufficient intellectual power, the whole of mathematics would appear trivial, as trivial as the statement that a four-footed animal is an animal” (Holt, 2012, p. 183). Even if this latter statement describes the true nature of mathematics as simply nominal, it would be a simple argument from this premise that such a “mind of sufficient intellectual power” must be the God of the ontological proof, which would still make mathematics an impressive, unique, and very beautiful language game.

On the other hand, data scientists seem to have no problem pontificating and generating a vast collection of descriptive, interpretative, and normative statements based upon their mathematics. I have read hundreds of opening pages telling me there are no objective facts nor raw data; that all knowledge and truth are social constructs; and that all interpretations and premises are subject to unavoidable structural biases. The pages then go on to describe the truth of very specific facts and data, varying from how accounting created modern capitalism to how computer models of weather will accurately forecast the next century, and then on to how all of this exists in an epistemic structure that contains racism, sexism, and a Nietzschean will-to-power as its driving force. At the end, the readings usually demand that normative language games be created to control the evil they see now and foresee coming, and thus assume an epistemic knowledge of norms generally. For something that is supposed to be skeptical and relativistic, such post-modernist writing seems not only to “know” a lot but also to assume this knowledge must involve “truth”. Regardless, when data scientists get to the point of advocating and demanding normative structures for what they are doing, I, and anyone else regardless of our mathematics skills, am entitled to get involved. Normative structures, be they ethics or the law, always eventually lead to their violent enforcement upon others who disagree with them, and end up with an innocent soul such as Aaron Swartz facing a choice between 30 to 40 years of jail time or suicide simply for downloading information off the internet; except that, given the quantity of individuals affected by data science, the situation may involve millions of innocents.

What is knowledge in data science? More specifically, can its knowledge do without the presently defined scientific method, with its foundational requirement of a hypothesis or theory, so as instead to “view data mathematically first and establish a context for it later … correlation supersedes causation” (Anderson, 2008)? Assuming Ockham’s Razor is a necessary attribute for anything to be a science, including data science, and given the Razor’s historical development and use in modern holistic scientific knowledge as its most basic problem-solving technique for its methodology and perhaps its ontology, I will argue that not only does correlation rationally supersede causation but that it normatively ought to do so. First, statistical and probabilistic correlation is not without theory; there is a great deal of theory in correlation specifically created to solve epistemic problems in science that other theory such as causation cannot solve, and much of it has as much, if not more, predictive value than causation-based theory. Its only problem is that it is not as aesthetically pleasing as any theory involving causation. Second, humanity has an inherent need for beautiful explanatory theory, regardless of its scientific usefulness, to serve as a basis for normative theory useful for forcing others how to live; thus, normatively, if data science does in fact, by social construct or ontology, value scientific usefulness as a good for society more than normative usefulness for a powerful few, as it claims it does, it ought to prefer correlation to causation. Although an in-depth understanding of the language game of data science requires specialized technical knowledge, the fundamental meanings of its knowledge language are fairly accessible, because the search for knowledge and truth, as the instrumentalist philosopher John Dewey argued, is ultimately a search for happiness and thus is, or ought to be, accessible to all.

II.   Pre-Modern Historical Perspective

The present battles over the nature of knowledge and truth go back to Ancient Greek philosophy and thus stand no chance of resolution in this short paper, if ever. What those battles undisputedly did eventually give Western Civilization, primarily through the surviving writings of Aristotle, his contemporaries, and their students, is the concept of causation and deductive reasoning used together as a basic form of scientific method. These battles continued through the Medieval Scholastics, who, through words attributed to William of Ockham (even though the principle can also be traced back to the Greeks), gave us Ockham’s Razor, indirectly referenced most recently in published articles by Pedro Domingos when he writes about Isaac Newton’s rules of induction contained in his Principia Mathematica. The first two of Newton’s rules of induction are: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes” (Hawking, 2002, p. 731). The power and omnipresence of this simple Razor should not be underestimated, though it usually is underestimated when it is not ignored entirely; it may not be an ontological attribute or logical necessity of the language game of science, but to my knowledge it is universally accepted as a preferred attribute of scientific methodology, even in statistics and probability. “Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity” (Box, 1976, p. 792).

All things considered, this is true for one simple reason: other than for aesthetics or for normative ulterior motives of power, there is no rational reason to complicate the already complicated unnecessarily. For any sense experience, there is always an uncountable number of possible explanations. Explanations such as F = ma are not the only possible means of describing an apple’s fall from a tree, only the simplest. The more complex the explanation, the more difficult it is to prove or disprove; simple theories are therefore preferred to complex ones because they are more testable. Statistically and in terms of probability, every added assumption introduces a possibility for error and thus raises the probability that a model is wrong. If the predictive power is the same, simplicity is preferred because it means less statistical error.
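The statistical point can be made concrete with a small sketch (my own illustration, using synthetic data rather than anything from the cited sources): the same noisy observations of a simple “law” are fitted with a simple model and with a heavily parameterized one. The overparameterized model explains the observed data better, yet predicts unseen experience worse, which is the Razor expressed as statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten observations generated by a simple "law": y = 2x, plus noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=10)

# Unseen points between the observations, drawn from the same law.
x_test = np.linspace(0.05, 0.95, 9)
y_test = 2.0 * x_test

def rms_errors(degree):
    """Fit a polynomial of the given degree; return (train, test) RMS error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train, test

simple_train, simple_test = rms_errors(1)    # the "Razor" model: a line
complex_train, complex_test = rms_errors(9)  # one parameter per data point

# The complex model "explains" the observations better...
assert complex_train < simple_train
# ...but the simple model predicts unseen experience better.
assert simple_test < complex_test
```

The degree-9 polynomial passes through every noisy observation, so its added parameters end up modeling the noise itself; the straight line, by ignoring the noise, keeps its predictive power.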

As exhibited by writers such as Foucault and by pseudo-sciences such as psychology, the Razor does not apply to language games in much of the humanities, in which not only is using as many words as possible to write about known meanings sometimes preferred, but using as many words as possible with no meaning is also sometimes preferred. “All art is propaganda. It is universally and inescapably propaganda; sometimes unconsciously, but often deliberately, propaganda” (Sinclair, 1925, p. 9). In language games such as psychology (except for behaviorism, which may be a science) dealing with words such as “mind” and “consciousness” that cannot even be defined nominally, mathematically, or otherwise, and can never be tested in parameter-controlled repeatable experiments nor given any predictive value, any failing explanation can always be modified by adding additional verbiage, parameters, or explanations so as to make it successful or at least give it aesthetic appeal.

The problem with the Razor, as its critics from Ockham’s fellow Scholastics on through Galileo, Kant, and Karl Menger point out, is the meaning of the terms “true and sufficient … as far as possible”. How does one know when simplicity is negating knowledge? This problem does not make the Razor unsound or invalid, but it means the Razor acts simply as a guide for its own use, as does an opposing adage whose source is lost to history, known as Crabtree’s Bludgeon: “[n]o set of mutually inconsistent observations can exist for which some human intellect cannot conceive a coherent explanation, however complicated” (author unknown).

III.   Modern Perspective

The original phrasing of the Razor by the Scholastics was simpler and, usually in Latin, went something to the effect of “all things being equal, the simplest solution tends to be the right one”. As exhibited by Newton’s rules of induction, those who built on the Scholastics, and the modern era of philosophy started by Descartes with its emphasis on epistemology, added the words “causes”, “effects”, and “induction” to the methodology. These added words and their meanings were necessary at the time in order to bring Aristotle’s scientific method up to date, but they also caused problems that the Scholastics had worked to avoid, eventually leading to the modern skepticism not only of scientific knowledge but of the entire concept of knowledge, especially knowledge as truth. The epistemic problems of “cause and effect” and “induction” are closely related and were first argued in our modern age by David Hume in his A Treatise of Human Nature and An Enquiry Concerning Human Understanding. His critique is just as sound and valid today as it was in 1739.

Hume argued that sense experience of any two events at most only gives us a logical conjunction, A ∧ B. There is no way logically to conclude from this conjunction any necessary relationship of cause and effect; these words are at best simply a generalization of past experience that at any moment, with new experience or new generalizations, could be, and eventually often is, disjoined. This logical problem is exacerbated when contemplating induction. Induction reasons as follows: all observed A’s have also been B’s; C is an A; therefore C is a B. This is invalid reasoning. It is not a necessary truth that “instances of which we have had no experience must resemble those of which we have had experience” (Hume, 1740, Book I, Part III, Section VI, p. 89). Induction gives no truth but of the negative kind, as Richard Feynman said: “[w]e never are definitely right, we can only be sure we are wrong” (Feynman, 1965, p. 158).

Modern epistemology has made the induction problem worse through the Ravens Paradox. The Ravens Paradox, as described by the logician Carl Gustav Hempel, goes as follows. Assume the following hypothesis based on sense experience: (1) all ravens are black, or P → Q. Logically, this statement is equivalent to: (2) if something is not black, then it is not a raven, or ∼Q → ∼P. In all cases in which (2) is true, (1) is also true; likewise, in all circumstances in which (2) is false, (1) is also false. Using induction, science, and anyone with basic reasoning skills, given a general statement such as “all ravens are black”, would have no problem using the sense experience of a black raven to support the hypothesis (1) that all ravens are black. However, this same reasoning applied to the logically equivalent hypothesis (2) should allow the sense experience of my blue shirt, something that is not black and not a raven, to count as proof that all ravens are black. This conclusion is a paradox because it logically states we have gained knowledge about ravens by looking at my blue shirt.
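The logical equivalence at the heart of the paradox is mechanical and can be checked exhaustively. The sketch below (my own illustration) verifies by truth table that P → Q and ∼Q → ∼P agree in every case, and that a blue shirt, being neither black nor a raven, "confirms" both forms of the hypothesis.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# P: "x is a raven"; Q: "x is black".
for p, q in product([True, False], repeat=2):
    statement_1 = implies(p, q)           # (1) all ravens are black
    statement_2 = implies(not q, not p)   # (2) all non-black things are non-ravens
    assert statement_1 == statement_2     # equivalent in every possible case

# My blue shirt: not a raven (P false) and not black (Q false).
p, q = False, False
assert implies(p, q)             # the shirt is consistent with (1)
assert implies(not q, not p)     # and is a confirming instance of (2)
```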

This conclusion remains a paradox in scientific reasoning even if science were to answer that (2) is based on sound evidence and is a valid proof of (1); after all, if we were in fact able to observe enough of reality so as to experience everything that is non-black or non-raven, we could generalize that the remaining unobserved experience must consist of black ravens. Unfortunately, this solution only makes matters worse. The logical forms of (1) and (2) can also be used as logically equivalent statements for “all ravens are white”, and thus scientific reasoning can in theory use my blue shirt, a non-white non-raven, as proof that all ravens are white. My blue shirt is rationally valid proof by induction that all ravens are both black and white. It appears that theory can only give us “truth” of the negative kind: what is false. Falsification is the only certainty, as Karl Popper and John Stuart Mill pointed out: “No amount of observations of white swans can allow the conclusion that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion” (Blaug, 1992, p. 13).

It is because of paradoxes in rationality such as this that we have Crabtree’s Bludgeon, and why statistics and data science must deal with spurious correlations such as those exhibited in the attached Appendix A (Vigen, 2013). With the emergence of 20th Century technological society and its exponential production of facts (objective, constructed, raw, not raw, or whatever you want to call them) and the resulting big data for human society generally and science specifically, it seems that nominal words, aesthetically pleasing or not, meaning knowledge and truth can be mass-produced, even by ignorance, through paradoxes of which the Ravens Paradox is only one. “But if thought corrupts language, language can also corrupt thought” (Orwell, 1968, p. 137).
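This mass production of apparent knowledge is easy to reproduce. In the sketch below (an illustration of my own, using randomly generated data), a thousand candidate series are constructed to have no relationship whatsoever to a target series, yet searching among them still turns up a "strong" correlation by chance alone; big data multiplies the candidates and so multiplies the spurious discoveries.

```python
import numpy as np

rng = np.random.default_rng(42)

# One "target" series and a thousand candidate series, all generated
# independently: by construction, no real relationships exist here.
target = rng.normal(size=20)
candidates = rng.normal(size=(1000, 20))

# Correlate the target with every candidate and keep the best match.
correlations = [np.corrcoef(target, c)[0, 1] for c in candidates]
best = max(correlations, key=abs)

# With enough candidates, a "strong" correlation emerges from pure noise.
assert abs(best) > 0.5
print(f"best spurious correlation found: {best:.2f}")
```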

Thus at the start of the 20th Century, even before the computer and any inkling of data science, as scientific methodology had already spread into convoluted physical investigations involving large quantities of uncontrollable parameters and into what is now called the social sciences, in which it was and still is impossible to test hypotheses or to control all relevant parameters even when they can be tested, there began calls by scientists, logicians, mathematicians, and others for the replacement of causation by correlation. “In 1911, Karl Pearson, inventor of the correlation coefficient, dismissed causation as ‘another fetish amidst the inscrutable arcana of even modern science’” (Barrowman, 2014, p. 26). Bertrand Russell, whose work explored the logical foundations of mathematics, also concluded from that work that “‘[t]he law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm’” (Frisch, 2015). As early as 1909, at the beginning of his career, Albert Einstein was arguing for probabilistic theories to supersede causal explanations in physics, an opinion that he may have rejected mid-career but to which he arguably returned later in his career (Frisch, 2015; Lehmkuhl, 2014). “Neo-Russellians” in the 21st Century continue to demand a rejection of causation as “wooliness” (Frisch, 2015).

IV.   The Present Perspective on Knowledge and Truth

Thus, to get to the historical point: in the modern world we have reached a fairly fragmented concept of truth but a holistic definition of knowledge. In response to Pontius Pilate’s question “Quid est veritas?”, the modern analytic world answers that each sentence supplies its own conditions for being true: the statement “snow is white” is true if and only if snow is white. Such countless truths seem valueless in isolation, but when conjoined in humanity’s struggle to survive its own Heart of Darkness and a universe trying to kill us, they make up the intertwined threads of a holistic “fabric” of knowledge in which any and all truths are dependent on the truth of the others:

      The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. Truth values have to be redistributed over some of our statements. Re-evaluation of some statements entails re-evaluation of others, because of their logical interconnections; the logical laws being in turn simply certain further statements of the system, certain further elements of the field. Having re-evaluated one statement we must re-evaluate some others, whether they be statements logically connected with the first or whether they be the statements of logical connections themselves. But the total field is so undetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to re-evaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole.

      If this view is right, it is misleading to speak of the empirical content of an individual statement, especially if it be a statement at all remote from the experiential periphery of the field. Furthermore it becomes folly to seek a boundary between synthetic statements, which hold contingently on experience, and analytic statements which hold come what may. Any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system. Even a statement very close to the periphery can be held true in the face of recalcitrant experience by pleading hallucination or by amending certain statements of the kind called logical laws. Conversely, by the same token, no statement is immune to revision. Revision even of the logical law of the excluded middle has been proposed as a means of simplifying quantum mechanics; and what difference is there in principle between such a shift and the shift whereby Kepler superseded Ptolemy, or Einstein Newton, or Darwin Aristotle? (Quine, 1951, pp. 35-36)

Such a holistic concept of knowledge, naturalized to science, leaves philosophers arguing over whether this reduces truth and knowledge to mere pragmatic or instrumental value, or indeed to any value at all; but how has the language game of science responded to all these paradoxes and the resulting skepticism of the modern world? Certainly not, at least for the moment, by giving up on truth and knowledge. As Quine’s student Catherine Z. Elgin has stated, science most certainly did not “make the world safe for postmodernist claptrap” but used its creativity to employ not only truth but also falsehood to achieve scientific knowledge, through such language games in its methodology as “curve smoothing”, “ceteris paribus claims”, “idealizations”, “stylized facts”, “a fortiori arguments from limiting cases”, and even models made up of “fiction” (e.g., the ideal gas law) serving the epistemic meaning and goals of scientific truth (Elgin, 2004).

What are the meaning and goal of scientific truth? All science is empirical, and its meaning is based on the reality that exists outside our consciousness, a reality we can experience but about which we need to create social constructs or subjective wordgames in order to share the experience with each other. Thus, despite post-modernist concentration on the wordgames instead of on the objective reality of which we subjectively speak in order to avoid being silent about it, a reality that does not go away simply because they decide to ignore it, the meaning of scientific words, including truth, will always be the same:

    As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience. Physical objects are conceptually imported into the situation as convenient intermediaries — not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer. Let me interject that for my part I do, qua lay physicist, believe in physical objects and not in Homer’s gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conception only as cultural posits. The myth of physical objects is epistemologically superior to most in that it has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience. (Quine, 1951, pp. 35-36)

One of the modern world’s most creative techniques for dealing with the paradoxes of reason and the convoluted complexity of the universe, in order to continue seeking predictive knowledge, is the language of statistical and probabilistic analysis. The history of statistics in Western Civilization begins with the historian Thucydides, in his History of the Peloponnesian War, describing how the Athenians used a sample portion of a fortification to estimate the strength of the whole, and then explodes exponentially with the modern world. In the last couple of centuries, statistics and probability have developed an enormous amount of theory, varying from the relatively simple mathematics necessary to calculate life tables for governments and insurance companies, to Bayesian statistics and its battles with Frequentists, and on to their progeny, data science, successfully using applied mathematics to create algorithms that find correlations in massive amounts of data without any hypothesis or explanation linking the data. For most of the modern era, the world has had two models of scientific explanation: 1) the deductive-nomological, using mathematical general laws and principles to give deductive universal relationships that usually but not always give a cause and effect relationship: “the laws and theoretical principles involved are of strictly universal form: they assert that in all cases in which certain specified conditions are realized an occurrence of such and such a kind will result” (Hempel, 1965, p. 49); and 2) the probabilistic-statistical, using mathematical general laws and principles to give inductive specific relationships, which “explains a given phenomenon by showing that, in view of certain particular events and certain statistical laws, its occurrence was to be expected with high logical, or inductive, probability” (Hempel, 1965, p. 51).
The first suffers from both the “relic” of cause and effect and the paradoxes of induction; the second need not worry about cause and effect but still suffers from the paradoxes of induction. In the post-World War II era, scientific methodology proceeded into new areas and problems, with new sciences such as genetics and both quantum and relativistic physics establishing that the two models not only overlapped but often failed to explain scientific success. Thus the paradoxes of induction and of cause and effect, and the skepticism they paradoxically “caused”, once again took over, leading the philosopher of science Paul Feyerabend to assert: “[s]cience is essentially an anarchic enterprise: theoretical anarchism is more humanitarian and more likely to encourage progress than its law-and-order alternatives” (2010, p. 1).

V.   Correlation Supersedes Causation

Now some in the progeny of statistics and probability called data science enter the epistemic fight by claiming a further model of scientific explanation: no explanation. “Correlation is enough”. Sounds great! No cause and effect fiction to make the reasoning dishonest and paradoxical. No induction paradoxes we must dishonestly ignore in order to pretend we have a universal explanation. Just data, algorithms, and resulting mathematical correlations that are useful for predicting the future until they are wrong, just like all other models of science. As described by Chris Anderson of Wired, it is working as well as, and even better than, other scientific reasoning in areas varying from advertising and biology to quantum physics and zoology. Correlation without explanation is giving predictive value to areas whose immense data is not susceptible to the limitations of the human mind’s ability to satisfy the Razor; correlation is now the satisfaction of the Razor. It is a model of scientific knowledge that is in denial as to its being a model, while its critics deny that it is a model.

With no cause and effect nor induction paradoxes, correlation is an honest and logical satisfaction of the most basic attribute of the wordgame that is science: predictive meaning and value. So what is the problem? “Data without theory is lame” is the battle cry; “… this approach can make you look really smart for a time. Then things change, for no apparent reason, because the rule contains no mechanism and no explanation …” (Storage, 2016). That’s it? Correlation that works to solve problems and give predictive value without explanation is lame because it does not explain? Kinda missing the point. All present science, mainly through the fiction of cause and effect and the illogic of induction, only provides explanations for its predictions until the predictions stop working; eventually, historically, they all stop working as the sense experience they try to explain gets more complicated. Correlation simply achieves the same predictive value without the fiction of explanation and the paradoxes of induction, plus it can deal with the vast amount of facts or data now produced by the modern world. The old-school scientific models cannot deal directly with this complexity, and neither can the human mind’s ability to synthesize that vast amount of facts and data into theory or explanation.

According to post-modernism and its prophet Foucault, the situation with explanation, or without it, is much worse: science and its explanations are not simply honest attempts to describe a reality existing outside consciousness but social constructs created by the most powerful at any given time, deciding or defining “truth” in order to control the powerless. The problem of power is the same with or without explanation, so pursuant to the Razor there should be no issue with using correlation instead of causation as the simpler option.

The problem with correlation without theory is not with the methodology, epistemology, or ontology; it is aesthetic and normative. Humans, including scientists, want explanation and the power it gives: not only the power over nature but the power over others it gives when explanations expand from being epistemic norms to norms for ultimate value. Scientists and Nietzschean artists alike want to proclaim “God is dead” and write about the “God delusion” so that they can become demigods, based on their delusions of explanation and reasoning, as a foundation for what others ought to be.

For example, contemplate genetics, with its present scientific value of predicting, and thus creating the possibility of preventing or curing, illness and disease, a value whose reality and potential increases every day toward a better material future. As long as it has statistical analysis and an assumption of randomness to make its correlations between genes and environment, it does not matter why it works, or whether DNA changes the environment or the environment changes DNA. Geneticists can use the most popular explanation for the predictive value of their science, evolution; others, such as the geneticists Francis S. Collins and Francisco J. Ayala, who are religious Christians, can explain it by BioLogos; others who are strict rationalists can explain it by intelligent design; and now even technocrats have entered the dog fight by asserting the universe is a giant computer simulation by an intelligence greater than our own (Collins, 2006; Ball, 2017). Genetics does not care about any of these explanations in order to solve the problems presented to it and to make predictions; only the advocates of these assertions and explanations care. There is a modern need to make science a religion, something it is not nor was ever intended to be:

   Let them have their belief, if it gives them joy. Let them also give talks about that. ‘We touch the infinite!’ And some people say … ‘Ya ya, he says he touches the infinite.’ And some people say ‘Ya ya! He says he touches the infinite!’ But to tell the little children in school, ‘Now that is what the truth is,’ that is going much too far.  (Horgan quoting Feyerabend, 2016)

It was never in the meaning of scientific words or for the language game of science to be a religion or to deal with the ultimate values or meaning of life. “We feel that even if all possible scientific questions be answered, the problems of life have still not been touched at all” (Wittgenstein, 1998 edition, at Proposition 6.52).

Granted, correlation gives us the spurious correlations of Appendix A and many more. My favorite is the recently discovered inverse correlation over the last two hundred years between global average temperature and the number of pirates (Storage, 2016). In order to stop global warming, instead of sending SEAL teams to kill pirates, we should be hoisting the Jolly Roger and sailing the high seas as pirates! How is this any different from physicists telling us we and the universe are made up of unobservable atomic particles that are not really particles but waves of mathematical probabilities and relationships between mathematical calculations of four mathematical forces that exist entirely as numbers until we observe them, at which point they become a particle number that does not exist at any particular point in space because of the Uncertainty Principle, as long as the force of gravity is negligible (Tegmark, 2006; Tegmark, 2014)? We believe physicists when they tell us this stuff because they can use these explanations to make atomic bombs, even though most of us have no clue as to what they are talking about. The explanations work. If correlations work, no matter how ridiculous they may appear at first (for example, if increasing the population of pirates in the world actually does reduce global warming), then they work and are scientifically equivalent to any explanation that works, regardless of any lack of cause and effect or other explanation.
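The pirate example can be made concrete. Using made-up illustrative figures (not real climate or piracy data), two series that merely trend in opposite directions over time produce a strong negative correlation coefficient with no causal link anywhere in sight:

```python
import numpy as np

# Hypothetical, illustrative figures only: a declining pirate population
# and a rising global mean temperature over the same six time points.
pirates = np.array([45000, 20000, 15000, 5000, 400, 17])       # falling over time
temperature = np.array([14.2, 14.4, 14.5, 14.7, 14.8, 15.0])   # rising over time

# Pearson correlation coefficient between the two series.
r = np.corrcoef(pirates, temperature)[0, 1]

# Opposite trends correlate strongly even with no causal connection at all.
assert r < -0.7
print(f"pirates vs. temperature: r = {r:.2f}")
```

Any two monotone trends over the same period will show this pattern, which is exactly why time-series correlations of the Appendix A variety are so easy to mass-produce.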

Furthermore, anyone can argue with the correlations expressed in Appendix A and disagree with them normatively, regardless of their predictive value. Anyone can make intelligent normative decisions as to whether they should be acted upon (assuming the algorithms and source data are not kept secret, which is an issue beyond this essay). One can argue with numbers. Numbers, unlike all other language games, do not have normative meaning until we give them such meaning. How does one argue with a religious belief in an intelligent or personal God; with a secular religious belief in evolution asserting the survival of the fittest, with the fittest defined as those who survive; or with artificial intelligence coders believing the universe is one big computer simulation? People believe these things because they want to believe them, not because numbers on a graph assert them. With numbers, we can at least agree to argue about the numbers and then go on to normative theories. The language of numbers does not have normative meaning until translated into words; all other languages have normative meaning existentially.

VI.   Correlation Helps the Conceptualization of Normative Language

Dealing with problems of subjectivism, skepticism, and relativism is new to scientific knowledge but has been a problem for normative language games such as morality, ethics, and law for millennia. David Hume stated this problem succinctly:

      In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason. (Hume, 1739, §1, pp. 469-70.)

There is no logical way to proceed from what “is” to what necessarily “ought” to be defined as morality. Thus all of ethics and morality are, by logical necessity, subjective and made up of relativistic concepts. All the theories moralists create to avoid Hume’s Guillotine separating what “is” from what “ought” to be have failed and will fail because they rely on a necessary rational connection that does not and cannot exist.

Correlation has the potential to resolve Hume’s Guillotine. There is no rational crevice separating a correlation between what “is” and what “ought” to be. Saying something is moral because it correlates with results we want would remove the “moral crumple zone” that eventually leads to the imprisonment of innocents such as Aaron Swartz, because the law would have no moral explanation to justify such imprisonment.

VII.   Conclusion: The Real Dispute is Aesthetic and Normative

Modern science and the intelligentsia do not want to lose explanation to correlation because of the loss of power this would entail. Whether they are feminists, racists, sexists, anti-racists, or whatever, everyone wants the power of scientific explanation to be the new religion replacing the old-school Western religions of faith. Correlation without theory at best gives only probability; it would make them equal to old-school religion, with Pascal’s Wager as its only proof, and give credence to Paul Feyerabend’s argument for a separation between state and science equal to the separation between state and religion — or the converse, an equal sharing of power among science, religion, and the state so that they fight among themselves for power and leave the rest of us alone. It was the equal but separate powers of church and state counterbalancing each other that gave us modern Western Civilization and the good of modern Technological Society; perhaps we need these three present powers-that-be to fight among themselves as counterbalances to each other to continue historical progress into the next phase, whatever that may be.


References

Anderson, C. (2008). The end of theory: the data deluge makes the scientific method obsolete. Wired, June. Retrieved from

Ball, P. (2017) We might live in a computer program but it may not matter. BBC Earth. Retrieved from:

Barrowman, N. (2014). Correlation, causation, and confusion. The New Atlantis: A Journal of Technology and Society, Summer/Fall 2014. Retrieved from

Blaug, M. (1992). The methodology of economics: or, how economists explain. London: Cambridge University Press.

Box, G.E.P. (1976). Science and statistics. Journal of the American Statistical Association, Vol. 71 (No. 356), pp. 791-799.

Collins, F. (2006). The language of God. N.Y., N.Y.: Free Press.

Domingos, P. (2015). The master algorithm. N.Y., N.Y.: Basic Books.

Elgin, C.Z. (2004). True enough. Philosophical Issues, Vol. 14, pp. 113-131.

Feyerabend, P. (2010). Against method, new edition. Brooklyn, N.Y.: Verso.

Feynman, R. (1965). The character of physical law. Cambridge, MA: M.I.T. Press.

Frisch, M. (2015). Why things happen: either cause and effect are the very glue of the cosmos, or they are a naive illusion due to insufficient math. But which? Aeon, 23 June 2015. Retrieved from

Hacking, I. (1990). The taming of chance. Cambridge, UK: Cambridge University Press.

Hawking, S. (2002). On the shoulders of giants: the great works of physics and astronomy. London: Running Press.

Hempel, C. G. (1945). Studies in the logic of confirmation II. Mind. Vol. 54 (No. 214), pp. 97–121. doi:10.1093/mind/LIV.214.97

Hempel, C.G. (1965) Two models of scientific explanation. Retrieved from:

Holt, J. (2012). Why does the world exist?: an existential detective story. N.Y., N.Y.: Liveright Publishing Corporation.

Horgan, J. (2016). Was philosopher Paul Feyerabend really science’s “worst enemy”? Scientific American, 24 October. Retrieved from:

Hume, D. (1739). A treatise of human nature and an enquiry concerning human understanding, 1888 edition. Oxford: Clarendon Press. Retrieved from

Lehmkuhl, D. (2014). Why Einstein did not believe that general relativity geometrizes gravity. Studies in History and Philosophy of Modern Physics, Volume 46, Part B, pp. 316-326.

Numbers, R.L. (2003). Science without God: natural laws and Christian beliefs. In: When Science and Christianity Meet, edited by David C. Lindberg. Chicago: University of Chicago Press.

Orwell, S. (Ed.). (1968). The collected essays, journalism and letters of George Orwell. Vol. 4. London: Martin Seeker & Warburg. Retrieved from

Poovey, M. (1998). A history of the modern fact. Chicago: The University of Chicago Press.

Porter, T. (1988). The rise of statistical thinking, 1820-1900. Princeton: Princeton University Press.

Quine, W.V.O. (1951). Two dogmas of empiricism. Philosophical Review, Vol. 60, Part 1, pp. 20–43. doi: 10.2307/2266637

Quine, W.V.O. (2013). Word and object, new edition. Cambridge, MA: The MIT Press.

Sinclair, U. (1925). Mammonart – an essay in economic interpretation. Pasadena, CA: Self-Published.

Storage, B. (2016). Data without theory is lame. The Multidisciplinarian. Retrieved from:

Tegmark, M. (2006) On math, matter and mind. Foundations of Physics, Vol. 36, Issue 6, pp 765–794.

Tegmark, M. (2014). Our mathematical universe: my quest for the ultimate nature of reality. N.Y., N.Y.: Knopf Doubleday.

Vigen, T. (2013). Spurious correlations. Retrieved from

Wittgenstein, L. (1921). Tractatus logico-philosophicus (C. K. Ogden, Trans.). Mineola, NY: Dover Publications.