Why Bayesianism fails

by Евгений Волков -
Number of replies: 1

 

My attempt to reconcile Bayesian reasoning and critical rationalism.

In short: Bayesian inference is not a form of induction. If we have a definite set of specified alternatives, and if we can compute the probability of possible observations under each alternative, the subsequent modification of the probability distribution is logically entailed by our premises — which makes it a deductive inference. The logic of science does not consist of picking the most probable explanation from a set of alternatives known in advance: it consists of creating new ones. Furthermore, the probabilities computed by a model, no matter its predictive success, have nothing to do with the probability of the model itself being true.

All criticism welcome.

Why Bayesianism fails

https://medium.com/@clovisroussy/why-bayesianism-fails-8544eefa2bef

Clovis Roussy

Nov 10, 2019 · 10 min read

“If anything is to be probable, then something must be certain.” — C. I. Lewis

Ferdinand Hodler, Le Lac Léman et le Mont Blanc au lever du soleil

Can evidence support an hypothesis?

Empirical support lies at the core of our idea of rationality. We ask for “evidence-based policies”; we admonish each other to “back up claims with data”; we reject statements that are not “supported by the facts”. And yet, as a matter of logic, the idea of empirical support is surprisingly difficult to pin down.

This is the problem of induction, most famously stated by David Hume, who believed it to be insoluble. Our knowledge of the world consists of theories that explain what we see in terms of things that we don’t see. How can we infer general theories from limited observations? We can’t deduce them from the evidence, since their very nature is to go beyond the evidence. Can a theory be confirmed by the evidence compatible with it, or made more probable? Does evidence allow us to feel more confident in our beliefs? If so, by which kind of logic?

These questions are central to a profound debate in philosophy of science. Steven Pinker made a passing reference to this debate in Enlightenment Now:

Our beliefs about empirical propositions should be calibrated by their fit to the world. When scientists are pressed to explain how they do this, they usually reach for Karl Popper’s model of conjecture and refutation, in which a scientific theory may be falsified by empirical tests but is never confirmed. In reality, science doesn’t much look like skeet shooting, with a succession of hypotheses launched into the air like clay pigeons and shot to smithereens. It looks more like Bayesian reasoning: a theory is granted a prior degree of credence, based on its consistency with everything else we know. That level of credence is then incremented or decremented according to how likely an empirical observation would be if the theory is true, compared with how likely it would be if the theory is false.

The first answer is arguably the most famous. According to Popper, evidence can never, in any way, support or justify a theory, or make it more probable. He believed that David Hume’s statement of the problem of induction was “a gem of priceless value for the theory of objective knowledge: a simple, straightforward, logical refutation of any claim that induction could be a valid argument, or a justifiable way of reasoning”.

Popper’s solution comes from the realization that we do not need induction to create knowledge. The fact that a scientific theory cannot be supported by evidence does not amount to a demonstration that it is false: whether or not a theory is true is independent from whether we can prove it. Science, according to Popper, is based on the logical asymmetry between verification and refutation. No amount of evidence can ever prove that a theory is true: however, if any statement deducible from a theory is false, it proves that the theory is false. We can create knowledge, therefore, by making unsupported and unjustified guesses, and seeing which ones withstand our attempts to refute them.

But Popper’s negative account of empiricism proved difficult to accept. The idea of supporting evidence is a resilient one. In Fashionable Nonsense, Sokal and Bricmont expressed a common criticism of Popper that resurfaced many times in the history of philosophy:

When a theory successfully withstands an attempt at falsification, a scientist will, quite naturally, consider the theory to be partially confirmed and will accord it a greater likelihood or a higher subjective probability. The degree of likelihood depends, of course, upon the circumstances: the quality of the experiment, the unexpectedness of the result, etc. But Popper will have none of this: throughout his life, he was a stubborn opponent of any idea of “confirmation” of a theory, or even of its “probability”. […]

Obviously, every induction is an inference from the observed to the unobserved, and no such inference can be justified using solely deductive logic. But, as we have seen, if this argument were to be taken seriously — if rationality were to consist only of deductive logic — it would imply also that there is no good reason to believe that the Sun will rise tomorrow, and yet no one really expects the Sun not to rise. With his method of falsification, Popper thinks that he has solved Hume’s problem, but his solution, taken literally, is a purely negative one: we can be certain that some theories are false, but never that a theory is true or even probable. Clearly, this “solution” is unsatisfactory from a scientific point of view.

The second approach mentioned by Pinker, Bayesian reasoning, is seen as a possible remedy. According to Bayesianism, probabilities represent degrees of belief in statements, which can then be incremented or decremented according to the evidence. The idea is simple. We start with a set of possible hypotheses, each with a given probability of being true. The probability distribution is supposed to incorporate all the relevant information we already have: if we know nothing else, all possibilities will have equal probability. Then, we look at the evidence, and ask ourselves: how probable was it to observe that evidence, given each possible hypothesis? Using a famous mathematical rule called Bayes theorem, we can then update the probability of each possible hypothesis, given the probability of the evidence. Reasoning in this way is also known as “inverse probability”, because instead of computing the probability of observations according to causes, we assign probabilities to possible causes, according to our observations.
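To make the deductive mechanics concrete, here is a minimal sketch (the hypotheses and numbers are my own toy assumptions, not from any source discussed here): three alternatives about an urn's composition, a uniform prior, and one observation. The posterior follows from the premises by arithmetic alone.

```python
from fractions import Fraction

# A toy "definite set of specified alternatives": three possible urn
# compositions, given as the fraction of black marbles in each.
hypotheses = {"mostly_white": Fraction(1, 4),
              "mixed":        Fraction(1, 2),
              "mostly_black": Fraction(3, 4)}

# Uniform prior: we assume no other relevant information.
prior = {h: Fraction(1, 3) for h in hypotheses}

def update(prior, likelihoods):
    """Bayes' theorem: the posterior is entailed by prior and likelihoods."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Evidence: one black marble is drawn. P(black | h) is just the black fraction.
posterior = update(prior, hypotheses)

print(posterior["mostly_black"])  # 1/2: most probable *within this fixed set*
print(posterior["mixed"])         # 1/3
print(posterior["mostly_white"])  # 1/6
```

Nothing outside the model was learned: the posterior is a deductive consequence of the prior, the likelihoods, and the observation.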

This is often seen as a rigorous, mathematically impeccable formalization of empirical support and rationality itself. Bayesianism was adopted by several popular science authors, including Sean Carroll and Nate Silver, and enthusiastically promoted by the online group of thinkers known as the “Rationalist community”, organized around the writings of Eliezer Yudkowsky and Scott Alexander.

In what could arguably be considered the Bible of Bayesianism, Probability Theory: The Logic of Science, the late E.T. Jaynes had some scathing criticism for Popper and others who have denied the possibility of induction. He refers to them as the “irrationalists” and criticizes Popper in these terms:

In denying the possibility of induction, Popper holds that theories can never attain a high probability. But this presupposes that the theory is being tested against an infinite number of alternatives. […] It is not the absolute status of an hypothesis embedded in the universe of all conceivable theories, but the plausibility of an hypothesis relative to a definite set of specified alternatives, that Bayesian inference determines. […] an hypothesis can attain a very high or very low probability within a class of well-defined alternatives. Its probability within the class of all conceivable theories is neither large nor small; it is simply undefined because the class of all conceivable theories is undefined. In other words, Bayesian inference deals with determinate problems — not the undefined ones of Popper — and we would not have it otherwise.

Popper always rejected the idea of searching for probable theories. On the contrary, because we want theories with high informative content that make specific predictions, he argued that a better theory will always mean a less probable theory. In a paper titled “A proof of the impossibility of inductive probability”, Popper and his collaborator David Miller set out to demonstrate, in a technical fashion, that the part of an hypothesis that is not deductively entailed by the evidence is always strongly counter-supported by it. According to them, “this result is completely devastating for the inductive interpretation of the calculus of probability”.

According to Jaynes, “written for scientists, this is like trying to prove the impossibility of heavier-than-air flight to an assembly of professional airline pilots.”

As an adherent of the Bayesian approach to statistics and probability, and an admirer of Jaynes, my thesis here is that Popper was right. Rationality, including Bayesian reasoning, does indeed consist only of deductive logic. (As David Miller put it, “the use of Bayes theorem does not characterize Bayesianism any more than the use of Pythagoras’ theorem characterizes Pythagoreanism”).

I believe the debate between Bayesians and Popperians comes from a misunderstanding of the word “induction” as used by Bayesians. Bayesian inference is not a form of induction: it is entirely deductive. If we have a “definite set of specified alternatives” with a probability distribution, and if we can use this model to compute the probability of future observations under each of those alternatives, the subsequent modification of the probability distribution is logically entailed by our premises — which makes it a deductive inference. We are not learning anything beyond what we already put into our model and what we subsequently observe: we move smoothly from a prior set of assumptions to a posterior set of conclusions, according to clear mathematical rules.

It seems preposterous to suggest that such an important philosophical debate turns on a misuse of words, but I really believe that’s what’s happening here. We were misled into calling Bayesian inference “inductive probability” because it makes it look like evidence can support an hypothesis without deductively entailing it. But in fact, the evidence only supports that hypothesis via a prior set of probabilistic assumptions that are not supported by the evidence.

This is how David Miller expresses the problem:

There is nothing at all inductive about Bayesian conditionalization. Statements of probability are not statements about the external world, and how they are amended in light of the new evidence is determined perfectly deductively. […] Discovering an item of evidence that makes an hypothesis more (or less) probable is not a scientific advance; it is simply a move.

More fundamentally, it is easy to see how Bayesianism fails as a philosophy of science. The logic of science does not consist of picking the most probable explanation from a set of preordained alternatives — it consists of creating new ones and putting them to the test. The set of all possible scientific explanations does not obey the probability calculus, simply because its members cannot be known in advance. As David Deutsch observed, the negation of a scientific explanation does not constitute an alternative explanation.

Jaynes seems to think it’s ridiculous to talk about the set of all possible scientific explanations, because such a set is not well-defined in terms of probability theory. But this is precisely the point. Anyone concerned with the truth must admit that the answers we are looking for may not already be contained in our existing models. Given a set of alternative hypotheses, the probabilities we assign to them depend upon the validity of that model — which remains mysterious. This is what makes Bayesianism a static philosophy of science. It is not compatible with the growth of knowledge — the creation of new explanations and new models.

Furthermore, the probabilities computed by a model have nothing to do with the probability of the model itself being true. If evidence can deductively change a probability distribution, via a framework of assumptions, in no way can it “support” that framework as a whole. Even if a Bayesian model achieves extraordinary predictive accuracy, that accuracy does not logically imply that the model contains any truth about the world (although you might conjecture that it does to explain why it works so well). There could always be better explanations. In the Popperian view, it’s the model as a whole, with its assumptions about the set of possibilities, that should be seen as conjectural, with its better alternatives waiting to be conjectured into existence. No amount of predictive success can tell you that your model is probably true — except, maybe, in light of another, more general model, subject to the same objection.
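A toy illustration of that last point (my own example, not the author's): a "fair coin" model is perfectly vindicated by the observed frequencies of a strictly alternating sequence, yet the true generator is deterministic and a strictly better explanation exists.

```python
# Data: a strictly alternating sequence, deterministic by construction.
data = ["H", "T"] * 50

# "Fair coin" model: P(H) = P(T) = 0.5 on every flip. The observed frequency
# matches its prediction exactly, so it looks predictively successful...
freq_heads = data.count("H") / len(data)
print(freq_heads)  # 0.5

# ...yet a rival "alternation" model (predict the opposite of the previous
# outcome) gets every single prediction right: a better explanation that the
# fair-coin model's predictive success gave no hint of.
hits = sum(1 for prev, cur in zip(data, data[1:]) if cur != prev)
print(hits / (len(data) - 1))  # 1.0
```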

The most elegant statement of that argument comes from Jacob Bronowski:

Philosophers who have tried to quantify the weight of new evidence have often said that it increases the probability of the theory. But I have already remarked that Popper insists, and rightly insists, that we cannot assign a probability to a theory: for probabilities have to conform to a calculus which (he holds, and I hold) can only be made to apply consistently to physical events or logical statements about them. I put this by saying that probability requires the events which it subsumes to have a distribution, but a theory and all its possible alternatives do not have a unique distribution. It is true that a theory can contain a parameter whose possible values have a distribution, so that we can assign a probability to the hypothesis that the parameter has one range of values rather than another. But this is not the same thing as calculating a probability for the theory as a whole.

As a final note, I want to give an example of the misuse of probability theory to express epistemological truths. Sean Carroll and Nate Silver both remark that when a Bayesian thinker assigns a probability of 1 or 0 to a given statement, it means that no evidence will ever change their mind. Thus, to reflect the uncertain and revisable nature of scientific knowledge, they somehow imply that there is something irrational about thinking that something has a probability of one or zero. This idea is also known as Cromwell’s rule, after the famous quote from Oliver Cromwell: “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”

This, to me, is a misconception. If I fill an urn with black marbles, it is not irrational, based on my model, to say that there is a 100% chance that the next marble I draw will be black. It’s not an assertion of epistemic or metaphysical certainty, or a form of dogmatism. It’s a straightforward deduction from the information I have about the content of the urn. The model itself is still conjectural. Any result other than a black marble would flatly refute it. What’s irrational is not assigning probabilities of 1 or 0: it is holding on to models that don’t work, perhaps because they wrongly assigned probabilities of 1 and 0.

I beseech you, in the bowels of Christ, to see the difference.
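The difference can be sketched in a few lines (hypothetical numbers of my own): within the all-black model, P(black) = 1 is a deduction; and if the model itself is given prior probability 1, Bayesian conditionalization on a white marble is simply undefined, so the model must be replaced rather than updated.

```python
def posterior_of_model(prior_model, p_white_given_model, p_white_given_rival=1.0):
    """P(model | a white marble was drawn), by Bayes' theorem."""
    numerator = prior_model * p_white_given_model
    denominator = numerator + (1 - prior_model) * p_white_given_rival
    # With prior 1 and likelihood 0 the ratio is 0/0: conditionalization
    # breaks down entirely; there is nothing to update.
    return numerator / denominator if denominator else None

# A rival hypothesis left open with any nonzero prior lets a white marble
# refute the all-black model outright (posterior exactly 0)...
print(posterior_of_model(0.9, 0.0))   # 0.0

# ...but with prior 1 on the model, no posterior exists at all.
print(posterior_of_model(1.0, 0.0))   # None
```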

So, can data support an hypothesis? My answer: yes, in a deductive manner, given a well-specified set of all possibilities known in advance, and prior conjectures about what the evidence would look like under each of those possibilities.

The resilience of the idea of empirical support may be due to the fact that, since a rational thinker can only know a finite set of possible alternative explanations, the psychology of belief and our subjective sense of plausibility could reflect in some way the mathematics of Bayesian probability, in the sense described by Sokal and Bricmont. For practical purposes, it’s possible that the idea of evidential support for our beliefs cannot be uprooted from the human mind. However, we should be very clear about what we mean by that. Such a support can only be deductive and mediated by models consisting of unproven and often implicit conjectures.


In reply to Евгений Волков

Re: Why Bayesianism fails

by Евгений Волков -
  • Fredrick George Welfare
    Fredrick George Welfare The issue of whether prediction results in accuracy, as validity, is a separate one that depends on one’s perspective towards science and logic.

    Bayesianism is simply the updating of one’s hypotheses on the basis of new information, new data, new theoretical criticisms, etc. Stupidly, many experts continue to make predictions from states of non-equilibrium that used to be states of equilibrium. Foreign policy analysts, statesmen, and economists cannot seriously be predicting outcomes on the basis of neoconservative or neoliberal presuppositions today; they must update on the basis of current events. But they don’t, and instead we get denials and rejections of the Bayesian!

    When the facts conflict with convictions, should the convictions persist and the facts be denied??
  • Phil Wood
    Phil Wood This sense is somewhat close to Popper's thoughts on the matter, although these were not published.
    • Luc Castelein
      Luc Castelein Can you tell us more about this, Phil?
    • Phil Wood
      Phil Wood Luc Castelein It was in some of the Nachlass materials. Basically Popper thought Bayesian inference as scientific method was inductionist. The irony for me is that Bayesian statistical inference, on the other hand, has many admirable falsificationist ideas, such as techniques for determining if the assumed statistical distribution was a warranted choice.
    • Luc Castelein
      Luc Castelein I have to confess that I don't understand the formulas. I have only read the article quickly, but it seems to me that an argument is made that I always make about accounting. If you use a formula based on certain assumptions about reality, the result is dependent on your assumptions. It is very well possible that reality is completely different. That customer might not pay. These machines might not last ten years. While people might be very critical and try to falsify whatever they do, I don't really understand how any result is not dependent on the assumptions. Or is there something I don't understand?
    • Phil Wood
      Phil Wood Luc Castelein I think that's the nub of it. Basically, the logic of the conjecture is: If my data are informative and the assumptions of my theory hold, then if I am a coherent rational person, I should believe the following things about the unknown parameter (such as a range of credible values). The falsificationist critic can formally criticize the assumptions (say, that a parameter is normally distributed) by looking at the distribution from things like the MCMC sampler and conjecture a new distribution, or point out that the observed sampler doesn't even look like any parametric distribution at all, as a partial or complete refutation of the approach. Now of course, you can also pick at whether the data are representative (outliers, nonlinear or even dynamic relationships). Unfortunately, Bayesian statisticians pay a great deal of attention to the distribution assumption, but not so much to the question of whether, say, cross-sectional data can be generalized to the individual. FWIW.
    • Luc Castelein
      Luc Castelein I forgot almost everything I learned about statistics. I will read some introductory stuff when I'm pensioned in a few years (if I'm lucky to stay alive). But basically, the question is: how can you account for the unknown? Aren't these formulas always based on present-day knowledge? Don't we agree that we cannot predict future knowledge or future observations? What if you see a black swan all of a sudden...
    • Luc Castelein
      Luc Castelein Anyway, thanks for trying to help, Phil Wood!
    • Fredrick George Welfare
      Fredrick George Welfare Luc Castelein If we cannot predict, what is the point?
    • Fredrick George Welfare
      Fredrick George Welfare Phil Wood The missing term is ‘updating.’ Have you ever seen this term being used in this context?
    • Phil Wood
      Phil Wood Fredrick George Welfare Well yes, if I'm following your point, yes, if you accept the statistical machinery, then you're allowed to update. My point, though, is that there's nothing that forces you into accepting your initial conjectural Bayesian statistical model without kicking the tires. Does that make sense?
    • Luc Castelein
      Luc Castelein I think I will have to read that last remark a few times...
    • Luc Castelein
      Luc Castelein You mean that it gives us unjustified, fallible knowledge, just like the rest of our knowledge? That Popper had a point, but that it is useful for practical purposes?
    • Phil Wood
      Phil Wood Luc Castelein Well yes, I think it does. At least to my heretical self, the entire Bayesian enterprise is a bold and very unlikely conjecture that's been very productive. After all, what earthly reason would one have to think that beliefs follow the laws of statistical distributions? It's a real stretch that you "have to take on faith." That said, it solves many problems that our traditional inference procedures had significant problems with.
    • Andrew Crawshaw
      Andrew Crawshaw Fredrick George Welfare we can predict, but not on the basis of data. Data is used to test theory, not to make predictions.
  • Kenneth Allen Hopf
    Kenneth Allen Hopf But .. my only criticism would be that your conclusion is trivially true. This is a conclusion I reached about 20 years ago. I've talked about it on and off ever since, though nobody has ever paid any attention. Undoubtedly, my reasoning was flawed. So perhaps you can point out my error. It went as follows:

    Whether Bayesian or not, there can be no probabilistic inference of any sort whatever without an assumption to the effect that the sample space is random. I think this is true. But the reason it's true is that the assumption of randomization is in effect a generalization, i.e., a covering generalization that renders a deduction possible. You're saying in effect that, within a certain margin of error, any subset of the sample space is representative of the whole. If that is not true, then you cannot draw any conclusion at all. The reason this does not appear to be obvious is that you're dealing with bad mathematicians. That is, they simply forget about the assumption of randomness and deal with sample spaces as if there need be no initial assumption made about them in the first place. If they didn't do that, then it would be clear from the outset that probabilistic inferences of any sort must be deductively valid and consequently must have nothing whatever to do with induction. These inferences are not ampliative and anyone who thinks they are just doesn't know what they're talking about.

    In short, I would claim that you can generalize your conclusion to all probabilistic inferences, not just those of Bayesian design. What do you think?
  • Dan Langlois
    Dan Langlois 'It seems preposterous to suggest that such an important philosophical debate turns on a misuse of words, but I really believe that’s what’s happening here.'

    I incline to reiterate, though, that it seems preposterous.


    'More fundamentally, it is easy to see how Bayesianism fails as a philosophy of science.'

    I object to 'easy to see', here, at least. What seems easy to see, is that we are considering one of the most important developments in epistemology in the 20th century. And, heck, not, I hope, just to anger Popperians, but also, one of the most promising avenues for further progress in epistemology in the 21st century. I think it seems rather easy to see that there are important results in the Bayesian analysis of scientific practice. That said, I guess you could still emphasize important potential problems for Bayesian Confirmation Theory and for Bayesian epistemology generally, but I wouldn't insist that it is easy to evaluate their seriousness.
  • Kenneth Allen Hopf
    Kenneth Allen Hopf Of course, it should not be surprising that probability and statistics, as a "field" of mathematics, is deductively valid. This is just what one would expect given that it is true of all mathematics.
  • Kenneth Allen Hopf
    Kenneth Allen Hopf The successful application of Bayes' Theorem to a scientific theory is not useless, but nor does it banish the element of conjecture from the method. Rather, it removes the element of conjecture from the theory and places it instead upon a standardized assumption to the effect that the relevant sample space is adequately randomized. This assumption is a conjecture, and if it isn't true, then inference breaks down. Thus there is no conflict between critical rationalism and Bayesian inference when the application of Bayesian inference is properly understood. As David Miller said to Bayesians, anything you can do I can do just as well. Further, there is nothing in this analysis that appeals to the misunderstanding of words, nor to the dismissal of Bayesian analysis. You can use it, if you wish. But you should not pretend that, by doing so, you have thereby escaped the ultimately conjectural status of your theory.
  • Luc Castelein
    Luc Castelein Clovis Roussy: thanks for sharing this!
  • Fredrick George Welfare
    Fredrick George Welfare I direct this post to Phil Wood and all others. This first quote is from Norman H. Anderson 2001, 'Empirical Direction in Design and Analysis':

    "Bayesian Theory. Bayesian theory can be written in a simple symbolic form:


    Posterior Probability = Prior Probability * Evidence

    where * symbolizes an integration operator. Consider estimating a population mean from sample data. Prior probability represents our belief about the location of the mean before obtaining the Evidence, that is, the information extracted from the sample. Prior probability and Evidence are integrated to produce Posterior probability, that is, our belief about the location of the mean after integrating the Evidence.
    Belief probability is not anarchic, as it might seem, for it is subject to the universal laws of probability theory, just as frequentist probability. Thus, if your Posterior follows a normal distribution, its standard deviation determines the 95% Bayesian belief interval by the same formula as for the 95% confidence interval. Your Prior belief may be any arbitrary personal opinion, it is true, but your Posterior belief must obey the Bayesian formula of Equation 1 for integrating the given Evidence with your Prior. Two persons may have very different Priors but they will converge on the same Posterior with Evidence from repeated samples (assuming they evaluate the Evidence in the same way).

    Bayes’ theorem itself is a noncontroversial result from the early years of probability theory. When the Prior in Equation 1 is based on objective frequencies, everyone agrees that Bayes’ theorem applies, as illustrated with the algebraic formula of Note 19.1.3h, page 639. Allowing subjective Priors might thus seem a straightforward extension of frequentist statistics.

    Instead, subjective probability leads to statistical theory fundamentally different from classical frequentist theory. This difference appears in the way the two approaches treat randomness. In Neyman–Pearson theory, the true mean is a fixed property of the population and each possible random sample generates a confidence interval. Hence the confidence interval is random.

    Bayesian theory conditionalizes the analysis on the given sample. It is illegitimate to consider what might have happened with other possible samples. The data analysis must be based solely on what did happen, never on what might have happened. The belief interval must be fixed, therefore, since it is based on the sample actually obtained. Instead, the “true mean” is random. A 95% belief interval thus contains the “true mean” with probability .95.

    To highlight this issue, suppose you have a sample from a normal distribution so that confidence interval and Bayesian belief interval are numerically equal. You must allow the possibility that you have a ‘‘bad” sample, for which the interval does not contain the true mean. Neyman and Pearson argue that it thus makes no sense to say that your one particular interval contains the true mean with probability 1 − α; that’s why they introduced confidence as a distinct concept from probability.
    Bayesians also say nothing about the probability that your one particular interval contains the true mean. Instead, they redefine probability to refer to your belief about the true mean, given the evidence of your one particular sample. This is entirely legitimate. Remarkably, it turns out to have notable statistical advantages, both technical and conceptual, over the classical approach. It does run into trouble, however, because it does not recognize Fisher’s principle of randomization." p610-613 epub 3rd edition

    Taber and Lodge 2006 'Motivated Skepticism in the Evaluation of Political Beliefs,'

    "Ideally, one’s prior beliefs and attitudes—whether scientific or social—should “anchor” the evaluation of new information and then, depending on how credible is some piece of evidence, impressions should be adjusted upward or downward (Anderson 1981). The “simple” Bayesian updating rule would be to increment the overall evaluation if the evidence is positive, decrement if negative. Assuming one has established an initial belief (attitude or hypothesis), normative models of human decision making imply or posit a two-step updating process, beginning with the collection of belief-relevant evidence, followed by the integration of new information with the prior to produce an updated judgment. Critically important in such normative models is the requirement that the collection and integration of new information be kept independent of one’s prior judgment (see Evans and Over 1996)." p755

    "From another perspective, with which we also have sympathy, Bayesian updating requires independence between priors and new evidence (Evans and Over 1996; Green and Shapiro 1994; but see Gerber and Green 1998). In the extreme, if one distorts new information so that it always supports one's priors, one cannot be rationally responsive to the environment; similarly, manipulating the information stream to avoid any threat to one's priors is no more rational than the proverbial ostrich." p. 767
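The independence requirement in this passage can be illustrated by a small simulation (my own construction, not from the paper): an agent that filters the evidence stream through its prior never converges to the truth, while an unbiased updater does.

```python
import random

# A coin truly lands heads with probability 0.7.
random.seed(42)
flips = [random.random() < 0.7 for _ in range(2000)]

# Unbiased Bayesian: Beta(1, 1) prior, updates on every flip.
a_fair, b_fair = 1, 1
# Motivated agent: strong prior that heads is rare, and it discards
# every flip that contradicts that prior (the "ostrich" strategy).
a_mot, b_mot = 1, 9

for heads in flips:
    a_fair += heads
    b_fair += not heads
    if not heads:          # only prior-consistent evidence gets in
        b_mot += 1

fair_est = a_fair / (a_fair + b_fair)   # converges toward 0.7
mot_est = a_mot / (a_mot + b_mot)       # driven ever further from it
print(round(fair_est, 2), round(mot_est, 3))
```

Both agents apply Bayes' rule correctly to what they let in; the failure is entirely in the prior-dependent selection of evidence, which is the point the quoted passage makes.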
    • Phil Wood
      Phil Wood I agree with the literature you cite. It's just that it seems less of a leap to me to make frequentist statements about the world given that the theory corresponds to observed external random variation in the world. Making the assumption that my psychological beliefs must also obey distributional assumptions seems more of a creative, albeit very productive, leap. I've been doing Bayesian statistics now for the past 40 years and was exposed to it from Melvin Novick at the U of Iowa. I'm a "believer" but am trying to put a philosophical hat on when thinking about the general process.
    • Fredrick George Welfare
      Fredrick George Welfare Phil Wood

      It may be that the demarcation is over randomness and how it is understood: on one side, the probabilist perspective, built on sequential or stochastic frequencies from multiple logistical sources; on the other, the "go figure" or "never saw it coming" camp that has experienced sudden change and surprise on several occasions. Should outliers receive attention?

      It is certainly true that elite governed fields can abruptly change in spite of statistically based expectations and in spite of the mundane doxa of cultural and national reproduction.
    • Phil Wood
      Phil Wood Fredrick George Welfare Maybe a concrete example brings the discussion forward productively. To illustrate how I believe that modern Bayesian inference can be viewed as critical rationalism, suppose I do a simple t-test and specify that my prior and posterior beliefs follow a t-distribution. Before I start to calculate critical intervals on my posterior beliefs, however, I look at the multiple-chain Markov chain Monte Carlo (MCMC) distribution from my draws. I look at this and see, much to my surprise, that the MCMC distribution is skewed. I therefore change my proposal distribution and now say that my beliefs are modeled by a chi-square distribution (which accommodates skew; there are other distributions I could use). Rerunning the model and looking at the MCMC sampler, I now see that my distribution looks like my new proposal distribution. There are, of course, other moves I could make, like rescaling the variable or identifying influential or outlying observations, but this one move highlights that I make my conjectural Bayesian model, propose the MCMC distribution as a critical test, and kill off the first Bayesian analysis in favor of a model with better "survival value." Does that work as an example?
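The diagnostic move Phil Wood describes (run the sampler, inspect the shape of the draws, and revise the model if it is unexpected) can be sketched generically. Everything below is my own toy illustration, not his actual analysis: a random-walk Metropolis sampler for the mean of normal data under a flat prior, followed by a skewness check on the retained draws.

```python
import math
import random

random.seed(1)
data = [random.gauss(5.0, 1.0) for _ in range(50)]

def log_post(mu):
    # Flat prior on mu, so the log posterior is the log likelihood
    # (up to a constant) for normal data with known unit variance.
    return -0.5 * sum((x - mu) ** 2 for x in data)

draws, mu = [], 0.0
for _ in range(5000):
    prop = mu + random.gauss(0.0, 0.5)      # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                           # accept the move
    draws.append(mu)

draws = draws[1000:]                        # discard burn-in
m = sum(draws) / len(draws)
s = (sum((d - m) ** 2 for d in draws) / len(draws)) ** 0.5
skew = sum((d - m) ** 3 for d in draws) / (len(draws) * s ** 3)
print(round(m, 2), round(skew, 2))
```

A posterior that should be symmetric but shows large |skew| in the draws is the cue to revise the model, which is the "critical test" role the comment assigns to the MCMC output.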
    • Fredrick George Welfare
      Fredrick George Welfare Phil Wood The differential statistical analysis you provided seems to mean that the new model, which explains the data, is more valid, more rigorous, than the initial model. So, what have you done? An update of your hypothesis or theory? A refutation of your position, or of the initial one?

      I think the Bayesian takes the given gathered data and using statistics, analysis, critique, contexts, ... forms a possible explanation, an hypothesis. As new information arrives, temporally, this initial position is recalculated, updated, and the prior explanation is restated with the new integrated information.
  • Fredrick George Welfare
    Fredrick George Welfare When the facts get in the way of an ideological conviction, or any other kind of anchored belief, the facts must be bent or discarded, but the conviction should remain unchanged. (I think this is a quote, but I do not know who said it.) This has been the problem from Popper, through Kuhn, to the recent Sociology of Science paradigm. The facts are often discounted, or ad hoc hypotheses are invented to "account" for them. The problem, however, is that 'The Bayesian Man' is not recognized and the same old convictions hold sway. This is the problem.

    Let's take the example of college teaching. For many years, the professors claimed that methods matter: student achievement on final exams varied by the professor's methods. The educational researchers, however, claimed that there was no significant difference between one method and another; achievement on final exams either was not significantly different or there were other causative factors. This is the early '60s and before. Today, however, educational researchers widely recognize the impact of teaching methods, i.e., visualization, questioning and participation techniques, lecture style and format, e.g. spelling it out! But teachers are reluctant to accept the educational research, claiming that no matter what they do, it makes little difference, especially with regard to very young students in grammar school. Thus we have a high dropout rate from high schools and from early college! Administrators struggle with teachers and professors to improve their teaching techniques, because they make a difference, while the teachers and professors maintain their convictions/beliefs.

    It is a matter of convictions versus responsibility.
  • Cyrus Contractor
    Cyrus Contractor I am probably missing something, or more probably missing a lot, due to my lay knowledge on this, but to my mind saying "The logic of science does not consist of picking the most probable explanation from a set of alternatives known in advance: it consists of creating new ones" is quite a narrow definition of science. Perhaps that definition fits the requirements of your thesis, but for me it is not a full definition of science. Science often does examine a set of probable explanations of whatever results it generates. It may then design a test (aka an experiment) to discover whether the most probable (or indeed any) explanation(s) are valid enough to move forward with. When the most reasonable explanation is not valid or sufficient, it may then, as a consequence of the scientific method, create a new theory, again to be tested in its own turn. So science's use as a tool to understand is not confined solely to creating new explanations; it can ratify existing ones too, and indeed should do so on a continuum, to ensure we are still acting, or building policy, on facts and not fictions. So personally I would avoid the definition of science used here and make it broader?
