Why Bayesianism fails

Re: Why Bayesianism fails

by Евгений Волков -
Number of replies: 0
  • Fredrick George Welfare
    Fredrick George Welfare The issue of whether prediction results in accuracy, as validity, is a separate one that depends on one's perspective on science and logic.

    Bayesianism is simply the updating of one's hypotheses on the basis of new information, new data, new theoretical criticisms, etc. Stupidly, many experts continue to make predictions from states of non-equilibrium that used to be states of equilibrium. Foreign policy analysts, statesmen, and economists cannot seriously be predicting outcomes on the basis of neoconservative or neoliberal presuppositions today; they must update on the basis of current events. But they don't, and instead we get denials and rejections of the Bayesian!

    When the facts conflict with convictions, should the convictions persist and the facts be denied??
  • Phil Wood
    Phil Wood This sense is somewhat close to Popper's thoughts on the matter, although these were not published.
    • Luc Castelein
      Luc Castelein Can you tell us more about this, Phil?
    • Phil Wood
      Phil Wood Luc Castelein It was in some of the Nachlass materials. Basically, Popper thought that Bayesian inference as a scientific method was inductivist. The irony for me is that Bayesian statistical inference, on the other hand, has many admirable falsificationist ideas, such as techniques for determining whether the assumed statistical distribution was a warranted choice.
    • Luc Castelein
      Luc Castelein I have to confess that I don't understand the formulas. I have only read the article quickly, but it seems to me that an argument is made that I always make about accounting. If you use a formula based on certain assumptions about reality, the result is dependent on your assumptions. It is very well possible that reality is completely different. That customer might not pay. These machines might not last ten years. While people might be very critical and try to falsify whatever they do, I don't really understand how any result is not dependent on the assumptions. Or is there something I don't understand?
    • Phil Wood
      Phil Wood Luc Castelein I think that's the nub of it. Basically, the logic of the conjecture is: if my data are informative and the assumptions of my theory hold, then, if I am a coherent rational person, I should believe the following things about the unknown parameter (such as a range of credible values). The falsificationist critic can formally criticize the assumptions (say, that a parameter is normally distributed) by looking at the distribution of draws from things like the MCMC sampler and conjecturing a new distribution, or by pointing out that the observed sampler output doesn't even look like any parametric distribution at all, as a partial or complete refutation of the approach. Now of course, you can also pick at whether the data are representative (outliers, nonlinear or even dynamic relationships). Unfortunately, Bayesian statisticians pay a great deal of attention to the distribution assumption, but not so much to the question of whether, say, cross-sectional data can be generalized to the individual. FWIW.
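      The kind of distributional check described above can be sketched in a few lines of Python. This is a minimal illustration, not Phil Wood's actual workflow: it assumes only NumPy and SciPy, and the draws are simulated stand-ins for what an MCMC sampler would return for a single parameter.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Stand-in for the draws an MCMC sampler would return for one parameter.
      # A gamma shape is used deliberately so the checks below find something.
      draws = rng.gamma(shape=2.0, scale=1.5, size=5000)

      # Moment check: draws from a normal-shaped posterior should have ~0 skewness.
      print("skewness:", stats.skew(draws))

      # Informal goodness-of-fit check against a normal fitted to the draws.
      mu, sigma = draws.mean(), draws.std(ddof=1)
      ks = stats.kstest(draws, "norm", args=(mu, sigma))
      print("KS statistic:", ks.statistic, "p-value:", ks.pvalue)

      # Marked skewness and a tiny p-value are grounds to reject the assumed
      # normal form and conjecture a different family for the posterior.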
    • Luc Castelein
      Luc Castelein I forgot almost everything I learned about statistics. I will read some introductory stuff when I retire in a few years (if I'm lucky enough to stay alive). But basically, the question is: how can you account for the unknown? Aren't these formulas always based on present-day knowledge? Don't we agree that we cannot predict future knowledge or future observations? What if you see a black swan all of a sudden...
    • Luc Castelein
      Luc Castelein Anyway, thanks for trying to help, Phil Wood!
    • Fredrick George Welfare
      Fredrick George Welfare Luc Castelein If we cannot predict, what is the point?
    • Fredrick George Welfare
      Fredrick George Welfare Phil Wood The missing term is ‘updating.’ Have you ever seen this term being used in this context?
    • Phil Wood
      Phil Wood Fredrick George Welfare Well, yes, if I'm following your point: if you accept the statistical machinery, then you're allowed to update. My point, though, is that there's nothing that forces you into accepting your initial conjectural Bayesian statistical model without kicking the tires. Does that make sense?
    • Luc Castelein
      Luc Castelein I think I will have to read that last remark a few times...
    • Luc Castelein
      Luc Castelein You mean that it gives us unjustified, fallible knowledge, just like the rest of our knowledge? That Popper had a point, but that it is useful for practical purposes?
    • Phil Wood
      Phil Wood Luc Castelein Well yes, I think it does. At least to my heretical self, the entire Bayesian enterprise is a bold and very unlikely conjecture that's been very productive. After all, what earthly reason would one have to think that beliefs follow the laws of statistical distributions? It's a real stretch that you "have to take on faith." That said, it solves many problems that our traditional inference procedures had significant trouble with.
    • Andrew Crawshaw
      Andrew Crawshaw Fredrick George Welfare we can predict, but not on the basis of data. Data is used to test theory, not to make predictions.
  • Kenneth Allen Hopf
    Kenneth Allen Hopf But .. my only criticism would be that your conclusion is trivially true. This is a conclusion I reached about 20 years ago. I've talked about it on and off ever since, though nobody has ever paid any attention. Undoubtedly, my reasoning was flawed. So perhaps you can point out my error. It went as follows:

    Whether Bayesian or not, there can be no probabilistic inference of any sort whatever without an assumption to the effect that the sample space is random. I think this is true. But the reason it's true is that the assumption of randomization is in effect a generalization, i.e., a covering generalization that renders a deduction possible. You're saying in effect that, within a certain margin of error, any subset of the sample space is representative of the whole. If that is not true, then you cannot draw any conclusion at all. The reason this does not appear to be obvious is that you're dealing with bad mathematicians. That is, they simply forget about the assumption of randomness and deal with sample spaces as if there need be no initial assumption made about them in the first place. If they didn't do that, then it would be clear from the outset that probabilistic inferences of any sort must be deductively valid and consequently must have nothing whatever to do with induction. These inferences are not ampliative and anyone who thinks they are just doesn't know what they're talking about.

    In short, I would claim that you can generalize your conclusion to all probabilistic inferences, not just those of Bayesian design. What do you think?
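    Hopf's point that the randomization assumption is doing the real work can be illustrated with a small simulation (constructed for this thread, assuming NumPy; all numbers are arbitrary): the usual normal-theory 95% interval for a mean covers the true mean at roughly its advertised rate under random sampling, and essentially never under a biased sampling scheme.

    import numpy as np

    rng = np.random.default_rng(1)
    population = rng.normal(loc=100.0, scale=15.0, size=100_000)
    true_mean = population.mean()

    def covers(sample, z=1.96):
        """True if the normal-theory 95% interval from `sample` contains true_mean."""
        m = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(len(sample))
        return (m - z * se) <= true_mean <= (m + z * se)

    n, reps = 50, 2000
    random_cov = np.mean([covers(rng.choice(population, n, replace=False))
                          for _ in range(reps)])

    # Biased sampling: only units above the population median can be selected,
    # so no subset is "representative of the whole" in Hopf's sense.
    upper_half = population[population > np.median(population)]
    biased_cov = np.mean([covers(rng.choice(upper_half, n, replace=False))
                          for _ in range(reps)])

    print("coverage with random sampling:", round(random_cov, 3))   # close to 0.95
    print("coverage with biased sampling:", round(biased_cov, 3))   # collapses toward 0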
  • Dan Langlois
    Dan Langlois 'It seems preposterous to suggest that such an important philosophical debate turns on a misuse of words, but I really believe that’s what’s happening here.'

    I incline to reiterate, though, that it seems preposterous.


    'More fundamentally, it is easy to see how Bayesianism fails as a philosophy of science.'

    I object to 'easy to see' here, at least. What seems easy to see is that we are considering one of the most important developments in epistemology in the 20th century. And, heck, not (I hope) just to anger Popperians, but also one of the most promising avenues for further progress in epistemology in the 21st century. I think it seems rather easy to see that there are important results in the Bayesian analysis of scientific practice. That said, I guess you could still emphasize important potential problems for Bayesian Confirmation Theory and for Bayesian epistemology generally, but I wouldn't insist that it is easy to evaluate their seriousness.
  • Kenneth Allen Hopf
    Kenneth Allen Hopf Of course, it should not be surprising that probability and statistics, as a "field" of mathematics, is deductively valid. This is just what one would expect given that it is true of all mathematics.
  • Kenneth Allen Hopf
    Kenneth Allen Hopf The successful application of Bayes' Theorem to a scientific theory is not useless, but neither does it banish the element of conjecture from the method. Rather, it removes the element of conjecture from the theory and places it instead upon a standardized assumption to the effect that the relevant sample space is adequately randomized. This assumption is a conjecture, and if it isn't true, then inference breaks down. Thus there is no conflict between critical rationalism and Bayesian inference when the application of Bayesian inference is properly understood. As David Miller said to Bayesians, anything you can do I can do just as well. Further, there is nothing in this analysis that appeals to the misunderstanding of words, nor to the dismissal of Bayesian analysis. You can use it, if you wish. But you should not pretend that, by doing so, you have thereby escaped the ultimately conjectural status of your theory.
  • Luc Castelein
    Luc Castelein Clovis Roussy: thanks for sharing this!
  • Fredrick George Welfare
    Fredrick George Welfare I direct this post to Phil Wood and all others. The first quote is from Norman H. Anderson (2001), 'Empirical Direction in Design and Analysis' (a short numerical illustration of the quoted Equation 1 follows after this comment):

    "Bayesian Theory. Bayesian theory can be written in a simple symbolic form:


    Posterior Probability = Prior Probability * Evidence

    where * symbolizes an integration operator. Consider estimating a population mean from sample data. Prior probability represents our belief about the location of the mean before obtaining the Evidence, that is, the information extracted from the sample. Prior probability and Evidence are integrated to produce Posterior probability, that is, our belief about the location of the mean after integrating the Evidence.
    Belief probability is not anarchic, as it might seem, for it is subject to the universal laws of probability theory, just as frequentist probability. Thus, if your Posterior follows a normal distribution, its standard deviation determines the 95% Bayesian belief interval by the same formula as for the 95% confidence interval. Your Prior belief may be any arbitrary personal opinion, it is true, but your Posterior belief must obey the Bayesian formula of Equation 1 for integrating the given Evidence with your Prior. Two persons may have very different Priors but they will converge on the same Posterior with Evidence from repeated samples (assuming they evaluate the Evidence in the same way).

    Bayes’ theorem itself is a noncontroversial result from the early years of probability theory. When the Prior in Equation 1 is based on objective frequencies, everyone agrees that Bayes’ theorem applies, as illustrated with the algebraic formula of Note 19.1.3h, page 639. Allowing subjective Priors might thus seem a straightforward extension of frequentist statistics.

    Instead, subjective probability leads to statistical theory fundamentally different from classical frequentist theory. This difference appears in the way the two approaches treat randomness. In Neyman–Pearson theory, the true mean is a fixed property of the population and each possible random sample generates a confidence interval. Hence the confidence interval is random.

    Bayesian theory conditionalizes the analysis on the given sample. It is illegitimate to consider what might have happened with other possible samples. The data analysis must be based solely on what did happen, never on what might have happened. The belief interval must be fixed, therefore, since it is based on the sample actually obtained. Instead, the “true mean” is random. A 95% belief interval thus contains the “true mean” with probability .95.

    To highlight this issue, suppose you have a sample from a normal distribution so that confidence interval and Bayesian belief interval are numerically equal. You must allow the possibility that you have a ‘‘bad” sample, for which the interval does not contain the true mean. Neyman and Pearson argue that it thus makes no sense to say that your one particular interval contains the true mean with probability 1 − α; that’s why they introduced confidence as a distinct concept from probability.
    Bayesians also say nothing about the probability that your one particular interval contains the true mean. Instead, they redefine probability to refer to your belief about the true mean, given the evidence of your one particular sample. This is entirely legitimate. Remarkably, it turns out to have notable statistical advantages, both technical and conceptual, over the classical approach. It does run into trouble, however, because it does not recognize Fisher’s principle of randomization." p610-613 epub 3rd edition

    The second quote is from Taber and Lodge (2006), 'Motivated Skepticism in the Evaluation of Political Beliefs':

    "Ideally, one’s prior beliefs and attitudes—whether scientific
    or social—should “anchor” the evaluation of new
    information and then, depending on howcredible is some
    piece of evidence, impressions should be adjusted upward
    or downward (Anderson 1981). The “simple” Bayesian
    updating rule would be to increment the overall evaluation
    if the evidence is positive, decrement if negative.
    Assuming one has established an initial belief (attitude or
    hypothesis), normative models of human decision making
    imply or posit a two-step updating process, beginning
    with the collection of belief-relevant evidence, followedby
    the integration of new information with the prior to produce
    an updated judgment. Critically important in such
    normative models is the requirement that the collection
    and integration of new information be kept independent
    of one’s prior judgment (see Evans and Over 1996)." p755

    "From another perspective, with which we also have
    sympathy, Bayesian updating requires independence between
    priors and new evidence (Evans and Over 1996;
    Green and Shapiro 1994; but see Gerber and Green 1998).
    In the extreme, if one distorts new information so that it
    always supports one’s priors, one cannot be rationally responsive
    to the environment; similarly, manipulating the
    information stream to avoid any threat to one’s priors is
    no more rational than the proverbial ostrich." p767
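    The conjugate-normal case behind Anderson's Equation 1 can be made concrete with a short sketch (constructed for this thread, not Anderson's code: a normal mean with known sigma, arbitrary numbers, NumPy only). Two sharply different Priors, integrated with the same Evidence, converge on nearly the same Posterior, and the 95% belief interval uses the same 1.96-standard-deviation formula as the confidence interval.

    import numpy as np

    rng = np.random.default_rng(2)
    sigma = 10.0                                 # known population sd
    data = rng.normal(50.0, sigma, size=200)     # the Evidence: one sample

    def normal_update(prior_mean, prior_sd, x, sigma):
        """Integrate a normal Prior with a normal likelihood (known sigma)."""
        prior_prec = 1.0 / prior_sd ** 2
        data_prec = len(x) / sigma ** 2
        post_var = 1.0 / (prior_prec + data_prec)
        post_mean = post_var * (prior_prec * prior_mean + data_prec * x.mean())
        return post_mean, np.sqrt(post_var)

    # Two very different Priors converge on nearly the same Posterior,
    # and the 95% belief interval comes from the usual 1.96 * sd formula.
    for prior_mean, prior_sd in [(0.0, 20.0), (100.0, 20.0)]:
        m, s = normal_update(prior_mean, prior_sd, data, sigma)
        print(f"Prior mean {prior_mean:5.1f} -> Posterior {m:.2f} "
              f"[{m - 1.96 * s:.2f}, {m + 1.96 * s:.2f}]")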
    • Phil Wood
      Phil Wood I agree with the literature you cite. It's just that it seems less of a leap to me to make frequentist statements about the world, given that the theory corresponds to observed external random variation in the world. Making the assumption that my psychological beliefs must also obey distributional assumptions seems more of a creative, albeit very productive, leap. I've been doing Bayesian statistics for the past 40 years and was exposed to it by Melvin Novick at the U of Iowa. I'm a "believer" but am trying to put a philosophical hat on when thinking about the general process.
    • Fredrick George Welfare
      Fredrick George Welfare Phil Wood

      It may be that the demarcation is over randomness and how it is understood, both from the probabilist perspective, given sequential or stochastic frequencies from multiple logistical sources, and from the ‘go figure’ or ‘never saw it coming’ camp that has experienced sudden change and surprise on several occasions. Should outliers receive attention?

      It is certainly true that elite governed fields can abruptly change in spite of statistically based expectations and in spite of the mundane doxa of cultural and national reproduction.
    • Phil Wood
      Phil Wood Fredrick George Welfare Maybe a concrete example brings the discussion forward productively. To illustrate how I believe that modern Bayesian inference can be viewed as critical rationalism, suppose I do a simple t-test and specify that my prior and posterior beliefs follow a t-distribution. Before I start to calculate credible intervals on my posterior beliefs, however, I look at the Markov chain Monte Carlo (MCMC) distribution of my draws. I look at this and see, much to my surprise, that the MCMC distribution is skewed. I therefore change my proposal distribution and now say that my beliefs are modeled by a chi-square distribution (which allows skew; there are other distributions I could use). Rerunning the model and looking at the MCMC sampler, I now see that my distribution looks like my new proposal distribution. There are, of course, other moves I could make, like rescaling the variable or identifying influential or outlying observations, but this one move highlights that I make my conjectural Bayesian model, propose the MCMC distribution as a critical test, and kill off the first Bayesian analysis in favor of a model which has better "survival value." Does that work as an example?
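      A toy version of that conjecture-check-revise loop can be sketched as follows (constructed for illustration, not Phil Wood's code: exponential data with rate lambda, a Gamma(2, 1) prior, and a hand-rolled random-walk Metropolis sampler; NumPy and SciPy only). The true posterior here is skewed, so the symmetric first conjecture fails the skewness check, and a skew-allowing gamma family is then checked against the same draws.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      x = rng.exponential(scale=2.0, size=5)     # small sample -> visibly skewed posterior

      def log_post(lam):
          # Gamma(2, 1) prior on the rate lambda, exponential likelihood.
          if lam <= 0:
              return -np.inf
          return np.log(lam) - lam + len(x) * np.log(lam) - lam * x.sum()

      # Random-walk Metropolis sampler.
      draws, lam = [], 1.0
      for _ in range(20_000):
          prop = lam + rng.normal(0.0, 0.2)
          if np.log(rng.random()) < log_post(prop) - log_post(lam):
              lam = prop
          draws.append(lam)
      draws = np.array(draws[5_000::10])         # drop burn-in, thin

      # Step 1: the first conjecture was a symmetric posterior; check the draws.
      print("skewness of draws:", stats.skew(draws))   # positive: the symmetric conjecture fails

      # Step 2: revise the conjecture to a skew-allowing family (gamma) and
      # run the same informal check against it.
      a, loc, scale = stats.gamma.fit(draws, floc=0)
      print("KS p-value vs fitted gamma:",
            stats.kstest(draws, "gamma", args=(a, loc, scale)).pvalue)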
    • Fredrick George Welfare
      Fredrick George Welfare Phil Wood The differential statistical analysis you provided seems to mean that the new model, the one that explains the data, is more valid, more rigorous, than the initial model. So, what have you done? An updating of your hypothesis or theory? A refutation of your initial position?

      I think the Bayesian takes the given gathered data and, using statistics, analysis, critique, contexts, ..., forms a possible explanation, a hypothesis. As new information arrives over time, this initial position is recalculated, updated, and the prior explanation is restated with the new integrated information.
  • Fredrick George Welfare
    Fredrick George Welfare When the facts get in the way of an ideological conviction, or any other kind of anchored belief, the facts must be bent or discarded, but the conviction should remain unchanged. (I think this is a quote but I do not know who said it.) This has been the problem from Popper, through Kuhn, to the recent Sociology of Science paradigm. The facts are often discounted or ad hoc hypotheses are invented to "account" for them. The problem, however, is that 'The Bayesian Man' is not recognized and the same old convictions hold sway. This is the problem.

    Let's take the example of college teaching. For many years, the professors claimed that methods matter: student achievement on 'final exams' varied by the professor's methods. The educational researchers, however, claimed that there was no significant difference between one method and another; achievement on final exams either was not significantly different or there were other causative factors. This was the early 1960s and before. Today, however, educational researchers widely recognize the impact of teaching methods, i.e., visualization, questioning and participation techniques, lecture style and format, e.g., spelling it out! But teachers are reluctant to accept the educational research, claiming that no matter what they do, it makes little difference, especially with regard to very young students in grammar school. Thus we have a high dropout rate from high schools and from early college! Administrators struggle with teachers and professors to improve their teaching techniques because they make a difference, while the teachers and professors maintain their convictions/beliefs.

    It is a matter of convictions versus responsibility.
  • Cyrus Contractor
    Cyrus Contractor I am probably missing something, or more probably missing a lot, due to my lay knowledge of this, but to my mind saying "The logic of science does not consist of picking the most probable explanation from a set of alternatives known in advance: it consists of creating new ones." is quite a narrow definition of science. Perhaps that definition fits the requirements of your thesis, but for me it is not a full definition of science. Science often does examine a set of probable explanations of whatever results it generates. It may then design a test (aka experiment) to discover if the most probable (or indeed any) explanation(s) are valid enough to move forward with. When the most reasonable explanation is not valid or sufficient, then indeed it may, as a consequence of the scientific method, create a new theory, again to be tested in its own turn. So science's use as a tool to understand is not confined solely to creating new explanations; it can ratify existing ones too, and indeed should do so on a continuum to ensure we are still doing, or building policy on, facts and not fictions. So personally I would avoid the definition of science used here and make it broader?
