Eric-Jan Wagenmakers, 23.03.26, 18:30
In Bayesian model selection, the Ockham factor is the extent to which the maximum likelihood needs to be discounted to attain a fair assessment of a model's predictive performance. As a correction for selection, the Ockham factor quantifies the amount of prior mass that was wasted on parameter values that are undercut by the data. In this talk I will outline several complementary...
Jeremias Knoblauch, 24.03.26, 09:30
In this talk, I provide my perspective on the efforts to develop inference procedures with Bayesian characteristics that go beyond Bayes' Rule as an epistemological principle. I will explain why these efforts are needed, as well as the forms which they take. As an example, I will focus on the recently developed predictively oriented (PrO) posterior, which expresses epistemic uncertainty as a...
Jan-Willem Romeijn, 24.03.26, 10:15
In statistics it is often said that "All models are wrong but some are useful". This dictum seems to suggest that statistical models are somehow truth-apt, or factive. In my talk I investigate how we might conceptualize statistical models so that they are, and I argue that ideas on the factivity and truth-aptness of models can be used to clarify certain debates over statistical methodology....
Sabine Hoffmann, 24.03.26, 11:30
When analyzing their data, researchers usually need to choose among a number of statistical models that are not diverse and flexible enough to adequately capture the data generation mechanism. In this situation, they often make the problem fit the tools by introducing auxiliary assumptions that are at best questionable and at worst indefensible. This talk will illustrate on past and ongoing...
Vincent Fortuin, 24.03.26, 12:15
Occam’s razor appears in several perspectives across statistics, information theory, and learning theory. In Bayesian model selection, it emerges through the marginal likelihood, which automatically trades off goodness of fit with the volume of parameter space supported by the data. Closely related ideas arise in the Minimum Description Length (MDL) framework, where model selection is...
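The automatic trade-off between fit and parameter volume that the abstract above describes can be seen in a toy coin-flip comparison. This is an illustrative sketch (not taken from the talk): a fixed fair-coin model is compared against a model with a uniform prior on the success probability, whose marginal likelihood integrates analytically to a Beta function.

```python
from math import factorial

def evidence_fixed(k, n, p=0.5):
    # Marginal likelihood of a specific sequence of n flips with k heads
    # under a model that fixes p (no free parameters, no Occam penalty).
    return p ** k * (1 - p) ** (n - k)

def evidence_uniform(k, n):
    # Marginal likelihood under p ~ Uniform(0, 1):
    # integral of p^k (1-p)^(n-k) dp = Beta(k+1, n-k+1) = k!(n-k)!/(n+1)!
    return factorial(k) * factorial(n - k) / factorial(n + 1)

# Balanced data (5 heads in 10): the simpler fixed-p model has the
# higher evidence, because the flexible model spreads prior mass over
# parameter values the data undercut.
assert evidence_fixed(5, 10) > evidence_uniform(5, 10)

# Skewed data (9 heads in 10): the flexible model's extra volume pays off.
assert evidence_uniform(9, 10) > evidence_fixed(9, 10)
```

The comparison direction flips purely through the marginal likelihood, with no explicit complexity penalty added by hand.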
Julia Stadler, 24.03.26, 14:00
The distribution of galaxies in the Universe holds important information about the origin and evolution of the Universe. It can elucidate some fundamental questions in current cosmology, for example, the nature of Dark Energy that accelerates the expansion of the Universe. Extracting this information from observations, however, poses considerable challenges in theoretical modeling and...
Matteo Guardiani (Max Planck Institute for Astrophysics), 24.03.26, 14:45
Model comparison in high-dimensional Bayesian inference remains computationally challenging due to the intractability of the marginal likelihood (evidence). Variational inference offers an attractive alternative by providing a tractable lower bound to the evidence, the Evidence Lower Bound (ELBO), which can be optimized and estimated efficiently even in very large parameter spaces.
In this...
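The ELBO's defining property, that it lower-bounds the log evidence and becomes tight at the exact posterior, can be checked on a one-dimensional conjugate model. This is a minimal sketch (not the speaker's method, and trivially low-dimensional compared with the setting of the talk): theta ~ N(0, 1), one observation x | theta ~ N(theta, 1), so the exact marginal is x ~ N(0, 2) and the exact posterior is N(x/2, 1/2).

```python
import math
import random

random.seed(0)

def log_norm(x, mu, var):
    # Log density of N(mu, var) evaluated at x.
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

x = 1.3  # single observation
log_evidence = log_norm(x, 0.0, 2.0)  # exact: x ~ N(0, 2)

def elbo(m, v, n=200_000):
    # Monte Carlo estimate of the ELBO for the variational family
    # q(theta) = N(m, v): E_q[log p(x|theta) + log p(theta) - log q(theta)].
    total = 0.0
    for _ in range(n):
        th = random.gauss(m, math.sqrt(v))
        total += log_norm(x, th, 1.0) + log_norm(th, 0.0, 1.0) - log_norm(th, m, v)
    return total / n

# At the exact posterior q = N(x/2, 1/2) the bound is tight (the integrand
# is constant, so the estimate matches log_evidence up to floating point).
assert abs(elbo(x / 2, 0.5) - log_evidence) < 1e-3

# A mismatched q (here, q = prior) gives a strictly looser bound:
# the gap is exactly KL(q || posterior).
assert elbo(0.0, 1.0) < log_evidence
```

In the high-dimensional regimes the abstract refers to, the same objective is optimized with gradient estimators rather than evaluated by brute-force sampling; only the bounding property is illustrated here.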
Rafael Fuchs (Munich Center for Mathematical Philosophy, LMU), 24.03.26, 15:30
Classical convergence results show that Bayesian agents who entertain the true hypothesis H as one of their alternatives will become certain of H in the limit. If H is not in the set of alternatives, on the other hand, we may still converge on the 'best' alternative (e.g. in terms of minimal KL-divergence, see Barron 1998). However, it has also been demonstrated how this can fail, if...
Jakob Maria Schröder, 24.03.26, 15:50
In adaptive Bayesian spectroscopy, physics fixes the forward model and what remains is the choice of prior. This prior also implicitly encodes a certain effective size of the model, which makes model selection and prior selection overlapping problems. In spectroscopy we typically have real structural knowledge: qualitatively speaking, spectra are smooth, variance is finite, certain frequency ranges...
Mario Hubert, 24.03.26, 16:10
This talk explores the steps that need to be taken to build a causal model for EPR experiments. Wood and Spekkens (2015) argued that causal discovery algorithms are insufficient for this task: they cannot distinguish EPR from Bell-inequality-violating correlations, since both share the same independence relations. Bell inequality violations instead provide a hypothesis space of possible causal...