A basic challenge for probabilistic models of cognition is explaining how probabilistically correct solutions can be approximated by a brain with limited resources, and how mismatches with human behavior can be explained. An emerging approach to this problem uses the same approximation algorithms that have been developed in computer science and statistics for working with complex probabilistic models. Two types of approximation algorithm have been used for this purpose: sampling algorithms, such as importance sampling and Markov chain Monte Carlo, and variational algorithms, such as mean-field approximations and assumed density filtering. Here I briefly review this work, outlining how the algorithms operate, how they can explain behavioral biases, and how they might be implemented in the brain. There are characteristic differences between how these two types of approximation are applied in brain and behavior, which point to ways they could be combined in future research.
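To make the first of these algorithm families concrete, the following is a minimal illustrative sketch (not taken from the work reviewed here) of self-normalized importance sampling: proposals are drawn from a prior, each is weighted by its likelihood, and the weighted average approximates a posterior expectation. The model (a uniform prior over a coin's bias, with coin flips as data) and all function names are assumptions chosen for the example.

```python
# Illustrative sketch of self-normalized importance sampling.
# Model (assumed for this example): theta ~ Uniform(0, 1); data are coin flips.
# Proposals come from the prior, so each weight is just the likelihood.
import random

random.seed(0)

def likelihood(theta, data):
    """Bernoulli likelihood of the observed flips given bias theta."""
    p = 1.0
    for x in data:
        p *= theta if x == 1 else (1.0 - theta)
    return p

def importance_sampling_posterior_mean(data, n_samples=100_000):
    """Estimate E[theta | data] by weighting prior samples by likelihood."""
    samples = [random.random() for _ in range(n_samples)]
    weights = [likelihood(s, data) for s in samples]
    total = sum(weights)
    # Self-normalized estimate: weighted average of the sampled values.
    return sum(w * s for w, s in zip(weights, samples)) / total

data = [1, 1, 1, 0, 1, 0, 1, 1]  # 6 heads, 2 tails
est = importance_sampling_posterior_mean(data)
print(est)
```

With a uniform (Beta(1, 1)) prior, the exact posterior mean here is (6 + 1) / (8 + 2) = 0.7, so the estimate should land close to that value. A hypothesis discussed in this literature is that a small number of such weighted samples, rather than the large number used above, can reproduce characteristic human biases.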