How to grow a mind (2011)

Tenenbaum JB, Kemp C, Griffiths TL, Goodman ND.
How to grow a mind: statistics, structure, and abstraction.
Science. 2011 Mar 11;331(6022):1279-85.
http://www.ncbi.nlm.nih.gov/pubmed/21393536

Talk: 24th Annual Conference on Neural Information Processing Systems (NIPS 2010).

In coming to understand the world (in learning concepts, acquiring language, and grasping causal relations), our minds make inferences that appear to go far beyond the data available. How do we do it?

This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems.
Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
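As a purely illustrative sketch of the hierarchical idea above (not code from the paper), the toy model below shows abstract knowledge guiding inference from sparse data: a learner who has seen several bags of marbles that each turned out to be all one colour infers the higher-level "overhypothesis" that bags are near-uniform, and can then predict a new bag's contents confidently from a single draw. The marble-bag scenario, grid approximation, and all numbers are my own assumptions chosen to echo the overhypothesis examples the review discusses.

```python
# Toy hierarchical Beta-Binomial model (illustrative only).
import numpy as np
from scipy.stats import betabinom

# Grid over the hyperparameter alpha of a symmetric Beta(alpha, alpha) prior
# on each bag's proportion of black marbles; small alpha means bags tend to
# be almost all one colour (the abstract "overhypothesis" about bags).
alpha_grid = np.linspace(0.02, 5.0, 250)

# Previously observed bags: (black marbles, total draws), each fully uniform.
observed_bags = [(10, 10), (0, 10), (10, 10), (0, 10), (10, 10)]

# Posterior over alpha under a uniform prior on the grid, using the exact
# Beta-Binomial marginal likelihood for each bag (bag-level theta integrated out).
log_post = np.zeros_like(alpha_grid)
for k, n in observed_bags:
    log_post += betabinom.logpmf(k, n, alpha_grid, alpha_grid)
post_alpha = np.exp(log_post - log_post.max())
post_alpha /= post_alpha.sum()

# New bag: a single black marble has been drawn.  Under Beta(alpha, alpha),
# the posterior predictive for the next draw is (k + alpha) / (n + 2 * alpha).
k_new, n_new = 1, 1
pred_per_alpha = (k_new + alpha_grid) / (n_new + 2 * alpha_grid)

hierarchical = np.sum(post_alpha * pred_per_alpha)   # uses learned overhypothesis
flat = (k_new + 1.0) / (n_new + 2.0)                 # fixed Beta(1, 1) prior, no hierarchy

print(f"hierarchical learner: P(next marble black) = {hierarchical:.2f}")
print(f"flat-prior learner:   P(next marble black) = {flat:.2f}")
```

The contrast in the two printed numbers is the point: the hierarchical learner, having acquired the abstract knowledge at the upper level, generalizes strongly from one observation, while the flat learner stays close to 2/3.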

41:20 Probabilistic programming languages
A universal language for describing generative models, plus generic tools for (approximate) probabilistic inference (see the sketch after this list):
– probabilistic logic programming (Prolog-based)
– probabilistic functional programming (Lisp-based) or probabilistic imperative programming (Matlab-based)
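A minimal sketch of that idea, in Python rather than any particular system (the model, probabilities, and function names are assumptions for illustration): the generative model is just an ordinary program that makes random choices, and one generic inference procedure, here crude rejection sampling, works for any such program paired with a condition on its output.

```python
import random

def infer(model, condition, query, num_samples=20_000):
    """Generic approximate inference: run the generative program many times,
    keep the runs whose trace satisfies the condition, average the query."""
    accepted = []
    for _ in range(num_samples):
        trace = model()
        if condition(trace):
            accepted.append(query(trace))
    return sum(accepted) / len(accepted) if accepted else float("nan")

# A generative model written as plain code: a sprinkler/rain toy example.
def sprinkler_model():
    rain = random.random() < 0.2
    sprinkler = random.random() < 0.1
    # The grass is likely wet if it rained or the sprinkler ran (noisy-or).
    wet = (rain and random.random() < 0.9) or (sprinkler and random.random() < 0.8)
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# Query: probability that it rained, given that the grass is observed wet.
p_rain_given_wet = infer(
    sprinkler_model,
    condition=lambda t: t["wet"],
    query=lambda t: t["rain"],
)
print(f"P(rain | grass wet) ≈ {p_rain_given_wet:.2f}")
```

Note that `infer` knows nothing about the model's internal structure; that separation between a universal modeling language and generic inference machinery is what the note above is pointing at, with real systems replacing rejection sampling by more sophisticated approximate inference.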

related:
How to build a better brain
Carnegie Mellon University, 2011
http://www.youtube.com/watch?v=dLSqs7qhNwQ
