June 2008
DeepLearningWorkshopNIPS2007
(via) Theoretical results strongly suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need "deep architectures", which are composed of multiple levels of non-linear operations (such as in neural nets with many hidden layers). Searching the parameter space of deep architectures is a difficult optimization task, but learning algorithms (e.g. Deep Belief Networks) have recently been proposed to tackle this problem with notable success, beating the state of the art in certain areas.
This workshop is intended to bring together researchers interested in deep learning in order to review the principles and successes of current algorithms, but also to identify the challenges and to formulate promising directions of investigation. Besides the algorithms themselves, there are many fundamental questions that need to be addressed: What would be a good formalization of deep learning? What new ideas could be exploited to make further inroads into this difficult optimization problem? What makes a good high-level representation or abstraction? What type of problem is deep learning appropriate for?
The workshop presentation page shows selected links to relevant papers (PDF) on the topic.
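As a minimal sketch of what "multiple levels of non-linear operations" means in practice, the toy forward pass below composes a few tanh layers in Python; the layer sizes and random weights are illustrative assumptions, not code from the workshop.

import numpy as np

# Toy "deep architecture": a stack of non-linear layers, each level
# transforming the output of the one below it. Sizes and weights are
# arbitrary illustrations, not any particular trained model.
rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]  # input -> two hidden levels -> output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Depth comes from composing several affine-map-plus-non-linearity
    # levels; training all levels jointly is the hard optimization task.
    for W in weights:
        x = np.tanh(x @ W)
    return x

print(forward(rng.standard_normal(8)).shape)  # (4,)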
June 2007
Temporal difference learning - Wikipedia, the free encyclopedia
(via) Temporal difference learning is a prediction method that has mostly been used to solve the reinforcement learning problem. "TD learning is a combination of Monte Carlo ideas and dynamic programming (DP) ideas." [2] TD resembles a Monte Carlo method because it learns by sampling the environment according to some policy. TD is related to dynamic programming techniques because it approximates its current estimate based on previously learned estimates (a process known as bootstrapping). The TD learning algorithm is also related to the temporal difference model of animal learning.
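To make the bootstrapping idea concrete, here is a hedged sketch of tabular TD(0), the simplest temporal difference method, on a toy random-walk chain; the environment, policy, step size, and discount are illustrative assumptions, not taken from the article. Each update moves V(s) toward the target r + gamma * V(s'), i.e. toward an estimate built from another learned estimate.

import random

# Tabular TD(0) on an assumed 5-state chain: episodes start in the
# middle, terminate off either end, and only the right end pays reward 1.
ALPHA, GAMMA, N_STATES = 0.1, 1.0, 5
V = [0.0] * N_STATES

def step(s):
    # Fixed random policy: move left or right with equal probability.
    s2 = s + random.choice((-1, 1))
    if s2 < 0:
        return None, 0.0   # left terminal, reward 0
    if s2 >= N_STATES:
        return None, 1.0   # right terminal, reward 1
    return s2, 0.0

for _ in range(5000):
    s = N_STATES // 2
    while s is not None:
        s2, r = step(s)
        # Bootstrapping: the target r + gamma * V(s') uses the current
        # estimate V(s') instead of waiting for a full Monte Carlo return.
        target = r + (GAMMA * V[s2] if s2 is not None else 0.0)
        V[s] += ALPHA * (target - V[s])
        s = s2

print([round(v, 2) for v in V])  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]

Sampling transitions from the environment is the Monte Carlo ingredient; updating from the current estimate of the next state is the dynamic-programming ingredient.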
February 2007
Introduction to Machine Learning
(via) This page has pointers to my draft book on Machine Learning and to its individual chapters. They can be downloaded in Adobe Acrobat format. Although I have tried to eliminate errors, some undoubtedly remain---caveat lector. Certain elements of the typography (overflow into margins, etc.) have not been polished.
The notes survey many of the important topics in machine learning circa 1996. My intention was to pursue a middle ground between theory and practice. The notes concentrate on the important ideas in machine learning---they are neither a handbook of practice nor a compendium of theoretical proofs. My goal was to give the reader sufficient preparation to make the extensive literature on machine learning accessible. The draft is just over 200 pages (including front matter).
November 2006
Dictionary of the History of Ideas
by 1 other (via) The Dictionary of the History of Ideas: Studies of Selected Pivotal Ideas, edited by Philip P. Wiener, was published by Charles Scribner's Sons, New York, in 1973-74. The Dictionary of the History of Ideas also appeared in Chinese- and Japanese-language editions.
Online electronic version.