Portal:Machine learning
Machine learning
Machine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using it to make predictions or decisions, rather than following only explicitly programmed instructions.
Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which supply methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although the latter focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field."
Selected article
A random forest is an ensemble model for classification or regression that consists of a multitude of decision trees. For classification, the forest outputs the class chosen by the most trees; for regression, it outputs the mean of the individual trees' predictions. Random forests correct for decision trees' habit of overfitting to their training set.
The algorithm for inducing a random forest was developed by Leo Breiman and Adele Cutler. The method combines Breiman's "bagging" idea with random selection of features: each tree is trained on a bootstrap sample of the training set, and only a random subset of the features is considered at each split, so that the individual trees are largely uncorrelated.
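A minimal sketch of these two ideas in Python, assuming scikit-learn for the individual decision trees and non-negative integer class labels; the class name TinyRandomForest and its parameters are illustrative choices, not Breiman and Cutler's reference implementation:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class TinyRandomForest:
    """Illustrative random forest: bagging plus per-split feature sampling."""

    def __init__(self, n_trees=100, random_state=0):
        self.n_trees = n_trees
        self.rng = np.random.default_rng(random_state)
        self.trees = []

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.n_trees):
            # Bagging: each tree is trained on a bootstrap sample of the data.
            idx = self.rng.integers(0, n, size=n)
            # Random feature selection: only sqrt(d) features are considered
            # at each split, which helps decorrelate the trees.
            tree = DecisionTreeClassifier(max_features="sqrt")
            tree.fit(X[idx], y[idx])
            self.trees.append(tree)
        return self

    def predict(self, X):
        # Aggregate by majority vote over the trees (assumes integer labels).
        votes = np.stack([tree.predict(X) for tree in self.trees])
        return np.array([np.bincount(col).argmax() for col in votes.T])

For practical use, scikit-learn's RandomForestClassifier and RandomForestRegressor implement the same procedure with many additional options.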
Selected biography
Geoffrey (Geoff) Everest Hinton FRS (born 6 December 1947) is a British-born cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. He divides his time between Google and the University of Toronto. He is a co-inventor of the backpropagation and contrastive divergence training algorithms and is an important figure in the deep learning movement.
Selected picture

An Elman network, one of the simplest types of recurrent neural networks. The network has an input layer, a hidden layer and an output layer. The hidden layer is connected to a context layer (bottom) that stores the hidden layer's activation from the previous time step, giving the network a memory and making it capable of processing sequences (e.g., sequences of words or of phonemes).
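A minimal NumPy sketch of the network described in the caption, assuming illustrative layer sizes and weight names (these are not taken from the original figure):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 3                           # illustrative sizes

W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))       # input   -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hidden, n_hidden))   # context -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))      # hidden  -> output
b_h, b_y = np.zeros(n_hidden), np.zeros(n_out)

def run_sequence(xs):
    """Process a sequence of input vectors one time step at a time."""
    context = np.zeros(n_hidden)      # the context layer starts out empty
    outputs = []
    for x in xs:
        # The hidden layer sees the current input and the previous hidden state.
        hidden = np.tanh(W_xh @ x + W_ch @ context + b_h)
        outputs.append(W_hy @ hidden + b_y)
        context = hidden              # remember this activation for the next step
    return outputs

outputs = run_sequence(rng.normal(size=(5, n_in)))         # a toy 5-step sequence

The context layer is what distinguishes an Elman network from a feed-forward network: because the hidden state at each step depends on the previous one, the output for a given input can differ depending on what came before it in the sequence.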
Did you know?
- ... that the kernel perceptron (sketched in code after this list) was the first learning algorithm to employ the kernel trick, as early as 1964?
- ... that AltaVista was the first web search engine to employ machine-learned ranking of its search results?
- ... that the group method of data handling, invented in the USSR, was one of the first algorithms capable of training deep neural networks (ca. 1971)?
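A minimal sketch of the kernel perceptron mentioned in the first item, using an RBF kernel as an illustrative choice; the function names and parameters here are assumptions for the example, not a reference implementation:

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_kernel_perceptron(X, y, epochs=10, kernel=rbf_kernel):
    """X: (n, d) inputs; y: labels in {-1, +1}. Returns per-example mistake counts."""
    n = len(X)
    alpha = np.zeros(n)               # alpha[i] counts mistakes on example i
    for _ in range(epochs):
        for i in range(n):
            # The kernel trick: the decision function is a kernel-weighted sum
            # over training examples, so no explicit feature map is ever built.
            score = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(n))
            if y[i] * score <= 0:     # mistake: weight this example more strongly
                alpha[i] += 1
    return alpha

def predict(X_train, y_train, alpha, x, kernel=rbf_kernel):
    score = sum(alpha[j] * y_train[j] * kernel(X_train[j], x)
                for j in range(len(X_train)))
    return 1 if score > 0 else -1

Unlike the ordinary perceptron, the model is stored as mistake counts over the training examples rather than as an explicit weight vector, which is what allows an arbitrary kernel to be substituted.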