The decision tree is a popular and effective algorithm used primarily for classification, but it also performs well when predicting quantitative outcomes.
Regression trees are a versatile data-analysis technique commonly used in tasks such as poststratification, forecasting, and segmentation.
How can we estimate the potential value of a consumer's purchases, predict the behaviour of a visitor to our website, or reduce the risk of credit losses?
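Questions like these can be approached with a regression tree, which predicts a numeric target by recursively splitting the data on predictor values. A minimal sketch using scikit-learn's `DecisionTreeRegressor` (the data and column meanings here are purely illustrative, not from the original post):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative data: [customer age, number of past visits] vs. basket value
X = np.array([[25, 2], [32, 5], [41, 8], [23, 1], [55, 12], [38, 6]])
y = np.array([120.0, 260.0, 410.0, 90.0, 640.0, 310.0])

# A shallow tree keeps the splits easy to read and interpret
tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(X, y)

# Predict the basket value for a new 30-year-old customer with 4 past visits;
# the prediction is the mean target value of the leaf the customer falls into
print(tree.predict([[30, 4]]))
```

Because each leaf predicts the mean of the training cases that reach it, the output is always within the range of observed target values, which makes the tree's segments easy to explain to business users.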
In the previous PS blog post, “Trees that grow out of tables”, Janusz Wachnicki described how a good understanding of the humble crosstab can help us utilise classification trees more...
When we step outside the domain of distributions and descriptive statistics for individual variables, we usually take an interest in the relationships between variables.