Author

Sears, Timothy

Date
Description
This thesis identifies and extends techniques that can be linked to the principle of maximum entropy (maxent) and applied to parameter estimation in machine learning and statistics. Entropy functions based on deformed logarithms are used to construct Bregman divergences, and together these represent a generalization of relative entropy. The framework is analyzed using convex analysis to characterize generalized forms of exponential family distributions. Various connections to the existing machine learning literature are discussed and the techniques are applied to the problem of non-negative matrix factorization (NMF).
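To illustrate the central construction named in the abstract — a Bregman divergence generated by a convex entropy function — here is a minimal sketch (not code from the thesis itself). The Bregman divergence of a convex function F is D_F(p, q) = F(p) - F(q) - ⟨∇F(q), p - q⟩; when F is the negative Shannon entropy, D_F recovers relative entropy (KL divergence) on probability vectors, which is the special case the abstract says the framework generalizes. Function names below are illustrative choices, not identifiers from the thesis.

```python
import numpy as np

def bregman_divergence(F, grad_F, p, q):
    """D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# Negative Shannon entropy (a convex function) and its gradient.
def neg_entropy(x):
    return np.sum(x * np.log(x))

def grad_neg_entropy(x):
    return np.log(x) + 1.0

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])

# On probability vectors the Bregman divergence of negative entropy
# coincides with the KL divergence sum(p * log(p / q)).
d = bregman_divergence(neg_entropy, grad_neg_entropy, p, q)
kl = np.sum(p * np.log(p / q))
```

Replacing the natural logarithm with a deformed logarithm changes the generating convex function F and thus yields the generalized divergences the thesis studies.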
GUID
oai:openresearch-repository.anu.edu.au:1885/49355
Identifier
oai:openresearch-repository.anu.edu.au:1885/49355
Identifiers
b25317040
http://hdl.handle.net/1885/49355
10.25911/5d7a2d1a0f9bb
https://openresearch-repository.anu.edu.au/bitstream/1885/49355/6/01front.pdf.jpg
https://openresearch-repository.anu.edu.au/bitstream/1885/49355/7/02whole.pdf.jpg
Publication Date
Titles
Generalized Maximum Entropy, Convexity and Machine Learning