What is Hidden Representation?
Hidden representations are part of feature learning: they are the machine-readable encodings of the input data produced by a neural network’s hidden layers. The output of an activated hidden node, or neuron, feeds classification or regression at the output layer, but the intermediate encoding of the input data itself, independent of any later analysis, is called the hidden representation.
Another way to look at it: the raw outputs of a network’s hidden layers are just numbers. What the network learns to encode in those numbers, that is, how it represents the input’s features, is the hidden representation.
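As a minimal sketch (the network, shapes, and weights here are illustrative assumptions, not from the article), the hidden representation is simply the activation vector of a hidden layer, which the output layer then consumes:

```python
import numpy as np

# A tiny one-hidden-layer network. The hidden activation vector h is the
# "hidden representation" of the input x; the output layer sees only h,
# never x directly.
rng = np.random.default_rng(0)

x = rng.normal(size=4)          # 4 input features (assumed size)
W1 = rng.normal(size=(3, 4))    # input -> hidden weights
W2 = rng.normal(size=(2, 3))    # hidden -> output weights

h = np.tanh(W1 @ x)             # hidden representation of x
y = W2 @ h                      # output layer works from h alone

print(h.shape)  # (3,)
print(y.shape)  # (2,)
```

The same vector h could be reused for a different downstream task without retraining the first layer, which is what makes it a representation of the data rather than just an intermediate result.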
Why is Hidden Representation Important?
The performance of any deep learning method depends on the data representation it operates on. Much of the effort in deploying machine learning algorithms therefore goes into designing preprocessing pipelines and other data transformations, all to represent the data in the form a machine can learn from most effectively. This feature engineering is crucial, yet labor-intensive.
The need for human intervention in the representation framework demonstrates one of the biggest weaknesses of current learning algorithms: their inability to extract and organize the discriminative information in the data without outside help.
The goal of hidden representations is to teach the algorithm to do its own feature engineering, making the learning process more self-reliant. Effective, automated representations would enable all sorts of novel applications, not to mention putting true “intelligence” into the phrase AI.
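One common way a network does its own feature engineering is an autoencoder, which learns a compact hidden code purely by trying to reconstruct its input. The following is a hypothetical sketch, with all shapes, data, and hyperparameters chosen for illustration:

```python
import numpy as np

# A linear autoencoder: 8-dimensional data is squeezed through a
# 2-dimensional hidden code. No features are hand-designed; the
# encoder weights are learned by gradient descent on reconstruction error.
rng = np.random.default_rng(0)

# Synthetic data that actually lies on a 2-D subspace of 8-D space.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 8))
X = latent @ basis

W_enc = rng.normal(size=(8, 2)) * 0.1   # encoder weights (assumed init)
W_dec = rng.normal(size=(2, 8)) * 0.1   # decoder weights
lr = 0.01

def loss(X, W_enc, W_dec):
    H = X @ W_enc              # hidden representation, learned not engineered
    return np.mean((X - H @ W_dec) ** 2)

before = loss(X, W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc
    R = H @ W_dec - X                       # reconstruction residual
    grad_dec = 2 * H.T @ R / X.size         # dL/dW_dec
    grad_enc = 2 * X.T @ (R @ W_dec.T) / X.size  # dL/dW_enc
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
after = loss(X, W_enc, W_dec)
print(after < before)
```

The hidden code `X @ W_enc` is a representation the network discovered on its own, and reconstruction error falls as training proceeds, which is the self-reliance the section describes.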