Improving language understanding through common-sense and anticipation-enriched representation learning
The goal is to test whether various downstream language understanding tasks can be improved by using 'anticipatory' representations of symbolic words.
The intended properties of these 'anticipatory representations' are:
- High-level: they should capture a high-level semantic concept, e.g. a spatial or temporal relation between words
- Common sense: they should carry 'common sense' of the kind humans acquire from living in the real world. More specifically, perceptual data will be used as a proxy for this 'experience'
The idea is to predict these representations for upcoming data (e.g. the next part of a text, or the next video frame if applied to video understanding) and, based on that prediction, restrict the space of possible answers for the specific task at hand (e.g. if your task is recognising named entities, the 'common sense' prediction of what should follow narrows the possibility space).
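As a minimal sketch of the restriction step: assume the model has produced an anticipatory vector for the upcoming span, and that each candidate answer (here, a named-entity label) has a prototype embedding. Labels whose prototypes are incompatible with the prediction are pruned before the task model chooses. All names, embeddings, and the cosine threshold below are hypothetical illustrations, not part of the proposal itself.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def restrict_label_space(predicted_repr, label_prototypes, threshold=0.5):
    """Keep only labels whose prototype embedding is compatible with
    the anticipatory prediction for the upcoming span."""
    return [label for label, proto in label_prototypes.items()
            if cosine(predicted_repr, proto) >= threshold]

# Toy 3-d prototype embeddings for entity labels (illustrative only).
label_prototypes = {
    "PERSON":   np.array([1.0, 0.0, 0.0]),
    "LOCATION": np.array([0.0, 1.0, 0.0]),
    "DATE":     np.array([0.0, 0.0, 1.0]),
}

# Anticipatory prediction suggesting a person-like entity follows.
predicted = np.array([0.9, 0.1, 0.0])

allowed = restrict_label_space(predicted, label_prototypes)
print(allowed)  # only PERSON survives the pruning
```

In a real system the prototypes and predictions would come from learned (and perceptually grounded) encoders rather than hand-set vectors; the point is only that the anticipatory signal acts as a filter on the task's output space.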