
Authors: Veselin Stoyanov, Jason Eisner
Conditional Random Fields (CRFs) are a popular formalism for structured prediction in NLP. It is well known how to train CRFs with certain topologies that admit exact inference, such as linear-chain CRFs. Some NLP phenomena, however, suggest CRFs with more complex topologies. Should such models be used, considering that they make exact inference intractable? Stoyanov et al. (2011) recently argued for training parameters to minimize the task-specific loss of whatever approximate inference and decoding methods will be used at test time. We apply their method to three NLP problems, showing that (i) using more complex CRFs leads to improved performance, and that (ii) minimum-risk training learns more accurate models.
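The core idea of minimum-risk training is to optimize the expected task loss under the model's predictive distribution, rather than conditional log-likelihood. The following is only a toy sketch of that objective, not the paper's actual method (which differentiates a task loss through approximate CRF inference): it uses a single softmax classifier with 0/1 loss, for which the risk gradient has a simple closed form.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of real-valued scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def expected_loss(theta, gold):
    # risk = sum_y p(y) * loss(y, gold); with 0/1 loss this is
    # the total probability mass on incorrect labels
    p = softmax(theta)
    return sum(p[y] for y in range(len(theta)) if y != gold)

def risk_gradient(theta, gold):
    # d(risk)/d(theta_k) = p_k * (loss_k - risk) for a softmax model,
    # where loss_k is the 0/1 loss of predicting label k
    p = softmax(theta)
    risk = expected_loss(theta, gold)
    return [p[k] * ((0.0 if k == gold else 1.0) - risk)
            for k in range(len(theta))]

# gradient descent directly on the risk (toy data: 3 labels, gold = 1)
theta = [0.0, 0.0, 0.0]
gold = 1
for _ in range(200):
    g = risk_gradient(theta, gold)
    theta = [t - 0.5 * gi for t, gi in zip(theta, g)]
```

After training, nearly all probability mass sits on the gold label, so the expected 0/1 loss is close to zero. In the structured setting studied here, the same objective is minimized with the loss backpropagated through the (approximate) inference procedure itself.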
