Build a language model
Thread poster: cyrine84
I want to build a 5-gram language model, and I would like to know whether these three steps are necessary:
Tokenize the training data,
Filter out long sentences,
Lowercase the training data,
and why?
Can anyone help me?
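For concreteness, the three steps listed above could be sketched as follows. This is a minimal illustration, not from the thread: it assumes naive whitespace tokenization and an arbitrary maximum sentence length of 80 tokens (real pipelines typically use a proper tokenizer and a corpus-specific length cutoff).

```python
# Sketch of the three preprocessing steps from the question.
# Assumptions: whitespace tokenization, max length of 80 tokens.
MAX_LEN = 80

def preprocess(sentences):
    out = []
    for sentence in sentences:
        tokens = sentence.split()                # 1. tokenize (naive whitespace split)
        if len(tokens) > MAX_LEN:                # 2. filter out long sentences
            continue
        out.append([t.lower() for t in tokens])  # 3. lowercase
    return out

corpus = ["The CAT sat on the mat .", "Hello World !"]
print(preprocess(corpus))
# → [['the', 'cat', 'sat', 'on', 'the', 'mat', '.'], ['hello', 'world', '!']]
```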
plotinus
A language model is not a one-size-fits-all tool that you can apply to every situation. In short, a language model will (and should) only model the characteristics that you want it to model. You should also be aware that the order of the model (you want a 5-gram model) is subject to practical constraints: for many languages, such a high order would either produce an extremely large model in terms of bytes (which can make it slow or even impossible to load into memory) or one that needs to be pruned.
Anyway, filtering out long sentences is usually not necessary for language models, unless you are concerned about the time it takes to build the model itself (it matters a lot, however, if you want training candidates for statistical machine translation). This is because you will usually filter out uncommon grams anyway, i.e., grams whose frequencies fall below a given threshold, usually obtained with an estimator.
Lowercasing the training data is usually done both to obtain a smaller/faster language model and to make it more general and less context-specific. If you are unsure whether to lowercase (i.e., if what you want to model is not case-sensitive), you should lowercase.
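A quick illustration of why lowercasing shrinks the model: distinct casings of the same n-gram collapse into one entry. The toy bigrams below are made up for the example.

```python
from collections import Counter

raw = ["The cat", "the cat", "THE cat"]

# Count bigrams with case preserved vs. lowercased.
cased = Counter(tuple(s.split()) for s in raw)
lowered = Counter(tuple(s.lower().split()) for s in raw)

print(len(cased))    # → 3  distinct bigrams when case is kept
print(len(lowered))  # → 1  after lowercasing they merge
```

Merging these entries also concentrates the counts, which gives more reliable frequency estimates for the surviving grams.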
Could you be a little more specific about what you want to model and why?