Example usage for Java edu.stanford.nlp.tagger.maxent MaxentTagger fields, constructors, methods, implement or subclass

Introduction

On this page you can find the fields, constructors, and methods of edu.stanford.nlp.tagger.maxent.MaxentTagger.

The descriptions are taken from its open source code.

Constructor

MaxentTagger(TaggerConfig config)
MaxentTagger(String modelFile)
Constructor for a tagger, loading a model stored in a particular file, classpath resource, or URL.
MaxentTagger(InputStream modelStream)
Constructor for a tagger, loading a model read from the given input stream.
MaxentTagger(String modelFile, Properties config, boolean printLoading)
Initializer that loads the tagger.
MaxentTagger(InputStream modelStream, Properties config, boolean printLoading)
Initializer that loads the tagger.
MaxentTagger(String modelFile, Properties config)
Constructor for a tagger using a model stored in a particular file, with options taken from the supplied Properties.
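
A minimal construction sketch, assuming a standard English model is available; the model path below is an assumption for illustration and is not taken from this page:

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class TaggerConstructionExample {
    public static void main(String[] args) {
        // Hypothetical model location; point this at whichever trained model you actually have.
        String modelPath = "edu/stanford/nlp/models/pos-tagger/english-left3words-distsim.tagger";

        // The String constructor loads the model from a file, classpath resource, or URL.
        MaxentTagger tagger = new MaxentTagger(modelPath);

        // tagSet() reports the part-of-speech tags the loaded model knows about.
        System.out.println("Loaded a tagger with " + tagger.tagSet().size() + " tags");
    }
}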

Method

void main(String[] args)
Command-line tagger interface.
List<List<TaggedWord>> process(List<? extends List<? extends HasWord>> sentences)
Tags the Words in each Sentence in the given List with their grammatical part-of-speech.
List<TaggedWord> tagSentence(List<? extends HasWord> sentence)
Returns a new Sentence that is a copy of the given sentence with all the words tagged with their part-of-speech.
List<TaggedWord> tagSentence(List<? extends HasWord> sentence, boolean reuseTags)
Returns a new Sentence that is a copy of the given sentence with all the words tagged with their part-of-speech.
Set<String> tagSet()
String tagString(String toTag)
Tags the input string and returns the tagged version.
String tagTokenizedString(String toTag)
Tags the tokenized input string and returns the tagged version.
List<List<HasWord>> tokenizeText(Reader r)
Reads data from r, tokenizes it with the default (Penn Treebank) tokenizer, and returns a List of Lists of HasWord objects (one list per sentence), which can then be fed into tagSentence.
List<List<HasWord>> tokenizeText(Reader r, TokenizerFactory<? extends HasWord> tokenizerFactory)
Reads data from r, tokenizes it with the given tokenizer, and returns a List of Lists of HasWord objects, which can then be fed into tagSentence.
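
A short end-to-end sketch tying these methods together: it tags a raw string with tagString, then tokenizes a file with tokenizeText and tags each sentence with tagSentence. The model path and the input file name are assumptions for illustration and are not taken from this page:

import edu.stanford.nlp.ling.HasWord;
import edu.stanford.nlp.ling.TaggedWord;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.List;

public class TaggingPipelineExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical model path; substitute your own trained model.
        MaxentTagger tagger =
                new MaxentTagger("edu/stanford/nlp/models/pos-tagger/english-left3words-distsim.tagger");

        // tagString tags raw, untokenized text and returns the tagged version as a String.
        System.out.println(tagger.tagString("The quick brown fox jumps over the lazy dog."));

        // tokenizeText reads from the Reader, splits it into sentences with the default
        // Penn Treebank tokenizer, and returns one List of HasWord tokens per sentence.
        try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
            List<List<HasWord>> sentences = MaxentTagger.tokenizeText(reader);
            for (List<HasWord> sentence : sentences) {
                // tagSentence returns a tagged copy of the sentence as TaggedWord objects.
                List<TaggedWord> tagged = tagger.tagSentence(sentence);
                System.out.println(tagged);
            }
        }
    }
}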