What is Stanza?
Stanza is a Python natural language analysis package. It provides tools, usable in a pipeline, that convert a string of human language text into lists of sentences and words, generate the base forms of those words along with their parts of speech and morphological features, produce a syntactic dependency parse, and recognize named entities. The toolkit is designed to work in parallel across more than 70 languages, using the Universal Dependencies formalism.
Stanza is a tool in the NLP / Sentiment Analysis category of a tech stack.
Stanza is an open source tool with 5.6K GitHub stars and 723 GitHub forks; its source repository is hosted on GitHub.
Who uses Stanza?
5 developers on StackShare have stated that they use Stanza.
Some of the features offered by Stanza are:
- Native Python implementation requiring minimal effort to set up
- Full neural network pipeline for robust text analytics, including tokenization, multi-word token (MWT) expansion, lemmatization, part-of-speech (POS) and morphological features tagging, dependency parsing, and named entity recognition
- Pretrained neural models supporting 66 (human) languages
- A stable, officially maintained Python interface to CoreNLP
Stanza Alternatives & Comparisons
What are some alternatives to Stanza?
spaCy is a library for advanced Natural Language Processing in Python and Cython. It is built on the latest research and was designed from day one to be used in real products. It comes with pre-trained statistical models and word vectors, and currently supports tokenization for 49+ languages.
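As a point of comparison with Stanza's pipeline, spaCy's tokenization works even without a trained model, since language-specific tokenizer rules ship with the library itself. A minimal sketch, assuming only that `spacy` is installed; the sample sentence is illustrative.

```python
import spacy

# A blank English pipeline needs no model download:
# it provides rule-based, language-specific tokenization only.
nlp = spacy.blank("en")

doc = nlp("Stanza and spaCy solve similar problems.")
tokens = [token.text for token in doc]
print(tokens)
# → ['Stanza', 'and', 'spaCy', 'solve', 'similar', 'problems', '.']
```

Tagging, parsing, and entity recognition in spaCy require loading a trained model (e.g. `spacy.load("en_core_web_sm")` after downloading it), much as Stanza requires its pretrained neural models.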
prose is a natural language processing library (English only, at the moment) in pure Go. It supports tokenization, segmentation, part-of-speech tagging, and named-entity extraction.
NLTK is a suite of libraries and programs for symbolic and statistical natural language processing of English, written in the Python programming language.
rasa NLU (Natural Language Understanding) is a tool for intent classification and entity extraction. You can think of rasa NLU as a set of high-level APIs for building your own language parser using existing NLP and ML libraries.
Gensim is a Python library for topic modelling, document indexing, and similarity retrieval with large corpora. Its target audience is the natural language processing (NLP) and information retrieval (IR) community.