text_complexity_analyzer_cm.coh_metrix_indices.syntactic_pattern_density_indices
index
/home/hans/Proyectos/Python/TextComplexityAnalyzerCM/text_complexity_analyzer_cm/coh_metrix_indices/syntactic_pattern_density_indices.py

 
Modules
       
multiprocessing
spacy

 
Classes
       
builtins.object
SyntacticPatternDensityIndices

 
class SyntacticPatternDensityIndices(builtins.object)
    SyntacticPatternDensityIndices(nlp, language: str = 'es', descriptive_indices: text_complexity_analyzer_cm.coh_metrix_indices.descriptive_indices.DescriptiveIndices = None) -> None
 
This class handles all operations to find the syntactic pattern density indices of a text according to Coh-Metrix.
 
  Methods defined here:
__init__(self, nlp, language: str = 'es', descriptive_indices: text_complexity_analyzer_cm.coh_metrix_indices.descriptive_indices.DescriptiveIndices = None) -> None
The constructor initializes this object, which calculates the syntactic pattern density indices for one of the available languages.
 
Parameters:
nlp: The spacy model that corresponds to a language.
language(str): The language that the texts to process will have.
descriptive_indices(DescriptiveIndices): The class that calculates the descriptive indices of a text in a certain language.
 
Returns:
None.
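Only the languages listed in ACCEPTED_LANGUAGES (see the Data section below) are supported. A minimal sketch of the language-to-model lookup that mapping implies; the helper name `model_for` is hypothetical and not part of the class API:

```python
# Hypothetical sketch; `model_for` is an illustrative name, not part of
# the SyntacticPatternDensityIndices API. The mapping is copied from the
# ACCEPTED_LANGUAGES constant documented below.
ACCEPTED_LANGUAGES = {'es': 'es_core_news_lg'}

def model_for(language: str) -> str:
    """Return the spacy model name for a supported language."""
    if language not in ACCEPTED_LANGUAGES:
        raise ValueError(f'Language "{language}" is not supported.')
    return ACCEPTED_LANGUAGES[language]
```

With a supported language, the object would then be constructed along the lines of `SyntacticPatternDensityIndices(spacy.load(model_for('es')), language='es')`, assuming the corresponding spacy model is installed.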
get_negation_expressions_density(self, text: str, word_count: int = None, workers: int = -1) -> int
This function obtains the incidence of negation expressions that exist in a text, per {self._incidence} words.
 
Parameters:
text(str): The text to be analyzed.
word_count(int): The number of words in the text.
workers(int): Number of threads that will perform this operation. If it's -1, all CPU cores will be used.
 
Returns:
int: The incidence of negation expressions per {self._incidence} words.
get_noun_phrase_density(self, text: str, word_count: int = None, workers: int = -1) -> int
This function obtains the incidence of noun phrases that exist in a text, per {self._incidence} words.
 
Parameters:
text(str): The text to be analyzed.
word_count(int): The number of words in the text.
workers(int): Number of threads that will perform this operation. If it's -1, all CPU cores will be used.
 
Returns:
int: The incidence of noun phrases per {self._incidence} words.
get_verb_phrase_density(self, text: str, word_count: int = None, workers: int = -1) -> int
This function obtains the incidence of verb phrases that exist in a text, per {self._incidence} words.
 
Parameters:
text(str): The text to be analyzed.
word_count(int): The number of words in the text.
workers(int): Number of threads that will perform this operation. If it's -1, all CPU cores will be used.
 
Returns:
int: The incidence of verb phrases per {self._incidence} words.
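All three methods report an incidence score: a raw pattern count normalized to a fixed window of words ({self._incidence}; Coh-Metrix conventionally reports incidence per 1000 words). A minimal sketch of that normalization, assuming a base of 1000 words; the helper name `incidence_score` is hypothetical:

```python
def incidence_score(count: int, word_count: int, base: int = 1000) -> float:
    """Normalize a raw pattern count to occurrences per `base` words.

    Hypothetical sketch of the normalization the get_* methods report;
    the base of 1000 is the usual Coh-Metrix convention, assumed here.
    """
    return count / word_count * base

# e.g. 12 noun phrases in a 400-word text -> 30.0 per 1000 words
incidence_score(12, 400)
```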

Data descriptors defined here:
__dict__
dictionary for instance variables (if defined)
__weakref__
list of weak references to the object (if defined)

 
Data
        ACCEPTED_LANGUAGES = {'es': 'es_core_news_lg'}
Callable = typing.Callable
List = typing.List