Package Bio :: Module MarkovModel
This is an implementation of a state-emitting MarkovModel. I am using terminology similar to Manning and Schutze.

Functions:
train_bw       Train a markov model using the Baum-Welch algorithm.
train_visible  Train a visible markov model using MLE.
find_states    Find a state sequence that explains some observations.
load           Load a MarkovModel.
save           Save a MarkovModel.

Classes:
MarkovModel    Holds the description of a markov model.
Function Summary

find_states(markov_model, output) -> list of (states, score)
load(handle) -> MarkovModel()
save(mm, handle)
train_bw(states, alphabet, training_data[, pseudo_initial][, pseudo_transition][, pseudo_emission][, update_fn]) -> MarkovModel
train_visible(states, alphabet, training_data[, pseudo_initial][, pseudo_transition][, pseudo_emission]) -> MarkovModel
_argmaxes(vector, allowance)
_backward(N, T, lp_transition, lp_emission, outputs)
_baum_welch(N, M, training_outputs, p_initial, p_transition, p_emission, pseudo_initial, pseudo_transition, pseudo_emission, update_fn)
_baum_welch_one(N, M, outputs, lp_initial, lp_transition, lp_emission, lpseudo_initial, lpseudo_transition, lpseudo_emission)
_copy_and_check(matrix, desired_shape)
_exp_logsum(numbers)
_forward(N, T, lp_initial, lp_transition, lp_emission, outputs)
_logadd(logx, logy)
_logsum(matrix)
_logvecadd(logvec1, logvec2)
_mle(N, M, training_outputs, training_states, pseudo_initial, pseudo_transition, pseudo_emission)
_normalize(matrix)
_random_norm(shape)
_readline_and_check_start(handle, start)
_safe_asarray(a, typecode)
_safe_copy_and_check(matrix, desired_shape)
_safe_log(n)
_uniform_norm(shape)
_viterbi(N, lp_initial, lp_transition, lp_emission, output)
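Several of the private helpers above (_logadd, _logsum, _logvecadd, _exp_logsum) work with log probabilities to avoid numeric underflow on long observation sequences. As an illustration of the idea behind _logadd, here is a minimal log-space addition using the standard log-sum-exp trick; the function name and details are a sketch, not the module's actual code:

```python
import math

def logadd(logx, logy):
    """Return log(x + y) given log(x) and log(y), without underflow.

    Factoring out the larger term keeps the exponentiated quantity
    in [0, 1], so it never underflows to zero prematurely.
    """
    if logy > logx:
        logx, logy = logy, logx  # ensure logx is the larger term
    if logy == float("-inf"):
        return logx  # adding zero probability changes nothing
    return logx + math.log1p(math.exp(logy - logx))
```

For example, logadd(log 0.2, log 0.3) recovers log 0.5 even when the raw probabilities would be far too small to represent directly.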
Function Details
find_states(markov_model, output) -> list of (states, score)
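The summary above lists a _viterbi helper, and find_states returns the most probable state path(s) for an observation sequence. Below is a self-contained sketch of Viterbi decoding in log space on a toy two-state weather model; all names and the model are illustrative assumptions, not taken from the module:

```python
import math

def viterbi(states, log_initial, log_transition, log_emission, outputs):
    """Return (best state path, log score) for an observation sequence."""
    # V[s] = best log score of any path ending in state s so far
    V = {s: log_initial[s] + log_emission[s][outputs[0]] for s in states}
    back = []  # back[t][s] = best predecessor of s at step t
    for obs in outputs[1:]:
        ptr, nextV = {}, {}
        for s in states:
            # Pick the predecessor that maximizes the path score into s
            prev, score = max(
                ((p, V[p] + log_transition[p][s]) for p in states),
                key=lambda ps: ps[1],
            )
            nextV[s] = score + log_emission[s][obs]
            ptr[s] = prev
        V, back = nextV, back + [ptr]
    # Trace back from the best final state
    last = max(V, key=V.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path, V[last]
```

With the classic rainy/sunny toy model (initial [0.6, 0.4], transitions 0.7/0.3 and 0.4/0.6, emissions over walk/shop/clean), decoding ["walk", "shop", "clean"] yields the path ["sunny", "rainy", "rainy"].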
load(handle) -> MarkovModel()
save(mm, handle)
train_bw(states, alphabet, training_data[, pseudo_initial][, pseudo_transition][, pseudo_emission][, update_fn]) -> MarkovModel

Train a MarkovModel using the Baum-Welch algorithm.

states is a list of strings that describe the names of each state. alphabet is a list of objects that indicate the allowed outputs. training_data is a list of observations; each observation is a list of objects from the alphabet.

pseudo_initial, pseudo_transition, and pseudo_emission are optional parameters that you can use to assign pseudo-counts to the different matrices. They should be matrices of the appropriate size that contain numbers to add to each parameter matrix before normalization.

update_fn is an optional callback that takes the parameters (iteration, log_likelihood). It is called once per iteration.
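Baum-Welch alternates a forward-backward pass with parameter re-estimation, and the log_likelihood reported to update_fn is computed by the forward pass. Here is a minimal sketch of the forward algorithm in plain (non-log) probabilities; the function name, list-based matrices, and toy model are illustrative assumptions, not the module's implementation (which, per the helper list, works in log space):

```python
def forward_likelihood(p_initial, p_transition, p_emission, outputs):
    """Probability of an output sequence under an HMM (forward algorithm).

    p_initial[s]       = P(start in state s)
    p_transition[p][s] = P(move from state p to state s)
    p_emission[s][o]   = P(state s emits symbol o)
    outputs            = observed symbol indices
    """
    states = range(len(p_initial))
    # alpha[s] = P(outputs so far, current state = s)
    alpha = [p_initial[s] * p_emission[s][outputs[0]] for s in states]
    for obs in outputs[1:]:
        alpha = [
            sum(alpha[p] * p_transition[p][s] for p in states) * p_emission[s][obs]
            for s in states
        ]
    # Marginalize over the final state to get the sequence likelihood
    return sum(alpha)
```

In a real Baum-Welch loop this likelihood is recomputed each iteration and is guaranteed not to decrease, which is what makes the (iteration, log_likelihood) callback useful for monitoring convergence.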
train_visible(states, alphabet, training_data[, pseudo_initial][, pseudo_transition][, pseudo_emission]) -> MarkovModel

Train a visible MarkovModel using maximum likelihood estimates for each of the parameters.

states is a list of strings that describe the names of each state. alphabet is a list of objects that indicate the allowed outputs. training_data is a list of (outputs, observed states) pairs, where outputs is a list of emissions from the alphabet and observed states is a list of states from states.

pseudo_initial, pseudo_transition, and pseudo_emission are optional parameters that you can use to assign pseudo-counts to the different matrices. They should be matrices of the appropriate size that contain numbers to add to each parameter matrix before normalization.
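Because the states are observed, visible training reduces to counting and normalizing. A simplified sketch of MLE estimation for the transition matrix alone, with an optional uniform pseudo-count; the function name and the uniform fallback for unseen states are assumptions for illustration, not the module's code:

```python
def mle_transition(states, training_data, pseudo_count=0.0):
    """Estimate transition probabilities from observed state sequences.

    training_data is a list of (outputs, observed_states) pairs, as for
    train_visible; only the state sequences are used here. pseudo_count
    is added to every cell before normalization (a simple smoothing
    choice assumed for this sketch).
    """
    counts = {s: {t: pseudo_count for t in states} for s in states}
    for _outputs, observed_states in training_data:
        # Count each consecutive state pair as one transition
        for prev, nxt in zip(observed_states, observed_states[1:]):
            counts[prev][nxt] += 1
    table = {}
    for s in states:
        total = sum(counts[s].values())
        if total == 0:
            # No data and no pseudo-counts for this state:
            # fall back to a uniform row (an assumption of this sketch)
            table[s] = {t: 1.0 / len(states) for t in states}
        else:
            table[s] = {t: counts[s][t] / total for t in states}
    return table
```

The initial and emission matrices follow the same count-and-normalize pattern, counting first states and (state, output) pairs respectively.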
Generated by Epydoc 2.1 on Mon Aug 27 16:13:08 2007 | http://epydoc.sf.net |