Viterbi algorithm
The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states – called the Viterbi path – that results in a sequence of observed events, especially in the context of Markov information sources, and more generally, hidden Markov models. The forward algorithm is a closely related algorithm for computing the probability of a sequence of observed events. These algorithms belong to the realm of probability theory.
The algorithm makes a number of assumptions:
- First, both the observed events and hidden events must be in a sequence. The sequence is often temporal, i.e. in time order of occurrence.
- Second, these two sequences need to be aligned: an instance of an observed event needs to correspond to exactly one instance of a hidden event.
- Third, computing the most likely hidden sequence (which leads to a particular state) up to a certain point t must depend only on the observed event at point t, and the most likely sequence which leads to that state at point t − 1.
These assumptions are all satisfied in a first-order hidden Markov model.
The terms "Viterbi path" and "Viterbi algorithm" are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the "Viterbi parse".
The Viterbi algorithm was conceived by Andrew Viterbi in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links. For more details on the history of the development of the algorithm see David Forney's article.[1] The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
Overview
The assumptions listed above can be elaborated as follows. The Viterbi algorithm operates on a state machine assumption. That is, at any time the system being modeled is in one of a finite number of states. While multiple sequences of states (paths) can lead to a given state, at least one of them is a most likely path to that state, called the "survivor path". This is a fundamental assumption of the algorithm because the algorithm will examine all possible paths leading to a state and only keep the one most likely. This way the algorithm does not have to keep track of all possible paths, only one per state.
A second key assumption is that a transition from a previous state to a new state is marked by an incremental metric, usually a number. This transition is computed from the event. The third key assumption is that the events are cumulative over a path in some sense, usually additive. So the crux of the algorithm is to keep a number for each state. When an event occurs, the algorithm examines moving forward to a new set of states by combining the metric of a possible previous state with the incremental metric of the transition due to the event and chooses the best. The incremental metric associated with an event depends on the transition possibility from the old state to the new state. For example in data communications, it may be possible to only transmit half the symbols from an odd numbered state and the other half from an even numbered state. Additionally, in many cases the state transition graph is not fully connected. A simple example is a car that has three states — forward, stop and reverse — and is not allowed to undergo a transition from forward to reverse without first entering the stop state. After computing the combinations of incremental metric and state metric, only the best survives and all other paths are discarded. There are modifications to the basic algorithm which allow for a forward search in addition to the backwards one described here.
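To make the survivor-path idea concrete, one step of this forward update can be sketched as follows, using additive metrics in the log domain (a minimal illustration, not part of any particular library; the names viterbi_step and path_metric are hypothetical, and all probabilities are assumed to be non-zero):

import math

def viterbi_step(path_metric, states, trans_p, emit_p, event):
    # path_metric maps each state to the metric (log-probability) of its current survivor path.
    # For every new state, combine each previous state's metric with the incremental metric
    # of the transition caused by the observed event, and keep only the best combination.
    new_metric, survivor = {}, {}
    for new_state in states:
        candidates = [
            (path_metric[prev]
             + math.log(trans_p[prev][new_state])
             + math.log(emit_p[new_state][event]), prev)
            for prev in states
        ]
        new_metric[new_state], survivor[new_state] = max(candidates)
    return new_metric, survivor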
Path history must be stored. In some cases, the search history is complete because the state machine at the encoder starts in a known state and there is sufficient memory to keep all the paths. In other cases, a programmatic solution must be found for limited resources: one example is convolutional encoding, where the decoder must truncate the history at a depth large enough to keep performance to an acceptable level. Although the Viterbi algorithm is very efficient and there are modifications that reduce the computational load, the memory requirements tend to remain constant.
Algorithm
Suppose we are given a hidden Markov model (HMM) with state space Y, initial probabilities π_i of being in state i, and transition probabilities a_{i,j} of transitioning from state i to state j. Say we observe outputs x_0, x_1, ..., x_T. The state sequence y_0, y_1, ..., y_T most likely to have produced the observations is given by the recurrence relations:[2]

V_{0,k} = P(x_0 | k) · π_k
V_{t,k} = P(x_t | k) · max_{y ∈ Y} ( a_{y,k} · V_{t−1,y} )
Here V_{t,k} is the probability of the most probable state sequence responsible for the first t + 1 observations (we add one because indexing starts at 0) that has k as its final state. The Viterbi path can be retrieved by saving back pointers that remember which state y was used in the second equation. Let Ptr(k,t) be the function that returns the value of y used to compute V_{t,k} if t > 0, or k if t = 0. Then:

y_T = argmax_{y ∈ Y} V_{T,y}
y_{t−1} = Ptr(y_t, t)
The complexity of this algorithm is O(T × |Y|²).
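A minimal sketch of this recurrence with explicit back pointers follows (assuming the same nested-dictionary model representation used in the example below; the names V and ptr are illustrative, and the worked example later in this article keeps whole paths instead of back pointers):

def viterbi_with_backpointers(obs, states, start_p, trans_p, emit_p):
    # V[t][k]: probability of the best state sequence for obs[0..t] that ends in state k.
    # ptr[t][k]: the state used at time t - 1 on that best sequence, i.e. Ptr(k, t).
    V = [{k: start_p[k] * emit_p[k][obs[0]] for k in states}]
    ptr = [{k: k for k in states}]
    for t in range(1, len(obs)):
        V.append({})
        ptr.append({})
        for k in states:
            best_prev = max(states, key=lambda y: V[t - 1][y] * trans_p[y][k])
            V[t][k] = emit_p[k][obs[t]] * trans_p[best_prev][k] * V[t - 1][best_prev]
            ptr[t][k] = best_prev
    # Follow the back pointers from the most probable final state.
    last = max(states, key=lambda y: V[-1][y])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, ptr[t][path[0]])
    return V[-1][last], path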
Example
Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather where Bob lives, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.
Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).
Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows in the Python programming language:
states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')

start_probability = {'Rainy': 0.6, 'Sunny': 0.4}

transition_probability = {
    'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
    'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}

emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}
In this piece of code, start_probability represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Rainy': 0.57, 'Sunny': 0.43}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The emission_probability represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
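The equilibrium distribution quoted above can be checked numerically by repeatedly applying the transition probabilities until the distribution stops changing (a small illustrative sketch; the helper name stationary_distribution is not part of the example code):

def stationary_distribution(trans_p, states, iterations=100):
    # Start from a uniform distribution and repeatedly push it through the transition matrix.
    dist = {s: 1.0 / len(states) for s in states}
    for _ in range(iterations):
        dist = {s: sum(dist[r] * trans_p[r][s] for r in states) for s in states}
    return dist

print(stationary_distribution(transition_probability, states))
# prints approximately {'Rainy': 0.571, 'Sunny': 0.429}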
Alice talks to Bob three days in a row and discovers that on the first day he went for a walk, on the second day he went shopping, and on the third day he cleaned his apartment. Alice has a question: what is the most likely sequence of rainy/sunny days that would explain these observations? This is answered by the Viterbi algorithm.
# Helps visualize the steps of Viterbi.
def print_dptable(V):
    # Print a table of the dynamic-programming values V[t][state], one column per time step.
    print("       " + " ".join(("%7d" % i) for i in range(len(V))))
    for y in V[0]:
        print("%.5s: " % y + " ".join(("%.7s" % ("%f" % V[t][y])) for t in range(len(V))))

def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{}]
    path = {}

    # Initialize base cases (t == 0)
    for y in states:
        V[0][y] = start_p[y] * emit_p[y][obs[0]]
        path[y] = [y]

    # Run Viterbi for t > 0
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}

        for y in states:
            (prob, state) = max((V[t - 1][y0] * trans_p[y0][y] * emit_p[y][obs[t]], y0)
                                for y0 in states)
            V[t][y] = prob
            newpath[y] = path[state] + [y]

        # Don't need to remember the old paths
        path = newpath

    print_dptable(V)
    (prob, state) = max((V[len(obs) - 1][y], y) for y in states)
    return (prob, path[state])
The function viterbi takes the following arguments: obs is the sequence of observations, e.g. ['walk', 'shop', 'clean']; states is the set of hidden states; start_p is the start probability; trans_p are the transition probabilities; and emit_p are the emission probabilities. For simplicity of code, we assume that the observation sequence obs is non-empty and that trans_p[i][j] and emit_p[i][j] are defined for all states i, j.
In the running example, the Viterbi algorithm is used as follows:

def example():
    return viterbi(observations,
                   states,
                   start_probability,
                   transition_probability,
                   emission_probability)

print(example())
This reveals that the observations ['walk', 'shop', 'clean'] were most likely generated by the state sequence ['Sunny', 'Rainy', 'Rainy'], with probability 0.01344 (this is the joint probability of the state sequence and the observations, so it is not normalized). In other words, given the observed activities, it was most likely sunny when Bob went for a walk, and then it started to rain the next day and kept on raining.
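This value can be checked by hand from the model parameters: P(Sunny) · P(walk | Sunny) · P(Sunny → Rainy) · P(shop | Rainy) · P(Rainy → Rainy) · P(clean | Rainy) = 0.4 × 0.6 × 0.4 × 0.4 × 0.7 × 0.5 = 0.01344.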
The operation of Viterbi's algorithm can be visualized by means of a trellis diagram. The Viterbi path corresponds to the shortest path through this trellis when each transition is weighted by the negative logarithm of its probability. The trellis for the weather example is shown below; the corresponding Viterbi path is in bold:
When implementing Viterbi's algorithm, note that most languages use floating-point arithmetic, and because the path probability is a product of many small probabilities, it may underflow. A common technique to avoid this is to take the logarithm of every probability and work with sums of logarithms throughout the computation, the same technique used in the logarithmic number system. Once the algorithm has terminated, the actual probability can be recovered by exponentiation.
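A minimal sketch of this log-domain variant of the viterbi function above (illustrative only; it assumes that no start, transition, or emission probability in the model is exactly zero):

import math

def viterbi_log(obs, states, start_p, trans_p, emit_p):
    # Same recurrence as viterbi above, but with sums of log-probabilities instead of
    # products of probabilities, which avoids floating-point underflow on long sequences.
    V = [{y: math.log(start_p[y]) + math.log(emit_p[y][obs[0]]) for y in states}]
    path = {y: [y] for y in states}
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}
        for y in states:
            (log_prob, state) = max(
                (V[t - 1][y0] + math.log(trans_p[y0][y]) + math.log(emit_p[y][obs[t]]), y0)
                for y0 in states)
            V[t][y] = log_prob
            newpath[y] = path[state] + [y]
        path = newpath
    (log_prob, state) = max((V[len(obs) - 1][y], y) for y in states)
    # Exponentiate at the end if the raw probability is needed.
    return (math.exp(log_prob), path[state])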
Java implementation
import java.util.Hashtable;

public class Viterbi {
    static final String RAINY = "Rainy";
    static final String SUNNY = "Sunny";
    static final String WALK = "walk";
    static final String SHOP = "shop";
    static final String CLEAN = "clean";

    public static void main(String[] args) {
        String[] states = new String[] {RAINY, SUNNY};
        String[] observations = new String[] {WALK, SHOP, CLEAN};

        Hashtable<String, Float> start_probability = new Hashtable<String, Float>();
        start_probability.put(RAINY, 0.6f);
        start_probability.put(SUNNY, 0.4f);

        // transition_probability
        Hashtable<String, Hashtable<String, Float>> transition_probability =
            new Hashtable<String, Hashtable<String, Float>>();
        Hashtable<String, Float> t1 = new Hashtable<String, Float>();
        t1.put(RAINY, 0.7f);
        t1.put(SUNNY, 0.3f);
        Hashtable<String, Float> t2 = new Hashtable<String, Float>();
        t2.put(RAINY, 0.4f);
        t2.put(SUNNY, 0.6f);
        transition_probability.put(RAINY, t1);
        transition_probability.put(SUNNY, t2);

        // emission_probability
        Hashtable<String, Hashtable<String, Float>> emission_probability =
            new Hashtable<String, Hashtable<String, Float>>();
        Hashtable<String, Float> e1 = new Hashtable<String, Float>();
        e1.put(WALK, 0.1f);
        e1.put(SHOP, 0.4f);
        e1.put(CLEAN, 0.5f);
        Hashtable<String, Float> e2 = new Hashtable<String, Float>();
        e2.put(WALK, 0.6f);
        e2.put(SHOP, 0.3f);
        e2.put(CLEAN, 0.1f);
        emission_probability.put(RAINY, e1);
        emission_probability.put(SUNNY, e2);

        Object[] ret = forward_viterbi(observations, states, start_probability,
                                       transition_probability, emission_probability);
        System.out.println(((Float) ret[0]).floatValue());
        System.out.println((String) ret[1]);
        System.out.println(((Float) ret[2]).floatValue());
    }

    public static Object[] forward_viterbi(String[] obs, String[] states,
            Hashtable<String, Float> start_p,
            Hashtable<String, Hashtable<String, Float>> trans_p,
            Hashtable<String, Hashtable<String, Float>> emit_p) {
        Hashtable<String, Object[]> T = new Hashtable<String, Object[]>();
        for (String state : states)
            T.put(state, new Object[] {start_p.get(state), state, start_p.get(state)});

        for (String output : obs) {
            Hashtable<String, Object[]> U = new Hashtable<String, Object[]>();
            for (String next_state : states) {
                float total = 0;
                String argmax = "";
                float valmax = 0;

                float prob = 1;
                String v_path = "";
                float v_prob = 1;

                for (String source_state : states) {
                    Object[] objs = T.get(source_state);
                    prob = ((Float) objs[0]).floatValue();
                    v_path = (String) objs[1];
                    v_prob = ((Float) objs[2]).floatValue();

                    float p = emit_p.get(source_state).get(output) *
                              trans_p.get(source_state).get(next_state);
                    prob *= p;
                    v_prob *= p;
                    total += prob;
                    if (v_prob > valmax) {
                        argmax = v_path + "," + next_state;
                        valmax = v_prob;
                    }
                }
                U.put(next_state, new Object[] {total, argmax, valmax});
            }
            T = U;
        }

        float total = 0;
        String argmax = "";
        float valmax = 0;

        float prob;
        String v_path;
        float v_prob;

        for (String state : states) {
            Object[] objs = T.get(state);
            prob = ((Float) objs[0]).floatValue();
            v_path = (String) objs[1];
            v_prob = ((Float) objs[2]).floatValue();
            total += prob;
            if (v_prob > valmax) {
                argmax = v_path;
                valmax = v_prob;
            }
        }
        return new Object[] {total, argmax, valmax};
    }
}
C# implementation
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Viterbi
{
    class Program
    {
        //Weather states
        static String RAINY = "Rainy";
        static String SUNNY = "Sunny";

        //Dependable actions (observations)
        static String WALK = "walk";
        static String SHOP = "shop";
        static String CLEAN = "clean";

        static void Main(string[] args)
        {
            //initialize our arrays of states and observations
            String[] states = { RAINY, SUNNY };
            String[] observations = { WALK, SHOP, CLEAN };

            var start_probability = new Dictionary<String, float>();
            start_probability.Add(RAINY, 0.6f);
            start_probability.Add(SUNNY, 0.4f);

            //Transition probability
            var transition_probability = new Dictionary<String, Dictionary<String, float>>();
            var t1 = new Dictionary<String, float>();
            t1.Add(RAINY, 0.7f);
            t1.Add(SUNNY, 0.3f);
            Dictionary<String, float> t2 = new Dictionary<String, float>();
            t2.Add(RAINY, 0.4f);
            t2.Add(SUNNY, 0.6f);
            transition_probability.Add(RAINY, t1);
            transition_probability.Add(SUNNY, t2);

            //emission_probability
            var emission_probability = new Dictionary<String, Dictionary<String, float>>();
            var e1 = new Dictionary<String, float>();
            e1.Add(WALK, 0.1f);
            e1.Add(SHOP, 0.4f);
            e1.Add(CLEAN, 0.5f);
            Dictionary<String, float> e2 = new Dictionary<String, float>();
            e2.Add(WALK, 0.6f);
            e2.Add(SHOP, 0.3f);
            e2.Add(CLEAN, 0.1f);
            emission_probability.Add(RAINY, e1);
            emission_probability.Add(SUNNY, e2);

            Object[] ret = forward_viterbi(observations, states, start_probability,
                                           transition_probability, emission_probability);
            Console.WriteLine((float)ret[0]);
            Console.WriteLine((String)ret[1]);
            Console.WriteLine((float)ret[2]);
            Console.ReadLine();
        }

        public static Object[] forward_viterbi(String[] obs, String[] states,
            Dictionary<String, float> start_p,
            Dictionary<String, Dictionary<String, float>> trans_p,
            Dictionary<String, Dictionary<String, float>> emit_p)
        {
            var T = new Dictionary<String, Object[]>();
            foreach (String state in states)
            {
                T.Add(state, new Object[] { start_p[state], state, start_p[state] });
            }

            foreach (String output in obs)
            {
                var U = new Dictionary<String, Object[]>();
                foreach (String next_state in states)
                {
                    float total = 0;
                    String argmax = "";
                    float valmax = 0;

                    float prob = 1;
                    String v_path = "";
                    float v_prob = 1;

                    foreach (String source_state in states)
                    {
                        Object[] objs = T[source_state];
                        prob = ((float)objs[0]);
                        v_path = (String)objs[1];
                        v_prob = ((float)objs[2]);

                        float p = emit_p[source_state][output] * trans_p[source_state][next_state];
                        prob *= p;
                        v_prob *= p;
                        total += prob;
                        if (v_prob > valmax)
                        {
                            argmax = v_path + "," + next_state;
                            valmax = v_prob;
                        }
                    }
                    U.Add(next_state, new Object[] { total, argmax, valmax });
                }
                T = U;
            }

            float xtotal = 0;
            String xargmax = "";
            float xvalmax = 0;

            float xprob;
            String xv_path;
            float xv_prob;

            foreach (String state in states)
            {
                Object[] objs = T[state];
                xprob = ((float)objs[0]);
                xv_path = ((String)objs[1]);
                xv_prob = ((float)objs[2]);
                xtotal += xprob;
                if (xv_prob > xvalmax)
                {
                    xargmax = xv_path;
                    xvalmax = xv_prob;
                }
            }
            return new Object[] { xtotal, xargmax, xvalmax };
        }
    }
}
Extensions
With the algorithm called iterative Viterbi decoding one can find the subsequence of an observation that matches best (on average) to a given HMM. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence.
An alternative algorithm, the Lazy Viterbi algorithm, has also been proposed.[3] It does not expand any node until it actually needs to, and in software it usually does considerably less work than the ordinary Viterbi algorithm for the same result; it is, however, harder to parallelize in hardware.
The Viterbi algorithm has also been extended to operate with a deterministic finite automaton in order to generate the trellis quickly, with state transitions pointing back at a variable amount of history.[4]
See also
- Baum–Welch algorithm
- Forward-backward algorithm
- Error-correcting code
- Soft output Viterbi algorithm
- Viterbi decoder
- Hidden Markov model
- Part-of-speech tagging
Notes
- ^ Forney, G. David Jr. (29 April 2005). The Viterbi Algorithm: A Personal History.
- ^ Xing E, slide 11
- ^ "A fast maximum-likelihood decoder for convolutional codes" (PDF). Vehicular Technology Conference. December 2002. pp. 371–375. doi:10.1109/VETECF.2002.1040367. http://people.csail.mit.edu/jonfeld/pubs/lazyviterbi.pdf.
- ^ Luk, R.W.P.; R.I. Damper (1998). "Computational complexity of a fast Viterbi decoding algorithm for stochastic letter-phoneme transduction". IEEE Trans. Speech and Audio Processing 6 (3): 217–225. doi:10.1109/89.668816.
References
- Viterbi AJ (April 1967). "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm". IEEE Transactions on Information Theory 13 (2): 260–269. doi:10.1109/TIT.1967.1054010. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1054010. (note: the Viterbi decoding algorithm is described in section IV.) Subscription required.
- Feldman J, Abou-Faycal I, Frigo M (2002). "A Fast Maximum-Likelihood Decoder for Convolutional Codes". Vehicular Technology Conference 1: 371–375. doi:10.1109/VETECF.2002.1040367.
- Forney GD (March 1973). "The Viterbi algorithm". Proceedings of the IEEE 61 (3): 268–278. doi:10.1109/PROC.1973.9030. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1450960. Subscription required.
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.2. Viterbi Decoding". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. http://apps.nrbook.com/empanel/index.html#pg=850.
- Rabiner LR (February 1989). "A tutorial on hidden Markov models and selected applications in speech recognition". Proceedings of the IEEE 77 (2): 257–286. doi:10.1109/5.18626. (Describes the forward algorithm and Viterbi algorithm for HMMs).
- Shinghal, R. and Godfried T. Toussaint, "Experiments in text recognition with the modified Viterbi algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-1, April 1979, pp. 184-193.
- Shinghal, R. and Godfried T. Toussaint, "The sensitivity of the modified Viterbi algorithm to the source statistics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-2, March 1980, pp. 181-185.
Implementations
- C and assembly
- C
- C++
- C++ and Boost by Antonio Gulli
- C#
- Java
- Perl
- Prolog
- Python