2. The BSS/MBD Problems
The blind source separation task. Assume that there exist $n$ zero-mean source signals $s_1(t), \ldots, s_n(t)$ that are scalar valued and mutually (spatially) statistically independent (or as independent as possible) at each time instant or index value $t$, where $n$ is the number of sources (Amari, Douglas, Cichocki, & Yang, 1997; Cichocki & Amari, 2002). Denote by $\mathbf{x}(t) = [x_1(t), \ldots, x_m(t)]^T$ the $m$-dimensional mixture data vector at discrete index value (time) $t$. The blind source separation (BSS) mixing model is

$$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{N}(t), \qquad (1)$$

where $\mathbf{A}$ is the unknown $m \times n$ mixing matrix, $\mathbf{s}(t) = [s_1(t), \ldots, s_n(t)]^T$ is the source vector, and $\mathbf{N}(t)$ is the noise signal.

A well-known iterative optimization method is the stochastic gradient (or gradient descent) search (Zeckhauser & Thompson, 1970). In this method the basic task is to define a criterion $J(\mathbf{W})$ that attains its minimum at some $\mathbf{W} = \mathbf{W}^*$, where $\mathbf{W}^*$ is the expected optimum solution. Applying the natural gradient descent approach (Amari, Douglas, Cichocki, & Yang, 1997; Cichocki & Amari, 2002) with a cost function based on the Kullback-Leibler divergence, we may derive the learning rule for BSS:
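The natural gradient update commonly derived from this Kullback-Leibler cost (Amari, Douglas, Cichocki, & Yang, 1997) has the form $\mathbf{W} \leftarrow \mathbf{W} + \eta\,(\mathbf{I} - f(\mathbf{y})\mathbf{y}^T)\,\mathbf{W}$ with $\mathbf{y} = \mathbf{W}\mathbf{x}$. The sketch below simulates the mixing model of Equation (1) and runs batch-averaged updates of that rule; the mixing matrix, source distributions, noise level, step size, and nonlinearity $f(y) = \tanh(y)$ are illustrative assumptions, not values taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Mixing model of Eq. (1): x(t) = A s(t) + N(t) ---
n, T = 2, 20000
s = rng.laplace(size=(n, T))             # zero-mean, independent, super-Gaussian sources
s /= s.std(axis=1, keepdims=True)        # unit variance for convenience
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])               # hypothetical mixing matrix (assumed, square case m = n)
x = A @ s + 0.01 * rng.standard_normal((n, T))  # observed mixtures with small noise N(t)

# --- Natural gradient learning rule: W <- W + eta * (I - f(y) y^T) W ---
# f(y) = tanh(y) is a common choice for super-Gaussian sources;
# the expectation E[f(y) y^T] is approximated by a batch average over T samples.
W = np.eye(n)
eta = 0.02
for _ in range(2000):
    y = W @ x
    W += eta * (np.eye(n) - np.tanh(y) @ y.T / T) @ W

# Success criterion in BSS: P = W A should approach a scaled permutation matrix,
# i.e. each row of P is dominated by a single entry.
P = W @ A
print(np.round(P, 2))
```

Up to the inherent scaling and permutation ambiguity of BSS, each recovered component $y_i(t)$ then matches one source $s_j(t)$, which is why $\mathbf{W}\mathbf{A}$, rather than $\mathbf{W}\mathbf{A} = \mathbf{I}$ exactly, is the quantity checked.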