GLASSO-SF
is a reweighted $\ell_1$ regularization method that modifies GLASSO to improve estimation performance on scale-free networks. GLASSO-SF replaces the $\ell_1$-norm penalty of the existing methods with the power-law regularization
$$p_{\lambda, \gamma}(\Omega) = \lambda \sum_{i=1}^{p}\log\left(\left\| \omega_{-i}\right\|_{1} + \epsilon_{i} \right) + \gamma\sum_{i=1}^{p}\left| \omega_{ii} \right|,$$
where $\lambda$ and $\gamma$ are nonnegative tuning parameters,
$\omega_{-i} = \{\omega_{ij} | j \ne i\}, \left\| \omega_{-i}\right\|_{1} = \sum_{j\neq i}\left| \omega_{ij}\right|$,
and $\epsilon_i$ is a small positive number for $i = 1, 2, \ldots, p$.
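As a concrete illustration, the penalty can be evaluated in a few lines of NumPy. This is a minimal sketch under our own naming (it is not taken from a published implementation): `Omega` is the $p \times p$ precision matrix and `eps` holds the constants $\epsilon_1, \ldots, \epsilon_p$.

```python
import numpy as np

def power_law_penalty(Omega, lam, gamma, eps):
    """Evaluate p_{lambda,gamma}(Omega): a log (power-law) penalty on the
    off-diagonal row sums plus an l1 penalty on the diagonal."""
    A = np.abs(np.asarray(Omega, dtype=float))
    diag = np.diag(A)                  # |omega_ii|
    off_row_l1 = A.sum(axis=1) - diag  # ||omega_{-i}||_1 = sum_{j != i} |omega_ij|
    return lam * np.sum(np.log(off_row_l1 + eps)) + gamma * np.sum(diag)

# Toy usage on a random symmetric matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Omega = (M + M.T) / 2
print(power_law_penalty(Omega, lam=0.1, gamma=0.05, eps=1e-4 * np.ones(5)))
```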
GLASSO-SF then optimizes the objective function
$$f(\Omega; X, \lambda, \gamma) = L(X, \Omega) + u_{L} \cdot p_{\lambda,\gamma}(\Omega),$$
where $L(X, \Omega)$ denotes the objective function of the existing method without its penalty terms, $u_L = 1$ if $L$ is convex in $\Omega$, and $u_L = -1$ if $L$ is concave in $\Omega$. The choice of $L$ is flexible. For instance, $L(X, \Omega)$ can be the log-likelihood function of $\Omega$, as in the graphical lasso, or the squared loss function, as in NS and SPACE.
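To make the role of $u_L$ concrete, the sketch below assembles $f$ for the graphical-lasso choice of $L$, reusing `power_law_penalty` and the import from the sketch above; since this $L$ is concave in $\Omega$, we set $u_L = -1$. Again, the function names are ours.

```python
def gaussian_loglik(S, Omega):
    """Graphical-lasso choice of L(X, Omega): log det(Omega) - tr(S Omega),
    where S is the sample covariance of X (constants dropped).
    Omega is assumed positive definite."""
    sign, logdet = np.linalg.slogdet(Omega)
    return logdet - np.trace(S @ Omega)

def objective(S, Omega, lam, gamma, eps):
    """f(Omega; X, lambda, gamma) = L + u_L * p with u_L = -1 (L concave)."""
    return gaussian_loglik(S, Omega) - power_law_penalty(Omega, lam, gamma, eps)
```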
To obtain the maximizer of $f(\Omega; X, \lambda, \gamma)$, GLASSO-SF employs an iteratively reweighted $\ell_1$ regularization procedure based on the minorization-maximization (MM) algorithm, which solves the following problem at each iteration:
$$\Omega^{(k+1)} = \underset{\Omega}{\arg\max}\; L(X, \Omega) - \sum_{i=1}^{p}\sum_{j \ne i}\eta^{(k)}_{ij}|\omega_{ij}| - \gamma\sum_{i=1}^{p}|\omega_{ii}|,$$
where $\Omega^{(k)} = (\omega^{(k)}_{ij})$ is the estimate at the $k$th iteration,
$\|\omega^{(k)}_{-i}\|_1 = \sum_{l \ne i}|\omega^{(k)}_{il}|$, and
$\eta^{(k)}_{ij} = \lambda\left(1/(\|\omega^{(k)}_{-i}\|_1 + \epsilon_i) + 1/(\|\omega^{(k)}_{-j}\|_1 + \epsilon_j)\right)$.
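In code, the weight update is a simple row-sum computation. The sketch below (NumPy again, names of our choosing) builds the full weight matrix $(\eta^{(k)}_{ij})$ from the current iterate:

```python
def reweight(Omega_k, lam, eps):
    """Compute the MM weights eta^{(k)}_{ij} from the current iterate."""
    A = np.abs(np.asarray(Omega_k, dtype=float))
    off_row_l1 = A.sum(axis=1) - np.diag(A)    # ||omega^{(k)}_{-i}||_1
    inv = 1.0 / (off_row_l1 + eps)
    Eta = lam * (inv[:, None] + inv[None, :])  # eta_ij = lam * (inv_i + inv_j)
    np.fill_diagonal(Eta, 0.0)                 # diagonal entries are penalized by gamma instead
    return Eta
```

Each MM step would then hand this weight matrix to a graphical-lasso-type solver that accepts an entrywise penalty matrix (for example, a QUIC-style solver); note that scikit-learn's `GraphicalLasso` takes only a scalar penalty, so it does not cover this weighted step directly.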
References:
1. Yu, Donghyeon, Johan Lim, Xinlei Wang, Faming Liang, and Guanghua Xiao. "Enhanced construction of gene regulatory networks using hub gene information." BMC Bioinformatics 18.1 (2017): 186.
2. Liu, Qiang, and Alexander Ihler. "Learning scale free networks by reweighted l1 regularization." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (2011): 40-48.
Note:
Change the $\lambda$ value ($\lambda > 0$) to control the sparsity of the network: the larger the $\lambda$, the sparser the constructed network. If you are unsure how to choose a value, use the default one.