Abstract

We suggest a method for estimating a covariance matrix on the basis of a sample of vectors drawn from a multivariate normal distribution. In particular, we penalize the likelihood with a lasso penalty on the entries of the covariance matrix. This penalty plays two important roles: it reduces the effective number of parameters, which matters even when the dimension of the vectors is smaller than the sample size because the number of parameters grows quadratically in the number of variables, and it produces an estimate that is sparse. In contrast to sparse inverse covariance estimation, our method's close relative, the sparsity attained here is in the covariance matrix itself rather than in its inverse. Zeros in the covariance matrix correspond to marginal independencies; thus, our method performs model selection while providing a positive definite estimate of the covariance. The proposed penalized maximum likelihood problem is not convex, so we use a majorize-minimize approach in which we iteratively solve convex approximations to the original nonconvex problem. We discuss tuning parameter selection and demonstrate on a flow-cytometry dataset how our method produces an interpretable graphical display of the relationships between variables. Our simulations suggest that simple elementwise thresholding of the empirical covariance matrix is competitive with our method for identifying the sparsity structure. Additionally, we show how our method can be used to solve a previously studied special case in which a desired sparsity pattern is prespecified.
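For concreteness, the penalized problem and the majorize-minimize step described above can be written as follows. This is a sketch consistent with the abstract, with S denoting the empirical covariance matrix and an unweighted penalty assumed for simplicity (the paper may allow entrywise weights on the penalty):

\hat{\Sigma} = \arg\min_{\Sigma \succ 0} \; \log\det\Sigma + \mathrm{tr}(\Sigma^{-1} S) + \lambda \sum_{j,k} |\Sigma_{jk}|.

The log-determinant term is concave in \Sigma, so a majorize-minimize iteration replaces it by its tangent at the current iterate \Sigma_t, yielding the convex surrogate

\Sigma_{t+1} = \arg\min_{\Sigma \succ 0} \; \mathrm{tr}(\Sigma_t^{-1}\Sigma) + \mathrm{tr}(\Sigma^{-1} S) + \lambda \sum_{j,k} |\Sigma_{jk}|,

and the standard majorize-minimize argument guarantees that each such step does not increase the original nonconvex objective.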
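The elementwise thresholding baseline mentioned in the abstract is simple enough to sketch directly. The Python snippet below is a minimal illustration, assuming hard thresholding of the off-diagonal entries; the function name, threshold value, and data are ours for illustration, not the paper's:

import numpy as np

def threshold_covariance(X, t):
    # Elementwise hard thresholding of the empirical covariance matrix.
    # Off-diagonal entries with magnitude below t are set to zero; the
    # diagonal (the variances) is left intact.
    S = np.cov(X, rowvar=False)   # empirical covariance, variables in columns
    mask = np.abs(S) >= t         # keep entries with magnitude at least t
    np.fill_diagonal(mask, True)  # never threshold the diagonal
    return np.where(mask, S, 0.0)

# Illustrative usage: 200 samples of 10 variables.
X = np.random.default_rng(0).normal(size=(200, 10))
S_thresholded = threshold_covariance(X, t=0.1)

Note that, unlike the penalized likelihood estimator, a thresholded matrix is not guaranteed to be positive definite; this is one advantage the abstract claims for the proposed method.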
