### Citations

12375 | Elements of Information Theory - Cover, Thomas - 1991 |

7413 | Convex Optimization - Boyd, Vandenberghe - 2004 |
Citation Context: ... 2. Suppose each $q_i^{(j)}$ is a concave, non-decreasing, and positive function. Then the objective function $\sum_{j=1}^d \frac{1}{\sum_{i \in I_j} q_i^{(j)}(r_i)}$ of (11) is convex. Proof: We use well-known rules of composition [BV04]. The functions $\frac{1}{\sum_{i \in I_j} q_i^{(j)}(r_i)}$ are convex, as the function $y \mapsto 1/y$ is non-increasing and convex on $\mathbb{R}_+$, and the sum of the $q_i^{(j)}$'s is concave (as they are individually concave). The objective f... |
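The composition rule quoted in this context can be sanity-checked numerically. A minimal sketch, assuming one group $I_j$ with two sources and the illustrative choice $q_i(r) = \sqrt{r}$ (concave, non-decreasing, positive on $r > 0$); the check is midpoint convexity of the resulting objective:

```python
import math

def objective(r):
    # One group with two sources; q_i(r) = sqrt(r) is concave,
    # non-decreasing, and positive on r > 0, so by the composition
    # rule [BV04] the reciprocal of the sum should be convex.
    return 1.0 / (math.sqrt(r[0]) + math.sqrt(r[1]))

a, b = (1.0, 4.0), (9.0, 1.0)
mid = tuple((x + y) / 2 for x, y in zip(a, b))
# Midpoint convexity: f(mid) <= (f(a) + f(b)) / 2
assert objective(mid) <= (objective(a) + objective(b)) / 2
```

This only spot-checks one pair of points, of course; the proof in the context covers the general case.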

3645 | Bagging predictors - Breiman - 1996 |

2094 | Information Theory and Reliable Communication - Gallager - 1968 |

1257 | The Hungarian method for the assignment problem - Kuhn - 1955 |
Citation Context: ... assignment problem (and it is also a special case of the optimal transport problem), and it can be solved efficiently using several methods, e.g., by linear programming or by the Hungarian algorithm [Kuh55]. In the linear programming approach, one considers the convex hull of the set of $N \times N$ permutation matrices, which gives an equivalent optimization problem to (7) in terms of the Birkhoff polytope $B_N$ ... |
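The assignment problem in this context can be sketched in a few lines. The Hungarian algorithm [Kuh55] solves it in $O(N^3)$; the brute-force enumeration below is only an illustration for tiny $N$, and the cost matrix is made up:

```python
from itertools import permutations

def assign_brute_force(C):
    # Minimize sum_i C[i][perm[i]] over all permutations of {0,...,n-1}.
    # Exhaustive search: O(n!) -- illustration only; the Hungarian
    # algorithm solves the same problem in O(n^3).
    n = len(C)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(C[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

C = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
perm, cost = assign_brute_force(C)  # row i is assigned to column perm[i]
# -> perm == (1, 0, 2), cost == 5
```

The linear-programming view mentioned in the context replaces the permutation matrices by their convex hull (the Birkhoff polytope), whose vertices are exactly the permutation matrices.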

841 | Privacy-Preserving Data Mining - Agrawal, Srikant |

204 | Adaptive estimation of a quadratic functional by model selection - Laurent, Massart |

166 | On the likelihood that one unknown probability exceeds another in view of the evidence of two samples - Thompson - 1933 |
Citation Context: ... our case, the sources) is unknown and the processing / aggregation is done in an online fashion as the data are acquired (see [BCB12] for more information on this problem, which was first studied by [Tho33]). In comparison, in our setting the quality of a source as a function of resources allocated to the source is assumed to be known in advance, and the resource allocation optimization problem (1) is s... |

142 | Aggregation for Gaussian regression - Bunea, Tsybakov, et al. |

128 | Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems - Bubeck, Cesa-Bianchi - 2012 |
Citation Context: ...earning, in which the quality / performance of each “arm” of a bandit (in our case, the sources) is unknown and the processing / aggregation is done in an online fashion as the data are acquired (see [BCB12] for more information on this problem, which was first studied by [Tho33]). In comparison, in our setting the quality of a source as a function of resources allocated to the source is assumed to be kn... |

76 | Stacked generalization, Neural Networks 5 - Wolpert - 1992 |

45 | Computational and statistical tradeoffs via convex relaxation. - Chandrasekaran, Jordan - 2013 |

36 | Digital communication over fixed time-continuous channels with memory, with special application to telephone channels - Holsinger - 1964 |
Citation Context: ...varying capacities for the purpose of maximizing overall throughput [Gal68, CT91]. In this case, the optimal allocation of power to the different channels is given by the famous water-filling formula [Hol64]. In the area of sensor resource management, the problem of optimal sensor placement can also be viewed from the perspective of resource allocation [HCCK07]. Our setup is different from that of bandit... |
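The water-filling formula referenced in this context [Hol64, Gal68] allocates power $p_i = \max(0, \nu - n_i)$ to channel $i$ with noise level $n_i$, where the "water level" $\nu$ is chosen so the allocations exhaust the budget. A minimal sketch with bisection on $\nu$; the noise levels are illustrative:

```python
def water_filling(noise, total_power, iters=100):
    # Find the water level nu with sum_i max(0, nu - n_i) = total_power
    # by bisection, then return the per-channel powers.
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):
        nu = (lo + hi) / 2
        used = sum(max(0.0, nu - n) for n in noise)
        if used > total_power:
            hi = nu
        else:
            lo = nu
    return [max(0.0, nu - n) for n in noise]

powers = water_filling([1.0, 2.0, 3.0], total_power=3.0)
# quieter channels get more power; here powers is approximately [2, 1, 0]
```

The name comes from the picture: pouring a fixed amount of "water" (power) over a terrain whose height at channel $i$ is the noise level $n_i$.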

31 | Complexity theoretic lower bounds for sparse principal component detection. - Berthet, Rigollet - 2013 |

27 | Communication-efficient algorithms for statistical optimization. - Zhang, Duchi, et al. - 2013 |

24 | Nonsmooth analysis of singular values - Lewis, Sendov - 2005 |
Citation Context: ...an squared error $\mathbb{E}[\|\hat{\theta} - \theta\|_2^2]$ is equivalent to solving the following convex optimization problem: minimize $\mathrm{Tr}((X^\top P X)^{-1})$ subject to $P \in \mathcal{P}$. (16) Proof: The map $M \mapsto \mathrm{Tr}(M^{-1})$ is convex over the set $\mathbb{S}^N_+$ [LS05], and the map $P \mapsto X^\top P X$ is a linear map. Consequently, the map $P \mapsto \mathrm{Tr}((X^\top P X)^{-1})$ is convex. Other measures of the performance of the estimator (15) may also be of interest. For instance, if the focus... |
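The convexity claim in this context, that $M \mapsto \mathrm{Tr}(M^{-1})$ is convex on positive definite matrices [LS05], can be spot-checked numerically. A minimal sketch with two illustrative $2 \times 2$ positive definite matrices and a hand-rolled inverse trace:

```python
def tr_inv_2x2(m):
    # Tr(M^{-1}) for a 2x2 matrix: the adjugate is [[d, -b], [-c, a]],
    # so Tr(M^{-1}) = (a + d) / det(M).
    (a, b), (c, d) = m
    det = a * d - b * c
    return (a + d) / det

A = [[2.0, 0.5], [0.5, 1.0]]    # positive definite (illustrative)
B = [[1.0, -0.3], [-0.3, 3.0]]  # positive definite (illustrative)
M = [[(A[i][j] + B[i][j]) / 2 for j in range(2)] for i in range(2)]
# Midpoint convexity: Tr(((A+B)/2)^{-1}) <= (Tr(A^{-1}) + Tr(B^{-1})) / 2
assert tr_inv_2x2(M) <= (tr_inv_2x2(A) + tr_inv_2x2(B)) / 2
```

This checks one pair of matrices only; the cited reference gives the general argument.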

18 | Computational sample complexity and attribute-efficient learning. - Servedio - 2000 |

16 | Incoherence-optimal matrix completion. - Chen - 2015 |

15 | Kullback-Leibler aggregation and misspecified generalized linear models - Rigollet |

14 | Computational barriers in minimax submatrix detection, arXiv preprint arXiv:1309.5914 - Ma, Wu - 2013 |

13 | On the complexity of random satisfiability problems with planted solutions. - Feldman, Perkins, et al. - 2015 |

13 | Foundations and Applications of Sensor Management - Hero, Castañón, et al. - 2007 |
Citation Context: ...is given by the famous water-filling formula [Hol64]. In the area of sensor resource management, the problem of optimal sensor placement can also be viewed from the perspective of resource allocation [HCCK07]. Our setup is different from that of bandit problems in online learning, in which the quality / performance of each “arm” of a bandit (in our case, the sources) is unknown and the processing / aggreg... |

11 | Computational sample complexity. - Decatur, Goldreich, et al. - 1997 |

9 | Approaches to Learning - JORDAN, CARLILE, et al. - 2008 |

9 | Statistical algorithms and a lower bound for planted clique - Feldman, Grigorescu, et al. - 2013 |

8 | Using more data to speed-up training time. - Shalev-Shwartz, Shamir, et al. - 2012 |

6 | A deterministic polynomial-time approximation scheme for counting knapsack solutions - Stefankovic, Vempala, et al. - 2012 |
Citation Context: ...ficult problem in general – specifically, this question is related to the well-known intractable problem of counting the number of vertices of the hypercube that lie on one side of a given hyperplane [SVV12]. However, it is possible to obtain further useful upper bounds through Bernstein’s inequality, which yields $\Pr_{r_1,\dots,r_d}\left(\langle \theta - \hat{\theta}, c \rangle \ge t\right) \le \exp\!\left(-\frac{(t - \beta(r))^2/2}{\sum_{i=1}^d c_i^2 \ell_i(r_i) + \|c\|_\infty (t - \beta(r))/3}\right)$. ... |
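The Bernstein-type tail bound quoted in this context is straightforward to evaluate. A minimal sketch; the symbols ($\beta(r)$, $c$, $\ell_i(r_i)$) follow the snippet, and the numeric values are illustrative:

```python
import math

def bernstein_bound(t, beta_r, c, ell):
    # exp( -((t - beta)^2 / 2) / (sum_i c_i^2 ell_i + ||c||_inf (t - beta)/3) )
    num = (t - beta_r) ** 2 / 2
    den = sum(ci * ci * li for ci, li in zip(c, ell)) \
        + max(abs(ci) for ci in c) * (t - beta_r) / 3
    return math.exp(-num / den)

b = bernstein_bound(t=2.0, beta_r=0.5, c=[1.0, 0.5], ell=[0.2, 0.4])
# b is a valid probability bound (0 < b <= 1) and shrinks as t grows
```

As usual for Bernstein bounds, the denominator mixes a variance-like term ($\sum_i c_i^2 \ell_i(r_i)$) with a range term ($\|c\|_\infty (t - \beta(r))/3$), so the tail is sub-Gaussian for moderate $t$ and sub-exponential for large $t$.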

6 | Statistical and computational trade-offs in estimation of sparse principal components - Wang, Berthet, et al. - 2015 |

3 | Support recovery via weighted maximum-contrast subagging. Arxiv preprint arXiv:1306.3494v3 - Bradić - 2014 |

2 | Magging: maximin aggregation for inhomogeneous large-scale data - Bühlmann, Meinshausen - 2014 |

1 | Stacked regressions, Machine Learning 24 - Breiman - 1996 |