
Publications by Bruno Serra Loff

2013

Towards a Reverse Newman's Theorem in Interactive Information Complexity

Authors
Brody, J; Buhrman, H; Koucký, M; Loff, B; Speelman, F; Vereshchagin, NK;

Publication
Proceedings of the 28th Conference on Computational Complexity, CCC 2013, Palo Alto, California, USA, 5-7 June, 2013

Abstract
Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a little increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that only uses public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct sum theorems through the compression of interactive communication in the bounded-round setting. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result. © 2013 IEEE.

2013

Learning Reductions to Sparse Sets

Authors
Buhrman, H; Fortnow, L; Hitchcock, JM; Loff, B;

Publication
Mathematical Foundations of Computer Science 2013 - 38th International Symposium, MFCS 2013, Klosterneuburg, Austria, August 26-30, 2013. Proceedings

Abstract
We study the consequences of NP having non-uniform polynomial-size circuits of various types. We continue the work of Agrawal and Arvind [1], who study the consequences of SAT being many-one reducible to functions computable by non-uniform circuits consisting of a single weighted threshold gate (SAT ≤_m^p LT_1). They claim that P = NP follows as a consequence, but unfortunately their proof was incorrect. We take up this question and use results from computational learning theory to show that if SAT ≤_m^p LT_1 then PH = P^NP. We furthermore show that if SAT disjunctive truth-table (or majority truth-table) reduces to a sparse set, then SAT ≤_m^p LT_1, and hence a collapse of PH to P^NP also follows. Lastly we show several interesting consequences of SAT ≤_dtt^p SPARSE. © 2013 Springer-Verlag.

2017

Lower Bounds for Elimination via Weak Regularity

Authors
Chattopadhyay, A; Dvorák, P; Koucký, M; Loff, B; Mukhopadhyay, S;

Publication
34th Symposium on Theoretical Aspects of Computer Science, STACS 2017, March 8-11, 2017, Hannover, Germany

Abstract
We consider the problem of elimination in communication complexity, which was first raised by Ambainis et al. [1] and later studied by Beimel et al. [4] for its connection to the famous direct sum question. In this problem, let f: {0,1}^{2n} → {0,1} be any Boolean function. Alice and Bob get k inputs x_1, …, x_k and y_1, …, y_k respectively, with x_i, y_i ∈ {0,1}^n. They want to output a k-bit vector v such that there exists one index i for which v_i ≠ f(x_i, y_i). We prove a general result lower bounding the randomized communication complexity of the elimination problem for f using its discrepancy. Consequently, we obtain strong lower bounds for the functions Inner-Product and Greater-Than, that work for exponentially larger values of k than the best previous bounds. To prove our result, we use a pseudo-random notion called regularity that was first used by Raz and Wigderson [19]. We show that functions with small discrepancy are regular. We also observe that a weaker notion, that we call weak-regularity, already implies hardness of elimination. Finally, we give a different proof, borrowing ideas from Viola [23], to show that Greater-Than is weakly regular. © Arkadev Chattopadhyay, Pavel Dvorák, Michal Koucký, Bruno Loff, and Sagnik Mukhopadhyay.
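The elimination task in this abstract has a simple formal check: a k-bit output v is valid exactly when it differs from the true answer vector (f(x_1, y_1), …, f(x_k, y_k)) in at least one coordinate. A minimal Python sketch of that check, with inner product mod 2 standing in for f (all names here are illustrative, not from the paper):

```python
# Checker for the elimination task: v is a valid output iff it is NOT
# equal to the full answer vector (f(x_1, y_1), ..., f(x_k, y_k)).

def inner_product(x, y):
    """f(x, y) = <x, y> mod 2, on equal-length bit tuples."""
    return sum(a * b for a, b in zip(x, y)) % 2

def solves_elimination(f, xs, ys, v):
    """True iff v differs from (f(x_i, y_i))_i in at least one index."""
    return any(v[i] != f(xs[i], ys[i]) for i in range(len(v)))

xs = [(1, 0), (1, 1)]       # Alice's k = 2 inputs, n = 2 bits each
ys = [(1, 1), (1, 1)]       # Bob's k = 2 inputs
answers = [inner_product(x, y) for x, y in zip(xs, ys)]
```

Any vector other than `answers` itself is a valid output, which is why elimination is so much weaker a task than computing f on all k instances, and why lower bounds for it are interesting.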

2014

Reductions to the set of random strings: The resource-bounded case

Authors
Allender, E; Buhrman, H; Friedman, L; Loff, B;

Publication
Logical Methods in Computer Science

Abstract
This paper is motivated by a conjecture [1,5] that BPP can be characterized in terms of polynomial-time nonadaptive reductions to the set of Kolmogorov-random strings. In this paper we show that an approach laid out in [5] to settle this conjecture cannot succeed without significant alteration, but that it does bear fruit if we consider time-bounded Kolmogorov complexity instead. We show that if a set A is reducible in polynomial time to the set of time-t-bounded Kolmogorov-random strings (for all large enough time bounds t), then A is in P/poly, and that if in addition such a reduction exists for any universal Turing machine one uses in the definition of Kolmogorov complexity, then A is in PSPACE. © 2012 Springer-Verlag.

2015

Hardness of Approximation for Knapsack Problems

Authors
Buhrman, H; Loff, B; Torenvliet, L;

Publication
Theory of Computing Systems

Abstract
We show various hardness results for knapsack and related problems; in particular we will show that unless the Exponential-Time Hypothesis is false, subset-sum cannot be approximated any better than with an FPTAS. We also provide new unconditional lower bounds for approximating knapsack in Ketan Mulmuley’s parallel PRAM model. Furthermore, we give a simple new algorithm for approximating knapsack and subset-sum, that can be adapted to work for small space, or in small parallel time. © 2014, Springer Science+Business Media New York.
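The paper's own small-space/parallel algorithm is not reproduced here; as a hedged illustration of what "approximating subset-sum with an FPTAS" means, the following is the classic trim-based FPTAS (in the style of Ibarra and Kim), assuming non-negative integer item sizes. Names and structure are illustrative only:

```python
def approx_subset_sum(items, target, eps):
    """Trim-based FPTAS for subset-sum: returns an achievable sum S with
    S <= target and S >= (1 - eps) * OPT. Not the paper's algorithm."""
    n = len(items)
    delta = eps / (2 * n)          # per-step trimming tolerance
    sums = [0]
    for x in items:
        # extend the list of achievable sums with x, capped at target
        merged = sorted(set(sums + [s + x for s in sums if s + x <= target]))
        # trim: keep a sum only if it exceeds the last kept sum by a
        # (1 + delta) factor, so the list stays polynomially short
        trimmed, last = [], -1.0
        for y in merged:
            if y > last * (1 + delta):
                trimmed.append(y)
                last = y
        sums = trimmed
    return max(sums)
```

With eps close to 0 the trimming keeps everything and the routine degenerates into the exact (exponential-size) dynamic program, which is the tension the abstract's hardness results are about: under the Exponential-Time Hypothesis, one cannot do asymptotically better than this kind of FPTAS trade-off.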

2014

Computing with a full memory: catalytic space

Authors
Buhrman, H; Cleve, R; Koucký, M; Loff, B; Speelman, F;

Publication
Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014

Abstract
We define the notion of a catalytic-space computation. This is a computation that has a small amount of clean space available and is equipped with additional auxiliary space, with the caveat that the additional space is initially in an arbitrary, possibly incompressible, state and must be returned to this state when the computation is finished. We show that the extra space can be used in a nontrivial way, to compute uniform TC^1 circuits with just a logarithmic amount of clean space. The extra space thus works analogously to a catalyst in a chemical reaction. TC^1 circuits can compute for example the determinant of a matrix, which is not known to be computable in logspace. In order to obtain our results we study an algebraic model of computation, a variant of straight-line programs. We employ register machines with input registers x_1, …, x_n and work registers r_1, …, r_m. The instructions available are of the form r_i ← r_i ± u × v, with u, v registers (distinct from r_i) or constants. We wish to compute a function f(x_1, …, x_n) through a sequence of such instructions. The work registers have some arbitrary initial value r_i = t_i, and they may be altered throughout the computation, but by the end all registers must be returned to their initial value t_i, except for, say, r_1, which must hold t_1 + f(x_1, …, x_n). We show that all of Valiant's class VP, and more, can be computed in this model. This significantly extends the framework and techniques of Ben-Or and Cleve [6]. © 2014 ACM.
