*Cristopher Moore and Stephan Mertens*

- Published in print:
- 2011
- Published Online:
- December 2013
- ISBN:
- 9780199233212
- eISBN:
- 9780191775079
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199233212.003.0004
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics


NP refers to a class of decision problems in which yes-instances are easy to verify. That is: a decision problem is in NP if, whenever the answer for a particular instance is ‘yes’, there is a simple proof of this fact. Given a graph, a Hamiltonian path is a path that visits each vertex exactly once. This chapter examines the NP class of problems, first by considering the proverbial task of finding a needle in a haystack. It then looks at several definitions of NP and discusses proofs, witnesses, and lucky guesses. It shows that NP encompasses a wide range of fundamental problems, from coloring maps to untying knots and satisfying systems of constraints. It describes some of the classic problems in NP and highlights some reductions between these problems by transforming questions about graphs into questions about Boolean formulas or vice versa. The chapter also analyzes primality in NP.
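The Hamiltonian-path example illustrates the verification idea concretely: the path itself is the witness, and a checker can confirm it in polynomial time. A minimal sketch in Python (the adjacency-set representation and function name are illustrative, not from the chapter):

```python
def verify_hamiltonian_path(adj, path):
    """Polynomial-time check that `path` is a Hamiltonian path in the
    graph given by adjacency sets `adj` (vertex -> set of neighbors)."""
    # The witness must list every vertex exactly once.
    if len(path) != len(adj) or set(path) != set(adj):
        return False
    # Consecutive vertices on the path must be joined by an edge.
    return all(v in adj[u] for u, v in zip(path, path[1:]))

# A 4-cycle 0-1-2-3-0 with the extra chord 1-3.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(verify_hamiltonian_path(adj, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_path(adj, [0, 2, 1, 3]))  # False: 0 and 2 are not adjacent
```

Finding such a path may require exponential search, but checking a proposed one takes only a single pass over its edges, which is exactly what membership in NP demands.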

*Allon G. Percus, Gabriel Istrate, and Cristopher Moore*

- Published in print:
- 2005
- Published Online:
- November 2020
- ISBN:
- 9780195177374
- eISBN:
- 9780197562260
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195177374.003.0007
- Subject:
- Computer Science, Mathematical Theory of Computation

Computer science and physics have been closely linked since the birth of modern computing. This book is about that link. John von Neumann’s original design for digital computing in the 1940s was motivated by applications in ballistics and hydrodynamics, and his model still underlies today’s hardware architectures. Within several years of the invention of the first digital computers, the Monte Carlo method was developed, putting these devices to work simulating natural processes using the principles of statistical physics. It is difficult to imagine how computing might have evolved without the physical insights that nurtured it. It is impossible to imagine how physics would have evolved without computation. While digital computers quickly became indispensable, a true theoretical understanding of the efficiency of computation did not emerge until twenty years later. In 1965, Hartmanis and Stearns [227] as well as Edmonds [139, 140] articulated the notion of computational complexity, categorizing algorithms according to how rapidly their time and space requirements grow with input size. The qualitative distinctions that computational complexity draws between algorithms form the foundation of theoretical computer science. Chief among these distinctions is that of polynomial versus exponential time. A combinatorial problem belongs in the complexity class P (polynomial time) if there exists an algorithm guaranteeing a solution in a computation time, or number of elementary steps of the algorithm, that grows at most polynomially with input size. Loosely speaking, such problems are considered computationally feasible. An example might be sorting a list of n numbers: even a particularly naive and inefficient algorithm for this will run in a number of steps that grows as O(n²), and so sorting is in the class P.
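The sorting example can be made concrete; the following sketch (selection sort, chosen here purely for illustration) is exactly such a naive algorithm whose step count grows as O(n²): n passes, each scanning the unsorted tail of the list.

```python
def selection_sort(xs):
    """Naive O(n^2) sort: for each position, scan the remaining
    elements for the minimum and swap it into place."""
    xs = list(xs)  # work on a copy
    n = len(xs)
    for i in range(n):
        # Find the index of the smallest remaining element...
        m = min(range(i, n), key=lambda j: xs[j])
        # ...and swap it into position i.
        xs[i], xs[m] = xs[m], xs[i]
    return xs

print(selection_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Even this inefficient choice runs in polynomial time, which is all that membership in P requires.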
A problem belongs in the complexity class NP (non-deterministic polynomial time) if it is merely possible to test, in polynomial time, whether a specific presumed solution is correct. Of course, P ⊆ NP: for any problem whose solution can be found in polynomial time, one can surely verify the validity of a presumed solution in polynomial time.
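The defining verification step of NP can likewise be sketched for satisfiability: given a CNF formula and a presumed satisfying assignment (the witness), one linear pass confirms or refutes it. In this illustrative Python sketch (the clause encoding is an assumption, not taken from the text), a literal k stands for variable |k|, negated when k < 0:

```python
def check_sat(clauses, assignment):
    """Polynomial-time check that `assignment` (variable -> bool)
    satisfies a CNF formula given as a list of integer clauses."""
    return all(
        # A clause is satisfied if at least one literal is true.
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(check_sat(clauses, {1: True, 2: False, 3: True}))   # True
print(check_sat(clauses, {1: False, 2: False, 3: False}))  # False
```

Finding a satisfying assignment may take exponential search over 2ⁿ candidates; checking a proposed one is a single scan of the formula.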

*Alexis C. Kaporis and Lefteris M. Kirousis*

- Published in print:
- 2005
- Published Online:
- November 2020
- ISBN:
- 9780195177374
- eISBN:
- 9780197562260
- Item type:
- chapter

- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195177374.003.0016
- Subject:
- Computer Science, Mathematical Theory of Computation

In order to prove that a certain property holds asymptotically for a restricted class of objects such as formulas or graphs, one may apply a heuristic on a random element of the class, and then prove by probabilistic analysis that the heuristic succeeds with high probability. This method has been used to establish lower bounds on thresholds for desirable properties such as satisfiability and colorability: lower bounds for the 3-SAT threshold were discussed briefly in the previous chapter. The probabilistic analysis depends on analyzing the mean trajectory of the heuristic—as we have seen in chapter 3—and in parallel, showing that in the asymptotic limit the trajectory’s properties are strongly concentrated about their mean. However, the mean trajectory analysis requires that certain random characteristics of the heuristic’s starting sample are retained throughout the trajectory. We propose a methodology in this chapter to determine the conditional that should be imposed on a random object, such as a conjunctive normal form (CNF) formula or a graph, so that conditional randomness is retained when we run a given algorithm. The methodology is based on the principle of deferred decisions. The essential idea is to consider information about the object as being stored in “small pieces,” in separate registers. The contents of the registers pertaining to the conditional are exposed, while the rest remain unexposed. Having separate registers for different types of information prevents exposing information unnecessarily. We use this methodology to prove various randomness invariance results, one of which answers a question posed by Molloy [402].
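The workflow the abstract describes, running a simple heuristic on a random instance and observing how often it succeeds, can be sketched with the unit-clause heuristic on random 3-SAT (a standard example of such a heuristic; all names, parameters, and the clause-to-variable ratio below are illustrative, not from the chapter):

```python
import random

def unit_clause_heuristic(clauses, n_vars):
    """Unit-clause heuristic: satisfy a unit clause when one exists,
    otherwise set a random unset variable to a random value. Returns
    True iff every clause is satisfied without emptying a clause."""
    assignment = {}
    clauses = [list(c) for c in clauses]
    while len(assignment) < n_vars:
        units = [c for c in clauses if len(c) == 1]
        if units:
            lit = units[0][0]
            var, val = abs(lit), lit > 0
        else:
            var = random.choice([v for v in range(1, n_vars + 1)
                                 if v not in assignment])
            val = random.random() < 0.5
        assignment[var] = val
        new_clauses = []
        for c in clauses:
            if (var if val else -var) in c:
                continue  # clause satisfied: drop it
            c = [l for l in c if abs(l) != var]
            if not c:
                return False  # empty clause: the heuristic failed
            new_clauses.append(c)
        clauses = new_clauses
    return True

def random_3sat(n, m):
    """A random 3-CNF formula: m clauses, each on 3 distinct variables."""
    return [[random.choice([1, -1]) * v
             for v in random.sample(range(1, n + 1), 3)] for _ in range(m)]

random.seed(0)
n, ratio, trials = 100, 2.0, 200  # ratio chosen below the 3-SAT threshold
wins = sum(unit_clause_heuristic(random_3sat(n, int(ratio * n)), n)
           for _ in range(trials))
print(wins / trials)  # empirical success frequency of the heuristic
```

Analyzing why such a heuristic succeeds with high probability requires exactly the ingredient the chapter studies: knowing which conditional information about the evolving random formula has been exposed by the algorithm's decisions, so that the residual formula remains random in a precisely controlled sense.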