How are problems classified in Complexity Theory?
I'm reading Sipser's Introduction to the Theory of Computation (3rd edition). In Chapter 0 (p. 2), he says we don't know the answer to "what makes some problems computationally hard and others easy," but he then states that "researchers have discovered an elegant scheme for classifying problems according to their computational difficulty. Using this scheme, we can demonstrate a method for giving evidence that certain problems are computationally hard, even if we are unable to prove that they are."
So my question is: how is it possible to classify problems according to their computational difficulty if we don't even know what makes a problem computationally easy or hard in the first place?
Also, what/where is this "scheme" that does the classifying? (I did some googling and couldn't find anything.)
Tags: complexity-theory
asked yesterday by Johan von Adden (new contributor) on Computer Science Stack Exchange
3 Answers
That's what you get when you distill a whole bunch of theory for a wider audience.
In his book, Sipser addresses a general audience at the undergraduate level, possibly with no notion of computability theory; hence, he can only hint at concepts that are given a more formal treatment later in the book. The part you cite is from Chapter 0 (i.e., not really a chapter), whereas the material on complexity theory only appears at the end (Part Three). This is why the passage is so fuzzy. Most likely it is intended only as motivation and to give a broad overview of the topics covered in the book.
The "scheme" Sipser is talking about is reductions. If we know a problem $A$ is reducible to a problem $B$, then we know $B$ is at least as hard as $A$. (Incidentally, this is also why it is common practice to denote reductions with a "$\le$" sign.) This gives us a way of ordering problems according to their difficulty, at least for those for which we know reductions. As Sipser states, though, using only reductions "we are unable to prove" whether the problems are really hard or not; reductions give us only relative, not absolute, notions of hardness. This is why separation results are still rare in modern complexity theory: we have plenty of reduction results (e.g., NP-completeness) but only a handful of separation results (e.g., the time and space hierarchy theorems).
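(Editorial aside, not part of the original answer: a classic textbook reduction can make this concrete. A set $S$ of vertices is independent in a graph $G$ iff its complement $V \setminus S$ is a vertex cover, so "does $G$ have an independent set of size $k$?" reduces to "does $G$ have a vertex cover of size $n - k$?". The brute-force solver below is only a stand-in oracle for the demo.)

```python
from itertools import combinations

def has_vertex_cover(edges, n, size):
    """Brute-force oracle for VERTEX-COVER (exponential; demo only)."""
    for cover in combinations(range(n), size):
        cover = set(cover)
        if all(u in cover or v in cover for u, v in edges):
            return True
    return False

def has_independent_set(edges, n, k):
    """Solve INDEPENDENT-SET by reducing it to VERTEX-COVER:
    G has an independent set of size k iff it has a vertex cover
    of size n - k."""
    return has_vertex_cover(edges, n, n - k)

# A triangle has an independent set of size 1, but not of size 2.
triangle = [(0, 1), (1, 2), (0, 2)]
print(has_independent_set(triangle, 3, 1))  # True
print(has_independent_set(triangle, 3, 2))  # False
```

The reduction itself runs in polynomial time (it only rewrites the instance), which is what makes the "$B$ is at least as hard as $A$" reading legitimate.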
– dkaeae (answered yesterday, edited yesterday)
Comment: I appreciate the thorough answer. Vincenzo (one of the commenters) mentioned that Sipser discusses this in Chapters 5 & 7, which I'll hopefully get to eventually! – Johan von Adden, yesterday
How is it possible to classify problems according to their computational difficulty, if we don't even know what makes a problem computationally easy/hard in the first place?
I think the point the piece is trying to make is that we know how to determine whether individual problems are easy or hard, even though we don't have an overarching theory of why the hard ones are hard and the easy ones are easy, just as you can classify people by their weight even if you don't know why they weigh what they do.
I should emphasise that, in most cases, "hard" means "seems to be hard". You've probably heard of NP-complete problems. We don't know for certain that these problems have no efficient algorithm (by the standard definition of "efficient"), but nobody has been able to find an efficient algorithm for any of them in nearly 50 years of trying, and finding an efficient algorithm for just one of them would give efficient algorithms for all of them.
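(Editorial aside, not part of the original answer: one concrete way to see the asymmetry is that for an NP problem like SAT we don't know how to *find* a satisfying assignment quickly, but *checking* a proposed one is easy. Below, a formula is a list of clauses, each clause a list of literals encoded as signed integers.)

```python
def verify_sat(clauses, assignment):
    """Check a candidate truth assignment against a CNF formula
    in time linear in the size of the formula."""
    def lit_true(lit):
        # Positive literal x_i is true iff assignment[i] is True;
        # negative literal -x_i is true iff assignment[i] is False.
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    # The formula holds iff every clause has at least one true literal.
    return all(any(lit_true(lit) for lit in clause) for clause in clauses)

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False
```

Verification is polynomial; it is the search over the exponentially many candidate assignments for which no efficient method is known.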
Also, what/where is this "scheme"
Complexity classes, the relationships between them, and the concept of reductions for transforming one problem into another.
– David Richerby (answered yesterday, edited yesterday)
The "scheme" is based on the ideas of reductions among problems and completeness of problems, which are described in Chapters 5 and 7 of Sipser's book.
– Vincenzo (answered yesterday)