What did Turing mean when saying that “machines cannot give rise to surprises” is due to a fallacy?


I encountered the following statement by Alan M. Turing:




"The view that machines cannot give rise to surprises is due, I
believe, to a fallacy to which philosophers and mathematicians are
particularly subject. This is the assumption that as soon as a fact is
presented to a mind all consequences of that fact spring into the mind
simultaneously with it. It is a very useful assumption under many
circumstances, but one too easily forgets that it is false."




I am not a native English speaker. Could anyone explain it in plain English?

turing-machines computability computation-models






asked Apr 19 at 8:04 by smwikipedia
edited Apr 21 at 20:51 by Discrete lizard

  • Perhaps it's better suited for the philosophy portal rather than a hard science like CS.
    – Bulat, Apr 19 at 9:43






  • @Bulat I was going to say the same -- and redirect to English Language Learners -- but then I realised that there is some CS-related content that can be explained in an answer, which probably wouldn't be picked up on in other parts of Stack Exchange.
    – David Richerby, Apr 19 at 9:45






  • A good example is iteration of the transformation z := z² + c, where z and c are complex numbers. If I take any starting point z on the plane and iterate, will the number go to infinity or not? An ordinary fellow would say: yeah, this will give you two regions, or maybe a few more, where the value goes to zero, and for the rest it goes to infinity. Relatively unsurprising. Then Mandelbrot comes along and actually plots the regions on the plane defined by this simple "machine". As the result comes out of the dot-matrix printer, this simple "machine" proves itself ... weird.
    – David Tonhofer, Apr 19 at 16:56











  • Facebook and other social media are a great example of this... A lot of the consequences of their algorithms are not something that was expected by the creators (or anyone, really).
    – aslum, Apr 19 at 20:52










  • A rather quirky individual once referred to this using a fire metaphor: "The bigger you build your bonfire of knowledge, the more darkness is revealed to your startled eye."
    – JacobIRR, Apr 19 at 22:05
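
David Tonhofer's iteration z := z² + c can be sketched as a short escape-time test. This is a minimal illustration, not from the thread; the iteration cap of 100 is an arbitrary choice:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z := z*z + c from z = 0. If |z| ever exceeds 2,
    the orbit is guaranteed to escape to infinity."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: c is outside the Mandelbrot set
    return True  # still bounded after max_iter steps: (probably) inside

# c = 0 stays at 0 forever; c = 1 escapes after a few iterations.
```

The four-line rule is trivial to state, but the boundary between the two outcomes is the famously intricate Mandelbrot set.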
4 Answers
Mathematicians and philosophers often assume that machines (and here, he probably means "computers") cannot surprise us. This is because they assume that once we learn some fact, we immediately understand every consequence of this fact. This is often a useful assumption, but it's easy to forget that it's false.




He's saying that systems with simple, finite descriptions (e.g., Turing machines) can exhibit very complicated behaviour and that this surprises some people. We can easily understand the concept of Turing machines but then we realise that they have complicated consequences, such as the undecidability of the halting problem and so on. The technical term here is that "knowledge is not closed under deduction". That is, we can know some fact $A$, but not know $B$, even though $A$ implies $B$.



Honestly, though, I'm not sure that Turing's argument is very good. Perhaps I have the benefit of writing nearly 70 years after Turing, and my understanding is that the typical mathematician knows much more about mathematical logic than they did in Turing's time. But it seems to me that mathematicians are mostly quite familiar with the idea of simple systems having complex behaviour. For example, every mathematician knows the definition of a group, which consists of just four simple axioms. But nobody – today or then – would think, "Aha. I know the four axioms, therefore I know every fact about groups." Similarly, Peano's axioms give a very short description of the natural numbers but nobody who reads them thinks "Right, I know every theorem about the natural numbers, now. Let's move on to something else."
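
The "simple description, complicated behaviour" point is easy to demonstrate concretely. Below is a minimal sketch (mine, not part of the original answer) that simulates the well-known 2-state busy beaver: a Turing machine whose entire transition table is four lines long, yet whose behaviour already takes some working out:

```python
# Transition table: (state, symbol) -> (write, move, next_state).
# 'H' is the halting state. These four entries are the machine's
# complete description -- the 2-state busy beaver champion.
RULES = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}

def run(rules, max_steps=10_000):
    """Simulate on an initially blank (all-zero) tape.
    Returns (steps taken, number of 1s left on the tape)."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(RULES))  # → (6, 4): four rules, six steps, four 1s written
```

Knowing the four rules is not the same as knowing what the machine does; for larger state counts, the maximum number of steps before halting grows faster than any computable function.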






  • Historically, the early 20th century had a strong academic belief in "solving" mathematics: e.g., Hilbert's program, and Whitehead and Russell's Principia Mathematica. Gödel's work resolved that quest negatively, but I imagine it took some time for academia to fully embrace this notion; even fully acknowledging the correctness of Gödel, people would still remember the grand ideas of Hilbert. I think Turing, writing only two decades after Gödel, would be addressing his audience with this context in mind.
    – BurnsBA, Apr 19 at 13:13






  • I would question whether most mathematicians know "much more about mathematical logic" than Turing did. But it is obvious that almost all contemporary humans have vastly more practical experience of what machines (and particularly computers) can do than he did.
    – alephzero, Apr 19 at 18:48







  • @alephzero That's not what I said! I said that the average mathematician today knows more about mathematical logic than the average mathematician during Turing's time.
    – David Richerby, Apr 19 at 19:28






  • Your argument seems to be not that Turing's argument isn't good, but that it is unnecessary or directed at a strawman. I strongly suspect Turing had real people make arguments like this to him, so I don't think he's making a strawman out of nothing. As Discrete lizard states in a comment, Turing is only saying that a particular argument against machines surprising us is bad. Your answer just says that the badness of this argument has become even more obvious over time. That said, people (though usually not experts) still make arguments in this vein today.
    – Derek Elkins, Apr 20 at 0:11











  • It is the absence of epistemic closure.
    – Dan D., Apr 20 at 1:00



















Just an example: given the rules of chess, anyone should supposedly be able to figure out the best strategy immediately.

Of course, it doesn't work that way. People aren't even equal at this, and computers may outperform us thanks to their better ability to draw conclusions from the facts.
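
Why knowing the rules doesn't hand you the best strategy comes down to scale. A back-of-the-envelope sketch (my addition, using Shannon's classic round numbers: roughly 35 legal moves per position over a game of roughly 80 plies):

```python
# Rough game-tree size for chess, following Shannon's estimate:
# about 35 legal moves per position, about 80 plies (half-moves) per game.
branching_factor = 35
plies = 80
game_tree_size = branching_factor ** plies

print(f"~10^{len(str(game_tree_size)) - 1} possible games")  # → ~10^123 possible games
```

Knowing the rules completely does not, in the sense of Turing's quote, put those ~10^123 consequences "into the mind simultaneously".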






  • Not sure that's a good example. People do readily come up with chess strategies as soon as they properly grasp the rules, and though these strategies are obviously flawed and useless against more experienced players and modern engines, they would have been good enough against early computer chess engines.
    – leftaroundabout, Apr 20 at 21:58






  • My point exactly: not only are people different, but computers are different too, so the stupid computers of Turing's era don't mean that computers will always be stupid. You may need to know, though, that Turing died long before computers started playing chess.
    – Bulat, Apr 21 at 4:54






  • I think this is a good example, and captures the essence of Turing's paragraph.
    – copper.hat, Apr 22 at 0:40










  • @leftaroundabout So ..., is chess a draw when optimally played, or a win for White, or for Black? More to the point: a relatively recent discovery that extremely long endgames are possible led to a revision of the 50-move-draw rules. Such a discovery would count as a "surprise" in the sense of the quote.
    – Hagen von Eitzen, Apr 22 at 10:09




















This is the idea of emergence, which is when complex behavior results from the interaction of relatively simple rules. There are lots of examples of this in nature, as that link points out: insect colonies, bird flocks, schools of fish and, of course, consciousness. In a flock of birds or a school of fish, each individual in the swarm is only making decisions based on the others immediately surrounding it, but when you put a bunch of those individuals together, all following those rules, you start to see more coordinated behavior than you'd expect without a higher-level plan. If you go on YouTube and watch demonstrations of robot swarms, you see that they all avoid hitting each other and work in unison. Surprisingly, this doesn't need to be accomplished by having a single central computer coordinate the behavior of each individual robot; instead it can be done using swarm robotics where, like the insects or the birds or the fish, each robot makes local decisions, which leads to emergent coordination.



Another interesting demonstration of emergent behavior is Conway's Game of Life. The rules for the game are extremely simple, but they can lead to very fascinating results.
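
As a concrete sketch (mine, not the answerer's), the complete rule set of the Game of Life fits in a few lines of Python; everything else that happens on the grid is emergent:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells.
    The entire rule set: a cell is alive next generation iff it has
    exactly 3 live neighbours, or it has 2 and is alive now."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" pattern oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

From this handful of lines arise gliders, oscillators, and even patterns that simulate full Turing machines, which nobody would read off from the rules alone.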



A tempting argument against the ability of computers to attain human intelligence is to say that since they can only do precisely what they're programmed to do, they must only exhibit the intelligence that we program them with. If this were true, then we would also not expect the relatively simple behavior of neurons to give rise to human intelligence. Yet as far as we can tell, this IS the case: consciousness is an emergent property of neural processing. I'm sure Turing would have loved to see what's become possible today with the use of artificial neural networks.






  • Thanks for mentioning emergence. You add some optimism to my pessimism about A.I. through computation.
    – smwikipedia, Apr 20 at 13:04




















People might assume that if I write a program, and I understand the algorithm completely, and there are no bugs, then I should know what the output of that program would be, and that it should not surprise me.



Turing says (and I agree) that this is not the case: The output can be surprising. The solution to a travelling salesman problem can be surprising. The best way to build a full adder can be surprising. The best move in a chess game can be surprising.
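
A throwaway illustration of this (mine, not the answerer's): even a one-line program with no bugs, whose algorithm you understand completely, can produce output that surprises you.

```python
# No bugs here, and the algorithm (add two decimal numbers) is fully
# understood, yet the result surprises most people at first sight:
# binary floating point cannot represent 0.1 or 0.2 exactly.
print(0.1 + 0.2 == 0.3)  # → False
print(0.1 + 0.2)         # → 0.30000000000000004
```

Understanding every rule the machine follows is not the same as foreseeing every consequence of those rules.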






  • This does explain why computers could be surprising, which is the first half of the quote, but you do not address the part of the quote that explains why a particular argument that machines cannot surprise us is fallacious.
    – Discrete lizard, Apr 19 at 15:08









protected by Gilles Apr 20 at 22:16



Thank you for your interest in this question.
Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).



Would you like to answer one of these unanswered questions instead?














4 Answers
4






active

oldest

votes








4 Answers
4






active

oldest

votes









active

oldest

votes






active

oldest

votes









28












$begingroup$


Mathematicians and philosophers often assume that machines (and here, he probably means "computers") cannot surprise us. This is because they assume that once we learn some fact, we immediately understand every consequence of this fact. This is often a useful assumption, but it's easy to forget that it's false.




He's saying that systems with simple, finite descriptions (e.g., Turing machines) can exhibit very complicated behaviour and that this surprises some people. We can easily understand the concept of Turing machines but then we realise that they have complicated consequences, such as the undecidability of the halting problem and so on. The technical term here is that "knowledge is not closed under deduction". That is, we can know some fact $A$, but not know $B$, even though $A$ implies $B$.



Honestly, though, I'm not sure that Turing's argument is very good. Perhaps I have the benefit of writing nearly 70 years after Turing, and my understanding is that the typical mathematician knows much more about mathematical logic than they did in Turing's time. But it seems to me that mathematicians are mostly quite familiar with the idea of simple systems having complex behaviour. For example, every mathematician knows the definition of a group, which consists of just four simple axioms. But nobody – today or then – would think, "Aha. I know the four axioms, therefore I know every fact about groups." Similarly, Peano's axioms give a very short description of the natural numbers but nobody who reads them thinks "Right, I know every theorem about the natural numbers, now. Let's move on to something else."






share|cite|improve this answer











$endgroup$








  • 22




    $begingroup$
    Historically, the early 20th century had a strong academic belief in "solving" mathematics. E.g., Hilbert's program, and Whitehead+Russel's Principia Mathematica. Godel's work resolved that quest negatively, but I imagine it took some time for academia to fully embrace this notion; even fully acknowledging the correctness of Godel, people would still remember the grand ideas of Hilbert. I think Turing writing only two decades after Godel would be addressing his audience with this context in mind.
    $endgroup$
    – BurnsBA
    Apr 19 at 13:13






  • 7




    $begingroup$
    I would question whether most mathematicians know "much more about mathematical logic" than Turing did. But it is obvious that almost all contemporary humans have vastly more practical experience of what machines (and particularly computers) can do than he did.
    $endgroup$
    – alephzero
    Apr 19 at 18:48







  • 4




    $begingroup$
    @alephzero That's not what I said! I said that the average mathematician today knows more about mathematical logic than the average mathematician during Turing's time.
    $endgroup$
    – David Richerby
    Apr 19 at 19:28






  • 14




    $begingroup$
    Your argument seems to be not that Turing's argument isn't good, but that it is unnecessary or directed at a strawman. I strongly suspect Turing had real people make arguments like this to him, so I don't think he's making a strawman out of nothing. As Discrete lizard states in a comment, Turing is only saying that a particular argument against machines surprising us is bad. Your answer just says that that this argument is bad has become even more obvious over time. That said, people (though usually not experts) still make arguments in this vein today.
    $endgroup$
    – Derek Elkins
    Apr 20 at 0:11











  • $begingroup$
    It is the absence of epistemic closure.
    $endgroup$
    – Dan D.
    Apr 20 at 1:00















28












$begingroup$


Mathematicians and philosophers often assume that machines (and here, he probably means "computers") cannot surprise us. This is because they assume that once we learn some fact, we immediately understand every consequence of this fact. This is often a useful assumption, but it's easy to forget that it's false.




He's saying that systems with simple, finite descriptions (e.g., Turing machines) can exhibit very complicated behaviour and that this surprises some people. We can easily understand the concept of Turing machines but then we realise that they have complicated consequences, such as the undecidability of the halting problem and so on. The technical term here is that "knowledge is not closed under deduction". That is, we can know some fact $A$, but not know $B$, even though $A$ implies $B$.



28







Mathematicians and philosophers often assume that machines (and here, he probably means "computers") cannot surprise us. This is because they assume that once we learn some fact, we immediately understand every consequence of this fact. This is often a useful assumption, but it's easy to forget that it's false.




He's saying that systems with simple, finite descriptions (e.g., Turing machines) can exhibit very complicated behaviour and that this surprises some people. We can easily understand the concept of Turing machines but then we realise that they have complicated consequences, such as the undecidability of the halting problem and so on. The technical term here is that "knowledge is not closed under deduction". That is, we can know some fact $A$, but not know $B$, even though $A$ implies $B$.
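
As a concrete illustration of "simple description, complicated behaviour", here is a minimal Turing-machine simulator (the simulator and rule encoding are this sketch's own, not anything from the answer) running the two-state "busy beaver" machine. Four rules describe the machine completely, yet for larger machines the number of steps before halting grows faster than any computable function.

```python
# A tiny Turing-machine simulator. The tape is a dict from position to symbol
# (unwritten cells read as 0); rules map (state, symbol) to
# (symbol to write, head move, next state), with "H" as the halting state.
def run(rules, max_steps=10000):
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# The 2-state busy beaver: four rules, completely transparent to read.
bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
       ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H")}

print(run(bb2))  # (6, 4): halts after 6 steps, leaving 4 ones on the tape
```

Knowing the four rules of `bb2` does not mean one immediately knows it runs for exactly 6 steps, and for machines with only a few more states, nobody knows the answer at all.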



Honestly, though, I'm not sure that Turing's argument is very good. Perhaps I have the benefit of writing nearly 70 years after Turing, and my understanding is that the typical mathematician knows much more about mathematical logic than they did in Turing's time. But it seems to me that mathematicians are mostly quite familiar with the idea of simple systems having complex behaviour. For example, every mathematician knows the definition of a group, which consists of just four simple axioms. But nobody – today or then – would think, "Aha. I know the four axioms, therefore I know every fact about groups." Similarly, Peano's axioms give a very short description of the natural numbers but nobody who reads them thinks "Right, I know every theorem about the natural numbers, now. Let's move on to something else."






answered Apr 19 at 9:44, edited Apr 19 at 13:15
– David Richerby

  • 22




    Historically, the early 20th century had a strong academic belief in "solving" mathematics, e.g., Hilbert's program and Whitehead and Russell's Principia Mathematica. Gödel's work resolved that quest negatively, but I imagine it took some time for academia to fully embrace this notion; even while fully acknowledging the correctness of Gödel, people would still remember the grand ideas of Hilbert. I think Turing, writing only two decades after Gödel, would be addressing his audience with this context in mind.
    – BurnsBA
    Apr 19 at 13:13






  • 7




    I would question whether most mathematicians know "much more about mathematical logic" than Turing did. But it is obvious that almost all contemporary humans have vastly more practical experience of what machines (and particularly computers) can do than he did.
    – alephzero
    Apr 19 at 18:48







  • 4




    @alephzero That's not what I said! I said that the average mathematician today knows more about mathematical logic than the average mathematician during Turing's time.
    – David Richerby
    Apr 19 at 19:28






  • 14




    Your argument seems to be not that Turing's argument isn't good, but that it is unnecessary or directed at a strawman. I strongly suspect Turing had real people make arguments like this to him, so I don't think he's making a strawman out of nothing. As Discrete lizard states in a comment, Turing is only saying that a particular argument against machines surprising us is bad. Your answer just says that the badness of this argument has become even more obvious over time. That said, people (though usually not experts) still make arguments in this vein today.
    – Derek Elkins
    Apr 20 at 0:11











  • It is the absence of epistemic closure.
    – Dan D.
    Apr 20 at 1:00












19













Just an example: by this reasoning, anyone who is given the rules of chess should immediately be able to figure out the best strategy for playing it.



Of course, it doesn't work that way. People aren't even equal in this respect, and computers may outperform us thanks to their greater ability to draw conclusions from the facts.






answered Apr 19 at 9:42
– Bulat

  • 1




    Not sure that's a good example. People do readily come up with chess strategies, as soon as they properly grasp the rules, and though these strategies are obviously flawed and useless against more experienced players and modern engines, they would have been good enough against early computer chess engines.
    – leftaroundabout
    Apr 20 at 21:58






  • 1




    My point exactly: not only are people different, but computers are different too, so the primitive computers of Turing's era don't mean that computers will always be primitive. Note, though, that Turing died long before computers started playing chess.
    – Bulat
    Apr 21 at 4:54






  • 1




    I think this is a good example, and captures the essence of Turing's paragraph.
    – copper.hat
    Apr 22 at 0:40










  • @leftaroundabout So..., is chess a draw when optimally played, or a win for White, or for Black? More to the point: the relatively recent discovery that extremely long endgames are possible led to a revision of the 50-move draw rule; such a discovery would count as a "surprise" in the sense of the quote.
    – Hagen von Eitzen
    Apr 22 at 10:09













11













This is the idea of emergence, which is when complex behavior results from the interaction of relatively simple rules. There are lots of examples of this in nature, as that link points out: insect colonies, bird flocks, schools of fish, and of course, consciousness. In a flock of birds or school of fish, each individual in the swarm is only making decisions based on the others immediately surrounding it, but when you put a bunch of those individuals together, all following those rules, you start to see more coordinated behavior than you'd expect without a higher-level plan. If you go on YouTube and watch demonstrations of robot swarms, you see that they all avoid hitting each other and work in unison. Surprisingly, this doesn't need to be accomplished by having a single central computer coordinate the behavior of each individual robot; it can instead be done using swarm robotics where, like the insects or the birds or the fish, each robot makes local decisions, which leads to emergent coordination.



Another interesting demonstration of emergent behavior is Conway's Game of Life. The rules of the game are extremely simple, but they can lead to fascinating results.
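
To show just how simple those rules are, here is a minimal sketch of the Game of Life (the set-of-coordinates representation is a choice of this sketch, not anything from the answer): the whole rule set fits in one function, yet it supports oscillators, gliders, and even universal computation.

```python
# Conway's Game of Life on an unbounded grid. Live cells are a set of
# (x, y) pairs; a dead cell with exactly 3 live neighbours is born, and a
# live cell survives with 2 or 3 live neighbours.
from itertools import product

def step(live):
    def neighbours(c):
        return {(c[0] + dx, c[1] + dy)
                for dx, dy in product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0)}
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# The "blinker" pattern oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Nothing in the two-line rule description hints that patterns like gliders or self-replicating machines exist; those were discovered by running the system, which is exactly Turing's point about surprise.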



A tempting argument against the ability of computers to attain human intelligence is to say that since they can only do precisely what they're programmed to do, they must only exhibit the intelligence that we program them with. If this were true, then we would also not expect the relatively simple behavior of neurons to give rise to human intelligence. Yet as far as we can tell, this IS the case, and consciousness is an emergent property of neural processing. I'm sure Turing would have loved to see what's become possible today with the use of artificial neural networks.






answered Apr 19 at 17:18, edited Apr 22 at 15:47
– mowwwalker

  • 2
    Thanks for mentioning emergence. You add some optimism to my pessimism about A.I. through computation.
    – smwikipedia
    Apr 20 at 13:04













9













People might assume that if I write a program, and I understand the algorithm completely, and there are no bugs, then I should know what the output of that program would be, and that it should not surprise me.



Turing says (and I agree) that this is not the case: The output can be surprising. The solution to a travelling salesman problem can be surprising. The best way to build a full adder can be surprising. The best move in a chess game can be surprising.
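
A tiny concrete illustration of this point, sketched here as an aside (the Collatz function is not mentioned in the answer): the loop below is completely transparent and bug-free, yet whether it halts for every starting value is a famous open problem, and its running time jumps around in ways that genuinely surprise.

```python
# Iterate n -> n/2 (n even) or n -> 3n+1 (n odd) until reaching 1, counting
# the steps. Understanding the algorithm perfectly does not tell you how
# many steps a given input takes, or even that it always terminates.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Neighbouring inputs behave wildly differently: 26 finishes in 10 steps,
# while 27 takes 111.
print(collatz_steps(26), collatz_steps(27))
```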






answered Apr 19 at 14:42









gnasher729

















protected by Gilles Apr 20 at 22:16





