Is it nonsense to say B -> [A -> B]?
Is it nonsense to say B → [A → B]? I would have thought so; it seems to say that every fact deductively follows from every other. But I was looking at a recent closed question on natural deduction, and my thinking about it (not as natural deduction) got me wondering.
It seems to me that B, with the law of excluded middle, disjunction introduction, and conjunction introduction, ⇒ [[~A ∧ B] ∨ [A ∧ B]], which with the law of non-contradiction and disjunctive syllogism ⇒ [A → [A ∧ B]], which with conjunction elimination ⇒ [A → B].
B → [A → B]
Sorry for being so naive, but, assuming I can be understood here, what have I worked out wrong?
logic
It doesn't say that "every fact deductively follows from every other", it says that if you have already deduced B, you can deduce it from anything else.
– Eliran
May 9 at 17:17
Is there a phrase for that, one that I can google, @Eliran?
– another_name
May 9 at 17:38
I do not think this tautology has an established name. If you rewrite it as B→(¬A∨B) it is just an instance of disjunction introduction. Alternatively, if B is derivable from nothing then it surely is derivable from some A, whatever A is. In particular, it is even intuitionistically valid.
– Conifold
May 9 at 21:29
Do not google, read a textbook.
– Jishin Noben
May 10 at 8:08
@rexkogitans Embarrassing to say the least. Sensibility/nonsensibility is a semantic notion, not a formal one. Where did you get your "degree" in analytic philosophy again?
– Bertrand Wittgenstein's Ghost
May 10 at 10:31
asked May 9 at 16:24 by another_name; edited May 9 at 23:19 by Joshua
5 Answers
The topic you want to research is 'paradoxes of material implication.' That is, you are right to think there's something odd about this formula. But it does not mean that every fact deductively follows from every other. Consider your target string:
B --> (A --> B)
Now note that the convention is to define the connectives such that the conditional 'P --> Q' is logically equivalent to '~P v Q'. Hence, replacing each instance of the conditional with the disjunctive equivalent, your target string is logically equivalent to:
~B v (~A v B).
i.e., either not-B, or else not-A or B. On this formulation, it is clear that this formula does not mean what you suggest it means. Furthermore, this disjunction is a trivial logical truth. There are two ways the world could be with respect to B: either B is true or it is not true. If B is true, this formula is satisfied by the second disjunct. If B is not true, this formula is satisfied by the first disjunct. Hence, it is logically true.
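As a quick mechanical check, here is a minimal Python sketch (not from the original answer) that enumerates the four valuations of A and B and confirms that both the conditional form and the disjunctive form above come out true in every row; the helper name imp is just for this illustration.

from itertools import product

def imp(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

for a, b in product([True, False], repeat=2):
    conditional_form = imp(b, imp(a, b))          # B --> (A --> B)
    disjunctive_form = (not b) or ((not a) or b)  # ~B v (~A v B)
    print(f"A={a}, B={b}: {conditional_form}, {disjunctive_form}")
    assert conditional_form and disjunctive_form  # true in every row, i.e. a tautology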
This formula is one among many reasons why philosophers have thought that the material conditional of first-order logic does not capture any interesting epistemically rich notion of implication. When formulated as a conditional, this leads to valid implications that are counterintuitive, pragmatically odd, or perhaps even false. That is, we usually expect implications to carry some kind of epistemic or pragmatic force, to deliver some information beyond the initial terms. So carrying out this kind of inference, though valid, holds no water in ordinary discourse. Depending on your views about the relationship between semantics and pragmatics, then, there is actually an argument to be made that such "implications" are a kind of nonsense.
Monotonicity is another feature of logic that you may want to look into. Entailment in first-order logic is a monotonic operation, which in this setting means, intuitively, that once something B is proven, the addition of new assumptions never undoes that proof. One way to think about your formula is through the monotonicity of entailment. Suppose we have some set of background assumptions T, such that T entails B:
T |- B
The monotonicity of entailment ensures that the addition of new assumptions will never prohibit concluding B. Hence, extending T by an arbitrary new assumption A will not alter the result:
T, A |- B
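A toy illustration of this point (a sketch, not part of the original answer): semantic entailment over the two atoms can be checked by brute force, and adding the extra premise A never blocks the conclusion B. The function and variable names below are invented for the example.

from itertools import product

ATOMS = ["A", "B"]

def entails(premises, conclusion):
    # T |- C (semantically): every valuation satisfying all premises also satisfies the conclusion.
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

A = lambda v: v["A"]
B = lambda v: v["B"]

print(entails([B], B))     # True: B |- B
print(entails([B, A], B))  # True: B, A |- B; the extra premise A changes nothing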
Many systems are intuitively non-monotonic, such as when we reason in a default way or when we assume that implications should have some relevance to one another. If you're in the United States, then you do default reasoning whenever you pay your taxes. For example, by default you owe the standard amount of taxes for your income level. But the addition of other assumptions --- such as itemized deductions, being the owner of a business, the head of a household, of retirement age, capital gains, student loan interest payments, etc. --- can make those default conclusions false.
We also usually demand that the terms of an inference be relevant to one another. This is, at least arguably, one point of departure between traditional Aristotelian logic and modern mathematical logic. Aristotle's logic exhibits features of a relevance logic, i.e., it assumes that the premises in an argument must have some relevance to one another by virtue of which the terms of the premises can be combined or separated. This is one of the many ways in which Aristotle's understanding of syllogisms and deduction differs from our modern notions of validity. How to pin down 'relevance' is an entirely different can of worms. In a Wittgensteinian vein, we might also complain that someone who asserted a conditional like yours in conversation would be failing to make a move in the language game.
First-order logic validates all kinds of at best pragmatically useless and---again, depending on your background philosophical views---perhaps false entailments.
Consider:
Priest, Graham (2001). Introduction to Non-Classical Logic. Cambridge University Press, pp. 12-13 (on material implication).
https://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093883397
https://en.wikipedia.org/wiki/Monotonicity_of_entailment
https://plato.stanford.edu/entries/logic-relevance/
https://plato.stanford.edu/entries/logic-nonmonotonic/
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
When proving a conditional one assumes the antecedent, B. The goal is not to derive this, but from this assumption to derive the consequent which happens to be A > B.
But A > B is another conditional. Since it is a conditional one derives that in the same way. First assume the antecedent, A. Can one derive B? Yes, one can, because in this derivation we already assumed B in the first line. We can "reiterate" it in the next line as B.
Since from A one can derive B in a subproof, one can introduce a conditional (→I), A > B. This is the same as saying: if I have A, I can derive B. It is a shorthand for that subproof.
But if one has A > B one can introduce the conditional again and write B > [A > B].
Here's the result using the proof checker linked to below:
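(The answer originally displayed a screenshot from the proof checker at this point; since the image is not reproduced here, the following is a plain-text sketch of the same Fitch-style derivation, using the conditional introduction rule (→I) and the reiteration rule (R) mentioned in the text.)

1. | B              assumption
2. | | A            assumption
3. | | B            R, 1
4. | A > B          →I, 2-3
5. B > [A > B]      →I, 1-4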
What this shows is that the argument is valid. The argument might be nonsense depending on what A and B are. Suppose A is "Unicorns are white" and B is "The sky is blue". What this says is: if the sky is blue, then if unicorns are white, then the sky is blue. Suppose the sky is blue. It doesn't matter whether unicorns are white or not. The sky is still blue.
For the reiteration rule (R) see section 16.1 of the forallx text below.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
P. D. Magnus and Tim Button, with additions by J. Robert Loftis, remixed and revised by Aaron Thomas-Bolduc and Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. http://forallx.openlogicproject.org/
Thanks, I think I followed the answer, though it's not really in terms I feel comfortable with. You did lose me a bit when talking about unicorns. Do you mean that if it's true that the sky is blue then it's blue whatever else is? What's the phrase for that?
– another_name
May 9 at 17:41
@another_name I put the unicorns in there to give an example of an argument that might be considered nonsense. It could be any statement.
– Frank Hubeny
May 9 at 17:43
Sure, I'm not complaining about the 'unicorn'. I'm just still not sure what it means to say that B > [A > B]. Does it have a name?
– another_name
May 9 at 17:43
@another_name It could be thought of as reiteration. One could remove the A entirely and write B > B. It could be viewed as a circular argument or begging the question. This would be fallacious unless all sides in an argument accepted B.
– Frank Hubeny
May 9 at 18:59
Very good answer, the drawings are widely understood by any philosopher. You may also note that even A may be equivalent to non-B, so B -> (~B -> B). Sure, if a scientific theory holds B and ~B, it is pretty useless (which is a pragmatic point of view), but nevertheless, B holds.
– rexkogitans
May 10 at 11:26
I take a statement to be ridiculous iff it contains or implies a contradiction. That being said, B --> (A --> B) is not ridiculous, but it is a tautology, which makes it vacuous.
I feel the previous answers overcomplicated things, but here are two short arguments showing that it is a tautology.
1. B --> (A --> B) Assumption
2. B --> (~A v B) 1, Implication
3. ~B v (~A v B) 2, Implication
4. ~B v (B v ~A) 3, Commutation
5. (~B v B) v ~A 4, Association
You get the same result using the rule of Exportation:
1. B --> (A --> B) Assumption
2. (B ^ A) --> B 1, Exportation
3. ~(B ^ A) v B 2, Implication
4. (~B v ~A) v B 3, De Morgan
5. (~A v ~B) v B 4, Commutation
6. ~A v (~B v B) 5, Association
7. (~B v B) v ~A 6, Commutation
~B v B is logically true. It is trivial to show that (~B v B) v ~A is also logically true.
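(A small cross-check, not part of the original answer: the Python sketch below transcribes the seven lines of the second derivation as lambdas and brute-forces all valuations, confirming that every line is logically equivalent to the first and that the final line is true in every row.)

from itertools import product

def imp(p, q): return (not p) or q

lines = [
    lambda a, b: imp(b, imp(a, b)),            # 1. B --> (A --> B)
    lambda a, b: imp(b and a, b),              # 2. (B ^ A) --> B
    lambda a, b: (not (b and a)) or b,         # 3. ~(B ^ A) v B
    lambda a, b: ((not b) or (not a)) or b,    # 4. (~B v ~A) v B
    lambda a, b: ((not a) or (not b)) or b,    # 5. (~A v ~B) v B
    lambda a, b: (not a) or ((not b) or b),    # 6. ~A v (~B v B)
    lambda a, b: ((not b) or b) or (not a),    # 7. (~B v B) v ~A
]

for a, b in product([True, False], repeat=2):
    values = [line(a, b) for line in lines]
    assert all(v == values[0] for v in values)  # each rewriting step preserves equivalence
    assert values[-1]                           # and the final line is true in every row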
(@FrankHubeny Thanks for keeping me honest and reminding me to include the Commutation steps in both proofs).
There is also a commutative operation after line 3 in the first proof: (~A v B) rewritten as (B v ~A). In the second, shouldn't the second line be ~B v (A --> B)? Regardless, this is another way to show the result. +1
– Frank Hubeny
May 9 at 23:24
You are correct, I did skip the commutation move in the first proof. I'll fix that. In the second proof at line 2 I used a rule called Exportation, which says P --> (Q --> R) <--> (P ^ Q) --> R. Thanks for the vote!
– Rob
May 9 at 23:34
What does "contain a contradiction" mean?
– Jishin Noben
May 10 at 8:10
A → (B → C) is basically another way to write (A ∧ B) → C. So B → (A → B) is better described not as "every fact deductively follows from every other", but as "every fact deductively follows from itself and every other". It is a tautology, as the "every other" part is redundant.
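(Again a sketch, not part of the original answer: the exportation/currying equivalence this rests on can be brute-forced over three atoms, and the special case with C = B reproduces the formula in question. The helper imp is just for this illustration.)

from itertools import product

def imp(p, q): return (not p) or q

# A -> (B -> C) agrees with (A ^ B) -> C under every valuation (exportation).
for a, b, c in product([True, False], repeat=3):
    assert imp(a, imp(b, c)) == imp(a and b, c)

# Special case: B -> (A -> B) is (B ^ A) -> B, which is true in every row.
for a, b in product([True, False], repeat=2):
    assert imp(b, imp(a, b)) == imp(b and a, b)  # same truth value in every row
    assert imp(b and a, b)                       # and it is always True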
The original statement would be wrong if you mean every statement follows from every other, without involving itself. If you actually mean fact, note that facts are true, and don't need to follow from anything once you already know they are facts.
Thanks, this was helpful.
– another_name
May 10 at 13:57
It is a tautology.
• If B is true, then A -> B is true. Then B -> (A -> B) is B -> T, and B -> T is T.
• If B is false, then B -> (A -> B) is F -> (A -> B). F -> anything is always true. That's why B -> (A -> B) is true.
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "265"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphilosophy.stackexchange.com%2fquestions%2f63354%2fis-it-nonsense-to-say-b-a-b%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
The topic you want to research is 'paradoxes of material implication.' That is, you are right to think there's something odd about this formula. But it does not mean that every fact deductively follows from every other. Consider your target string:
B-->(A-->B)
Now note that the convention is to define the connectives such that the conditional 'P-->Q' is logically equivalent to '~PvQ'. Hence, replacing each instance of the conditional with the disjunctive equivalent, your target string is logically equivalent to:
~B v (~AvB).
i.e., Either not-B, or else not-A or B. On this formulation, it is clear that this formula does not mean what you suggest it means. Furthermore, this disjunction is a trivial logical truth . There are two ways the world could be with respect to B: either B is true or it is not true. If B is true, this formula is satisfied by the second disjunct. If B is not true, this formula is satisfied by the first disjunct. Hence, it is logically true.
This formula is one among many reasons why philosophers have thought that the material conditional of first-order logic does not capture any interesting epistemically-rich notion of implication. When formulated as a conditional, this leads to valid implications that are counterintuitive, pragmatically odd, or prehaps even false. That is, we usually expect implications to carry some kind of epistemic or pragmatic force, to deliver some information beyond the initial terms. So carrying out this kind of inference, though valid, holds no water in ordinary discourse. Depending on your views about the relationship between semantics and pragmatics, then, there is actually an argument to be made that such "implications" are a kind of nonsense.
Monotonicity is another feature of logic that you may want to look into. Entailment in first-order logic is a monotonic operation, which in this setting means, intuitively, that once something B is proven, the addition of new assumptions will never make B false. One way to think about your formula is through the monotonicity of entailment. Suppose we have some string of background assumptions T, such that T entails B:
T |- B
The monotonicity of entailment ensures that the addition of new assumptions will never prohibit concluding B. Hence, the extension of T by arbitrary new assumption A will not alter the result:
T, A |- B
Many systems are intuitively non-monotonic, such as when we reason in a default way or when we assume that implications should have some relevance to one another. If you're in the United States, then you do default reasoning whenever you pay your taxes. For example, by default you owe the standard amount of taxes for your income level. But the addition of other assumptions --- such as itemized deductions, being the owner of a business, the head of a household, of retirement age, capital gains, student loan interest payments, etc. --- can make those default truths false. We also usually demand that the terms of an inference be relevant to one another. This is, at least arguably, one point of departure between traditional Aristotelian logic and modern mathematical logic. Aristotle's logic exhibits features of a relevance logic, i.e., the assumption that the premises in an argument must have some relevance to one another by virtue of which the terms of the premises can be combined or separated. This is one of the many ways in whcih Aristotle's understanding of syllogisms and deduction differs from our modern notions of validity. How to pin down 'relevance' is an entirely other can of worms. In a Wittgensteinian vein, we might also complain that someone who asserted a conditional like yours in conversation would be failing to make a move in the language game.
First-order logic validates all kinds of at best pragmatically useless and---again, depending on your background philosophical views---perhaps false entailments.
Consider:
Priest, Graham (2001). Introduction to Non-Classical Logic. Cambridge University Press, pp.12-13 (on material implication)
https://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093883397
https://en.wikipedia.org/wiki/Monotonicity_of_entailment
https://plato.stanford.edu/entries/logic-relevance/
https://plato.stanford.edu/entries/logic-nonmonotonic/
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
add a comment |
The topic you want to research is 'paradoxes of material implication.' That is, you are right to think there's something odd about this formula. But it does not mean that every fact deductively follows from every other. Consider your target string:
B-->(A-->B)
Now note that the convention is to define the connectives such that the conditional 'P-->Q' is logically equivalent to '~PvQ'. Hence, replacing each instance of the conditional with the disjunctive equivalent, your target string is logically equivalent to:
~B v (~AvB).
i.e., Either not-B, or else not-A or B. On this formulation, it is clear that this formula does not mean what you suggest it means. Furthermore, this disjunction is a trivial logical truth . There are two ways the world could be with respect to B: either B is true or it is not true. If B is true, this formula is satisfied by the second disjunct. If B is not true, this formula is satisfied by the first disjunct. Hence, it is logically true.
This formula is one among many reasons why philosophers have thought that the material conditional of first-order logic does not capture any interesting epistemically-rich notion of implication. When formulated as a conditional, this leads to valid implications that are counterintuitive, pragmatically odd, or prehaps even false. That is, we usually expect implications to carry some kind of epistemic or pragmatic force, to deliver some information beyond the initial terms. So carrying out this kind of inference, though valid, holds no water in ordinary discourse. Depending on your views about the relationship between semantics and pragmatics, then, there is actually an argument to be made that such "implications" are a kind of nonsense.
Monotonicity is another feature of logic that you may want to look into. Entailment in first-order logic is a monotonic operation, which in this setting means, intuitively, that once something B is proven, the addition of new assumptions will never make B false. One way to think about your formula is through the monotonicity of entailment. Suppose we have some string of background assumptions T, such that T entails B:
T |- B
The monotonicity of entailment ensures that the addition of new assumptions will never prohibit concluding B. Hence, the extension of T by arbitrary new assumption A will not alter the result:
T, A |- B
Many systems are intuitively non-monotonic, such as when we reason in a default way or when we assume that implications should have some relevance to one another. If you're in the United States, then you do default reasoning whenever you pay your taxes. For example, by default you owe the standard amount of taxes for your income level. But the addition of other assumptions --- such as itemized deductions, being the owner of a business, the head of a household, of retirement age, capital gains, student loan interest payments, etc. --- can make those default truths false. We also usually demand that the terms of an inference be relevant to one another. This is, at least arguably, one point of departure between traditional Aristotelian logic and modern mathematical logic. Aristotle's logic exhibits features of a relevance logic, i.e., the assumption that the premises in an argument must have some relevance to one another by virtue of which the terms of the premises can be combined or separated. This is one of the many ways in whcih Aristotle's understanding of syllogisms and deduction differs from our modern notions of validity. How to pin down 'relevance' is an entirely other can of worms. In a Wittgensteinian vein, we might also complain that someone who asserted a conditional like yours in conversation would be failing to make a move in the language game.
First-order logic validates all kinds of at best pragmatically useless and---again, depending on your background philosophical views---perhaps false entailments.
Consider:
Priest, Graham (2001). Introduction to Non-Classical Logic. Cambridge University Press, pp.12-13 (on material implication)
https://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093883397
https://en.wikipedia.org/wiki/Monotonicity_of_entailment
https://plato.stanford.edu/entries/logic-relevance/
https://plato.stanford.edu/entries/logic-nonmonotonic/
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
add a comment |
The topic you want to research is 'paradoxes of material implication.' That is, you are right to think there's something odd about this formula. But it does not mean that every fact deductively follows from every other. Consider your target string:
B-->(A-->B)
Now note that the convention is to define the connectives such that the conditional 'P-->Q' is logically equivalent to '~PvQ'. Hence, replacing each instance of the conditional with the disjunctive equivalent, your target string is logically equivalent to:
~B v (~AvB).
i.e., Either not-B, or else not-A or B. On this formulation, it is clear that this formula does not mean what you suggest it means. Furthermore, this disjunction is a trivial logical truth . There are two ways the world could be with respect to B: either B is true or it is not true. If B is true, this formula is satisfied by the second disjunct. If B is not true, this formula is satisfied by the first disjunct. Hence, it is logically true.
This formula is one among many reasons why philosophers have thought that the material conditional of first-order logic does not capture any interesting epistemically-rich notion of implication. When formulated as a conditional, this leads to valid implications that are counterintuitive, pragmatically odd, or prehaps even false. That is, we usually expect implications to carry some kind of epistemic or pragmatic force, to deliver some information beyond the initial terms. So carrying out this kind of inference, though valid, holds no water in ordinary discourse. Depending on your views about the relationship between semantics and pragmatics, then, there is actually an argument to be made that such "implications" are a kind of nonsense.
Monotonicity is another feature of logic that you may want to look into. Entailment in first-order logic is a monotonic operation, which in this setting means, intuitively, that once something B is proven, the addition of new assumptions will never make B false. One way to think about your formula is through the monotonicity of entailment. Suppose we have some string of background assumptions T, such that T entails B:
T |- B
The monotonicity of entailment ensures that the addition of new assumptions will never prohibit concluding B. Hence, the extension of T by arbitrary new assumption A will not alter the result:
T, A |- B
Many systems are intuitively non-monotonic, such as when we reason in a default way or when we assume that implications should have some relevance to one another. If you're in the United States, then you do default reasoning whenever you pay your taxes. For example, by default you owe the standard amount of taxes for your income level. But the addition of other assumptions --- such as itemized deductions, being the owner of a business, the head of a household, of retirement age, capital gains, student loan interest payments, etc. --- can make those default truths false. We also usually demand that the terms of an inference be relevant to one another. This is, at least arguably, one point of departure between traditional Aristotelian logic and modern mathematical logic. Aristotle's logic exhibits features of a relevance logic, i.e., the assumption that the premises in an argument must have some relevance to one another by virtue of which the terms of the premises can be combined or separated. This is one of the many ways in whcih Aristotle's understanding of syllogisms and deduction differs from our modern notions of validity. How to pin down 'relevance' is an entirely other can of worms. In a Wittgensteinian vein, we might also complain that someone who asserted a conditional like yours in conversation would be failing to make a move in the language game.
First-order logic validates all kinds of at best pragmatically useless and---again, depending on your background philosophical views---perhaps false entailments.
Consider:
Priest, Graham (2001). Introduction to Non-Classical Logic. Cambridge University Press, pp.12-13 (on material implication)
https://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093883397
https://en.wikipedia.org/wiki/Monotonicity_of_entailment
https://plato.stanford.edu/entries/logic-relevance/
https://plato.stanford.edu/entries/logic-nonmonotonic/
The topic you want to research is 'paradoxes of material implication.' That is, you are right to think there's something odd about this formula. But it does not mean that every fact deductively follows from every other. Consider your target string:
B-->(A-->B)
Now note that the convention is to define the connectives such that the conditional 'P-->Q' is logically equivalent to '~PvQ'. Hence, replacing each instance of the conditional with the disjunctive equivalent, your target string is logically equivalent to:
~B v (~AvB).
i.e., Either not-B, or else not-A or B. On this formulation, it is clear that this formula does not mean what you suggest it means. Furthermore, this disjunction is a trivial logical truth . There are two ways the world could be with respect to B: either B is true or it is not true. If B is true, this formula is satisfied by the second disjunct. If B is not true, this formula is satisfied by the first disjunct. Hence, it is logically true.
This formula is one among many reasons why philosophers have thought that the material conditional of first-order logic does not capture any interesting epistemically-rich notion of implication. When formulated as a conditional, this leads to valid implications that are counterintuitive, pragmatically odd, or prehaps even false. That is, we usually expect implications to carry some kind of epistemic or pragmatic force, to deliver some information beyond the initial terms. So carrying out this kind of inference, though valid, holds no water in ordinary discourse. Depending on your views about the relationship between semantics and pragmatics, then, there is actually an argument to be made that such "implications" are a kind of nonsense.
Monotonicity is another feature of logic that you may want to look into. Entailment in first-order logic is a monotonic operation, which in this setting means, intuitively, that once something B is proven, the addition of new assumptions will never make B false. One way to think about your formula is through the monotonicity of entailment. Suppose we have some string of background assumptions T, such that T entails B:
T |- B
The monotonicity of entailment ensures that the addition of new assumptions will never prohibit concluding B. Hence, the extension of T by arbitrary new assumption A will not alter the result:
T, A |- B
Many systems are intuitively non-monotonic, such as when we reason in a default way or when we assume that implications should have some relevance to one another. If you're in the United States, then you do default reasoning whenever you pay your taxes. For example, by default you owe the standard amount of taxes for your income level. But the addition of other assumptions --- such as itemized deductions, being the owner of a business, the head of a household, of retirement age, capital gains, student loan interest payments, etc. --- can make those default truths false. We also usually demand that the terms of an inference be relevant to one another. This is, at least arguably, one point of departure between traditional Aristotelian logic and modern mathematical logic. Aristotle's logic exhibits features of a relevance logic, i.e., the assumption that the premises in an argument must have some relevance to one another by virtue of which the terms of the premises can be combined or separated. This is one of the many ways in whcih Aristotle's understanding of syllogisms and deduction differs from our modern notions of validity. How to pin down 'relevance' is an entirely other can of worms. In a Wittgensteinian vein, we might also complain that someone who asserted a conditional like yours in conversation would be failing to make a move in the language game.
First-order logic validates all kinds of at best pragmatically useless and---again, depending on your background philosophical views---perhaps false entailments.
Consider:
Priest, Graham (2001). Introduction to Non-Classical Logic. Cambridge University Press, pp.12-13 (on material implication)
https://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093883397
https://en.wikipedia.org/wiki/Monotonicity_of_entailment
https://plato.stanford.edu/entries/logic-relevance/
https://plato.stanford.edu/entries/logic-nonmonotonic/
edited May 9 at 20:17
answered May 9 at 19:40
transitionsynthesistransitionsynthesis
1,00367
1,00367
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
add a comment |
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
@OP Note: this is, for example, part of Hilbert's positive implicational calculus. I have seen it referred to as the "Positive Paradox" for this reason. Whether or not you accept this rule depends a lot on how strong you desire your notion of implication to be; for example, many relevant logics distinguish between implication and material implication, so this is not so trivial a rule (and may not even hold).
– Brevan Ellefsen
May 10 at 5:12
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
It is important to remember that the material conditional is a mathematical construct, designed for doing basic first order logic and (in historical retrospect) for building a solid axiomatic set theory. It "plays nicely" with the universal quantifier, and can very naturally be used to quantify over individual sets instead of the entire domain of discourse. It "makes sense" that we can say "For all X, if X is an integer, then..." without having to think about what happens when X is not an integer. If you think of logic as a foundation of math, you probably aren't bothered by this at all.
– Kevin
May 10 at 5:53
add a comment |
When proving a conditional one assumes the antecedent, B. The goal is not to derive this, but from this assumption to derive the consequent which happens to be A > B.
But A > B is another conditional. Since it is a conditional one derives that in the same way. First assume the antecedent, A. Can one derive B? Yes, one can, because in this derivation we already assumed B in the first line. We can "reiterate" it in the next line as B.
Since from A one can derive B in a subproof, one can introduce a conditional (→I), A > B. This is the same as saying: if I have A, I can derive B. It is a shorthand for that subproof.
But if one has A > B one can introduce the conditional again and write B > [A > B].
Here's the result using the proof checker linked to below:
What this shows is that the argument is valid. The argument might be nonsense depending on what A and B are. Suppose A is Unicorns are white and B is The sky is blue. What this says is If the sky is blue then if unicorns are white then the sky is blue. Suppose the sky is blue. It doesn't matter whether unicorns are white or not. The sky is still blue.
For the reiteration rule (R) see section 16.1 of the forallx text below.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. http://forallx.openlogicproject.org/
thanks, i think i followed the answer, though it's not really in terms i feel comfortable with. you did lose me a bit when talking about unicorns. do you mean that if it's true that the sky is blue then it's blue whatever else is? what's the phrase for that?
– another_name
May 9 at 17:41
@another_name I put the unicorns in there to give an example of an argument that might be considered nonsense. It could be any statement.
– Frank Hubeny
May 9 at 17:43
sure, i'm not complaining about the 'unicorn'. i'm just still not sure what it means to say that B > [A > B]. does it have a name?
– another_name
May 9 at 17:43
@another_name It could be thought of as reiteration. One could remove the A entirely and write B > B. It could be viewed as a circular argument or begging the question. This would be fallacious unless all sides in an argument accepted B.
– Frank Hubeny
May 9 at 18:59
Very good answer, the drawings are widely understood by any philosopher. You may also note that even A may be equivalent to non-B, so B -> (~B -> B). Sure, if a scientific theory holds B and ~B, it is pretty useless (which is a pragmatic point of view), but nevertheless, B holds.
– rexkogitans
May 10 at 11:26
|
show 2 more comments
When proving a conditional one assumes the antecedent, B. The goal is not to derive this, but from this assumption to derive the consequent which happens to be A > B.
But A > B is another conditional. Since it is a conditional one derives that in the same way. First assume the antecedent, A. Can one derive B? Yes, one can, because in this derivation we already assumed B in the first line. We can "reiterate" it in the next line as B.
Since from A one can derive B in a subproof, one can introduce a conditional (→I), A > B. This is the same as saying: if I have A, I can derive B. It is a shorthand for that subproof.
But if one has A > B one can introduce the conditional again and write B > [A > B].
Here's the result using the proof checker linked to below:
What this shows is that the argument is valid. The argument might be nonsense depending on what A and B are. Suppose A is Unicorns are white and B is The sky is blue. What this says is If the sky is blue then if unicorns are white then the sky is blue. Suppose the sky is blue. It doesn't matter whether unicorns are white or not. The sky is still blue.
For the reiteration rule (R) see section 16.1 of the forallx text below.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. http://forallx.openlogicproject.org/
thanks, i think i followed the answer, though it's not really in terms i feel comfortable with. you did lose me a bit when talking about unicorns. do you mean that if it's true that the sky is blue then it's blue whatever else is? what's the phrase for that?
– another_name
May 9 at 17:41
@another_name I put the unicorns in there to give an example of an argument that might be considered nonsense. It could be any statement.
– Frank Hubeny
May 9 at 17:43
sure, i'm not complaining about the 'unicorn'. i'm just still not sure what it means to say that B > [A > B]. does it have a name?
– another_name
May 9 at 17:43
@another_name It could be thought of as reiteration. One could remove the A entirely and write B > B. It could be viewed as a circular argument or begging the question. This would be fallacious unless all sides in an argument accepted B.
– Frank Hubeny
May 9 at 18:59
Very good answer, the drawings are widely understood by any philosopher. You may also note that even A may be equivalent to non-B, so B -> (~B -> B). Sure, if a scientific theory holds B and ~B, it is pretty useless (which is a pragmatic point of view), but nevertheless, B holds.
– rexkogitans
May 10 at 11:26
|
show 2 more comments
When proving a conditional one assumes the antecedent, B. The goal is not to derive this, but from this assumption to derive the consequent which happens to be A > B.
But A > B is another conditional. Since it is a conditional one derives that in the same way. First assume the antecedent, A. Can one derive B? Yes, one can, because in this derivation we already assumed B in the first line. We can "reiterate" it in the next line as B.
Since from A one can derive B in a subproof, one can introduce a conditional (→I), A > B. This is the same as saying: if I have A, I can derive B. It is a shorthand for that subproof.
But if one has A > B one can introduce the conditional again and write B > [A > B].
Here's the result using the proof checker linked to below:
What this shows is that the argument is valid. The argument might be nonsense depending on what A and B are. Suppose A is Unicorns are white and B is The sky is blue. What this says is If the sky is blue then if unicorns are white then the sky is blue. Suppose the sky is blue. It doesn't matter whether unicorns are white or not. The sky is still blue.
For the reiteration rule (R) see section 16.1 of the forallx text below.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. http://forallx.openlogicproject.org/
When proving a conditional one assumes the antecedent, B. The goal is not to derive this, but from this assumption to derive the consequent which happens to be A > B.
But A > B is another conditional. Since it is a conditional one derives that in the same way. First assume the antecedent, A. Can one derive B? Yes, one can, because in this derivation we already assumed B in the first line. We can "reiterate" it in the next line as B.
Since from A one can derive B in a subproof, one can introduce a conditional (→I), A > B. This is the same as saying: if I have A, I can derive B. It is a shorthand for that subproof.
But if one has A > B one can introduce the conditional again and write B > [A > B].
Here's the result using the proof checker linked to below:
What this shows is that the argument is valid. The argument might be nonsense depending on what A and B are. Suppose A is Unicorns are white and B is The sky is blue. What this says is If the sky is blue then if unicorns are white then the sky is blue. Suppose the sky is blue. It doesn't matter whether unicorns are white or not. The sky is still blue.
For the reiteration rule (R) see section 16.1 of the forallx text below.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. http://forallx.openlogicproject.org/
answered May 9 at 17:35
Frank HubenyFrank Hubeny
11.5k51564
11.5k51564
thanks, i think i followed the answer, though it's not really in terms i feel comfortable with. you did lose me a bit when talking about unicorns. do you mean that if it's true that the sky is blue then it's blue whatever else is? what's the phrase for that?
– another_name
May 9 at 17:41
@another_name I put the unicorns in there to give an example of an argument that might be considered nonsense. It could be any statement.
– Frank Hubeny
May 9 at 17:43
sure, i'm not complaining about the 'unicorn'. i'm just still not sure what it means to say that B > [A > B]. does it have a name?
– another_name
May 9 at 17:43
@another_name It could be thought of as reiteration. One could remove the A entirely and write B > B. It could be viewed as a circular argument or begging the question. This would be fallacious unless all sides in an argument accepted B.
– Frank Hubeny
May 9 at 18:59
Very good answer, the drawings are widely understood by any philosopher. You may also note that even A may be equivalent to non-B, so B -> (~B -> B). Sure, if a scientific theory holds B and ~B, it is pretty useless (which is a pragmatic point of view), but nevertheless, B holds.
– rexkogitans
May 10 at 11:26
I take a statement to be ridiculous iff it contains or implies a contradiction. That being said, B --> (A --> B) is not ridiculous, but it is a tautology, which makes it vacuous.
I feel the previous answers overcomplicated things, but here are two short arguments showing that it is a tautology.
1. B --> (A --> B) Assumption
2. B --> (~A v B) 1, Implication
3. ~B v (~A v B) 2, Implication
4. ~B v (B v ~A) 3, Commutation
5. (~B v B) v ~A 4, Association
You get the same solution using the rule of Exportation
1. B --> (A --> B) Assumption
2. (B ^ A) --> B 1, Exportation
3. ~(B ^ A) v B 2, Implication
4. (~B v ~A) v B 3, De Morgan
5. (~A v ~B) v B 4, Commutation
6. ~A v (~B v B) 5, Association
7. (~B v B) v ~A 6, Commutation
~B v B is logically true. It is trivial to show that (~B v B) v ~A is then also logically true.
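As an independent cross-check (not part of the proofs above), a brute-force truth table confirms the formula is a tautology; here is a minimal Python sketch:
from itertools import product

def implies(p, q):
    # Material conditional: p --> q is false only when p is true and q is false.
    return (not p) or q

# Evaluate B --> (A --> B) on every assignment of A and B.
rows = [(a, b, implies(b, implies(a, b))) for a, b in product([True, False], repeat=2)]
for a, b, value in rows:
    print(f"A={a!s:5} B={b!s:5}  B --> (A --> B) = {value}")
print("Tautology:", all(value for _, _, value in rows))
Running it prints True in the last column for all four rows.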
(@FrankHubeny Thanks for keeping me honest and reminding me to include the Commutation steps in both proofs).
edited May 10 at 18:55
answered May 9 at 23:14
Rob
There is also a commutative operation after line 3 in the first proof: (~A v B) is rewritten as (B v ~A). In the second, shouldn't the second line be ~B v (A --> B)? Regardless, this is another way to show the result. +1
– Frank Hubeny
May 9 at 23:24
You are correct, I did skip the commutation move in the first proof. I'll fix that. In the second proof at line 2 I used a rule called Exportation which says P--> (Q-->R) <---> (P ^ Q)-->R. Thanks for the vote!
– Rob
May 9 at 23:34
What does "contain a contradiction" mean?
– Jishin Noben
May 10 at 8:10
A → (B → C) is basically another way to write (A ∧ B) → C. So B → (A → B) is better described not as "every fact deductively follows from every other" but as "every fact deductively follows from itself together with any other". It is a tautology, since the "any other" part is redundant.
The original statement would only be wrong if you meant that every statement follows from every other statement without involving itself. And if you actually mean fact, note that facts are true, so they don't need to follow from anything once you already know they are facts.
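To spell out the "another way to write" claim for this particular formula, here is the standard equivalence chain (exportation plus the material-conditional rewrite), given in LaTeX:
\begin{align*}
B \to (A \to B) &\equiv \neg B \lor (\neg A \lor B) && \text{material conditional, applied twice}\\
&\equiv (\neg B \lor \neg A) \lor B && \text{associativity of } \lor\\
&\equiv \neg (B \land A) \lor B && \text{De Morgan}\\
&\equiv (B \land A) \to B && \text{material conditional}
\end{align*}
The last line makes the redundancy visible: the conjunct A contributes nothing, since B alone already yields B.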
answered May 10 at 12:02
user23013
thanks, this was helpful
– another_name
May 10 at 13:57
It is a tautology.
• If B is true, then A -> B is true (a conditional with a true consequent is true). So B -> (A -> B) becomes B -> T, and B -> T is T.
• If B is false, then B -> (A -> B) becomes F -> (A -> B), and F -> anything is always true. Either way, B -> (A -> B) is true.
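The same case analysis written out as a full truth table over the four assignments of A and B:
A  B  A -> B  B -> (A -> B)
T  T    T          T
T  F    F          T
F  T    T          T
F  F    T          T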
answered May 10 at 1:16
Asım Kaya
It doesn't say that "every fact deductively follows from every other", it says that if you have already deduced B, you can deduce it from anything else.
– Eliran
May 9 at 17:17
is there a phrase for that, one that i can google @Eliran?
– another_name
May 9 at 17:38
I do not think this tautology has an established name. If you rewrite it as B→(¬A∨B) it is just an instance of disjunction introduction. Alternatively, if B is derivable from nothing then it surely is derivable from some A, whatever A is. In particular, it is even intuitionistically valid.
– Conifold
May 9 at 21:29
Do not google, read a textbook.
– Jishin Noben
May 10 at 8:08
@rexkogitans Embarrassing to say the least. Sensibility/nonsensibility is a semantic notion, not a formal one. Where did you get your "degree" in analytic philosophy again?
– Bertrand Wittgenstein's Ghost
May 10 at 10:31