Interpretation of R output from Cohen's Kappa
I have the following result from carrying out Cohen's kappa in R:

library(irr)
n = 100
o = c(rep(0, n), rep(1, n))
p = c(rbinom(n, 1, 0.5), rbinom(n, 1, 0.51))
k = kappa2(data.frame(p, o), "unweighted")
k

which outputs:

 Cohen's Kappa for 2 Raters (Weights: unweighted)

 Subjects = 200
   Raters = 2
    Kappa = -0.08

        z = -1.13
  p-value = 0.258
My interpretation of this: the test indicates that there seems to be disagreement between the two vectors, as kappa is negative. However, given the p-value of 0.258, we can't say that this disagreement is significant; it may just be down to chance.

If someone could point out anything I'm missing from this interpretation, that would be appreciated.
hypothesis-testing model-comparison agreement-statistics association-measure cohens-kappa
asked Apr 19 at 14:08 by baxx; edited Apr 19 at 17:32
Please use seeded random data (set.seed()) so we get a reproducible example. Also, try other package implementations such as DescTools::CohenKappa(), which gives you lower and upper confidence limits that might be more meaningful for deciding whether you can conclude there was no agreement or disagreement. – smci, Apr 23 at 8:45
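Following that suggestion, a minimal reproducible sketch (the seed value 42 is arbitrary, and DescTools is assumed to be installed and to accept a 2x2 confusion table here):

library(irr)
library(DescTools)   # assumed available; provides CohenKappa()

set.seed(42)                     # fix the random draws so the result is reproducible
n = 100
o = c(rep(0, n), rep(1, n))
p = c(rbinom(n, 1, 0.5), rbinom(n, 1, 0.51))

kappa2(data.frame(p, o), "unweighted")       # point estimate, z and p-value
CohenKappa(table(p, o), conf.level = 0.95)   # kappa with lower and upper confidence limits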
1 Answer
From the perspective of an applied analyst:

First, note that disagreement here means that when rater A says 1, rater B says 0; it is like how a Pearson correlation of -1 denotes a strong, albeit negative, relationship. The actual null hypothesis is that what rater A says has no relation to what rater B says.
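As a minimal sketch of what that null means in numbers (using the p and o vectors from the question; the variable names are just illustrative), kappa compares the observed agreement with the agreement expected if the two raters were unrelated:

tab = table(p, o)                                    # 2x2 cross-table of the two ratings
p_o = sum(diag(tab)) / sum(tab)                      # observed proportion of agreement
p_e = sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected under "no relation"
(p_o - p_e) / (1 - p_e)                              # Cohen's kappa; negative when p_o < p_e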
I wouldn't make such vague yet absolute declarations as "there seems to be disagreement" (or rather, "there seems to be no agreement"). That is not really an appropriate summary of the data without significant background and context. If we had that background and context (such as in a discussion section), we could contribute some nuanced synthesis of the result, pointing to improvements or reasons for disagreement, etc.

To interpret the results:

- report the percentage agreement, and note whether any one category was more prevalent (a case where % agreement may be high but $\kappa$ may be low; see the sketch after this list)
- state the kappa statistic and its confidence interval
- I often question the worth of a p-value where the null hypothesis is a stupid case of "no agreement", but you can quote the p-value and say that the data did not provide evidence that the raters agree.
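A sketch of the first point, with hypothetical counts (not the question's data): when one category is rare, percentage agreement can be high while kappa is near zero or even negative.

tab = matrix(c(90, 5,
                5, 0), nrow = 2, byrow = TRUE)       # 100 subjects; both raters mostly say "0"
p_o = sum(diag(tab)) / sum(tab)                      # 0.90: raters agree on 90% of subjects
p_e = sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # about 0.905 expected by chance alone
(p_o - p_e) / (1 - p_e)                              # kappa is about -0.05 despite 90% agreement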
answered Apr 19 at 14:30 by AdamO; edited Apr 23 at 11:00 by smci