Interpretation of R output from Cohen's Kappa



I have the following result from carrying out Cohen's kappa in R:

library(irr)

# Rater "o": 100 zeros followed by 100 ones.
# Rater "p": essentially random binary ratings (call set.seed() first to reproduce).
n <- 100
o <- c(rep(0, n), rep(1, n))
p <- c(rbinom(n, 1, 0.5), rbinom(n, 1, 0.51))

k <- kappa2(data.frame(p, o), "unweighted")
k


which outputs:



 Cohen's Kappa for 2 Raters (Weights: unweighted)

Subjects = 200
Raters = 2
Kappa = -0.08

z = -1.13
p-value = 0.258


My interpretation is that the test indicates some disagreement between the two vectors, since kappa is negative. However, given the p-value of 0.258, we cannot say that this disagreement is significant; it may just be down to chance.
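For reference, kappa compares the observed agreement with the agreement expected by chance from each rater's marginal frequencies, $\kappa = (p_o - p_e)/(1 - p_e)$. The following is a minimal sketch (not part of the original output; it reuses p and o from above) that reproduces the kappa2() estimate by hand:

tab <- table(p, o)                                    # 2x2 contingency table of the two raters
po  <- sum(diag(tab)) / sum(tab)                      # observed proportion of agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
(po - pe) / (1 - pe)                                  # Cohen's kappa, matching kappa2()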



If someone could point out anything I'm missing from this interpretation, that would be appreciated.
Tags: hypothesis-testing, model-comparison, agreement-statistics, association-measure, cohens-kappa






asked by baxx




















1 Answer
          From the perspective of an applied analyst:



First, note that disagreement means that when rater A says 1, rater B says 0; it is like how a Pearson correlation of −1 denotes a strong, albeit negative, relationship. The actual null hypothesis here is that what rater A says has no relation to what rater B says.
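To illustrate that null (a quick sketch, not from the original answer): two raters who answer independently should give a kappa near zero.

set.seed(1)
r1 <- rbinom(200, 1, 0.5)                  # rater 1: coin flips
r2 <- rbinom(200, 1, 0.5)                  # rater 2: independent coin flips
kappa2(data.frame(r1, r2), "unweighted")   # kappa should be near 0, p-value large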



I wouldn't make vague yet absolute declarations such as "there seems to be disagreement" (or rather, "there seems to be no agreement"). That is not an appropriate summary of the data without significant background and context. If we had that background and context (such as in a discussion section), we could contribute some nuanced synthesis of the result, pointing to improvements, reasons for disagreement, etc.



To interpret the results: report the percentage agreement, and note whether any one category was more prevalent (a case where % agreement may be high but $\kappa$ may be low). State the kappa statistic and its confidence interval. I often question the worth of a p-value where the null hypothesis is the uninteresting case of "no agreement", but you can quote the p-value and say that the data did not provide evidence that the raters agree.
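A sketch of that reporting checklist, reusing p, o, and k from the question (illustrative only; the confidence interval assumes kappa2()'s return fields k$value and k$statistic and uses the normal-approximation standard error implied by the printed z):

mean(p == o)                       # percentage agreement
table(p, o)                        # check whether one category dominates

se <- k$value / k$statistic        # SE recovered from kappa and z
k$value + c(-1.96, 1.96) * se      # approximate 95% CI for kappa

Alternatively, psych::cohen.kappa() reports confidence boundaries for kappa directly.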






answered by AdamO


























