What is a common way to tell if an academic is “above average,” or outstanding in their field? Is their h-index (Hirsch index) one of them?


Score: 16















If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?



Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (e.g. one professor has had more chairs than another).



Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?










Tags: professors, bibliometrics, ranking, evaluation

asked May 3 at 4:05 by user151841
edited May 3 at 4:33 by Nate Eldredge

  • Answers in comments and general advice have been moved to chat. Please read this FAQ before posting another comment. – Wrzlprmft, May 7 at 7:18
























7 Answers


















Score: 72














Yes, there is one and only one standard method that is universally employed by reputable academic institutions worldwide. This is how you evaluate a researcher:



  1. Read their papers.

  2. Attend one of their talks.

  3. Ask the opinion of other experts in the field.

This is how hiring committees and promotion committees do their job. There are no shortcuts. Parts 1 and 2 require that you have some relevant expertise; if not, then you must rely entirely on part 3.



Every academic is regularly asked to give an expert opinion through reference letters, which are often expected to include some sort of ranking (e.g. "Assistant Prof. X should be promoted because she is clearly as talented as, or better than, Prof. Y, who was recently promoted at Prestigious University Z"). How does one justify this kind of claim in a reference letter? You guessed it:



  1. Read their papers.

  2. Attend their talks.





answered May 3 at 10:28 by David Ketcheson, edited May 3 at 12:22




















  • Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice. – Dmitry Savostyanov, May 3 at 11:18

  • @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? This happens a lot in fields like economics and the social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypotheses. If you are a researcher who publishes work that goes against established dogma, you are unlikely to be given good references. – Tryer, May 3 at 15:05

  • @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you. – sgf, May 3 at 15:10

  • @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences, history, and economics, where people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance? He has a good set of arguments that, when taken to their logical extreme, would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work, and which university would invite him for talks? – Tryer, May 3 at 15:17

  • @Tryer Which is why you have to read his stuff and come up with an opinion on it yourself. What sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers. – sgf, May 3 at 15:52



















Score: 26















If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?




No. As a rule of thumb, this isn't the kind of thing that you can measure with a metric. Elvis Presley was the king of rock and roll. Why? Is it because he pumped out more albums than the others? Because he sold more? Because journalists wrote more about his albums than the others'? No. It was because he was the king and few people contested that. It's the same in academia. Either you can say that someone is "noted" in the field and be reasonably confident that you won't be contested when saying that, or you can't. If you can't, then you should avoid it, on pain of looking pretentious or like a toady.




Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (e.g. one professor has had more chairs than another).




In general, you can't compare people. This isn't a video game; people don't have a numeric level associated with their academic ability, with someone at 12 being better than someone at 5. It doesn't work like that. There is a multitude of factors, most often not measurable or not comparable. Trying to make a sum out of these and comparing the result for two different people will only lead to crap. Ask any hiring committee whether determining who is the best candidate for a job is easy, let alone determining who is the best researcher.




Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?




God no. Of all the metrics, you've probably picked one of the worst ones. If I write two dozen pieces of trash that all cite one another and publish them with a vanity press, I will have a great h-index. Will I be a good researcher? No. On the other hand, if I write a single article in my whole life solving the Riemann hypothesis, I would probably become one of the most famous mathematicians in the world overnight, but my h-index would be crap.
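
To make those two scenarios concrete, here is a minimal sketch of how the h-index is computed: it is the largest h such that the author has at least h papers with at least h citations each. The citation counts below are hypothetical, invented purely to illustrate the contrast above.

    # Minimal sketch of the h-index computation. The citation counts are
    # hypothetical, chosen only to mirror the two scenarios in this answer.
    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    citation_ring = [23] * 24    # two dozen papers, each cited by the other 23
    riemann_solver = [5000]      # a single, massively cited article

    print(h_index(citation_ring))   # 23 -- looks "great"
    print(h_index(riemann_solver))  # 1  -- looks "crap"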






answered May 3 at 8:24 by user108384























  • Could you suggest a metric which is better than the h-index? – Dmitry Savostyanov, May 3 at 8:56

  • @DmitrySavostyanov Why would you want any metric at all? – Massimo Ortolano, May 3 at 9:39

  • @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself. – knzhou, May 3 at 10:57

  • Comments are not for extended discussion; this conversation has been moved to chat. – eykanal, May 7 at 16:45


















Score: 9














If you want to rank two professors against each other, you might be tempted to use the h-index. Don't. As many of the other answers point out, it's a severely flawed metric, and it doesn't really tell you a lot.



However, if you want to figure out whether a given professor can reasonably be described as "noted" or "outstanding", that is a quite different question. And here, yes indeed, I would say that you can use certain indicators, namely awards, honors, and prizes. I do not think anyone disputes that a scientist holding a Nobel Prize is outstanding. (Peace and Literature, maybe not so much.) The same goes for a mathematician who wins the Fields Medal or the Abel Prize.



Many societies award fellowships. To get one of those, you have to demonstrate academic excellence, and often also things like service to the society in question, outreach, teaching etc. The advantage is that the "overall package" a professor offers has already been evaluated by people who are presumably experts in the field. For instance, here is a list of the Fellows of the International Institute of Forecasters, which I happen to be involved with. Some of the Fellows are a bit contentious, but nobody from the field would dispute their being noted.



Best paper awards are similar.



Of course, you need to use a little expertise in deciding whether a Best Paper Award from a journal on Beall's list is truly a mark of excellence, or whether a Fellowship from an academic society that offers little more than a one-page web presence is. But unless you go with the extremely well-known marks of excellence, like the prizes I noted above, there is simply no shortcut that avoids having at least a passing knowledge of the field.



And note that this allows you to decide whether someone is distinguished or not. It won't tell you whether A is "more distinguished" than B, as one might try to use the h-index to indicate, which, as I argue above, is impossible.






Score: 7















Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field?

The generally accepted method for assessing a particular scholar's merit is to familiarize oneself with their work. Such an assessment requires a solid basis of expert knowledge.

I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way.

This applies to any "metric", although some are worse than others. Any sound assessment would have to be qualitative and require some substantive engagement with the scholar's work. Therefore, any comparison, to the extent that it would be useful at all, could only point out qualitative differences that don't lend themselves to a ranking, "except in a sort of gross" way.






  • Qualitative differences DO lend themselves to a ranking, just as much as quantitative differences: "condition XXXX increases by YY.Y% the likelihood of a patient to develop disease ZZZZZ within K years". Condition XXXX can be quantitative (age, number of alcohol units per week, etc.) or qualitative (sex, existence of other diseases). They are equally good for building a ranking, and qualitative variables may very well be more important than the quantitative ones. – famargar, May 4 at 8:15



















Score: 3














The negative answer to this question applies far more broadly than academia: is there a metric for the best car? Best parent? Best programming language? Smartest person? No, because all these things have many orthogonal dimensions that simply can't be collapsed into one without unacceptable information loss. Researchers can be creative, well funded, methodical, hard working, well versed in the literature, collaborative with peers/students, etc.

I concur with David Ketcheson's answer on what to do instead.






  • Your comment assumes that no multidimensional problem can be analytically quantified. However, this comment comes a decade after machine learning and artificial intelligence (in English: algorithms solving complex multidimensional problems) became pervasive in our society, and a few decades after the mathematical/statistical groundwork to do so was laid. – famargar, May 10 at 11:05

  • It also misses important contributions to psychology and behavioural economics that proved when and how humans make wrong decisions (hint: all the time ;) – famargar, May 10 at 11:08


















Score: 2














All the metrics that are used (e.g., number of first/senior authorships, sum of impact factors, percentile ranks of impact factors, citations, h-index, grants and other funding, etc.) have their advantages and many more disadvantages. Nevertheless, they are used in hiring processes in one way or another, because otherwise it is not possible to assess the several hundred candidates who apply for a faculty position. Which of these factors are important differs greatly between sub-fields. Only the candidates scoring at the top in these metrics will make it to the interview, where other factors might then count as well.

For someone who is not familiar with a certain field, the easiest (but still not always correct) way to see how good a professor might be is the name of the university. For example, a professor at Cambridge will most likely have achieved a lot in their life, whereas someone at a no-name place will not have made much of an impact on other people in the same field; and if such a person does make a big impact one day, they will most likely get offers to move to a place with a better name.






Score: 1















If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?

The only "generally approved" quantitative metric is the h-index. The h-index is OK for your task, as it allows you to define above or below average. As a matter of fact, this is the way some national education systems stamp their professors as good enough for tenure. It is also agreed that it is not "good enough": famously, Peter Higgs, winner of the 2013 Nobel Prize in Physics, would fail miserably in a ranking based on the h-index alone, as he published very few papers, although with huge citation counts. Also, the h-index is a measure of lifetime achievement and thus needs to be corrected for academic age. Which brings us to the next point.




Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (e.g. one professor has had more chairs than another).

Other, mostly qualitative metrics are regularly used, consciously or not, in academics' minds, although no official ranking exists. I will mention a few, the ordering only reflecting the stage in an academic career:



1. institution where the PhD was obtained
2. PhD supervisor
3. national prizes
4. national grants
5. number of PhD students supervised
6. chairs at institutions or conferences
7. international prizes
8. academic success of the PhD students mentored
9. more I could not think of right now :)



Is it theoretically possible to create a "ranking" of professors in their fields, by some metric?

Of course it is; there is an entire field about it, called scientometrics. You would have to 1) fix the known limitations of the h-index and 2) combine it with the variables above to come up with a more comprehensive algorithm that would rank any researcher in any field. The reasons why this has not been done before are twofold. First, it is not at all easy to define objectively how much weight every metric listed here should have in the ranking algorithm. Second, and most importantly, academics rank each other every day, for jobs, promotions, accepting papers or conference contributions, prizes, etc. However, they prefer their ranking algorithm to suit their individual minds, rather than adopting a common framework.
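
As a toy illustration of what such a combination could look like, here is a purely hypothetical sketch: the fields, the weights, and the age correction are all invented for the sake of the example, not an established scientometric method.

    # Hypothetical composite ranking score. Every field and weight below is
    # an assumption made up for illustration; nothing here is standard.
    from dataclasses import dataclass

    @dataclass
    class Researcher:
        h_index: int
        academic_age: int     # years since the PhD
        national_prizes: int
        phd_students: int     # number of PhD students supervised

    # Assumed weights: defining these objectively is exactly the hard part.
    WEIGHTS = {"h_per_year": 1.0, "national_prizes": 2.0, "phd_students": 0.5}

    def composite_score(r: Researcher) -> float:
        # Correct the lifetime h-index for academic age, as noted above.
        h_per_year = r.h_index / max(r.academic_age, 1)
        return (WEIGHTS["h_per_year"] * h_per_year
                + WEIGHTS["national_prizes"] * r.national_prizes
                + WEIGHTS["phd_students"] * r.phd_students)

    print(composite_score(Researcher(h_index=30, academic_age=15,
                                     national_prizes=1, phd_students=8)))  # 8.0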




Could their h-index serve as such a metric?

As described above, the h-index has many limitations that make it impractical for most purposes. But an entire field of research exists around it, scientometrics, so rest assured there will be developments.






  • I disagree with many of the things you say: the h-index is not a good metric, because it has led to more or less submerged citation rings that artificially boost that metric to get promotions. Second, "academics like to rank": who said that? Most of the academics I know are certainly not interested in ranking anyone. – Massimo Ortolano, May 3 at 9:54

  • Academics rank other academics all the time, for a job, a promotion, a prize. However, they don't like to use commonly shared algorithms for doing that; I will specify that. The h-index is good in the sense that it is well-defined, sensible, and allows comparisons. It is not a comprehensive metric, however, as I specifically write. – famargar, May 3 at 9:59

  • The fact that they have to rank for certain specific purposes doesn't mean that they like to rank. Furthermore, as I said, the h-index brought several distortions into scientific publishing, and that's definitely not good. – Massimo Ortolano, May 3 at 10:05

  • Thanks. I did edit the answer, integrating your inputs. I think I made it clear enough that the h-index is not good enough. As for the bias induced by ranking, this applies to any ranking, explicit or not. See the fact that most US professors come from Ivy League schools: nobody attached points to it, yet most committees do take that into account. – famargar, May 3 at 10:17

  • Since academics rank each other all the time for jobs, promotions, prizes, etc., do they use the h-index as the whole or as a part of that ranking calculation? – user151841, May 3 at 13:43













      7 Answers
      7






      active

      oldest

      votes








      7 Answers
      7






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      72














      Yes, there is one and only one standard method that is universally employed by reputable academic institutions worldwide. This is how you evaluate a researcher:



      1. Read their papers.

      2. Attend one of their talks.

      3. Ask the opinion of other experts in the field.

      This is how hiring committees and promotion committees do their job. There are no shortcuts. Parts 1-2 require that you have some relevant expertise; if not then you must rely entirely on #3.



      Every academic is regularly asked to give an expert opinion through reference letters, which often are expected to include some sort of ranking (e.g. "Assistant Prof. X should be promoted because she is clearly as talented/better than Prof. Y who was recently promoted at Prestigious University Z"). How does one justify this kind of claim in the reference letter? You guessed it:



      1. Read their papers.

      2. Attend their talks.





      share|improve this answer




















      • 11





        Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice.

        – Dmitry Savostyanov
        May 3 at 11:18






      • 8





        @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? Happens a lot in fields like economics and social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypothesis. If you are a researcher that publishes works that go against established dogma you are unlikely to be given good references.

        – Tryer
        May 3 at 15:05






      • 6





        @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you.

        – sgf
        May 3 at 15:10






      • 3





        @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences and history and economics where the people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance. The guy has a good set of arguments that when taken to their logical extreme would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work and which university will invite him for talks?

        – Tryer
        May 3 at 15:17







      • 3





        @Tryer Which is why you have to read his stuff and come up with an opinion on them yourself. Which sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers.

        – sgf
        May 3 at 15:52
















      72














      Yes, there is one and only one standard method that is universally employed by reputable academic institutions worldwide. This is how you evaluate a researcher:



      1. Read their papers.

      2. Attend one of their talks.

      3. Ask the opinion of other experts in the field.

      This is how hiring committees and promotion committees do their job. There are no shortcuts. Parts 1-2 require that you have some relevant expertise; if not then you must rely entirely on #3.



      Every academic is regularly asked to give an expert opinion through reference letters, which often are expected to include some sort of ranking (e.g. "Assistant Prof. X should be promoted because she is clearly as talented/better than Prof. Y who was recently promoted at Prestigious University Z"). How does one justify this kind of claim in the reference letter? You guessed it:



      1. Read their papers.

      2. Attend their talks.





      share|improve this answer




















      • 11





        Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice.

        – Dmitry Savostyanov
        May 3 at 11:18






      • 8





        @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? Happens a lot in fields like economics and social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypothesis. If you are a researcher that publishes works that go against established dogma you are unlikely to be given good references.

        – Tryer
        May 3 at 15:05






      • 6





        @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you.

        – sgf
        May 3 at 15:10






      • 3





        @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences and history and economics where the people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance. The guy has a good set of arguments that when taken to their logical extreme would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work and which university will invite him for talks?

        – Tryer
        May 3 at 15:17







      • 3





        @Tryer Which is why you have to read his stuff and come up with an opinion on them yourself. Which sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers.

        – sgf
        May 3 at 15:52














      72












      72








      72







      Yes, there is one and only one standard method that is universally employed by reputable academic institutions worldwide. This is how you evaluate a researcher:



      1. Read their papers.

      2. Attend one of their talks.

      3. Ask the opinion of other experts in the field.

      This is how hiring committees and promotion committees do their job. There are no shortcuts. Parts 1-2 require that you have some relevant expertise; if not then you must rely entirely on #3.



      Every academic is regularly asked to give an expert opinion through reference letters, which often are expected to include some sort of ranking (e.g. "Assistant Prof. X should be promoted because she is clearly as talented/better than Prof. Y who was recently promoted at Prestigious University Z"). How does one justify this kind of claim in the reference letter? You guessed it:



      1. Read their papers.

      2. Attend their talks.





      share|improve this answer















      Yes, there is one and only one standard method that is universally employed by reputable academic institutions worldwide. This is how you evaluate a researcher:



      1. Read their papers.

      2. Attend one of their talks.

      3. Ask the opinion of other experts in the field.

      This is how hiring committees and promotion committees do their job. There are no shortcuts. Parts 1-2 require that you have some relevant expertise; if not then you must rely entirely on #3.



      Every academic is regularly asked to give an expert opinion through reference letters, which often are expected to include some sort of ranking (e.g. "Assistant Prof. X should be promoted because she is clearly as talented/better than Prof. Y who was recently promoted at Prestigious University Z"). How does one justify this kind of claim in the reference letter? You guessed it:



      1. Read their papers.

      2. Attend their talks.






      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited May 3 at 12:22

























      answered May 3 at 10:28









      David KetchesonDavid Ketcheson

      29.4k690142




      29.4k690142







      • 11





        Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice.

        – Dmitry Savostyanov
        May 3 at 11:18






      • 8





        @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? Happens a lot in fields like economics and social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypothesis. If you are a researcher that publishes works that go against established dogma you are unlikely to be given good references.

        – Tryer
        May 3 at 15:05






      • 6





        @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you.

        – sgf
        May 3 at 15:10






      • 3





        @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences and history and economics where the people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance. The guy has a good set of arguments that when taken to their logical extreme would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work and which university will invite him for talks?

        – Tryer
        May 3 at 15:17







      • 3





        @Tryer Which is why you have to read his stuff and come up with an opinion on them yourself. Which sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers.

        – sgf
        May 3 at 15:52













      • 11





        Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice.

        – Dmitry Savostyanov
        May 3 at 11:18






      • 8





        @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? Happens a lot in fields like economics and social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypothesis. If you are a researcher that publishes works that go against established dogma you are unlikely to be given good references.

        – Tryer
        May 3 at 15:05






      • 6





        @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you.

        – sgf
        May 3 at 15:10






      • 3





        @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences and history and economics where the people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance. The guy has a good set of arguments that when taken to their logical extreme would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work and which university will invite him for talks?

        – Tryer
        May 3 at 15:17







      • 3





        @Tryer Which is why you have to read his stuff and come up with an opinion on them yourself. Which sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers.

        – sgf
        May 3 at 15:52








      11




      11





      Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice.

      – Dmitry Savostyanov
      May 3 at 11:18





      Well, this is perhaps how committees and panels should work, but definitely not how every hiring committee works in practice.

      – Dmitry Savostyanov
      May 3 at 11:18




      8




      8





      @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? Happens a lot in fields like economics and social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypothesis. If you are a researcher that publishes works that go against established dogma you are unlikely to be given good references.

      – Tryer
      May 3 at 15:05





      @David Ketcheson The method you propose is not fool-proof. What if there is a cabal of researchers who dismiss contrary opinions because they have a lot at stake? Happens a lot in fields like economics and social sciences that make unfalsifiable claims or else simply use fake statistics to "empirically prove" their hypothesis. If you are a researcher that publishes works that go against established dogma you are unlikely to be given good references.

      – Tryer
      May 3 at 15:05




      6




      6





      @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you.

      – sgf
      May 3 at 15:10





      @Tryer Well, that's what points 1 and 2 are there for. If the researcher in question is right, but neither you nor their colleagues can tell, I don't think there's any rating available that will do that job for you.

      – sgf
      May 3 at 15:10




      3




      3





      @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences and history and economics where the people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance. The guy has a good set of arguments that when taken to their logical extreme would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work and which university will invite him for talks?

      – Tryer
      May 3 at 15:17






      @sgf People who have unorthodox ideas very rarely get to publish or get invited to talks. This happens more in fields like the social sciences and history and economics where the people make unfalsifiable claims. How many orthodox economics departments invite for talks folks like Nassim Nicholas Taleb, for instance. The guy has a good set of arguments that when taken to their logical extreme would lead to dismantling finance and economics departments in universities. Which "respected" journal would publish his work and which university will invite him for talks?

      – Tryer
      May 3 at 15:17





      3




      3





      @Tryer Which is why you have to read his stuff and come up with an opinion on them yourself. Which sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers.

      – sgf
      May 3 at 15:52






      @Tryer Which is why you have to read his stuff and come up with an opinion on them yourself. Which sort of metric could you imagine that would help there, short of making everything a popularity contest? There's no metric that will allow you to extract truth from papers.

      – sgf
      May 3 at 15:52












      26















      If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?




      No. As a rule of thumb, this isn't the kind of thing that you can measure with a metric. Elvis Presley was the king of rock and roll. Why? Is it because he pumped out more albums than the others? Because he sold more? Because journalists wrote more about his albums than the others'? No. It was because he was the king and few people contested that. It's the same in academia. Either you can say that someone is "noted" in the field and be reasonably confident that you won't be contested when saying that, or you can't. If you can't, then you should avoid it, on pain of looking pretentious or like a toady.




      Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (i.e. one professor has had more chairs than another)




      In general, you can't compare people. This isn't a video game, people don't have a numeric level associated to their academic ability, someone with a 12 being better than someone with a 5. It doesn't work like that. There is a multitude of factors, most often not measurable or not comparable. Trying to make a sum out of these and comparing the result for two different people will only lead to crap. Ask any hiring committee if determining who is the best candidate for a job is easy, let alone determining who is the best researcher.




      Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?




      God no. Of all the metrics, you've probably picked one of the worse ones. If I write two dozens pieces of trash that all cite one another and publish them in vanity press, I will have a great h-index. Will I be a good researcher? No. On the other hand, if I write a single article in my whole life solving the Riemann hypothesis, then I would probably become one of the most famous mathematician in the world overnight, but my h-index will be crap.






      share|improve this answer























      • Could you suggest a metric which is better than h-index?

        – Dmitry Savostyanov
        May 3 at 8:56






      • 14





        @DmitrySavostyanov Why would you want any metric at all?

        – Massimo Ortolano
        May 3 at 9:39






      • 3





        @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself.

        – knzhou
        May 3 at 10:57












      • Comments are not for extended discussion; this conversation has been moved to chat.

        – eykanal
        May 7 at 16:45















      26















      If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?




      No. As a rule of thumb, this isn't the kind of thing that you can measure with a metric. Elvis Presley was the king of rock and roll. Why? Is it because he pumped out more albums than the others? Because he sold more? Because journalists wrote more about his albums than the others'? No. It was because he was the king and few people contested that. It's the same in academia. Either you can say that someone is "noted" in the field and be reasonably confident that you won't be contested when saying that, or you can't. If you can't, then you should avoid it, on pain of looking pretentious or like a toady.




      Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (i.e. one professor has had more chairs than another)




      In general, you can't compare people. This isn't a video game, people don't have a numeric level associated to their academic ability, someone with a 12 being better than someone with a 5. It doesn't work like that. There is a multitude of factors, most often not measurable or not comparable. Trying to make a sum out of these and comparing the result for two different people will only lead to crap. Ask any hiring committee if determining who is the best candidate for a job is easy, let alone determining who is the best researcher.




      Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?




      God no. Of all the metrics, you've probably picked one of the worse ones. If I write two dozens pieces of trash that all cite one another and publish them in vanity press, I will have a great h-index. Will I be a good researcher? No. On the other hand, if I write a single article in my whole life solving the Riemann hypothesis, then I would probably become one of the most famous mathematician in the world overnight, but my h-index will be crap.






      share|improve this answer























      • Could you suggest a metric which is better than h-index?

        – Dmitry Savostyanov
        May 3 at 8:56






      • 14





        @DmitrySavostyanov Why would you want any metric at all?

        – Massimo Ortolano
        May 3 at 9:39






      • 3





        @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself.

        – knzhou
        May 3 at 10:57












      • Comments are not for extended discussion; this conversation has been moved to chat.

        – eykanal
        May 7 at 16:45













      26












      26








      26








      If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?




      No. As a rule of thumb, this isn't the kind of thing that you can measure with a metric. Elvis Presley was the king of rock and roll. Why? Is it because he pumped out more albums than the others? Because he sold more? Because journalists wrote more about his albums than the others'? No. It was because he was the king and few people contested that. It's the same in academia. Either you can say that someone is "noted" in the field and be reasonably confident that you won't be contested when saying that, or you can't. If you can't, then you should avoid it, on pain of looking pretentious or like a toady.




      Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (i.e. one professor has had more chairs than another)




      In general, you can't compare people. This isn't a video game, people don't have a numeric level associated to their academic ability, someone with a 12 being better than someone with a 5. It doesn't work like that. There is a multitude of factors, most often not measurable or not comparable. Trying to make a sum out of these and comparing the result for two different people will only lead to crap. Ask any hiring committee if determining who is the best candidate for a job is easy, let alone determining who is the best researcher.




      Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?




      God no. Of all the metrics, you've probably picked one of the worse ones. If I write two dozens pieces of trash that all cite one another and publish them in vanity press, I will have a great h-index. Will I be a good researcher? No. On the other hand, if I write a single article in my whole life solving the Riemann hypothesis, then I would probably become one of the most famous mathematician in the world overnight, but my h-index will be crap.






      share|improve this answer














      If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?




      No. As a rule of thumb, this isn't the kind of thing that you can measure with a metric. Elvis Presley was the king of rock and roll. Why? Is it because he pumped out more albums than the others? Because he sold more? Because journalists wrote more about his albums than the others'? No. It was because he was the king and few people contested that. It's the same in academia. Either you can say that someone is "noted" in the field and be reasonably confident that you won't be contested when saying that, or you can't. If you can't, then you should avoid it, on pain of looking pretentious or like a toady.




      Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (i.e. one professor has had more chairs than another)




      In general, you can't compare people. This isn't a video game, people don't have a numeric level associated to their academic ability, someone with a 12 being better than someone with a 5. It doesn't work like that. There is a multitude of factors, most often not measurable or not comparable. Trying to make a sum out of these and comparing the result for two different people will only lead to crap. Ask any hiring committee if determining who is the best candidate for a job is easy, let alone determining who is the best researcher.




      Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?




      God no. Of all the metrics, you've probably picked one of the worse ones. If I write two dozens pieces of trash that all cite one another and publish them in vanity press, I will have a great h-index. Will I be a good researcher? No. On the other hand, if I write a single article in my whole life solving the Riemann hypothesis, then I would probably become one of the most famous mathematician in the world overnight, but my h-index will be crap.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered May 3 at 8:24









      user108384user108384

      29913




      29913












      • Could you suggest a metric which is better than h-index?

        – Dmitry Savostyanov
        May 3 at 8:56






      • 14





        @DmitrySavostyanov Why would you want any metric at all?

        – Massimo Ortolano
        May 3 at 9:39






      • 3





        @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself.

        – knzhou
        May 3 at 10:57












      • Comments are not for extended discussion; this conversation has been moved to chat.

        – eykanal
        May 7 at 16:45

















      • Could you suggest a metric which is better than h-index?

        – Dmitry Savostyanov
        May 3 at 8:56






      • 14





        @DmitrySavostyanov Why would you want any metric at all?

        – Massimo Ortolano
        May 3 at 9:39






      • 3





        @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself.

        – knzhou
        May 3 at 10:57












      • Comments are not for extended discussion; this conversation has been moved to chat.

        – eykanal
        May 7 at 16:45
















      Could you suggest a metric which is better than h-index?

      – Dmitry Savostyanov
      May 3 at 8:56





      Could you suggest a metric which is better than h-index?

      – Dmitry Savostyanov
      May 3 at 8:56




      14




      14





      @DmitrySavostyanov Why would you want any metric at all?

      – Massimo Ortolano
      May 3 at 9:39





      @DmitrySavostyanov Why would you want any metric at all?

      – Massimo Ortolano
      May 3 at 9:39




      3




      3





      @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself.

      – knzhou
      May 3 at 10:57






      @DmitrySavostyanov Asking other academics. Or better, evaluating them for yourself.

      – knzhou
      May 3 at 10:57














      Comments are not for extended discussion; this conversation has been moved to chat.

      – eykanal
      May 7 at 16:45





      Comments are not for extended discussion; this conversation has been moved to chat.

      – eykanal
      May 7 at 16:45











      9














      If you want to rank two professors against each other, you might be tempted to use the h-index. Don't. As many of the other answers point out, it's a severely flawed metric, and it doesn't really tell you a lot.



      However, if you want to figure out whether a given professor can reasonably be described as "noted" or "outstanding", then that is a quite different question. And here, yes indeed, I would say that you can use certain indicators, namely awards, honors and prizes. I do not think anyone disputes that a scientist holding a Nobel prize is outstanding. (Peace and literature, maybe not so much.) If a mathematician wins the Fields medal or the Abel prize, the same.



      Many societies award fellowships. To get one of those, you have to demonstrate academic excellence, and often also things like service to the society in question, outreach, teaching etc. The advantage is that the "overall package" a professor offers has already been evaluated by people who are presumably experts in the field. For instance, here is a list of the Fellows of the International Institute of Forecasters, which I happen to be involved with. Some of the Fellows are a bit contentious, but nobody from the field would dispute their being noted.



      Best paper awards are similar.



      Of course, you need to use a little expertise in deciding whether a Best Paper Award from a journal on Beall's list is truly a mark of excellence, or whether a Fellowship from an academic society that offers little more than a one-page webpresence is. But unless you go with the extremely well-known marks of excellence like the prizes I noted above, there is simply no shortcut that will avoid having at least a passing knowledge of the field.



And note that this allows you to decide whether someone is distinguished or not. It won't tell you whether A is "more distinguished" than B, as one might try to use the h-index to indicate. Which, as I argue above, is impossible.
















          answered May 3 at 11:21









Stephan Kolassa





















              7















              Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field?




              The generally accepted method for assessing a particular scholar's merit is to familiarize oneself with their work. Such an assessment requires a solid basis of expert knowledge.




              I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way.




              This applies to any "metric", although some are worse than others. Any sound assessment would have to be qualitative and require some substantive engagement with the scholar's work. Therefore, any comparison, to the extent that it would be useful at all, could only point out qualitative differences that don't lend themselves to a ranking, "except in a sort of gross" way.






              answered May 3 at 10:25









henning












• Qualitative differences DO lend themselves to a ranking, just as much as quantitative differences: "condition XXXX increases by YY.Y% the likelihood of a patient to develop disease ZZZZZ within K years". Condition XXXX can be quantitative (age, number of alcohol units per week, etc.) or qualitative (sex, existence of other diseases). They are equally good for building a ranking, and qualitative variables may very well be more important than the quantitative ones.
  – famargar
  May 4 at 8:15






























              3














The negative answer to this question holds far beyond academia: is there a metric for the best car? Best parent? Best programming language? Smartest person? No, because all these things have many orthogonal dimensions that simply can't be collapsed into one without unacceptable information loss. Researchers can be creative, well funded, methodical, hard working, well versed in the literature, collaborative with peers/students, etc.

I concur with David Ketcheson's answer on what to do instead.






              answered May 6 at 5:50









Eliot Gillum












• Your comment assumes that no multidimensional problem can be analytically quantified. However, this comment comes a decade after machine learning and artificial intelligence (in English: algorithms solving complex multidimensional problems) became pervasive in our society, and a few decades after the mathematical/statistical groundwork to do so was laid.
  – famargar
  May 10 at 11:05












• It also misses important contributions from psychology and behavioural economics that showed when and how humans make wrong decisions (hint: all the time ;)
  – famargar
  May 10 at 11:08






















              2














All the metrics that are used (e.g. number of first/senior authorships, sum of impact factors, percentile ranks of impact factors, citations, h-index, grants and other funding, etc.) have their advantages and many more disadvantages. Nevertheless, they are used in hiring processes in one way or another, because otherwise it is not possible to assess the several hundred candidates who apply for a faculty position. Which of these factors are important differs greatly between sub-fields. Only the candidates scoring at the top of these metrics will make it to the interview, where other factors may then count as well.

For someone who is not familiar with a certain field, the easiest (but still not always correct) way to gauge how good a professor might be is the name of the university: e.g., a professor at Cambridge will most likely have achieved a lot. Someone at a no-name place will probably not have made an impact that impressed other people in the same field, and if such a person does make a big impact one day, they will most likely get offers to move to a place with a better name.
















                  answered May 3 at 11:11









lordy





















                      1















                      If one claimed that a particular scholar was "above average" or
                      "noted" in their field, is there any good metric by which to support
                      or deny such a claim?




The only "generally approved" quantitative metric is the h-index. The h-index is a well-defined metric and is OK for this task, as it allows you to define above or below average. As a matter of fact, this is how some national education systems stamp their professors as good enough for tenure. It is also agreed that it is not "good enough" - famously, Peter Higgs, 2013 Nobel laureate in Physics, would fail miserably in a ranking based on the h-index alone, as he published very few papers, although with huge citation counts. Also, the h-index is a measure of lifetime achievement and thus needs to be corrected for academic age. Which brings us to the next point.
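For concreteness, here is a minimal Python sketch of how the h-index is computed from a list of per-paper citation counts; the citation numbers below are made up purely to illustrate the Higgs point:

    def h_index(citations):
        # h-index: the largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # A Higgs-like profile: very few papers, each hugely cited.
    print(h_index([2000, 1500, 900, 400]))  # 4 - capped by the number of papers
    # A prolific profile of moderately cited papers.
    print(h_index([30] * 40))               # 30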




                      Is there a generally accepted way to indicate that a particular
                      professor or scholar is outstanding, or above average, in their field?
                      I understand there are certain indicators, such as chairs, endowments,
                      prizes, etc. But these don't really seem to help to compare one
                      scholar to another, except in a sort of gross, simple count way (i.e.
                      one professor has had more chairs than another)




Other, mostly qualitative metrics are regularly used, consciously or not, in academics' minds, although no official ranking exists. I will mention a few, the ordering only reflecting the stage in an academic career:



1. Institution where the PhD was obtained
2. PhD supervisor
3. national prizes
4. national grants
5. number of PhD students supervised
6. chairs at institutions or conferences
7. international prizes
8. academic success of PhD students mentored
9. more I could not think of right now :)



                      Is it theoretically possible to create a "ranking" of professors in
                      their fields, by some metric?




Of course it is; there is an entire field about it, called scientometrics. You have to 1) correct for the h-index's known limitations and 2) combine it with the variables above to come up with a more comprehensive algorithm that will rank any researcher in any field. The reasons why this has not been done before are twofold. First, it is not easy at all to define objectively how much every metric listed here should weigh in the ranking algorithm. Second, and most importantly, academics rank every day - for jobs, promotions, accepting papers or conference contributions, prizes, etc. However, they prefer their ranking algorithm to suit their individual minds, rather than adopting a common framework.
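To make the weighting problem concrete, here is a toy composite-score sketch in Python; the metric names and weights are invented for this illustration and are not part of any accepted framework:

    # Toy composite ranking. Every weight here is an arbitrary choice,
    # which is exactly the objectivity problem described above.
    WEIGHTS = {"h_index": 0.4, "national_prizes": 0.2,
               "grants": 0.2, "students_supervised": 0.2}

    def composite_score(profile):
        # profile maps metric name -> value already normalised to [0, 1]
        return sum(w * profile.get(metric, 0.0) for metric, w in WEIGHTS.items())

    a = {"h_index": 0.9, "grants": 0.3, "students_supervised": 0.6}
    b = {"h_index": 0.5, "national_prizes": 1.0, "grants": 0.6}
    print(composite_score(a), composite_score(b))  # 0.54 vs 0.52
    # Shift weight from h_index to national_prizes and the order of a and b flips.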




                      Could their h-index serve as such a metric?




As described above, the h-index has many limitations that make it impractical for most purposes. But an entire field of research exists around it - scientometrics - so rest assured there will be developments.






                      edited May 3 at 12:07

























                      answered May 3 at 9:42









famargar







• 4
  I disagree with many of the things you say: h-index is not a good metric because it led to more or less submersed citation rings to artificially boost that metric and get promotions. Second, "academics like to rank": who said that? Most of the academics I know are certainly not interested in ranking anyone.
  – Massimo Ortolano
  May 3 at 9:54

• 1
  Academics rank other academics all the time: for a job, a promotion, a prize. However, they don't like to use commonly shared algorithms for doing that. I will specify that. The h-index is good in the sense that it is well-defined, sensible and allows comparisons. It is not a comprehensive metric, however, as I specifically write.
  – famargar
  May 3 at 9:59

• 1
  The fact that they have to rank for certain specific purposes doesn't mean that they like to rank. Furthermore, as I said, the h-index brought several distortions into scientific publishing, and that's definitely not good.
  – Massimo Ortolano
  May 3 at 10:05

• Thanks. I did edit the answer, integrating your inputs. I think I made it clear enough that the h-index is not good enough. As for the bias induced by ranking, this applies to any ranking, explicit or not. See the fact that most US professors come from Ivy League schools - nobody attaches points to it, yet most committees do take it into account.
  – famargar
  May 3 at 10:17

• Since academics rank each other all the time for jobs, promotions, prizes, etc., do they use the h-index as the whole or as a part of that ranking calculation?
  – user151841
  May 3 at 13:43



























