Second reviewer clearly used AI. I pasted the "Conflation of Religiosity and Spirituality" section into https://undetectable.ai/ and it said the text was 1% human.
Haha, I thought that was the most interesting part of the second review, even if it was a bit insubstantial. I did suspect AI because it was a strange thing to point out.
Edit: I've just checked some of my own writings, and I got 99% AI on some parts, despite only using AI lightly for grammar and phrasing checks. On other portions that I had also checked for grammar and phrasing, I got 1%. I don't think the detector really works, but I do suspect that the reviewer used AI to come up with the idea (regardless, it goes without saying that it was a politically motivated retraction). Moreover, the site has an incentive to mark texts as AI so that you use their "humanize" feature.
Indeed, comparing the two reviews makes this glaringly obvious. That makes two reasons for the second reviewer to be fired. Kirkegaard's paper has been rejected because of a criticism written by ChatGPT.
What's the evidence that that site is accurate in detecting AI-written text?
xaxaxaxaxaxaxaxaxa
not sure how they didn't realize this
"By failing to recognize these dimensions, the paper perpetuates stereotypes rather than presenting a balanced analysis of ideological orientations. This not only weakens the theoretical framework but also risks alienating readers and undermining the journal's reputation for objectivity." is so obviously an LLM
> COPE guidelines
The jokes write themselves.
I looked up the COPE retraction guidelines that were cited and they don't list anything that should make this paper qualify.
https://authorservices.wiley.com/ethics-guidelines/retractions-and-expressions-of-concern.html
https://publicationethics.org/guidance/guideline/retraction-guidelines
Welcome to the club.
Wiley it is, I think.
From my general reading, it's mostly Wiley doing the politicisation rather than the journal itself.
Another aspect is their new rule for complaints, which reminds me of Wikipedia activist editors:
have a rule system that is optimal for left-wing activists, as they are the ones likely to utilise and benefit from it.
Had to renew my subscription to Emil Kirkegaard (I keep my subscriptions pared down) so I could comment. Fantastic article requiring deeper perusal.
Just want to remark that I have been a Prolific subject for public-service reasons but stopped because they seem to be focusing on marketing research, which is probably lucrative.
There is also a perplexing requirement that desktops be used but not laptops. I tried doing one on my laptop and I think it submitted. If anyone knows more about this issue, I am interested.
Splendid work, and it's good to see you can defend it on objective grounds. I wouldn't worry if they push you to publish elsewhere; if anything, that seems to be a badge of honour these days!
The first reviewer actually makes good points, for instance about the use of non-peer-reviewed works to justify an extraordinary claim (genetic roots of political views, in this case). More importantly, the critique of the language associating conservatism with positive traits and liberalism with negative traits is valid.

I don't agree that the discussion section doesn't matter much. Data, data analysis, and model fits by themselves mostly just produce decontextualized numbers that could be explained in many different ways. The discussion is there to provide the context needed to reach a conclusion relevant to the paper's topic, which makes it almost as important as the methodology. Your associative language (especially the "less attractive" remark on page 7, which is not backed up by any data in the study or by any cited literature) raises a valid concern that the conclusions drawn from your data analysis and model-fitting outputs are driven less by the methods' output than by your confirmation bias.

Finally, the fact that you're not using the same scale as Lahtinen wouldn't really matter if you were just doing a study on which mental-health traits correlate with conservatism or wokeness, but since one of the stated central goals of your study is to evaluate whether Lahtinen's measures are biased, it's common sense that to produce a reliable evaluation of a measure you need to use that exact same measure. Yes, you reran your code, but you actually need to put that into the paper; otherwise it's a critical error warranting a retraction.
Second reviewer: not so much. At best, the second reviewer concurs with the points the first reviewer makes (which are valid). But the first critique, that the paper leaves out spirituality, could be addressed with a note in the discussion of religiosity along the lines of "That being said, spiritual practices do exist among non-religious people, and a potential topic for future work could be the extent to which non-religious spiritual practices replicate religion's effect on mental health", rather than with a full rewrite or a retraction. Same with the lack of nuance in describing liberalism: that could also be addressed with a clear definition of liberalism in the introduction, which the article then sticks to religiously, rather than with a full rewrite. In general, this review is a lot sloppier than the first reviewer's.
Still, based on the critique of the first reviewer, if I were a reviewer I would probably personally ask that the paper be completely rewritten before it can be accepted again.
Also, your point that “Our paper is the 9th most read paper in their journal of all time according to altmetrics” - that’s a textbook example of the ad populum fallacy. Methodological soundness and relevance to the journal’s subject should determine what gets into journals. Popularity should have no bearing.
That said, unreliable work does fall through the cracks of peer review, since peer reviewers don't have time to look at everything (such as all the references), and some problems cannot be caught from the paper alone, like complete fabrication of data (as with the cold fusion paper from 1989) or methodological problems not apparent from what is written in the paper (as with the LK-99 superconductor paper from 2023). And I would grant that in the social sciences, due to the human element, a left-leaning work might be less likely to attract the kind of complaints about, say, the discussion section that lead to a second round of peer review, because there are many more left-leaning people in the academy. Personally, I would rather report unreliable work to the journals wherever it exists than criticize the decision to retract the paper.
Which works were non-peer-reviewed? Everything cited was peer-reviewed, including the books.
As for the other book supporting that claim, it does pass that basic check of "do you trust the publisher to uphold high standards?". In my original comment I dismissed it as not peer-reviewed under the principle that "books are not peer-reviewed". I think this attitude comes from the fact that my Ph.D. advisor always told me to cite the source from which every idea came, and unless it was already well-established knowledge in the field, that source would as a rule be a journal article. Though maybe it's different in your field than it is in physics (which is my field).
To my understanding, books are not peer reviewed the same way academic journal articles are; they are reviewed by their editors. Most books are not peer reviewed, but a few scholarly books go through publishers whose review process is effectively the same as peer review. And you can tell whether a book is effectively peer reviewed by looking at the publisher and asking whether you trust that publisher to uphold high standards.
I don't think that your 2023 book falls under that for the purposes of the claim it supports in the first paragraph. It's published by Imperium Press, and having looked at their website and background, they are a publishing house whose main purpose is to create teaching and academic material from the perspective of a subset of conservatism that wants to bring back the worldviews of the past. Hence it's safe to say that its editorial review process follows that perspective too. People who agree with that perspective will see the book as reliable; people who don't will see it as unreliable. My understanding is that few outside that subset of conservatism believe that conservatism is part of an evolved fitness factor. Thus, supporting the claim with a work reviewed and published by a publisher that aligns its review process with that subset of conservatism is a strong argument only to those already inclined to believe the claim. And that's not a strong argument at all.
Go get 'em, killer. Data will out!
Congrats! Perhaps republish someplace like that journal of heterodox ideas.
Outrageous. A forced retraction should be reserved for cases of proven fraud or other ethical violations... not because the editor was able to scrounge up a couple of post-publication referees with their panties in a bunch.
Congrats!
In the end, you did nothing wrong.
Peer review is fine for the hard sciences, but so many reviewers elsewhere can't keep their own political biases out of the equation. Instead of pretending that reviewers are "neutral", it would be better if there were lists of known conservative and progressive reviewers and journals had to choose an even number from each. At least there would be no pretence.
The retraction is definitely uncalled for. It has my attention since I recently published on this topic in the U.S. My work did not have a full scale, and I would love to do that in the future. But I did find that, at times, identity politics completely mediated the well-being differences between conservatives and liberals. Under the right conditions, I would love to do further work to explore whether ID politics leads to lower levels of well-being, whether it attracts individuals with lower levels of well-being, or, what I think happens, both. Looking forward to hearing about the resolution that you can work out with the journal.
https://onlinelibrary.wiley.com/doi/full/10.1111/socf.12966
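For anyone wondering what "completely mediated" means in the comment above, here is a minimal sketch with simulated data; the variable names, effect sizes, and the simple regression-based check are illustrative assumptions, not the analysis from the linked paper.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data in which the ideology -> well-being link runs entirely
# through the mediator ("complete mediation"). All names and effect sizes
# are made up for illustration only.
rng = np.random.default_rng(0)
n = 5000
conservatism = rng.normal(size=n)
id_politics = -0.6 * conservatism + rng.normal(size=n)   # mediator
wellbeing = -0.5 * id_politics + rng.normal(size=n)      # outcome depends only on the mediator
df = pd.DataFrame({"conservatism": conservatism,
                   "id_politics": id_politics,
                   "wellbeing": wellbeing})

# Total effect: ideology predicts well-being on its own (coefficient near 0.3 here).
total = smf.ols("wellbeing ~ conservatism", data=df).fit()
# Direct effect: once the mediator is controlled for, the ideology coefficient
# shrinks toward zero, which is what "complete mediation" looks like.
direct = smf.ols("wellbeing ~ conservatism + id_politics", data=df).fit()
print(total.params["conservatism"], direct.params["conservatism"])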