Dan Gardner, who writes on Substack and who I suspect is more left-wing, wrote a response to the claims that Wikipedia is a "Wokepedia". (Which isn't what you're talking about directly in this article.) One of his points is that "sentiment is not slant", meaning that many articles come out left-coded in a sentiment analysis, but that this isn't necessarily evidence of political slant, because certain topics or people will just naturally have more negatively valenced language around them even in a fully neutral write-up (e.g., Donald Trump). Thoughts on that critique?
https://open.substack.com/pub/dgardner/p/has-wikipedia-become-wokepedia?r=2mmx1&utm_medium=ios
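For what it's worth, his "sentiment is not slant" point is easy to illustrate with a toy lexicon-based scorer, roughly the kind of thing these sentiment analyses do under the hood. Everything below (the mini-lexicon, the example sentence) is made up purely for illustration, not the method used in any of the actual studies:

```python
# Toy illustration of "sentiment is not slant": a naive lexicon-based scorer
# rates a flatly factual sentence as "negative" whenever the underlying events
# carry negative words. The lexicon and sentence are hypothetical.

NEGATIVE = {"convicted", "fraud", "scandal", "riot", "lawsuit", "resigned"}
POSITIVE = {"acquitted", "praised", "honored", "won", "celebrated"}

def naive_sentiment(text: str) -> float:
    """(positive hits - negative hits) / word count, roughly in [-1, 1]."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# A neutrally worded, purely factual sentence about a scandal-prone figure:
neutral_fact = "The senator was convicted of fraud and resigned amid a scandal."
print(naive_sentiment(neutral_fact))  # negative score, with no editorial slant in the wording
```

The real analyses are more sophisticated than this, but the basic confound -- topic valence versus editorial slant -- is the same.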
So he finds that all the evidence points in the same direction, and now we presented survey results supporting the same thing, and he somehow thinks this amounts to nothing because he can find another interpretation in each case. This is not a very reasonable way to argue. The only question I have for him is what kind of evidence he could actually accept, evidence that one could realistically collect? So far we have 1) the obvious and extreme left-wing tilt of the owning organization and its prior CEO, 2) language slant towards the left in articles, 3) a survey of the editors, 4) an analysis of the editors' and admins' user pages, and 5) detailed studies of particular Wiki legal cases, such as those at ArbCom, as well as related policy documents.
I understand the desire to defend Wikipedia, a truly great resource and boon to mankind, but that doesn't mean we get to ignore the political problem. To be fair, the political issue only affects a tiny proportion of articles, probably <1%.
I suppose you could argue for both things: on the one hand, it's a crude overstatement to say that Wikipedia is entirely woke and therefore can't be trusted; on the other hand, many editors and articles have a left-leaning slant, which is problematic (although it only affects at most 1% of articles).
I'd put that under interpretation #4: "The available source material leans left, coloring the content (these being media articles and academic articles)."
We have to keep in mind that it's not radical, by definition, to be on the other half of a 50/50 political divide.
WE ARE… Far Right Wikipedia users…
I was once a Wikipedia editor myself -- true fact. 😉🙂
However, I didn't get to make many posts and edits before getting defenestrated -- a lifetime ban -- for objecting to the claim in the article on transwoman Laurel Hubbard that "she" had "transitioned to female". My tale of woe, my bill of particulars about the usual suspects there who've drunk the gender-ideology Kool-Aid -- a very toxic brew indeed:
https://humanuseofhumanbeings.substack.com/p/wikipedias-lysenkoism
Though maybe of particular note, there's a link there to a post by Larry Sanger -- one of the co-founders of Wikipedia, though he seems to have suffered the same fate as Leon Trotsky -- "Wikipedia Is More One-Sided Than Ever":
https://larrysanger.org/2021/06/wikipedia-is-more-one-sided-than-ever/#comments
An impressive and quite useful resource in many ways. But becoming more and more corrupted by ideological biases of one sort or another -- Lysenkoism writ large.
On the plus side, it won't matter. Next year's AI can rewrite the entire thing without the bias in a day or two, plus add all the things the humans missed. Not that people will be using Wikipedia in the future; they'll just be asking the AIs directly.
AI -- the modern, or latest, version of the Golem. Generally a bad idea to give such a thing any sort of free rein:
https://www.takimag.com/article/stop-with-the-golems-already/
https://en.wikipedia.org/wiki/God_%26_Golem,_Inc.
At the moment, Wikipedia is extensively used by LLMs, chatbots, and other AIs for their training data. This allows biased Wikipedia editors to 'launder' their bias and inject it into yet more information sources. LLMs make Wikipedia more influential.
Even though it would obviously be speculative, it would be interesting to hear more of your thoughts on the potential of AI becoming an impartial source of information, particularly how this could be implemented and by whom. I have found your political opinions, like those of Peter Frost, refreshingly lucid and grounded in reason.
I don't know if it is a good idea, but it seems inevitable. The only important question is which AIs people will be using, and who gets to decide their biases.
On the surface Wikipedia is leftist, and this is an excellent look below that surface at how Wikipedia presents things.
I've long had the hypothesis that a way to address Wikipedia bias would be to quietly get together a group of experienced Wikipedia editors (along with someone, or several someones, with media savvy) and ...
1) Select a few well-patrolled articles whose topics would be interesting to the media [1].
2) Add a few very accurate politically incorrect lines with supporting citations to eminent sources.
3) Wait for the politically incorrect text and citations to be deleted.
4) Enter into the dispute resolution process.
5) Collect evidence of biased conduct by a Wikipedia administrator (it's going to happen).
6) Take the misconduct by the Wikipedia administrator(s) to Administrative Action Review, with a view toward escalating the issue to the Arbitration Committee.
7) Attempt to make the documentation of the administrator's bias as embarrassing to Wikipedia as possible.
Yes, I understand that the above-described process is laborious; that's how woke Wikipedians keep muggles from interfering with their virtuous censorship. The undertaking could be made minimally laborious by focusing *narrowly* on the disputed edit and the administrator's (or administrators') misconduct.
Ref.
[1] For article topic selection, any topic that touches on genes & IQ or race & crime will be well patrolled, but a Trump-related topic is more likely to attract the media attention that Wikipedia wants to avoid. Perhaps a legally rigorous explication of the very thin evidence supporting E. Jean Carroll's accusations would be a good way to start this effort? It is timely (Trump) and salacious -- that might attract media attention.
If we imagine a bimodal distribution with a bell curve on the left with a mean of 3 and a smaller bell curve on the right with a mean of 15, but with the scale artificially capped at 10, then that's the graph we would get. That's a decent description of the Overton window too, where anything right of center is Hitler. You wouldn't expect that to come through in self-description survey data, though.
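A quick simulation makes the censoring idea concrete. The means, spreads, and group sizes below are invented just to reproduce the described shape; they aren't taken from the survey:

```python
# Simulate the described mixture: a large "left" bell curve around 3 and a
# smaller "right" bell curve around 15, observed on a scale artificially
# capped at 10 (and floored at 0). All parameters are illustrative guesses.
import numpy as np

rng = np.random.default_rng(0)
left = rng.normal(loc=3, scale=2, size=8000)    # larger mode, mean 3
right = rng.normal(loc=15, scale=2, size=2000)  # smaller mode, mean 15

observed = np.clip(np.concatenate([left, right]), 0, 10)  # the artificial ceiling

# Crude text histogram: the right-hand mode collapses into a spike at the
# ceiling, so the observed data look like one left hump plus a lump at 10.
counts, edges = np.histogram(observed, bins=np.arange(0, 11, 1))
for lo, c in zip(edges[:-1], counts):
    print(f"{lo:2.0f}-{lo + 1:2.0f} {'#' * (c // 100)}")
```

Whether that censored-mixture story actually fits the editor data is another matter, and, as noted, self-description surveys can't settle it.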