"There isn't any good reason to believe anything is morally wrong, so here is a reason why X is morally wrong."
Aid to Africa mostly makes things worse for Africans. If you trade with Africa instead, they have the money to buy mosquito nets or whatever it is you want to give them.
See e.g. this rant - https://magatte.substack.com/p/how-mrbeast-is-keeping-africa-poor
If EA people actually want to help Africans, they need to use the aid money to buy African products.
A lot of work involved to contextualise that quote, but worth it, I would say. Somewhat of a banger.
What's the issue with 'their' in "a creature generally believed to be endowed with the propensity to ignore their [sic] own drowning children"? Seems fine to me as "creature" is implicitly plural, though I presume "their" should be "his or her".
These two sentences hit so close to home:
"These are laudable humanitarian goals. However, they don't do anything to further humanity's long-term interests."
In 2021-23 I helped to restart Poland's chapter of Effective Altruism, funding, at times, 50-80% of the organization's activities. To some extent the decision to stop was driven by the fact that the organization became more and more self-sufficient and self-funding, but there were other reasons as well.
My exact words when I decided to stop supporting the Polish EA org ~2 years ago were:
"For many years I have had doubts about what proportions of my activities should be for helping the weak and needy and what for pushing humanity forward. In the last year I have had more need to act for the latter type of activity."
Human children are "weak and needy". It stands to reason that, as a successful species, we have a strong drive to nurture them (and comparable members of our group), but that is not the only reason for our success (so far).
Doesn't utilitarianism also rely on the realist, normative claim that pleasure or happiness is the highest good and the ultimate goal? Seems to be a subjective moral intuition underlying utilitarianism, the same as any other moral system.
One could similarly set up a mathematical philosophy for calculating which outcomes result in the greatest amount of virtue distributed throughout the world. That wouldn't automatically make virtue ethics the greatest moral system in the world.
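A toy sketch of the point above (my own illustration, with made-up outcomes and scores, not from the essay): the expected-value machinery is value-agnostic, so the exact same argmax works whether you plug in "happiness" or "virtue" as the thing to maximize.

```python
# Hypothetical outcomes with made-up scores; the point is only that
# the selection machinery doesn't care which value function it gets.
outcomes = {
    "donate":       {"happiness": 10, "virtue": 3},
    "keep_promise": {"happiness": 4,  "virtue": 9},
}

def best_outcome(options, value_fn):
    """Pick the option that maximizes the given value function."""
    return max(options, key=value_fn)

# Same argmax, different value function plugged in:
utilitarian_pick = best_outcome(outcomes, lambda o: outcomes[o]["happiness"])
virtue_pick = best_outcome(outcomes, lambda o: outcomes[o]["virtue"])

print(utilitarian_pick)  # donate
print(virtue_pick)       # keep_promise
```

Having a formal maximization procedure tells you nothing about which value function is the right one to maximize, which is the commenter's point.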
What I like about moral systems other than utilitarianism is that they have a built-in restraint to them. Whereas utilitarianism is so convinced of its correctness, and so focused on maximizing happiness, that it justifies some craaaaazy shit... like Bentham's Bulldog writing an essay endorsing paving over nature because it would eliminate wild animals and therefore minimize suffering.
It's almost like utilitarians are reward maximisers, who end up misspecified and doing ludicrous shit, whereas other ethical agents are more akin to cybernetic minimizers, who are more concerned with tracking and minimizing their own negative behaviour. Read this short page about the dangers of reward maximisers vs. cybernetic minimizers: https://slimemoldtimemold.com/2025/04/10/the-mind-in-the-wheel-part-viii-artificial-intelligence/
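The contrast can be sketched in a few lines (hypothetical numbers and update rules of my own, not taken from the linked page): a reward maximiser climbs its proxy signal without any stopping point, while a setpoint minimiser only acts to shrink the error between its current state and a target, and goes quiet once the error reaches zero.

```python
def maximiser_step(proxy_reward, gain=1.0):
    # Always climbs: more proxy reward is always "better", so the
    # behaviour never terminates, even past any sensible point.
    return proxy_reward + gain

def minimiser_step(state, setpoint, gain=0.5):
    # Moves toward the setpoint; the update vanishes as error -> 0.
    error = setpoint - state
    return state + gain * error

reward = 0.0
state = 10.0
for _ in range(50):
    reward = maximiser_step(reward)
    state = minimiser_step(state, setpoint=37.0)

print(reward)           # grows without bound: 50.0 after 50 steps
print(round(state, 3))  # settles at the setpoint: 37.0
```

The minimiser is self-limiting by construction; the maximiser has no concept of "enough", which is roughly the misspecification worry the comment gestures at.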
No. You are on the wrong (railway) track.
In theory, utilitarianism allows for almost any concept of desirable states; in practice, hedonism as the primary measure of utils always seems to get smuggled in.
But hedonism applies to an individual, while utilitarianism applies to the group of which the individual is a part. The extent of the group is what people argue over, i.e. family/kin/clan/ethnicity/nation, through to... all living things.