Discussion about this post

Keller Scholl

I hate to tell someone else about their field, but I think you're only hitting on one component of the problem.

1. AI ethics has, at least from my outside view, coalesced around a particular empirical/metaphysical claim (on the empirical side, that AI is useless or a con; on the metaphysical side, that AI does something different from human brains, is just a stochastic parrot, and can't reason or produce useful text). This view was wrong when the Stochastic Parrots paper was first published, and it has only become more obviously wrong since. And yet I have seen relatively little motion to recognize that. It's not just being obviously wrong: it's the failure to build a track record of accurate predictions.

2. Because of the uniformly critical slant of the field, false claims (about AI and water, for example, which long predate Hao) that are negative about AI go unchallenged, resulting in bad thinking filling the field, bad thinking that outsiders can see. Intellectual diversity is useful, but the field understands itself, it seems, as having an agenda, and that's almost always harmful for making true statements.

3. Because the field is, at least as perceived from the outside, so overwhelmingly left-wing, it's been unable to usefully engage with anti-AI populist conservatives, and its hostility to AI means that the only common ground it can find with people pushing AI regulation is a shared dislike of EAs. So to the extent that the field talks to policymakers, it's limited to a fraction of people on the left.

4. As you point out, the constructive work is good and important. It's also incredibly rare, and I think that tends to make a field have less influence. This also seems to fit, say, sociologists vs. economists: the latter do so much more neutral and constructive work that policymakers can use, rather than just complaining.

5. My personal favorite AI Ethics Moment is the complaints about facial recognition being worse at identifying Black faces. It's obvious that if models had come out that were reliably *better* at identifying Black faces than white faces, that too would have been taken as evidence of anti-Black racism and as something that needed to be changed because of how it empowered the carceral surveillance state. If you make the same critique regardless of the data, you're not providing information. And yet that's what I expect from the field.

I really want AI Ethics to be a vibrant field that raises good points and helps our culture adapt to AI better. I agree that you have raised some important points. But I have little optimism for the field.

Peter Rex

You put your finger on something that's almost philosophically incoherent at its core. If your ethical framework is entirely organized around harm-avoidance, you don't actually have an ethics — you have a risk assessment protocol. A real ethics has to be able to say "this is worth doing, and here's why the benefits justify the costs." Without that capacity, you can't reason about trade-offs at all. You can only ever say no.

And "only ever say no" is a position that real decision-makers — in hospitals, in governments, in companies — will simply route around. They're not going to stop doing things because an ethicist told them it was bad. They're going to stop consulting ethicists. Which is exactly the irrelevance your warning is about.

The water data error in the Hao book is instructive here — not because one factual mistake damns a whole argument, but because of what it reveals about the direction of motivated reasoning. A factor-of-a-thousand mistake in the alarming direction slips through teams of fact-checkers. The same mistake in the reassuring direction would have been caught by the third pair of eyes.

What I think sits just underneath the piece, and isn't quite said explicitly, is the distinction between ethics as a practice and ethics as a posture. A lot of what currently passes for AI ethics is posture — it signals values, performs concern, positions the speaker on the right side of history. That's not nothing, but it's not the same as helping anyone navigate a hard decision. And the incentive structures Königs describes reward posture over practice almost perfectly.

The negative bell only has force if people take it seriously. And people stop taking it seriously when it rings constantly. Remember "The Boy Who Cried Wolf"?


