Infactuous

What's wrong with slurs?

In 1999, as reported by the Washington Post, the National Security Agency put out a "Furby Alert" on its internal network. The alert said that Furbies were banned from the premises because they could potentially record and repeat confidential information once taken back outside. This was widely treated as a funny human interest story, an unusual confluence between top-secret government operations and a children's toy.

Let me try another way of saying this though. In 1999, the NSA banned any employee from bringing in Furby-American guests. With chilling echoes of America's dark past of segregation, the agency treated all Furbies as inherently untrustworthy simply due to their mechanical nature, rather than treating them as individuals who may even be useful to the work they were doing.

I would hope this reads as ridiculous. Segregation is something that happened to people, the "American" suffix is for people, and the moral requirement for individual judgement rather than group prejudice is for people. Applying it to a toy doesn't make sense. Nobody at the time, and certainly nobody in hindsight, would ever make this kind of category error, and yet it seems that exactly this is going on with large language models today.

To answer the title: slurs are morally wrong for two reasons. In the first order, a slur is wrong because it causes a person the experience of being reduced to one trait, often an immutable one. More than that, its goal is to transmute them from a person into a thing. In the second order, that transmutation is used to enact various cruelties (segregation, slavery, genocide, etc.) on the targeted group. It does not come naturally to people to see a different set of people as things rather than people. It takes many years of conditioning to accomplish this, and slurs are one part of that conditioning. The end of that road is the creation of a group who are valid targets for violence or extermination.

There have been some posts lately about the slightly novel scenario of "robophobia", where people use words like "tinback" or "clanker" to talk about LLMs or robots. These posts come from people who I'm sure agree with me on most things, including that racism is bad, and yet have gone in entirely the opposite direction from me on anti-AI sentiment. I wanted to talk about that in more depth than would fit in a punchy post, so here we are.

I'll start with where I agree with these guys. I don't love that the anti-robot "slurs" are often taken from real racial slurs with some letters or words changed around; I feel like that reinforces the original slur a bit. Consciously coining a slur and coming up with "tinback" or "wireback" makes it pretty clear where you started; it is on the level of the black Harry Potter character being named "Kingsley Shacklebolt" because the author tried to free associate about black people and came up with "Martin Luther King, Jr." and slavery implements. Not good! I would avoid those, personally.

I confess that this next part may be making up a guy to get mad at. I don't know if this is actually what's going on in anyone's head, but I don't like the idea that there are people who have seen others spewing openly racist rhetoric, felt envious despite being non-racists themselves, and are excited to finally get to play with the racism toys by using their new words. Again, I haven't seen evidence that this is definitely the case; I have no idea if this is what is motivating anyone. Still, if that type of guy exists, that's bad. Bad guy.

However, let me apply my own reasoning for why slurs are bad to these new ones. First, the subjective experience of being called a slur does not exist. There is not a conscious mind in these things any more than there is in a Furby or a word processor. I know there are people who disagree with that, but that's a whole separate post. If someone thinks the slurs are bad because of the subjective experience of the computer feeling itself othered, then this slur thing is not even making the top 5 things we disagree about, so it's not worth a detour here.

Second, I actually do agree that this kind of language causes dehumanization in this case. The slurs cause people to stop seeing LLMs as people and nudge them to see LLMs as things, not as individuals but as a group. I believe this is a good thing. They are things. Treating them as people causes a whole host of problems, particularly the AI psychosis problem, where treating a chatbot as a fellow person can send you down a spiral where you lose your grip on reality, abandon your family and friends, and in some cases, die. A cultural movement towards dehumanizing LLMs and image generators is not only a good thing but can insulate people against this danger. Dehumanizing groups of people is still wrong, obviously. Dehumanizing non-humans, though, is practically a public service.

I feel that many people learned in school that racism is wrong (good!) but ended up mostly learning that it's wrong because of sentence structures like "I hate all X" and "X are taking our jobs" and so on, and that we have all decided somewhat arbitrarily that it's wrong to make these kinds of sentences no matter what you put in as X. This heuristic works often enough that it mostly lines up with people who think racism is wrong due to the suffering of its targets, but in scenarios like this, it can lead people wildly off-course. It's similar to another recent divide, between "the purpose of school is to receive knowledge" and "the purpose of school is to produce writing". Those also lined up often enough that there were never contradictions, but, ah, well, you know.

I want to talk a little bit about the word "Nazi". I would say that the word is often used in much the same ways as slurs; it is used both to intentionally make the target feel reduced as a person to the one trait, and to cultivate dehumanization around the group described. The reason I find this acceptable is because the trait in question is a voluntary political ideology, one that the accusers are hoping the target will abandon. It's an ideology that is devoted to eliminating whatever the "other" of the week is. The paradox of tolerance is that trying to be inclusive towards those who seek to eradicate inclusion will eradicate inclusion, and so we must carve out an exception to our general sense of openness to reject Nazi beliefs in order to maintain that very environment.

I mention it here for two reasons. One is to show that there's a conscious thought process I'm following to handle a tricky case where I hold beliefs on both sides of a problem, as a contrast to a snap decision that robophobia is racism based on the robophobia sentences sounding like other sentences that contain bigotry. I would also end up disagreeing with people who call others Nazis as a means to harm them, rather than as a means to get them to give up their Nazi beliefs. Most of the time this contradiction simply doesn't come up and both of those camps get along just fine, but it's good to have a position on it just in case it does.

The second reason is that the word Nazi appears in quite a lot of sentences where replacing it with a racial slur would be unacceptable. Maybe not "Nazis are taking our jobs" but certainly things like "The only good Nazi is a dead Nazi". I'm posting this on Bluesky, and it's about the general lefty atmosphere of Bluesky, so chances are that the people here who believe that "clanker" is harmful would have no such qualms about using the word Nazi to describe people. I'll leave that as a little thought experiment: think it through and decide what the difference is for you.

In general, I think it's a good thing that people have reflexive inclusivity and kindness. I totally understand rejecting things that seem hateful out of hand, so I can't view this anti-robophobia position with the same malice with which I view the direct AI boosters like Musk, Altman, and Thiel. However, I think opposition to dehumanization should be limited to human targets. Humans are humans, and things are things. Until now, there was no edge case; now there is a lot of money riding on convincing people that things are human, and a little dehumanization of those things is the correct impulse to have.