Casey on AI

AI

1383 Words

2026-02-01 16:31 +0100


Do I Hate AI?

Do I hate AI? No, I don’t think I do. I think AI companies have been behaving very badly, but it’s important to separate the different things involved. There is so much bundled together that it’s easy to conflate unrelated issues.

There’s the question of whether I want to use AI myself. In many cases, I don’t. I don’t like using AI, so I don’t use it. That is not the same thing as hating it. I do hate the way AI companies have been behaving, but that is also not the same as hating AI as a technology.

One of the core problems with discussions about AI is that too many things are wrapped up together. It becomes very easy to paint everything with a broad brush. I try to retain some subtlety there. Since I’ve become a semi-public personality and appear on podcasts and similar venues, I get asked about this a lot, and I’ve always tried to give fairly nuanced answers.

There are many issues worth discussing, and it’s important not to lump everything together. It’s not helpful to say AI is all good or AI is all bad, that it’s revolutionary or that it’s trash. All of these questions matter and need to be considered individually.


Asking the Right Questions About AI

What are the things AI can do for us? What are the dangers of it? Are companies doing things that are criminal? Are companies doing things that are immoral?

These are all separate questions, and they should be answered separately. The actions you want to see taken depend on the answers. That’s why it matters to talk about them individually.

There are plenty of people screaming in one direction or the other. Some insist everyone needs to use AI and that it’s the future. Others insist AI is terrible and ruining the world. You don’t really need me to repeat either of those positions. There are already enough voices doing that.

What I try to do instead is talk about specific things. I think it helps people realize that there are many distinct aspects here. You want to focus on bad behavior and call it out, and you also want to recognize the things that are unambiguously good.


Unambiguously Good and Bad Uses

Consider a hypothetical example of something that is clearly good.

Suppose I passively collect footage from cameras mounted in cars, the way a Tesla does. I don’t use it to track customers. I don’t sell the data. I don’t do anything nefarious with it. I use it only to train an AI system, and that AI becomes very good at emergency braking, preventing customers from getting into fatal car accidents.

To me, that is an unambiguous good use of AI. Nothing was stolen. No one was manipulated. The data was used strictly to train a system that saves lives. That’s it. We can imagine uses like this that are simply good, end of story.

On the other end of the spectrum, there are things that are obviously bad.

Some AI companies have, on the record, literally pirated people’s materials. They didn’t even pay for the originals they used in their training data. To me, that is completely unambiguous: if a consumer did that, they would go to jail, and these companies should go to jail too.

So those are the two ends of the spectrum: at one end, people should be in jail for doing thing A; at the other, people are doing thing B, which is an unalloyed good. Between those extremes is a huge middle ground of other cases, and all of them deserve discussion.


Why I Personally Don’t Use AI

Personally, I don’t currently use AI, and my reasons are personal: the satisfaction I derive from my work comes from doing the thing myself.

This isn’t unique to AI. It’s also why I don’t have subordinates. I don’t manage other programmers because I don’t derive satisfaction from telling someone else to write a program. Similarly, I don’t enjoy having someone else do my programming for me.

I like participating in the discussion because it’s important, but it affects me only tangentially. My reasons for not using AI aren’t really about whether it works well, whether it’s moral, or whether companies stole data. Those discussions matter for society at large, and I care about them in that sense, but they don’t matter much to my day-to-day life because I’m simply not going to use AI.


Different Reasons People Avoid AI

There are many valid reasons someone might avoid AI.

Some people avoid it because of the immorality of data theft. They might want AI companies prosecuted or large settlements paid to authors. Others might avoid AI simply because they think it’s not very good. They look at the code it produces and decide it’s not up to their standards.

These are very different concerns. As AI improves, people in the second group might change their minds once the output crosses a quality threshold. That threshold will probably never matter to me, because I’m not judging AI on output quality. I don’t enjoy using it, and if I don’t enjoy using it, it doesn’t matter how good the code is.

For me, much of this discussion just passes me by. I don’t really care how good the output is; I just don’t enjoy using it.


AI as an Advanced Search and Recombination Engine

AI feels to me like an advanced search and recombination engine. I understand why people are excited about it, because for many programmers, that is effectively their job. They are asked to find pieces, combine them, and make something work.

AI shows promise in doing that. It can search for how to do a particular JavaScript thing with some service, find relevant snippets, and combine them. It may not do a perfect job, but it often does enough that it’s faster than manually searching, copying, and pasting.
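To make concrete the kind of work I mean, here is a hypothetical sketch of that sort of glue code: fetch something from a web service, reshape the response, and hand it to the rest of the program. The endpoint and field names are invented for the example; the point is the shape of the work, not the specifics.

```typescript
// Hypothetical illustration of fetch-and-reshape glue code, the kind of thing
// an AI assistant typically searches for and recombines from existing snippets.
// The URL and response fields below are made up for the example.
type WeatherReport = { city: string; tempC: number };

async function fetchTemperature(city: string): Promise<WeatherReport> {
  // Call a (made-up) service endpoint and check for an HTTP error.
  const res = await fetch(
    `https://api.example.com/weather?city=${encodeURIComponent(city)}`
  );
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);

  // Reshape the JSON response into the structure the rest of the app expects.
  const data = await res.json();
  return { city, tempC: data.temperature_celsius };
}
```

None of this is hard; it is searching, copying, and adapting, which is exactly why an AI that automates it appeals to so many people.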

That is genuinely useful. I understand the excitement.

But I don’t want to do that job. I already didn’t want to do that kind of work, which is another reason AI is less exciting to me personally.


The Workforce Risk: Hollowing Out the Pipeline

My real worry about AI is its impact on the workforce. It might outcompete junior developers without ever improving enough to justify its cost; the companies betting on it could then fail, leaving a gap of many years in the developer pipeline.

This is a very well-founded concern. If AI becomes as good as a great programmer, then we don’t have much to worry about. We tell the AI what to do, it writes the software, and the software works.

The scarier world is one where it never gets there.

If AI becomes good enough to replace junior or intermediate programmers, but never good enough to replace experts, we end up in a dangerous situation. There are no entry-level jobs. Juniors never become experts. The existing experts age out and retire, and the pipeline is hollowed out.

That’s how you get a great software crash. This is my biggest fear about AI. Not that it’s too good, but that it’s not quite good enough.


A Plausible and Dangerous Future

What makes this especially scary is that AI companies already seem comfortable flooding the market with low-quality tools. If they can make money doing that, there may be no forcing function to do the really hard remaining work: the last 10% required to make AI as good as the best programmers, not just mediocre ones.

If they never do that work because they don’t have to, we’re in serious trouble.

This scenario feels more plausible to me than some of the more dramatic AGI hypotheticals. A world where companies replace 50% of the workforce, extract enormous value, and then stop innovating sounds very much like patterns we’ve already seen in Silicon Valley. It doesn’t require artificial general intelligence or a massive breakthrough.

It only requires doing just enough.

You can imagine AI tools hollowing out junior programming, collecting money on a per-token basis, allowing companies to hire fewer engineers, and gradually weakening the entire system. Unfortunately, that sounds plausible.

The future is impossible to predict. I don’t know what the most likely outcome is. But that particular scenario does sound worryingly realistic to me.
