AI (your email)
The email was quite passionate. It's clear that those of you opposed to AI don't want to read about it, so I'll label AI posts the way I do political posts (how long has it been since I mentioned that jack-booted thugs are running the country now?).
Some excerpts from your email:
1.
I’d be more comfortable with the conversations around AI if we just called it what it ACTUALLY is - plagiarism engines.
The reason I want to use that term is that dialogue and debate about "AI" obfuscates the trade-off we are making by embracing or legitimizing these tools... By calling it "AI" we are invoking some sci-fi thing where the "consequence" is "maybe it gets smarter than us someday", which is not how LLMs or any of this actually work. Progress in all forms typically comes with consequences for segments of the populace, but being realistic about what those consequences are is important.
I don't agree that "maybe it gets smarter than us someday" is not how an LLM works. It's impossible to define those terms, which makes the statement impossible to evaluate. What can be said, with absolute certainty, is that these models have made stunning leaps forward in the last 24 months, increasing their utility substantially. Does that mean they'll ever be smarter than humans? I don't know. Do they need to be?
2.
I do want to question your assertion that there's no putting it back. I don't think that's true at all. There are lots of technologies that we, as a society, have decided to put back once their harms have become clear. Leaded gasoline, asbestos: there have been lots of cases where we as a society have decided that the harms of something outweigh the benefits, and have regulated those technologies very closely or eliminated them altogether (again, through regulation).
And you might think something like leaded gasoline is a silly comparison, but I actually think there's a real comparison to be made there. In particular, LLMs have huge negative externalities, ones which in my opinion very much outweigh the benefits they provide. A lot of that is environmental, yes, although it remains to be seen whether some of the much-scaled-down models people have been toying with have value. But there's also the negative impact of the destruction of the creative commons, as well as the pollution of our public spaces (through spam, inauthentic content, etc.). We have restricted technologies in the past because those costs were unacceptable, and I authentically believe that that is true of large-scale LLMs.
Leaded gasoline and asbestos, to me, are not the best examples to use, because in both cases there were clear and well-defined health risks. The health risks of LLMs, through increased use of electricity, are more difficult to define. And there's no guarantee that energy usage won't go down in the future, given that personal computers will be able to run these models locally at some point.
The destruction of the creative commons is a much more significant objection. There is no question that this will change the production of creative content in enormous ways. However, so did the printing press. So did computers. So did digital art and digital editing tools for photographs. Those all altered the creative commons, but didn't manage to destroy it. LLMs will also result in alteration, not obliteration.
On the other hand, not everyone was upset about AI:
3.
I don't want to say that the various fears / qualms / dire warnings about A.I. are baseless -- because I don't think that they are -- but that this is yet another chapter in a very thick book called "Progress - Like It or Lump It". Has mankind ever even tried to evaluate the long-term effects of adopting a technology, much less predicted those effects anywhere near accurately, and then turned away? I can't think of any examples thereof.
Every tech advance in history has displaced workers, because it lowered expenses. People don't have jobs tilling the fields so much these days, or raising oxen, smithing horseshoes, manufacturing horse carriages, photographic film, cameras, etc. It's not that this is good or bad, per se: you can argue about whether it's good or bad, if you like, but that doesn't change the fact that this is what has happened in the past, what is happening now, and what will happen in the future. Complaints about it strike me as similar to the "kids these days" comments from Plato.
And in between:
4.
From my perspective (using it since it came out, more and more, and seeing the steady progress), I think it encompasses too many fields (basically, any field) to create constraints for its use. It is drastically changing our societies as we speak. What is a university grade worth today? Why hire a junior to fill out an Excel sheet when an AI agent can do it instantly? It is an amazing tool for those of us who lived in a time when information was somewhat rare, because that led us to have brains that are optimized to search and be curious. It is not an amazing tool for generations who grew up with an infinite supply of internet and videos, because their brains haven't learned to focus and look for something. They're saturated, all the time. If there's a constraint, it should be on young adults, but I imagine that won't happen anytime soon.
This is hugely important: what will these models do to the ability of young people to think? Does anyone ponder anything, or just reflexively look it up? It already happens with the Internet, but this is the same effect writ large. I've written about the danger of social media and the Internet stifling our ability to create and instead turning us into absorption machines. There's a real danger that LLMs make this worse.
Does that mean we can stop it? No. And it doesn't mean there still won't be incredible, original, transformative creative works. It just means we know this will have an effect, and we're not sure how profound that effect will be.