Monday, April 07, 2025

A.I. Discussion (part one)

Sean's email (Thursday's post) was thoughtful and interesting.

If I understand his arguments, they are both ethical (plagiarism, job destruction, data center energy usage) and philosophical (A.I. is soulless). Let's take them in turn.

Is A.I. plagiarism? I don't know if that's the right word. It's the sum of everything it's ever been exposed to, which is not dissimilar to humans. It's certainly imitative, which I think is more strictly accurate, and perhaps it's not possible for A.I. to create a truly groundbreaking creative work.

I don't think we know yet. How could we?

Will it destroy jobs in many industries? Yes. So, so many jobs. Does it use an inconceivable amount of energy? Also yes.

Are these reasons not to use it? No. Every disruptive technology--and this is highly disruptive--has resulted in higher energy usage (computers, as just one example) and huge job losses (factory automation, as another).

"Soulless" is the philosophical objection. It's not unfair in the least, but (in the music world) this charge was also leveled at any form of digital editing software ever used. When Pro Tools came out, it was absolute anarchy. When digital editing software came out for images, same thing. Digital sound effects for films? Same. Now all of these tools (and many others) are standard in the entertainment industry.

We're not stopping A.I. Period. That battle was over as soon as the first LLM was introduced. Too many people will make too much money to stop their use. That's how it's always been with a new technology, stretching back for centuries. It's not going to change now.

He closes with this:

Reasonable people can disagree about the extent to which A.I. tools can be used ethically and effectively, but I don't think anyone can argue that there's any way to use these generative tools in particular without causing at least some harm.

This is where I think the argument breaks down, because it's an impossible standard to meet. Nothing has ever been invented that didn't cause harm to at least someone.

The question, for me, is not whether the A.I. toothpaste can be stuffed back into the tube. It can't. The question is whether we can create constraints for its use. This is where it gets tricky, because the profit incentive is potentially so high that it will be very difficult to draw an effective line. 

I don't want this to sound like I don't respect Sean's argument, because I do. It's a thoughtful email, and he raises entirely fair points. I just think the discussion at this point might turn away from whether A.I. should be used and toward how we can use it to make our lives better.