Peter Coates
2 min read · Aug 26, 2023

I admire your spirited defense of humanity, but I think it's more complicated. These LLMs are essentially a one-trick pony. They don't really understand anything--they're just repackaging the way countless articles have talked about a subject. That is, there is no real reasoning going on. Yet even at that, they're pretty slick.

The thing is, right now there are thousands of people beavering away at hooking LLMs up to innumerable other computational tricks. While it may be a while before this produces a Tolstoy, Flaubert, or Wharton, the ability to write a pretty good beach book is almost certainly just around the corner, if it isn't already in the lab. Most books are pretty formulaic. It won't be that hard to build a program that can generate an OK plot interactively with a person. The fine-grained interactions between characters are also pretty formulaic--all novels already depend on us humans recognizing stock human responses. Once you have that, generating the voice of an author is bread and butter for an LLM. The human "author" will probably tell it how to tune up the result, which will be available almost instantly for iterative improvement. This isn't sci-fi. Most of it works now.

Sadly, the relationship of human authors to the machine will probably be like the relationship of painters to photographers. Photography didn't kill painting; it just gradually drove it into a cultural backwater.

To me, the real danger of AI is its potential power to convince us that we, ourselves, are just a bag of cheap tricks cleverly stitched together--mere mechanisms, of no special value. That's already a huge trend in 20th- and 21st-century life, and AI will put it on rocket fuel.
