--

Let's go back to the main claim of this post: no emergent abilities. Many words could have been spared here if we first acknowledged that we might be arguing over semantics. I don't believe sentience will emerge from LLMs. But I do think emergent abilities are happening, in the sense of abilities that are unmeasurable in small models becoming measurable in larger ones. Speaking of semantics, we can go all the way back to word embeddings to see this happening. Neural word embeddings showed the emergence of semantic regularities that were unexpected given our experience with simpler collocation statistics, which only seemed capable of helping with syntax. The biggest recent models continue that cascade of emergence to higher-order patterns, like world knowledge, that were not detectable in smaller models.
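To make the word-embedding example concrete, here is a minimal sketch (not from the original post) using gensim's pretrained GloVe vectors. The model name and the specific analogy are illustrative assumptions; the point is that simple vector arithmetic over learned embeddings recovers semantic relations that raw collocation counts gave no hint of.

```python
# Minimal sketch: semantic regularities emerging from word embeddings.
# Assumes gensim is installed; "glove-wiki-gigaword-50" is one of
# gensim's standard pretrained downloads (an illustrative choice).
import gensim.downloader as api

# Downloads (~66 MB) on first run, then loads 50-dimensional GloVe vectors.
vectors = api.load("glove-wiki-gigaword-50")

# The classic analogy: king - man + woman ~ queen.
# Collocation statistics tell you which words co-occur; this arithmetic
# over learned vectors captures a relation (gender) nobody trained for.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # e.g. [('queen', 0.85...)]
```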

--


Written by Thomas Packer, Ph.D.

I do data science (QU, NLP, conversational AI). I write applicable-allegorical fiction. I draw pictures. I have a PhD in computer science and I love my family.
