Discussion about this post

Lon Guyland:

I “trolled” my company’s (corporate, in-house badged) “AI” by asking it questions whose answers relied strictly on elementary principles of propositional logic, each embedded in a word problem. It failed. It also seemed rather poor at arithmetic in some cases.
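The comment doesn’t quote the actual questions, but as a hypothetical illustration, the kind of elementary inference involved (here, modus tollens, chosen only as an example) can be checked by a brute-force truth table:

```python
from itertools import product

def entails(premises, conclusion, n_vars):
    """Check semantic entailment by enumerating all truth assignments."""
    for assignment in product([False, True], repeat=n_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False  # counterexample: premises hold, conclusion fails
    return True

# Modus tollens: from (P -> Q) and not-Q, infer not-P.
premises = [lambda p, q: (not p) or q,   # P -> Q
            lambda p, q: not q]          # not-Q
conclusion = lambda p, q: not p          # not-P
print(entails(premises, conclusion, n_vars=2))  # True
```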

Knowing a little about how “AI” works, I’d say that claims it can reason, in any familiar sense of that term, are simple wishcasting. Manipulation of vectors and matrices, while predicated on logic and arithmetic, cannot produce anything but vectors and matrices, no matter how fast those manipulations are performed; any apparent logical inferences are not derived but rather encoded in the inputs.

skybrian:

LLMs usually generate text using a random number generator. Particularly funny trolls could happen by chance. Is it science to conclude much of anything from a single sample of a stochastic process?
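Concretely, the randomness enters at the sampling step: the model scores every candidate next token, and one is drawn from the resulting distribution. A minimal sketch of temperature sampling; the vocabulary and logits here are made up purely for illustration:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Draw one token index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

vocab = ["cat", "dog", "matrix"]   # toy vocabulary (illustrative only)
logits = [2.0, 1.5, 0.3]           # toy next-token scores
print(vocab[sample_next_token(logits)])  # different runs can print different words
```

Two runs with the same prompt can thus produce different text, which is why a single amusing failure is a weak sample.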

I suppose it depends on what you’re doing. I do sometimes use “stop in the debugger at a random time” as a poor man’s profiler. It’s limited, but it often suffices to find simple, glaring performance issues.
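A rough sketch of that random-pause idea, assuming a Unix system: arm a timer to fire at a random moment and dump the stack wherever execution happens to be (busy_work is just a stand-in for the real workload):

```python
import random
import signal
import traceback

def dump_stack(signum, frame):
    """When the alarm fires, print where the program happens to be."""
    traceback.print_stack(frame)

signal.signal(signal.SIGALRM, dump_stack)
signal.setitimer(signal.ITIMER_REAL, random.uniform(0.1, 1.0))  # pause at a random time

def busy_work():
    total = 0
    for i in range(10**8):   # stand-in for the real workload
        total += i * i
    return total

busy_work()  # the stack dump will usually land inside the hot loop
```

The single random sample is informative here precisely because hot spots dominate the time axis, so a random pause is likely to land in one.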

Maybe we should be asking when it makes sense to use random output at all. There are appropriate uses of randomness, but they amount to generating things to try: the “generate” phase of a generate-and-check algorithm. For AI chat, the computer generates hints and you manually check the result.
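A minimal sketch of that generate-and-check shape; the toy problem here (finding an integer whose square ends in 444) is a deliberate placeholder:

```python
import random

def generate_and_check(generate, check, max_tries=100_000, rng=random):
    """Draw random candidates until one passes the check (or give up)."""
    for _ in range(max_tries):
        candidate = generate(rng)
        if check(candidate):
            return candidate
    return None

# Toy problem (illustrative only): find x whose square ends in 444.
found = generate_and_check(
    generate=lambda rng: rng.randrange(1, 10_000),
    check=lambda x: (x * x) % 1000 == 444,
)
print(found, found * found if found else None)  # e.g. 38, since 38*38 = 1444
```

In the AI-chat case, the “check” step is the human reading the output.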

If we called them “random text generators,” maybe they would be used more appropriately.
