5 Comments

LLMs usually generate text using a random number generator. A particularly funny troll could happen purely by chance. Is it science to conclude much of anything from a single sample of a stochastic process?

I suppose it depends on what you’re doing. I do use “stop in the debugger at a random time” as a poor man’s profiler sometimes. It’s limited, but it often works for finding simple, glaring performance problems.
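
A minimal sketch of that idea in pure Python, in case it’s useful: instead of a debugger it uses the interpreter’s own (CPython-specific) frame inspection. The function name and sampling parameters are my own invention, not any standard API.

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(interval=0.02, samples=100):
    """Poor man's sampling profiler: periodically look at where the main
    thread currently is and count which source lines come up most often.
    The hottest lines bubble to the top, just like pausing in a debugger."""
    main_id = threading.main_thread().ident
    hits = Counter()
    for _ in range(samples):
        frame = sys._current_frames().get(main_id)  # CPython-specific
        if frame is not None:
            hits[f"{frame.f_code.co_filename}:{frame.f_lineno}"] += 1
        time.sleep(interval)
    return hits.most_common(5)

# Usage: start the sampler, then run the slow code on the main thread.
threading.Thread(target=lambda: print(sample_stacks()), daemon=True).start()
total = 0
for i in range(10**8):  # stand-in for the code under investigation
    total += i * i
```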

Maybe we should be asking when it makes sense to use random output at all. There are appropriate uses of randomness, but they belong in the “generate” phase of a generate-and-check algorithm: the computer proposes things to try, and something reliable checks them. For AI chat, the computer generates hints and you manually check the result.

If we called them “random text generators,” maybe they would be used more appropriately.
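
To make the generate-and-check shape concrete, here’s a toy sketch (the names are mine): randomness is confined to proposing candidates, and a deterministic check decides. AI chat has the same shape, except the check step is you.

```python
import random

def generate_and_check(propose, check, tries=10_000, seed=0):
    """Generate-and-check: the random part only proposes candidates;
    correctness rests entirely on the deterministic check."""
    rng = random.Random(seed)
    for _ in range(tries):
        candidate = propose(rng)
        if check(candidate):
            return candidate
    return None  # the generator never got lucky

# Toy usage: find a nontrivial divisor of n by random guessing,
# verified exactly by the check.
n = 391  # = 17 * 23
print(generate_and_check(lambda r: r.randrange(2, n), lambda d: n % d == 0))
```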

I wouldn't oversell the "randomness" part. They need to be nudged to pick a less likely token every now and then to avoid getting stuck or generating repetitive text, but that nudge doesn't have to come from a random source. And more to the point, it doesn't make the output particularly random: they generate consistent answers to most questions, and the differences are usually in the wording, not the substance.
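
For anyone who hasn't seen it, that nudge is usually plain temperature sampling over the model's token scores. A standalone sketch of the usual mechanism (no model attached, names are mine):

```python
import math
import random

def sample_token(logits, temperature=0.7, rng=None):
    """Temperature sampling: rescale scores, softmax, draw one token.
    As temperature approaches 0 this collapses to deterministic argmax;
    at modest temperatures the top token still wins most draws, which is
    why answers stay consistent in substance and vary mainly in wording."""
    rng = rng or random.Random()
    if temperature <= 0:
        return max(range(len(logits)), key=logits.__getitem__)
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - peak) for x in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

# With scores like these, token 0 wins roughly 94% of draws at 0.7.
print(sample_token([5.0, 3.0, 1.0]))
```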

Yes, sometimes the differences are just wording, but not always. Sometimes I get the right answer to a technical question and start thinking it’s pretty good, but then I have to regenerate the answer for some reason and get a garbage answer. So it wasn’t smart at all, just lucky.

Or, more often, the opposite. The UI lets you retry bad results, and there’s a reason for that, though it often doesn’t work.

LLMs love to repeat patterns; they’re great at making lists. The randomness was added to make the text seem more human, and that’s fairly directly misleading about what we’re dealing with, because repetition is an obvious tell that you’re talking to a machine. Video games use random number generators for much the same reason.
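
For what it’s worth, the repetition itself can also be damped with no randomness at all, using a deterministic repetition penalty on the scores in the style of the CTRL paper (Keskar et al. 2019). A minimal sketch (names are mine):

```python
def penalize_repeats(logits, generated_ids, penalty=1.2):
    """CTRL-style repetition penalty: deterministically lower the score of
    every token already emitted, steering decoding away from loops with
    no random number generator involved."""
    out = list(logits)
    for i in set(generated_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

# Usage: token 0 has been emitted repeatedly, so token 1 now wins
# even under plain argmax decoding.
print(penalize_repeats([2.0, 1.8, -0.5], generated_ids=[0, 0, 2]))
```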

Beyond pure randomness, there is also inconsistency in general. We don’t normally want to re-run the same prompt; we want to run a *different* prompt and be able to predict the result for unseen inputs.

Traditional software has mathematical properties that are supposed to hold for unseen inputs; if they’re violated, it’s a bug. What can we say about AI chat that’s true all the time? A minor tweak to the prompt might produce very different results.
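
Property-based testing is the executable version of that idea. A minimal sketch using the Hypothesis library (the round-trip property is just an example): the framework hammers the property with generated, unseen inputs, and any violation is a bug by definition. I can’t state an analogous always-true property for a chat prompt.

```python
from hypothesis import given, strategies as st

# A property expected to hold for ALL inputs, including ones nobody
# thought of: UTF-8 decoding inverts UTF-8 encoding.
@given(st.text())
def test_utf8_roundtrip(s):
    assert s.encode("utf-8").decode("utf-8") == s

test_utf8_roundtrip()  # Hypothesis calls this with many generated strings
```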

Let's forget for a moment that it's an LLM surrounded by hype and replace it with, say, an OS (operating system). The same rules apply: the supplier mainly wants it to work in normal use, the end user just wants it to work, and there are always edgelords willing to exploit any vulnerability, if only for trolling. Many of us make a living by making things not work as they should.

This was exactly the case with closed-source Microsoft Windows, which posed as a hegemon just as OpenAI does now. Groups like the Cult of the Dead Cow, and countless others, hammered them for that very reason, and in the process made the world a safer place. Ultimately they created the security market, which now consumes billions of dollars amid great hype and is eating its own tail (see CrowdStrike and others).

The analogies can be multiplied. An LLM is just a technology, much as the OS once was (I'm leaving aside copyright law, though interesting analogies could be drawn there too). We are in the initial development phase. Unfortunately, we live in disgustingly hysterical times where emotions prevail, and not only in business. Let's use LLMs the way we use a hammer. That is a deeply philosophical statement ;)

I “trolled” my company’s (corporate, in-house badged) “AI” by asking it questions whose answers relied strictly on elementary principles of propositional logic, embedded in word problems. It failed. It also seemed rather poor at arithmetic in some cases.

Knowing a little about how “AI” works, it seems to me that claims it can reason, in any familiar sense of that term, are simple wishcasting. Manipulation of vectors and matrices, while predicated on logic and arithmetic, cannot produce anything but vectors and matrices, no matter how fast those manipulations are performed; any apparent logical inferences are not derived but rather encoded in the inputs.
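
For contrast, here is what mechanical propositional reasoning looks like when it really is derivation: a brute-force truth-table check (a toy sketch, names are mine) that either proves an entailment or exhibits a countermodel, with no luck involved.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Propositional entailment by exhaustive truth tables: the conclusion
    follows iff it is true in every assignment satisfying all premises."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a countermodel
    return True

# Toy usage: from "p implies q" and "p", derive "q" (modus ponens).
premises = [lambda e: (not e["p"]) or e["q"], lambda e: e["p"]]
print(entails(premises, lambda e: e["q"], ["p", "q"]))  # True
```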
