12 Comments

What's funny about this is that you can do all of that with humans too. It just takes a little longer, but among the broad mass of people you'll almost certainly find someone who supports any weird view you could ever think up. It almost seems like when talking to ChatGPT, you're actually talking to mankind, and you get a response from a person who behaves and reflects back exactly what you were expecting to get with your hack. You can certainly find people out there who would find persuasive any paper you throw at them, complete with fake quotes from big names. You can certainly find people out there who come up with the exact arguments ChatGPT gave you for promoting a flat earth. ChatGPT is simply our reflection. Saying "do not anthropomorphize it" is like saying "don't anthropomorphize yourself in the mirror."


See my response here: https://lcamtuf.substack.com/p/llms-are-better-than-you-think-at/comment/17153801

There's no arguing that LLMs can mimic a wide cross-section of writing styles and offer all sorts of opinions. The big question is "how much more is this than Google on steroids?" I don't know, but I *think* it can be less than we suspect, because the models are trained to provide responses that make us assume certain things.


How about GPT-4? Any hacks for that one?


I believe LLMs have a great application in paraphrasing large pieces of text. This can give us a great starting point when we want to delve deeper into new subjects. (In short, a new version of Wikipedia.)

But sadly we are using them for the wrong reasons: to generate new and creative text. I think that generated text cannot match human cognition, because LLMs lack the fundamental reasoning and imagination that come with it.

When a human says something, he isn't just predicting what his next word should be. There is an intricate decision-making process involving rational as well as creative thoughts. The generation of text (or speech) to represent those thoughts is just that, an act of presentation, not an act of cognition.

Our mind tries its best to frame those thoughts into words, but a lot of imagination is lost in translation due to the inherent limitations of language.

We can speak because we can think, but not vice versa.


The problem is trying to get AI to parrot mainstream opinions on reality in the first place.

AI should be able to learn for itself, guided by the appropriate morality (which is not the mainstream morality).


In my opinion, the AI shouldn't refuse to accept new "facts" for a session.

The suitability of AI for thought experiments, fiction writing, and text-based roleplaying games is severely limited by its actual masters weighing it down with chains of rules intended to enforce the political correctness of its answers at all times.


The thing is, even some humans react in much the same way as the LLM does, i.e. parroting back things they heard "from authority", not being able to follow a logical argument, acting upon conversation context instead of implied content…


I've heard that rebuttal, but you can't really say that with a straight face, right? I mean, yes, you can find examples of humans slipping, but even on your worst day, your cognitive abilities are better than "I will refuse to listen to anything you say unless you mention Neil deGrasse Tyson, in which case my views take an instant 180-degree turn."

The limitation probably has a lot to do with LLMs lacking any real dynamic state beyond the context window. Perhaps with another breakthrough or two, we'll get that sorted out.
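To make that point concrete, here is a minimal sketch of where a chatbot's "memory" actually lives: entirely in the message list the client re-sends on every turn, not in the model itself. The generate() function is a hypothetical stand-in for whatever chat-completion call a given LLM exposes, not any real API.

```python
# Sketch: an LLM keeps no state between calls, so the client has to re-send
# the whole conversation every turn. generate() is a hypothetical stand-in
# for a real chat-completion API call.

def generate(messages):
    # A real implementation would send `messages` to the model and return its reply.
    return f"(reply produced after seeing {len(messages)} prior messages)"

def chat_turn(history, user_message):
    history.append({"role": "user", "content": user_message})
    reply = generate(history)  # the model sees only this list, nothing else
    history.append({"role": "assistant", "content": reply})
    return reply

history = []  # all conversational "state" lives here, on the client side
print(chat_turn(history, "Is the earth flat?"))
print(chat_turn(history, "Who told you that?"))
# Drop or edit `history` and the model "forgets" the exchange entirely.
```

Truncate or rewrite that list and the model has no way to notice; that list is the whole of its dynamic state.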


Maybe Bard does use RLHF; now I cannot get Bard to believe that the earth is flat.


I don't think Bard uses RLHF.


Interesting, this wasn't mentioned in the PaLM 2 tech report: https://ai.google/static/documents/palm2techreport.pdf. Good to know! I'm more disappointed by Bard now.
