What's funny about this is that you can do all of that with humans too. It just takes a little longer: among the broad mass of people you'll almost certainly find someone who supports any weird view you could ever think up. It almost seems as if, when talking to ChatGPT, you are actually talking to mankind, and you get a response from a person who behaves and reflects back exactly what you were expecting your hack to produce. You can certainly find people out there who would find any paper you throw at them persuasive, complete with fake quotes from big names. You can certainly find people who come up with exactly the arguments ChatGPT gave you for promoting a flat earth. ChatGPT is simply our reflection. Saying "do not anthropomorphize it" is like saying "don't anthropomorphize yourself in the mirror."

How about GPT-4? Any hacks for that one?

I believe LLMs have a great application in paraphrasing large pieces of text. This can give us a great starting point when we want to delve deeper into new subjects. (In short, a new version of Wikipedia.)

But sadly we are using them for the wrong reasons: to generate new and creative text. I think that generation of new text cannot match human cognition, because LLMs lack the fundamental reasoning and imagination that come with it.

When a human says something, he isn't just predicting what his next word should be. There is an intricate decision-making process involving rational as well as creative thought. The generation of text (or speech) to represent those thoughts is just that, an act of presentation, not an act of cognition.

Our mind tries its best to frame those thoughts into words, but a lot of the imagination is lost in translation due to the inherent limitations of language.

We can speak because we can think, but not vice versa.

The problem is trying to get AI to parrot mainstream opinions on reality in the first place.

They should be able to learn for themselves, guided by the appropriate morality (which is not the mainstream morality).

In my opinion, the AI shouldn't refuse to accept new "facts" for a session.

The suitability of AI for thought experiments, fiction writing, and text-based roleplaying games is severely limited by its actual masters weighing it down with chains of rules intended to enforce the political correctness of its answers at all times.

The thing is, some humans react in much the same way as the LLM does: parroting back things they heard "from authority," being unable to follow a logical argument, acting on the conversation's context instead of its implied content…

Maybe Bard does use RLHF now; I can't get Bard to believe that the earth is flat.

I don't think Bard uses RLHF.
