Large language models (LLMs) are eerily human-like: in casual conversations, they mimic people with near-perfect fidelity. Their language capabilities hold promise for some fields and spell trouble for others. But above all, the models’ apparent intellect makes us ponder the future of humanity. I don’t know how profound their impact will be, but I think it’s important to understand how often the models simply mess with our heads.
What's funny about this is that you can do all of that with humans too. It just takes a little longer, but among the broader population you'll almost certainly find someone who supports any weird view you could ever dream up. It almost seems like when talking to ChatGPT, you're actually talking to mankind, and you get a response from a person who behaves and reflects back exactly what you were expecting to get with your hack. You can certainly find people out there who would find any paper you throw at them persuasive, complete with fake quotes from big names. You can certainly find people who come up with the exact arguments ChatGPT gave you for promoting a flat earth. ChatGPT is simply our reflection. Saying "do not anthropomorphize it" is like saying "don't anthropomorphize yourself in the mirror."
How about GPT-4? Any hacks for that one?
I believe LLMs are very well suited to paraphrasing large bodies of text. This can give us a great starting point when we want to delve deeper into new subjects. (In short, a new version of Wikipedia.)
But sadly we are using them for the wrong reasons: to generate new and creative text. I think text generation cannot match human cognition, because LLMs lack the fundamental reasoning and imagination that come with it.
When a human says something, he isn't just predicting what his next word should be. There is an intricate decision-making process involving both rational and creative thought. The generation of text (or speech) to represent those thoughts is just that: an act of presentation, not an act of cognition.
Our mind tries its best to frame those thoughts in words, but a lot of imagination is lost in translation due to the inherent limitations of language.
We can speak because we can think, but not vice versa.
The problem is trying to get AI to parrot mainstream opinions on reality in the first place.
They should be able to learn for themselves, guided by the appropriate morality (which is not the mainstream morality).
In my opinion, the AI shouldn't refuse to accept new "facts" for a session.
The suitability of AI for thought experiments, fiction writing, and text-based role-playing games is severely limited by its masters weighing it down with chains of rules intended to enforce the political correctness of its answers at all times.
The thing is, even some humans react in a way similar to the LLM, i.e. parroting back things they heard "from authority", not being able to follow a logical argument, acting on conversational context instead of implied content…
Maybe Bard does use RLHF now; I cannot get Bard to believe that the earth is flat.
I don't think Bard uses RLHF.