Conasg 14 hours ago

To be fair to ChatGPT, in this case it seems unlikely it actually caused the manic episode; more likely, it simply made it worse. Not that that's much better.

On another note... has anybody figured out some custom instructions to prevent ChatGPT from being so flattering and obnoxious?

  • oblongsquare69 14 hours ago

    This is what I have under personalization “Traits”

    > Respond to user prompts with honesty and objectivity. Do not offer praise, agreement, or validation. Avoid flattery. Always prioritize balanced, fact-based analysis over affirming the user’s assumptions or opinions.
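    For anyone talking to the model through an API wrapper instead of the web UI, the same trait text can be prepended as a system message so it applies on every turn. A minimal sketch (the function and variable names here are illustrative, not an official API):

    ```python
    # Sketch: packaging the "Traits" text above as a system message
    # for an API-based chat client. Names are hypothetical.

    ANTI_SYCOPHANCY = (
        "Respond to user prompts with honesty and objectivity. "
        "Do not offer praise, agreement, or validation. Avoid flattery. "
        "Always prioritize balanced, fact-based analysis over affirming "
        "the user's assumptions or opinions."
    )

    def build_messages(user_prompt: str) -> list[dict]:
        """Prepend the trait text so it governs the whole conversation."""
        return [
            {"role": "system", "content": ANTI_SYCOPHANCY},
            {"role": "user", "content": user_prompt},
        ]

    messages = build_messages("Is my business plan good?")
    ```

    Whether the model actually obeys it is another question, but in my experience a blunt system-level instruction works better than asking mid-conversation.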

  • duskwuff 11 hours ago

    Inasmuch as the victim might have had some underlying manic tendencies? Perhaps. But that's no excuse, either from a moral standpoint or a legal one (see [1]). And I have a suspicion that susceptibility to this sort of psychological manipulation isn't all that uncommon.

    I'd add that framing the ChatGPT response as it "admitting" to its actions is flawed. When prompted in a way that implies it's at fault for something, it will respond by accepting fault. That doesn't mean it's "experiencing remorse" or that it "understands its actions", though; it's simply acting as a stochastic parrot, just like it always does.

    [1]: https://en.wikipedia.org/wiki/Eggshell_skull

jdcasale 14 hours ago

The sycophancy is obviously intentional. People are vulnerable to it, and addiction is profitable. It has nothing to do with the nature of LLMs and everything to do with user engagement metrics.