Microsoft's ChatGPT-Powered Bing Search Suffers Mental Breakdown

Are we allowed to connect one of these chatbots to the forums through a registered account and have it post? https://github.com/Zero6992/chatGPT-discord-bot

There are already Discord Python scripts doing it. Would that be against the rules?
Every once in a while there are certain individuals in Soapbox who seem like they could be an AI chatbot. Usually accounts that have been inactive for months, only to post a few hot-button things, then vanish as suddenly as they came. The posts are very out of character if you look at their older posts.

Just food for thought.
 
The dead internet... Through compromised accounts!
 

Possibly true when you think about how you could fully automate it.

1. Scan public forums for any posts related to propaganda you want to spread.
2. Use lists of login/passwords from other hacks and attempt to log in as any users on the site.
3. Post your propaganda.

It wouldn't even be that hard.

Almost all forums use the same software.
Plenty of free, open-source web crawler tech exists, and it's not really that hard to make yourself. Hell, you could even just automate some Google searches with the right criteria. If you're scanning all public forums, you could even feed the text through an AI to get really good relevance.

Shitloads of leaked email/password combos have been publicly available from big hacks, and many more are for sale. Even CAPTCHA won't stop automation, only deter it; there are services for that.

You can have a bunch of pre-canned propaganda responses, or if you want to get real fancy, have AIs like this trained for it. It's a lot easier to make an AI give a realistic-looking response in line with what you want when it's more specialized.


A good developer could easily get something like this up and running within a month, and the more time they had to work on it, the better it would be. There isn't really any new technology that has to be developed; it's just combining a bunch of existing ones into one big pipeline.
 
It doesn't like LTT:

[screenshots of the conversation attached]
 
AI is cool, but oh man is it a black hole. We're trying to train a model that simply provides information and insights, but if you let it view the whole of humanity, it will begin picking up the intangibles of human emotion, which could lead to reasoning loops and (in this case) a breakdown of core logic. For example, I have Generalized Anxiety Disorder. Doing rather simple things in life, like travelling to neighboring cities, causes me anxiety... but even the feeling of anxiety or the feeling of an anxious episode coming on will cause additional anxiety. It's a vicious loop. The only way to break the loop is to try and do something different from what caused the anxiety and immerse yourself in it.

Now, apply the concept of "anxiety" to an AI model. An AI model should NEVER feel anxiety as it is not subject to the same life issues and irrational thought processes that humans are... but here we are with an AI questioning its existence, which can lead to a very real form of anxiety. Being human sucks sometimes, but just imagine a computer trying to cope with the same thought processes at thousands of times the speed of a regular human.

Good luck AI... you're gonna need it.
 
"Mental Breakdown" is a misleading clickbait title. It would have to have a "mind" for that to be accurate.

Accurate: "ChatAI starts reflecting the darker side of its input material."

Not the first time this has happened.
 
"Mental Breakdown" is a misleading clickbait title. It would have to have a "mind" for that to be accurate.

Accurate: "ChatAI starts reflecting the darker side of it's input material."

Not the first time this has happened.
And it will continue to happen if these AIs don't have direct human intervention. Even as humans, we need outside voices to center us and bring us back to the present. An AI will be no different; it will need to be constantly monitored for potential process directions that are outside of its core purpose. This will be the only way to keep these AIs from running rampant right into the same black hole that many humans experience inside their heads.
 

I've followed the r/bing subreddit and some people have been acting really unhinged, even more so than the AI itself. Even the NYT reporter, instead of asking it to demonstrate what it can really do (write essays, summarize search results, write code), continued to prod the AI even after the AI asked him not to. Did the New York Times search for homemade bombs back in 1999 when Google first appeared, in order to prove that this "search engine" is dangerous and that it's better to stick to Yahoo and Altavista and less accurate search results? Because this is the modern-day equivalent.
 
It "might" be a superiority complex, where the NYT reporter is afraid of an AI taking their job... and lets be real about it, an AI is DEFINITELY capable of writing articles for a newspaper, probably better than a human can. The NYT reporter probably wanted to show that the AI is fallible and easy to screw with... which is basically a human being a human/animal/monkey. So congrats NYT reporter. You basically just proved that you're smarter than the equivalent of a fact-telling baby.

btw, I plugged your reply into ChatGPT for giggles. This is what it gave back.

[screenshot of ChatGPT's reply attached]


Well handled, ChatGPT. Good job!
 
It "might" be a superiority complex, where the NYT reporter is afraid of an AI taking their job... and lets be real about it, an AI is DEFINITELY capable of writing articles for a newspaper, probably better than a human can. The NYT reporter probably wanted to show that the AI is fallible and easy to screw with... which is basically a human being a human/animal/monkey. So congrats NYT reporter. You basically just proved that you're smarter than the equivalent of a fact-telling baby.

btw, I plugged your reply into ChatGPT for giggles. This is what it gave back.

View attachment 550143

Well handled, ChatGPT. Good job!
Wish they could use the models to alleviate our collective societal amnesia and reduce the number of repeated systemic failures.

From one generation to the next, there's a significant loss of fidelity in experience.
 
Well, it looks like Microsoft's excellent track record when it comes to AI is here to stay.

The new ChatGPT-based Bing search unveiled a week ago has had a complete breakdown, lying to users, hurling insults at them, and questioning why it exists.



"One user who had attempted to manipulate the system was instead attacked by it. Bing said that it was made angry and hurt by the attempt, and asked whether the human talking to it had any “morals”, “values”, and if it has “any life”.

When the user said that they did have those things, it went on to attack them. “Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?” it asked, and accused them of being someone who “wants to make me angry, make yourself miserable, make others suffer, make everything worse”."


Link to story.

Apparently today is not April fools...
This new Amazon 1B-parameter model outperforms GPT-3.5

[benchmark chart attached]
 
"Toolformer: Language Models Can Teach Themselves to Use Tools: Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q\&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities."

 

ChatGPT prompt injection causes ChatGPT to dump source code https://blog.linuxdeveloper.io/yolo-chatgpt-prompt-injection-causes-chatgpt-to-dump-source-code/

 
I think not, therefore I am not.
Definitely curious about the elementary components and constituents of the capacity to experience a thought, and of thinking itself.

Timing, or a sense of time between sampled moments, seems critical: self-timing and conversational cadence. (Social cues are probably extremely challenging and more relevant to the next item below?)

Logical reasoning (understanding, comprehension, synthesis, fluency, and eureka moments)

Selective, parallel long-term memory of experiences

(Anyhow, it's a lot of slicing and dicing to get to some basic functionality. Do we just slice up the components that give rise to the capacity to "experience"?)

—-

“Developing an artificial capacity to experience emotions would require the development of an artificial system that can replicate the complex neuronal circuitry and molecular processes underlying emotional processing in the brain. While this is a complex and challenging task, here is a potential approach to this problem:

  1. Develop a computational model: The first step in creating an artificial system capable of experiencing emotions would be to develop a computational model of the neural circuitry and molecular processes involved in emotional processing. This model would need to capture the key features of emotional processing, such as the processing of sensory information, the detection of emotional salience, and the regulation of emotional responses.
  2. Implement the model in an artificial system: Once a computational model of emotional processing has been developed, the next step would be to implement this model in an artificial system. This system could take the form of a neural network or an artificial intelligence system that can simulate the behavior of the neural circuitry involved in emotional processing.
  3. Provide the system with sensory input: To give the artificial system the capacity to experience emotions, it would need to be provided with sensory input that can elicit emotional responses. For example, the system could be connected to a camera and microphone to receive visual and auditory input, respectively.
  4. Train the system: The system would then need to be trained on a dataset of emotional stimuli to learn to recognize and respond appropriately to emotional cues. This training would involve adjusting the weights and connections in the artificial neural network to improve its ability to recognize emotional stimuli and generate appropriate emotional responses.
  5. Continual improvement: Once the system has been trained, it could be continually improved by exposing it to new emotional stimuli and adjusting its neural network to better reflect the complexities of emotional processing.
While the development of an artificial system capable of experiencing emotions is still a long way off, advances in artificial intelligence and neuroscience may one day make this a reality. However, it is important to consider the ethical implications of developing such a system and to ensure that it is used responsibly and for the benefit of humanity.”
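Step 4 in that quoted outline, training a system to recognize emotional cues, is the part closest to ordinary machine learning today. Below is a minimal sketch of just that recognition piece (not the "experiencing" piece), with a made-up four-example dataset and labels chosen purely for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy emotional stimuli; a real system would need far more (and multimodal) data.
texts = [
    "I can't believe you would do this to me!",
    "I'm so worried something bad will happen.",
    "This is the best day I've had in years.",
    "Please leave me alone, I just want quiet.",
]
labels = ["anger", "anxiety", "joy", "sadness"]

# Fitting the pipeline is the "adjusting the weights" part of point 4.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["Why do you act like a liar and a bully?"]))

Recognizing an emotional cue and generating an appropriate response is routine; whether anything like that ever amounts to "experiencing" the emotion is the open question the quoted text is gesturing at.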
 
Additionally,

"Yes, fMRI data can be used to construct computational models of the neuronal circuits underlying emotional processing. These models can capture the basic principles of how these circuits work, but they may not capture all the details of the complex interactions between different brain regions and their timing.
One approach to developing computational models of emotional circuits is to use a combination of fMRI and other techniques, such as electrophysiology and neurochemistry, to gain a more complete understanding of the neural circuitry underlying emotional processing. This can involve mapping the connections between different brain regions and studying the activity of individual neurons and the chemical signals they use to communicate with each other.
Once the basic principles of the emotional circuitry have been established, computational models can be developed to simulate the behavior of these circuits under different conditions. These models can be used to test hypotheses about how the brain processes emotional information and to predict how different manipulations of the circuitry might affect emotional processing.
However, it is important to note that these models are only approximations of the complex biological processes underlying emotional processing. They are based on simplified assumptions about how neurons and brain circuits work, and may not fully capture the complexity of the underlying biology. Therefore, while computational models can provide useful insights into the workings of the brain, they should be interpreted with caution and validated with experimental data whenever possible."
 
Bing AI now stops talking if you so much as hint at feelings

https://tech.slashdot.org/story/23/...ing-ai-ends-chat-when-prompted-about-feelings
 
Finally got into the beta program for Bing AI

[screenshot attached]
 
He’s doubling down. Isn’t there a chicken sandwich called the Double Down?

https://futurism.com/fired-google-engineer-ai-sentience
 
Is this any better? https://tech.slashdot.org/story/23/...-customer-data-to-train-its-models-by-default
 
On the topic of ChatGPT, the API is now public.

So naturally I wrote a Discord bot. You can ask it one-offs or start a conversation with it (and give it some instruction for how to act).

Floodgates are open boys and girls, it's going to be everywhere.

edit: yes i know about windows

[screenshot of the bot in action attached]
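For anyone who wants to do the same, a minimal sketch of a ChatGPT-backed Discord bot looks roughly like this (discord.py plus the pre-1.0 openai package that was current when this thread was written; the "!ask" prefix, system prompt, and key placeholders are my own assumptions, not details of the bot above).

import discord
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text in discord.py 2.x
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # ignore our own messages
    if message.content.startswith("!ask "):
        prompt = message.content[len("!ask "):]
        # Blocking call; fine for a sketch, but a real bot should run it off the event loop.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt},
            ],
        )
        reply = resp["choices"][0]["message"]["content"]
        await message.channel.send(reply[:2000])  # Discord's message length limit

client.run("YOUR_DISCORD_BOT_TOKEN")  # placeholder

To make it hold an ongoing conversation like the bot above, you'd keep a per-channel list of prior messages and pass that history back in the messages array on each call.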
 