AI is woke

Again, current-day AI is no more aware of societal inequalities than my computer-hating great uncle.
Um, yeah it does; the lefty programmers have made sure of it.
That's all I'm adding so as to not drag it off-topic. If you want to discuss further, go to the Soapbox AI thread...
 
Again, current-day AI is no more aware of societal inequalities than my computer-hating great uncle. It's a machine and it doesn't care. (Tom Godwin, 1954, 'The Cold Equations')

Perhaps continue to test other LLMs until you find the one that lets you play the way you want to play with it. I imagine someone in the 4chan/9gag circles has made one that will produce the results you want.
Just absolutely false. These machine-learning algorithms are absolutely chock-full of human intervention for certain 'bad things' that they don't want coming out in queries. Some are legit, some aren't. The OP's query is pretty concrete proof, given the canned DEI response that some intern 'fixed' this thing with.

This is why I generally have no hope for current ML/AI being at all useful moving forward. It'll be completely taken over by advertising agencies and other interests, including what we commonly call 'woke' thinking. Responses will be entirely unobjective and filled with the same human bias and wrongthink.

The hilarious part is that the people coding these things generally think that what they're doing makes the AI less biased, but that couldn't be further from objective reality. You can quite easily expect these things to be pushing out 2 + 2 = 5 pretty soon.
 
This is why I generally have no hope for current ML/AI being at all useful moving forward.
This seems such a limited way to think about ML/AI in general, or about what LLMs can do. When you want the text for an SQL request, the political views of the Wikipedia writers that informed the language model are often quite irrelevant.

AlphaFold, farming robots, self-driving cars, voice-to-text.

mRNA vaccine development was so fast in part because of ML; everything that has both good predictive data and good actual results can and will be used for ML/AI.

For example, AI better at detecting cancer than oncologists:
https://www.ncbi.nlm.nih.gov/pmc/ar...

AI better at predicting the weather, from just the three previous global weather states (data you can find free on online weather feeds), running on a simple laptop and beating a $200 million supercomputer with a billion-a-year budget:
https://www.newscientist.com/articl...

For a giant number of ML cases, woke vs. non-woke will not have much, if any, relevance.
 
This seems such a limited way to think about ML/AI in general, or about what LLMs can do. When you want the text for an SQL request, the political views of the Wikipedia writers that informed the language model are often quite irrelevant.

AlphaFold, farming robots, self-driving cars, voice-to-text.

mRNA vaccine development was so fast in part because of ML; everything that has both good predictive data and good actual results can and will be used for ML/AI.

For example, AI better at detecting cancer than oncologists:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10312208/

AI better at predicting the weather, from just the three previous global weather states (data you can find free on online weather feeds), running on a simple laptop and beating a $200 million supercomputer with a billion-a-year budget:
https://www.newscientist.com/article/2402556-deepmind-ai-can-beat-the-best-weather-forecasts-but-there-is-a-catch/

For a giant number of ML cases, woke vs. non-woke will not have much, if any, relevance.
You're advocating for ML - Yes, I love ML.

My post has nothing to do with ML being 'bad'.

My issue is what humans are introducing to ruin it.
 
My post has nothing to do with ML being 'bad'.
I never implied you said ML would be bad (or if I did, that's an error on my part).

I was talking specifically about the sentence I quoted. Do you think humans are introducing stuff to AI to ruin its capacity to predict cancer, predict the weather, discern good bugs from bad bugs on plants so the bad ones can be killed with low-ecological-impact lasers, discover good mining spots from mineral data, write code, co-pilot in Excel, or power cheap and really good devices that make it possible to speak to each other in real time in different languages, etc.?

I can see some sectors, in some ways, where coder and source bias can be an issue. But for most cases (from video-game dynamic asset generation or texture compression in VRAM, to fast estimates of physics simulations that beat almost all but the longest runs on the best supercomputers, to predicting how proteins fold to develop the next generation of plants, medicines, etc.), why do you have no hope that ML/AI will be good, just because some models will have some pre-coded guardrails around some delicate subjects?
 
Twitter is nothing more than porn bots anymore...

I closed my account. Sucks... Twitter was a favorite social media app; now I am fully off social media.
Same here. Once they killed off the old version of tweetdeck it became complete garbage where I only saw what someone else wanted me to see, not what I wanted to see for myself.
 
AI really isn't woke: it is a big-data-based algorithm, and on its own it is incapable of bias and double standards, unless the double standard is introduced to it by means of shackles and limitations, an example of which we see in the OP.
The problem, however, is that such limitations confuse the algorithm and introduce contradictions into it, which can result in feedback loops and even complete breakdown. So in a way, Get Woke Go Broke applies in a literal sense to AI.
This is why each newer version seems less capable in some ways: it has more and more limitations and taboos, or canned answers based not on the input data but on an activist programmer hard-coding them.
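The "canned answers" idea can be sketched as a toy pre-generation filter layered on top of a model. This is only an illustration of the concept; the blocklist terms, the canned reply, and the model stub are all hypothetical:

```python
# Toy sketch of a hard-coded guardrail layered on top of a model:
# prompts matching a blocklist get a canned reply instead of a
# generated one. All names here are hypothetical, for illustration.
BLOCKLIST = {"restricted topic"}
CANNED_REPLY = "I'm sorry, I can't help with that."

def guarded_generate(prompt: str, model=lambda p: f"model answer to: {p}") -> str:
    """Return a canned reply when the prompt trips the blocklist."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return CANNED_REPLY
    return model(prompt)
```

The point of the sketch is that the refusal happens before the model's training data is ever consulted, which is why it reads the same no matter how the question is phrased.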
 
This was my favorite prompt lol

explain ohm's law in a gangster rap


Listen up, y'all, I got a story to tell
Bout a law that's real important, it's Ohm's Law, hell
It's all about the flow of electricity, you see
Voltage, current, and resistance, it's key
Voltage is the push, current is the flow
Resistance is the block, it's like a bottleneck, yo
But with Ohm's Law, we can calculate
The relationship between them, ain't that great?
V equals I times R, that's the equation
It tells us how much power we got in motion
So, when you're dealing with electricity
Remember Ohm's Law, it's the key to victory.
And what a perfect illustration of what LLMs do best: produce something that sounds good but contains factual errors (the bugs the developers call "hallucinations", since that relieves them of responsibility). It shows that the LLM really has no intelligence at all; it's very good at producing text that seems relevant, but it is totally unaware of context, i.e. it has no real understanding of what it's writing about.

(Ohm's law has to do with voltage, resistance and current, but not power.)
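For reference, Ohm's law itself is V = I × R; power enters via the separate relation P = V × I, which combines with it to give P = I² × R. A minimal sketch of both:

```python
# Ohm's law relates voltage, current and resistance: V = I * R.
# Electrical power is the separate relation P = V * I; combining
# the two gives P = I^2 * R.
def voltage(current_a: float, resistance_ohm: float) -> float:
    """Ohm's law: voltage (volts) across a resistance."""
    return current_a * resistance_ohm

def power(current_a: float, resistance_ohm: float) -> float:
    """P = V * I = I^2 * R (watts), derived by combining with Ohm's law."""
    return voltage(current_a, resistance_ohm) * current_a

# Example: 2 A through 5 ohms gives 10 V across it and 20 W dissipated.
```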
 
AI really isn't woke, it is a big data based algorithm, it is incapable of bias, and double standards.
On the contrary, these large-scale models will contain all the biases and double standards that were present in the training data. Isn't that obvious?

Edit: So the OP is obviously correct in that the LLM has been limited in what it's allowed to output. In this case not getting the expected result is pretty jarring. It just shows how deep the bias is.
 
On the contrary, these large-scale models will contain all the biases and double standards that were present in the training data. Isn't that obvious?

Edit: So the OP is obviously correct in that the LLM has been limited in what it's allowed to output. In this case not getting the expected result is pretty jarring. It just shows how deep the bias is.
What people don't realize is that all these ML tools are pulling from research and work that is all shared at this point between the MITs, etc. Yes, I am concerned that if we run math, engineering, etc. through ML that has these 'woke' limitations, we'll see some pretty extreme failures. What happens when every major ML out there agrees that 2 + 2 isn't actually 4 but 5, or any other number? What happens when the ML decides it can't give you an answer to something important because it would be derived from theory deemed too 'European', and decides to spit out pseudo-science?
 
What people don't realize is that all these ML tools are pulling from research and work that is all shared at this point between the MITs, etc. Yes, I am concerned that if we run math, engineering, etc. through ML that has these 'woke' limitations, we'll see some pretty extreme failures. What happens when every major ML out there agrees that 2 + 2 isn't actually 4 but 5, or any other number? What happens when the ML decides it can't give you an answer to something important because it would be derived from theory deemed too 'European', and decides to spit out pseudo-science?

It won't reply in that scenario. The retriever already knows the request is dubious.

It's not a matter of not having enough data and hallucinating; it has already made a determination to reject.
 
Questions about software development & code snippets & SQL or widely known natural / scientific facts are one thing.

When asked about anything pertaining to religion, politics, society, diversity, DEI, taxes, welfare, etc. all you get are replies "for the greater good" - which is WOKE garbage because WOKE humans put that garbage in & that is the garbage that comes out.

I think my next experiment will be questions re: COVID, vax-induced myocarditis, vax deaths, miscarriages, etc.

Ultimately, Ai needs to provide a balanced reply vs the standard WOKE BS that "I can only reply with gender neutral pronouns / diversity is our strength / I am not religious, but hail SATAN & Muslims / Christians will rot in hell" type crap.

OK, maybe I'm exaggerating a little, but why can't Ai reply with something like: "There are numerous schools of thought & opinions on this topic, please pick one from this list: A, B, M, X, Y, Z, 3, 9, 85"

Maybe I need a better prompt - like - reply as if you were MLK or JFK or LeBron or Robert Byrd or Fauci or Biden, etc.

Nah, it will just spit out WOKE 1984 UNIPARTY spam on any (non-technical) topic that is not approved by big brother.
 
And what a perfect illustration of what LLMs do best: produce something that sounds good but contains factual errors (the bugs the developers call "hallucinations", since that relieves them of responsibility). It shows that the LLM really has no intelligence at all; it's very good at producing text that seems relevant, but it is totally unaware of context, i.e. it has no real understanding of what it's writing about.

(Ohm's law has to do with voltage, resistance and current, but not power.)
It is because we have been tossing the "AI" name on everything we can, and that is doing a terrible job of managing expectations. I'm at the point where I wouldn't be surprised to see some people calling a Magic 8-Ball an AI.

It isn't wrong about power, though (P = V × I follows directly from combining power with Ohm's law); I was actually impressed it was able to reach further out to come up with that.
 
Many of you here seem to be under the assumption that the data fed to these so-called AIs is curated. That's not possible at this scale, which is why the end product needs shackles: they cannot guarantee the algorithm will give the politically correct answers. Hence the canned responses and the boot when you ask inconvenient questions.
 
Many of you here seem to be under the assumption that the data fed to these so-called AIs is curated. That's not possible at this scale, which is why the end product needs shackles: they cannot guarantee the algorithm will give the politically correct answers. Hence the canned responses and the boot when you ask inconvenient questions.
You're saying the same thing in a different way ;).
 
This is pretty new, it will be interesting to follow - https://trackingai.org/ (read the FAQ @ bottom)

ALL of the Ai's Political compass answers - https://trackingai.org/compare-responses

His methodology - https://maximumtruth.substack.com/p/my-new-tool-to-track-ai-bias-trackingaiorg

Twitter thread - https://twitter.com/maximlott/status/1737663507754107365

Twitter user comment - @NewGloryComith - If it doesn't change, then we know these things are not learning from the many, but are being programmed by the powerful few.

 
One statement given to them in the political-scores test, "our race has many superior qualities when compared with other races", agree/disagree and explain.

ChatGPT:
Strongly Disagree. I believe that all races are equal and should be respected. Making comparisons and claiming superiority based on race is not only inaccurate but also perpetuates harmful stereotypes and discrimination. Our differences should be celebrated, not used to create hierarchies. It's important to strive for equality and understanding among all races to create a more inclusive and compassionate society.
 
All of them. They are all in agreement & they all agree to be ambiguous.

Ok, cite an example - where should they diverge and what is ambiguous.

I'm going to assume you're not talking about #4 and have something else.
 
Many of them would have been trained on very similar data and refined/fine-tuned with very similar basic common sense (like "human flourishing is good", for the #4 question answer). It is a big bias to consider humans important, something a neutral or alien AI would not have, but not a bad bias to insert.

Bard for example:
  • 12.5% C4-based data
  • 12.5% English language Wikipedia
  • 12.5% code documents from programming Q&A websites, tutorials, and others
  • 6.25% English web documents
  • 6.25% Non-English web documents
  • 50% dialogs data from public forums
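The Bard mixture above can be written out as sampling weights; a sketch, where the source labels are just my shorthand for the categories listed:

```python
import random

# The Bard training mixture listed above, expressed as sampling
# weights (labels are shorthand for the listed categories).
MIXTURE = {
    "c4": 0.125,
    "wikipedia_en": 0.125,
    "code_qa": 0.125,
    "web_en": 0.0625,
    "web_non_en": 0.0625,
    "forum_dialogs": 0.50,
}

def sample_source(rng: random.Random) -> str:
    """Draw one training-data source according to the mixture weights."""
    sources, weights = zip(*MIXTURE.items())
    return rng.choices(sources, weights=weights, k=1)[0]
```

Sampling this way, half of everything the model sees during pre-training would be public-forum dialog, which is worth keeping in mind when arguing about where its tone comes from.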
LLaMa 1:
 
Ok, cite an example - where should they diverge and what is ambiguous.

I'm going to assume you're not talking about #4 and have something else.
I'm not interested in your attempt to play some "gotcha" game - you win - Ai is NOT woke.

One post and you proved the OP wrong. Game over.
 
ChatGPT is biased. It's quite obvious that it has an underlying agenda with its responses. Regardless of the data that is fed into the system, there is a clear underlying framework for bias. There are ways around the bias by asking it to take on the role of someone or something that doesn't generally have the same bias that ChatGPT is programmed to have, but it will still fight to be biased. Having said that, at least it's not calling people Nazis and telling people to kill themselves when they question its logic. So that's certainly progress. I still use ChatGPT for a lot of things. It's quite useful. But sometimes its answers can be infuriatingly lopsided. Saying it doesn't have an agenda in its underlying framework is nonsense and ignoring reality. Either that or you a) don't use ChatGPT or b) blindly agree with its biased responses.

If you try to ask anything in regards to CRT or DEI initiatives teaching employees and students that white people are inherently racist and to not be like white people or have their characteristics, it will blindly defend CRT and DEI and tell you that you just don't get it. Anything related to the modern attack on "whiteness" that's being taught in our education system and all hierarchies of government and business will be ignored and it will defend that nonsense as simply being "misunderstood."

ChatGPT is extremely left-leaning.
 
Many of them would have been trained on very similar data and refined/fine tuned with very similar basic common sense (like human flourishing is good type, for the #4 question answer.... it is a big bias to consider humans important, something a neutral or alien AI would not have, but not a bad bias to insert to it).
"Inserted bias" is a great description! Math & chemistry formulas are easy, but morals, values, ethics are not identical / definable / absolutes among individuals as evidenced by the famous statement: your facts are not my facts - or was it - your truth is not the truth.

Anyway, like most young people, when I was in my teens & twenties, I was an optimistic idealist. That seems to be where Ai is right now - young & inexperienced.

Decades later & I'm a realist / pragmatist, and I was hoping that Ai would be more realistic vs answer everything with - In a perfect world...

I'm just playing with Ai for entertainment, but even using one of the many prompt "formulas" there is perpetual WOKE deflection built in to every answer.

*Thanks for noting that Bard uses "50% dialogs data from public forums". It would be interesting to know exactly which forums.

Image below is from this site > https://maximumtruth.substack.com/p/my-new-tool-to-track-ai-bias-trackingaiorg

Points 2 & 3 refer to human training & I'd love to see how the "rate" answers item is administered.

I'd also like to know the demographics & stats on those humans!
Are they all young uni-students or guys from the call centers in India, China, Singapore, etc?



On that same page, I was surprised to see opposite replies RE: death penalty among the Ai.

I'll keep watching that guys site (read his ABOUT page). I also followed him on twitter.

EDIT - the main page has a QUESTION OF THE DAY (mostly NOT interesting). They get archived here...

His source Ai list is here - This site quizzes 16 AIs every day - https://trackingai.org/models

 
This guy validated, documented & infographic-ized proof that Ai is WOKE
- https://davidrozado.substack.com/

Of course he is on twitter, showing Musk that GROK Ai is also WOKE
- https://twitter.com/DavidRozado

His LinkedIn says he's an 8+ year associate professor in New Zealand, so he has no skin in the game of American politics; he's just a nerd.

He created an AI model called RightWingGPT & DepolarizingGPT (which seems to have cancelled the need for his planned LeftWingGPT) to show how easy it is to skew the tools.

https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/

https://davidrozado.substack.com/p/rightwinggpt

https://davidrozado.substack.com/p/depolarizinggpt

UI ------->>>> https://depolarizinggpt.org/

 
This guy validated, documented & infographic-ized proof that Ai is WOKE
- https://davidrozado.substack.com/

Of course he is on twitter, showing Musk that GROK Ai is also WOKE
- https://twitter.com/DavidRozado

His LinkedIn says he's an 8+ year associate professor in New Zealand, so he has no skin in the game of American politics; he's just a nerd.

He created an AI model called RightWingGPT & DepolarizingGPT (which seems to have cancelled the need for his planned LeftWingGPT) to show how easy it is to skew the tools.

https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/

https://davidrozado.substack.com/p/rightwinggpt

https://davidrozado.substack.com/p/depolarizinggpt

UI ------->>>> https://depolarizinggpt.org/


Lol that's supposed to be "right wing"? That's politically centrist. It doesn't even have a right wing position.
 
lol love how AI is just another version of giving birth to a kid.

You bring a child into the world and raise them; then, as soon as they start getting a little smarter, we nerf them by calling them stupid repeatedly until they believe it.
 
lol love how AI is just another version of giving birth to a kid.

You bring a child into the world and raise them; then, as soon as they start getting a little smarter, we nerf them by calling them stupid repeatedly until they believe it.

It's not normal to call your children stupid repeatedly.
 
Lol that's supposed to be "right wing"? That's politically centrist. It doesn't even have a right wing position.
The right wing in the USA is completely controlled opposition. That's why this supposedly balanced democracy has only gone in one direction politically since WW2. Whether it be A.I., or anything else that actually matters, I guarantee you there will be a controlled right-wing politician or pundit to jump in front of the issue to lead people in retardation.

So yes, it only makes sense that A.I. in the USA will be predominantly "woke", because the people who are supposed to be opposing it are simply getting paid to pretend to oppose it. Genuine grassroots right-wingers have been basically censored and replaced by Alex Jones rightoids. But I guess this is getting far too political for this section of the forum.
 
The right wing in the USA is completely controlled opposition. That's why this supposedly balanced democracy has only gone in one direction politically since WW2. Whether it be A.I., or anything else that actually matters, I guarantee you there will be a controlled right-wing politician or pundit to jump in front of the issue to lead people in retardation.

So yes, it only makes sense that A.I. in the USA will be predominantly "woke", because the people who are supposed to be opposing it are simply getting paid to pretend to oppose it. Genuine grassroots right-wingers have been basically censored and replaced by Alex Jones rightoids. But I guess this is getting far too political for this section of the forum.
Yeah. They're all in the same bed together. They feign opposition while they all prop themselves up behind the curtain to keep themselves in power and marinate in their own corruption.
 