Why bother installing an OS? AI can just simulate it for you.

This AI shit is really getting out of hand.

I've been a tech enthusiast my entire life, but this AI and machine learning stuff, I hate it.

I hate it with a passion. If I were king of the world, AI and machine learning would be completely banned in any and all applications.

I'm absolutely horrified by how many people display complete comfort in outsourcing parts of their lives to AI, which by its very definition cannot be trusted to be accurate.

It doesn't matter how good it gets, it will never be accurate enough for me to trust it with even simple stuff like my calendar, let alone for anything more serious (and potentially life threatening) like operating a vehicle.

Both developers and users are jumping headfirst into an era of machine learning without any precautions whatsoever.

I will never trust anything which I either don't completely control, or for which the parameters or algorithms that generate them haven't been statically validated based on known models in advance.

There ought to be laws preventing the use of black box models for anything and everything of significance.
 
People can't be trusted to plug in a video card power cable all the way or to not use a hair dryer in the shower; clearly they need more AI in their lives making choices for them.
 
There ought to be laws preventing the use of black box models for anything and everything of significance.
Why? There are plenty of things completely out of our control. And plenty of things that we do not understand (including ourselves). Build the black box, test a wide range of inputs, verify the outputs, and make sure it is statistically unlikely to fail.
 
You can simulate Windows 95 on a Sega Genesis; that doesn't mean anyone took it seriously.
 
I don't completely disagree with Zarathustra[H], and I work in the field. People are so ****ing lazy nowadays that they'll buy into almost anything that makes their lives require less effort. Anyone remember that one picture of the guy not saluting in the crowd to Hitler? Be that guy.
 
Why? There are plenty of things completely out of our control. And plenty of things that we do not understand (including ourselves). Build the black box, test a wide range of inputs, verify the outputs, and make sure it is statistically unlikely to fail.

Statistics doesn't work unless you are using a truly random sample, AND you make sure your metrics and inputs are actually good.

There was a story a while back (but either my google-fu is failing me right now, or special interests have tried to bury it, as I can't find it) where they tested black-box AI models' capability to screen patient X-rays for tuberculosis.

Despite passing statistical tests, the model suffered horrendous misdiagnosis problems in many cases, because it factored in things that were never intended. Upon troubleshooting, it was found to be using things like the brand and model of the X-ray machine as a predictor of who had tuberculosis.

Tuberculosis is more common in the third world. So are cheaper and older X-ray machines. So the black-box model assigned a higher likelihood of a positive tuberculosis diagnosis if a cheaper or older X-ray machine was used.

This is not an isolated incident. It is an inherent flaw in black-box models, which is why the FDA typically does not allow software to be treated as a black box.

And sure, you can start massaging the black-box model, preventing it from using the X-ray machine model directly, but who knows what else is hiding in the image that it is using incorrectly. Is the image grainy or low resolution? Maybe grainy images are a predictor of older and cheaper X-ray machines. If you go down this path you just wind up playing whack-a-mole as you find more and more spurious correlations in the model.

It is extremely important to have a hypothesis as to WHY something is happening, such that a test can be properly structured to test it. Black-box models just correlate data. Just take a look at the Spurious Correlations website to see how great an idea that is.

The whole scientific method breaks down if all you use is a black box.

By all means, use black boxes to screen raw data for potential ideas, then analyze the outcomes, hypothesize why the outcomes are what they are, and test those hypotheses in structured tests, but never, ever, under any circumstance rely directly on black-box data for anything of importance.
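
To make the failure mode concrete, here's a toy sketch (purely synthetic data and a made-up "old_machine" flag; nothing to do with the actual study) of how a black box can look fine on paper while leaning on the scanner instead of the patient:

Code:
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Made-up confounder: older/cheaper scanners are used where TB is more
# prevalent, so the flag correlates with the label without saying
# anything about the patient.
old_machine = rng.integers(0, 2, n)
tb = rng.random(n) < np.where(old_machine == 1, 0.55, 0.02)

# One weak, noisy "real" radiological finding plus the scanner metadata.
finding = tb.astype(float) + rng.normal(0.0, 2.0, n)
X = np.column_stack([finding, old_machine])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, tb)

# Score the SAME borderline image twice, changing only which machine took it.
print(model.predict_proba([[0.5, 0]])[0, 1])  # scanned on a newer machine
print(model.predict_proba([[0.5, 1]])[0, 1])  # scanned on an older machine
# The predicted TB probability jumps purely because of the scanner flag --
# exactly the kind of spurious shortcut described above.

Swap the scanner flag for image graininess or resolution and you get the same whack-a-mole problem.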
 
By all means, use black boxes to screen raw data for potential ideas, then analyze the outcomes, hypothesize why the outcomes are what they are, and test those hypotheses in structured tests, but never, ever, under any circumstance rely directly on black-box data for anything of importance.
I mean.. you already flipped your position from your previous post :p. Sounds like what you are saying is that there may be use-cases where these black boxes can be useful, but a lot of applications shouldn't use them.
 
This AI shit is really getting out of hand.

I've been a tech enthusiast my entire life, but this AI and machine learning stuff, I hate it.

I hate it with a passion. If I were king of the world, AI and machine learning would be completely banned in any and all applications.

I'm absolutely horrified by how many people display complete comfort in outsourcing parts of their lives to AI, which by its very definition cannot be trusted to be accurate.

It doesn't matter how good it gets, it will never be accurate enough for me to trust it with even simple stuff like my calendar, let alone for anything more serious (and potentially life threatening) like operating a vehicle.

Both developers and users are jumping headfirst into an era of machine learning without any precautions whatsoever.

I will never trust anything which I either don't completely control, or for which the parameters or algorithms that generate them haven't been statically validated based on known models in advance.

There ought to be laws preventing the use of black box models for anything and everything of significance.
The spice must flow.
 
I mean.. you already flipped your position from your previous post :p. Sounds like what you are saying is that there may be use-cases where these black boxes can be useful, but a lot of applications shouldn't use them.

Yeah, part of my first message was me just being a grumpy old man.

Take my previous post and add a line like "If people can't seem to resist using black box models for direct decision-making, then" before "AI should be banned" and it probably better reflects my actual thinking on the subject.

AI and machine learning can be a very useful tool for sifting through massive quantities of data looking for patterns. Not going to lie about that. We just have to be cautious how we use the outputs.

I think of it like using "Design of Experiments" in a screening study (like a Plackett-Burman design). These are extremely useful for sifting through large numbers of variables looking for whether or not they have an impact on the output, but you wouldn't rely on a screening study for your final conclusions. They just aren't powered for that. You use the screening study to narrow down which variables to focus on, and then do proper studies on those variables.
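
For anyone who hasn't run into these, here's a rough hand-rolled sketch of what a 12-run Plackett-Burman design looks like (the generator row is the standard published one; the pb12_design helper is just my own naming):

Code:
import numpy as np

# 12-run Plackett-Burman screening design for up to 11 two-level factors.
# Row 1 is the tabulated generator; rows 2-11 are its cyclic shifts, and
# row 12 is the all-"low" run.
def pb12_design():
    generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(generator, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.vstack(rows)

X = pb12_design()   # 12 runs x 11 factors, entries are +1 (high) / -1 (low)
print(X)
print(X.T @ X)      # 12 * identity: the columns are mutually orthogonal

Eleven factors screened in only twelve runs is exactly why these are so handy for narrowing focus, and exactly why you don't hang your final conclusions on them.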

If people use black box machine learning models in this way, as a tool to help narrow their focus, I think it could be helpful, without the many pitfalls involved. Unfortunately there are way too many people pushing for black box machine learning models to be used for direct decision-making, and I'd argue that regardless of how good they get, even in a million years, they should never under any circumstance be relied upon for anything important.

By all means, use it for frivolous fun toys like a chatbot, or artist renderings of funny pictures, but never for direct decisions regarding anything of importance.

Don't make medical decisions based on direct black-box machine learning models, don't drive a car based on direct AI outputs, don't even use a voice assistant to schedule a calendar entry based on AI interpretations (that is, if your time and your appointments are important to you).
 
don't drive a car based on direct AI outputs,
What will your opinion be if in particular conditions self driving cars are found to have much lower accident rates than human drivers? Like let's say in the city of San Francisco there are several studies that show self driving cars that use black box machine learning components in their system have much lower accident rates than human drivers in San Francisco.

Would you not recommend people in San Francisco to use those self driving cars?
 
That's cool I guess. AI is more of a fun novelty than anything else. It can do a few useful things well, but for the most part the tech as a whole has been over-promised and under-delivered. AGI is a pipe dream.
 
What will your opinion be if in particular conditions self driving cars are found to have much lower accident rates than human drivers? Like let's say in the city of San Francisco there are several studies that show self driving cars that use black box machine learning components in their system have much lower accident rates than human drivers in San Francisco.

Would you not recommend people in San Francisco to use those self driving cars?

I would challenge those results as being based on an inadequate dataset. They might get good results under ideal conditions, but conditions are not always ideal.

The existing models tend to be better at obeying speed limits than people do, and their many sensors tend to be better at tracking multiple obstacles than people are. They also never become distracted.

But when they screw up, they tend to screw up massively and in unpredictable ways.

I have a car with a semi-autonomous driving mode, and it has taught me to never trust autonomous driving modes.
 
Statistics doesn't work unless you are using a truly random sample, AND you make sure your metrics and inputs are actually good.

There was a story a while back (but either my google-fu is failing me right now, or special interests have tried to bury it, as I can't find it) where they tested black-box AI models' capability to screen patient X-rays for tuberculosis.

Despite passing statistical tests, the model suffered horrendous misdiagnosis problems in many cases, because it factored in things that were never intended. Upon troubleshooting, it was found to be using things like the brand and model of the X-ray machine as a predictor of who had tuberculosis.

Tuberculosis is more common in the third world. So are cheaper and older X-ray machines. So the black-box model assigned a higher likelihood of a positive tuberculosis diagnosis if a cheaper or older X-ray machine was used.

This is not an isolated incident. It is an inherent flaw in black-box models, which is why the FDA typically does not allow software to be treated as a black box.

And sure, you can start massaging the black-box model, preventing it from using the X-ray machine model directly, but who knows what else is hiding in the image that it is using incorrectly. Is the image grainy or low resolution? Maybe grainy images are a predictor of older and cheaper X-ray machines. If you go down this path you just wind up playing whack-a-mole as you find more and more spurious correlations in the model.

It is extremely important to have a hypothesis as to WHY something is happening, such that a test can be properly structured to test it. Black-box models just correlate data. Just take a look at the Spurious Correlations website to see how great an idea that is.

The whole scientific method breaks down if all you use is a black box.

By all means, use black boxes to screen raw data for potential ideas, then analyze the outcomes, hypothesize why the outcomes are what they are, and test those hypotheses in structured tests, but never, ever, under any circumstance rely directly on black-box data for anything of importance.
I’ve been arguing internally that we need to move to more explainable models with a curated feature space.

I get pushback because the explainable AI and curated feature space don’t perform as well as the model they are using and improving.

My retort is that it might perform better on your training data set, but a trainable AI with a curated feature space will never make incredibly dumb decisions based on “edge cases”.

It’s a losing battle
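
For anyone wondering what I mean in practice, here's a toy sketch (feature names and data are completely made up) of the curated-feature, explainable end of the spectrum: every input is something a human chose on purpose, and every fitted weight can be read and argued about in review.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical curated feature space: each column is a quantity a domain
# expert picked deliberately, not whatever metadata happened to be around.
feature_names = ["cough_weeks", "fever", "weight_loss_kg", "known_contact"]
X = np.column_stack([
    rng.poisson(2, n),        # cough_weeks
    rng.integers(0, 2, n),    # fever (0/1)
    rng.normal(0, 2, n),      # weight_loss_kg
    rng.integers(0, 2, n),    # known_contact (0/1)
])

# Synthetic labels generated from those same features, purely for the demo.
logit = -3 + 0.4 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + 1.2 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    # Each weight is inspectable; a nonsense weight would stick out in review.
    print(f"{name:>15}: {coef:+.2f}")

It won't top the black box on the training-set leaderboard, but nothing in it can quietly decide that the scanner brand is the most important "symptom".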
 
I’ve been arguing internally that we need to move to more explainable models with a curated feature space.

I get pushback because the explainable AI and curated feature space don’t perform as well as the model they are using and improving.

My retort is that it might perform better on your training data set, but a trainable AI with a curated feature space will never make incredibly dumb decisions based on “edge cases”.

It’s a losing battle

That is very sad to hear. Please keep up the good fight though!
 
ChatGPT is cool. Imagine the in-game NPC AI conversations it can lead to.

NPC shit talk in Gwent is about to make you cry on a deep personal level lol
 
I would challenge those results as being based on an inadequate dataset. They might get good results under ideal conditions, but conditions are not always ideal.

The existing models tend to be better at obeying speed limits than people do, and their many sensors tend to be better at tracking multiple obstacles than people are. They also never become distracted.

But when they screw up, they tend to screw up massively and in unpredictable ways.

I have a car with a semi-autonomous driving mode, and it has taught me to never trust autonomous driving modes.
"Any well thought-out study will never find that self driving cars (existing or future models) are safer than human drivers". Is that a good summary of your response to my question?
 
"Any well thought-out study will never find that self driving cars (existing or future models) are safer than human drivers". Is that a good summary of your response to my question?
I am not a fan of fully automated self-driving cars that have more power than a golf cart.
That said I am very much a fan of automated driving assist for medical emergencies and such.
The car detects that the driver fell asleep? OK, it decreases speed, turns on the warning flashers, and finds a safe shoulder to automatically pull over onto.
Heart attack, stroke, etc.: the same procedure, but the vehicle calls 911 for assistance.
I can see a strong argument in favor of self-driving mobility assistants for the visually impaired, or for other individuals with medical conditions that would not allow them to safely operate a vehicle, offering them more freedom than public transport can provide, since I think we can all agree those systems are straight-up garbage for most people. Again, in these cases: low speed, like 60 kph for around town, nothing highway/freeway capable.
 
It is extremely important to have a hypothesis as to WHY something is happening, such that a test can be properly structured to test it. Black-box models just correlate data. Just take a look at the Spurious Correlations website to see how great an idea that is.
Divorce rate and margarine consumption, I could see the link.
 
This AI shit is really getting out of hand.

I've been a tech enthusiast my entire life, but this AI and machine learning stuff, I hate it.

I hate it with a passion. If I were king of the world, AI and machine learning would be completely banned in any and all applications.

I'm absolutely horrified by how many people display complete comfort in outsourcing parts of their lives to AI, which by its very definition cannot be trusted to be accurate.

It doesn't matter how good it gets, it will never be accurate enough for me to trust it with even simple stuff like my calendar, let alone for anything more serious (and potentially life threatening) like operating a vehicle.

Both developers and users are jumping headfirst into an era of machine learning without any precautions whatsoever.

I will never trust anything which I either don't completely control, or for which the parameters or algorithms that generate them haven't been statically validated based on known models in advance.

There ought to be laws preventing the use of black box models for anything and everything of significance.
You know, replace "AI and machine learning" with "Human" and you have a valid argument.
 
I would challenge those results as being based on an inadequate dataset. They might get good results under ideal conditions, but conditions are not always ideal.

The existing models tend to be better at obeying speed limits than people do, and their many sensors tend to be better at tracking multiple obstacles than people are. They also never become distracted.

But when they screw up, they tend to screw up massively and in unpredictable ways.

I have a car with a semi-autonomous driving mode, and it has taught me to never trust autonomous driving modes.
A great counter-example here is the video (although I think there are multiple of these floating around now) of a Tesla following a tractor trailer carrying traffic lights on a freeway. The "self-driving, aware AI" in the Tesla kept identifying the completely inert traffic lights as traffic lights, even though any basic notion of context would indicate that traffic lights don't appear on a freeway. There are many more examples of this where these over-fitted models just fail completely, because they are not "aware" or "intelligent" in any sense. Worse, when something does go wrong it's nigh impossible to debug and trace why it went wrong. The only solution is feeding more data to try to reinforce "good" outcomes and prevent "poor" outcomes, which again leads to overfitting and constraining. Basically a self-feeding, incestuous pile of bullshit.
 
I definitely lean on the extremely skeptical side of things towards all this "modern AI", which is really just statistics based on morally bankrupt data collection.
Here's a pretty good take on why this trend is overall braindead and ultimately negative:
https://ploum.net/2022-12-05-drowning-in-ai-generated-garbage.html

and some comments on that article from a more "technical"-oriented board than [H]:
https://news.ycombinator.com/item?id=33864276
What? Trying to trap people into irrelevant technicalities and whataboutisms isn't "technical" enough for you?
 
What? Trying to trap people into irrelevant technicalities and whataboutisms isn't "technical" enough for you?
Hm? Are you referring to me trying to trap people?

edit: if so you misunderstood me. I'm trying to understand people's positions. I find that most arguments on the internet (and in real life) occur between people that don't even understand each other's points.
 
Hm? Are you referring to me trying to trap people?

edit: if so you misunderstood me. I'm trying to understand people's positions. I find that most arguments on the internet (and in real life) occur between people that don't even understand each other's points.
It's not so much this thread as it is past threads and discussions.

Edit: So no, I didn't have you in mind when writing that.
 
Yeah, part of my first message was me just being a grumpy old man.

Damn AI kids! Get off my lawn!

In all seriousness, I do agree with some of your points. We're really in the dawn of true AI and man we have a long way to go. Yet we have big business who wants this yesterday so they can make more money...to hell with the casualties. The downside of capitalism. Not like we've ever been there before.....errr....

I do believe that we will have things like self driving cars (for example). I want self driving cars and I love driving. I want to get all the idiots off the road...and brother, there's a lot of idiots out there. I've lost count of how many times I've nearly crashed on my way to work because of someone doing something insanely stupid. People are dangerous behind the wheel and I'll freely admit that I've made my share of mistakes too.

That being said, I recognize that AI isn't ready for this yet. I'd rather they get it right first. However, I'm actually perfectly ok with someone dying in a crash caused by AI error. Yes this sounds insane, but think of how many people die each year due to people being stupid in their cars. If the AI does stupid significantly less, I'm all for it.

Sure we have loads of problems to sort out first. For example, what if someone hacks it? Tricks the input somehow? A particular situation where it crashes the car EVERY time? It's still a computer after all. Give it time, and hopefully we will not rush it.

I see a lot of potential for assisting researchers or doctors. I don't think AI should replace either, at least not in our lifetime.

Makes me think of I, Robot. Not the movie, the book. Only Asimov book I liked lol.
 
No OS? No problem! ChatGPT can emulate a Linux shell for you. It can emulate lynx for you. It can emulate Python for you. It can even emulate YOU for you (ok, maybe it can't do that, not sure yet).

https://www.engraved.blog/author/jonas/

Some of the comments are also interesting.

But note that Stack Overflow is banning the use of it because it is being found to be very, very inaccurate.
 
ChatGPT doesn’t publish code with usable context. We have been going on about this for a week.

Stable Diffusion also has context issues, in that JSON refinement of your query starts to reveal odd weighting in image sorting.

Just use the product and forget the talking heads that clearly have no depth using a tool.

The plain writing examples are already a subject of complaint for a number of Python and R developers on their projects because, frankly, it sucks.

GPT-4 is something I’ll watch, and the various APIs will drive me to build another desktop eventually. Hopefully no one will just decide to productize existing GPT-3-based toys.
 
But note that Stack Overflow is banning the use of it because it is being found to be very, very inaccurate.
Yeah, we dealt with this using CodeWhisperer, Copilot, and the various reskinned forks you see pop up everywhere.

I feel bad for the kids that pay for a tool of whatever kind right now just to discover the output is crap. If it’s not crap, then the goals were such low-hanging fruit that they should have produced the output on their own.

Thinking a payload with a bunch of if statements isn’t anything specific.

It’s like the talking heads have no core CS knowledge, and some have shared screenshots of "code" or "malware" or resource configs that wouldn’t apply to anything if you ran it.

PS: writing VM configs off a CLI has been around since PXE on Unix. We have been launching resources for a long time. ML isn’t a very good use case for it. Getting ChatGPT to write me a simple Vue or Next mapping SPA… it won’t run. I had it declare a couple of functions with the same name a couple of times.
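
To illustrate why those duplicate declarations are such a rotten failure mode (toy example, not the actual generated code; shown in Python here, but plain function declarations in JS generally behave the same way): the language doesn't complain, the later definition just silently wins.

Code:
# Two definitions of the same name: no error, the second silently replaces
# the first, so every caller gets the wrong behavior.
def total_price(items):
    return sum(i["price"] for i in items)

def total_price(items):          # "regenerated" again further down the file
    return len(items)

print(total_price([{"price": 9.99}, {"price": 5.00}]))   # prints 2, not 14.99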
 