
ChatGPT linked to declining academic performance and memory loss in new study

So I was teaching a physics lab today, and one of the big goals was to recognize error. One of my students came to me with a ChatGPT log, with stuff talking about how the force due to gravity was fairly constant, but according to general relativity it can vary from place to place... I just had a major face palm incident and wanted to throw his phone at the wall. Luckily cooler heads prevailed... and I am still employed.
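For a sense of scale, here is a minimal Newtonian sketch (standard textbook constants, no general relativity required) of how little g actually changes even between sea level and the summit of Everest:

```python
# Back-of-the-envelope check: how much does g really vary with altitude?
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
R = 6.371e6     # Earth's mean radius, m

def g_at_altitude(h_m: float) -> float:
    """Newtonian surface gravity at height h above the mean radius."""
    return G * M / (R + h_m) ** 2

g_sea = g_at_altitude(0.0)
g_everest = g_at_altitude(8849.0)   # summit of Mount Everest
print(f"g at sea level: {g_sea:.4f} m/s^2")
print(f"g on Everest:   {g_everest:.4f} m/s^2")
print(f"relative drop:  {(g_sea - g_everest) / g_sea:.2%}")   # roughly 0.3%
```

A 0.3% shift between sea level and the highest point on earth is exactly why "fairly constant" is the right answer in an intro lab, where measurement error is far larger than that.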
I haven't been in academia in decades but I can imagine what all this AI stuff has done to the imagination of students.

Funny off-topic story: I went back to grad school and was a lab TA for one of my former undergrad profs. I caught a couple of frat bros cheating and, due to circumstances, confronted them myself (we were not supposed to do that). I told the prof what happened around lunch time and he really didn't react, saying he would let me know. At about 6, he shows up at my office and closes the door. In a very mild manner, he discussed what to do about it, and then, red-faced with that pulsing blood vessel in his forehead, just came out and shouted "But doesn't it fucking piss you off they think you are too stupid to figure it out!!!!!!" It was an LMAO moment for me. That prof had never talked like that before, and he never did again in the time I was there, but clearly he was PA-Hissed.
 
Duh

“While the discussion on generative artificial intelligence, such as ChatGPT, is making waves in academia and the popular press, there is a need for more insight into the use of ChatGPT among students and the potential harmful or beneficial consequences associated with its usage. Using samples from two studies, the current research examined the causes and consequences of ChatGPT usage among university students. Study 1 developed and validated an eight-item scale to measure ChatGPT usage by conducting a survey among university students (N = 165). Study 2 used a three-wave time-lagged design to collect data from university students (N = 494) to further validate the scale and test the study’s hypotheses. Study 2 also examined the effects of academic workload, academic time pressure, sensitivity to rewards, and sensitivity to quality on ChatGPT usage. Study 2 further examined the effects of ChatGPT usage on students’ levels of procrastination, memory loss, and academic performance. Study 1 provided evidence for the validity and reliability of the ChatGPT usage scale. Furthermore, study 2 revealed that when students faced higher academic workload and time pressure, they were more likely to use ChatGPT. In contrast, students who were sensitive to rewards were less likely to use ChatGPT. Not surprisingly, use of ChatGPT was likely to develop tendencies for procrastination and memory loss and dampen the students’ academic performance. Finally, academic workload, time pressure, and sensitivity to rewards had indirect effects on students’ outcomes through ChatGPT usage.”

Source: https://www.psypost.org/chatgpt-lin...mic-performance-and-memory-loss-in-new-study/
This summary was written by two different LLM bots trying to debate each other on CBS. ;P
 
This summary was written by two different LLM bots trying to debate each other on CBS. ;P
Likely not, but I appreciate that their summary boils down to: students who cram and get their work done at the last minute are more likely to use ChatGPT, and students who cram the night before and do their work at the last minute are less likely to retain that information for long periods, as less of it goes from short-term to long-term memory.
ChatGPT makes it easier to get better grades, as it does a better job of condensing information, so people who are short on time are more likely to use it.
Therefore ChatGPT is the problem…

All the right data and the wrong conclusion.

ChatGPT, in their own study, is an irrelevant variable: it didn't show a change in student behaviour, it only improved the outcome.

At best the study shows it reinforces bad behaviour.
 
Old guy speaketh....

You know, this view of leveraging "AI" is how we felt about people that couldn't answer questions without "Google" (web index searching).

Slippery slope.... we used to have "to know" just using our own brains, or working together with multiple people's brains.

Don't get me wrong. I'm guilty as well. My memory isn't as jam packed as it once was. However, I haven't lost the ability to code things up without aid. But, when stumped, I Google or prompt an LLM.
 
https://arstechnica.com/information...-chatgpt-to-the-ti-84-enabling-easy-cheating/

“Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating

Tiny device installed inside TI-84 enables Wi-Fi Internet, access to AI chatbot.”​


 
https://arstechnica.com/information...-chatgpt-to-the-ti-84-enabling-easy-cheating/

“Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating

Tiny device installed inside TI-84 enables Wi-Fi Internet, access to AI chatbot.”​

I mean the only place a calculator really helps is something like a math class (or maybe physics), and you can program every formula you couldn't be bothered to remember into those things anyways, which is often more help than you need to cheat on a test. Bottom line, if you see someone whip out their calculator for a history test, maybe extra scrutiny should be paid to that person.

Plus, I can imagine Gen Z trying to type out sentences on a calculator for ChatGPT to get the help they want; their minds would crumble to dust as they are transported back to a time before they were born, trying to send a text to someone using a flip phone keypad.
 
I mean the only place a calculator really helps is something like a math class (or maybe physics), and you can program every formula you couldn't be bothered to remember into those things anyways, which is often more help than you need to cheat on a test. Bottom line, if you see someone whip out their calculator for a history test, maybe extra scrutiny should be paid to that person.

Plus, I can imagine Gen Z trying to type out sentences on a calculator for ChatGPT to get the help they want; their minds would crumble to dust as they are transported back to a time before they were born, trying to send a text to someone using a flip phone keypad.
I mean I programmed the periodic table into mine, along with most of the formulas and such needed for physics and chemistry.

Helped with all tests until the finals, which required the administrators to blank the devices beforehand. Knowing that was going to happen, you just needed to actually study; they didn't hide the fact they would do this, with daily reminders for weeks.

Yet you would be (but likely aren't) shocked at how many students apparently didn't know, and broke down at the front of the room when it happened.
 
Don't get me wrong. I'm guilty as well. My memory isn't as jam packed as it once was. However, I haven't lost the ability to code things up without aid. But, when stumped, I Google or prompt an LLM.
A lot of people that started in the era of web copy-pasting, starter templates, good LSP code completers, and compilers with legible error output would be guilty too; with good LLMs it will be even more the case.

Frankly, people able to code all day long with zero help (including compiler feedback) in Notepad, like people of the '90s did, will get rare.

At best the study shows it reinforces bad behaviour.
I feel that is a lot of what the study is about. At least there is a temporal element (before and after GPT was used by the students in question), but that result is not really presented either; they mostly talk about differences versus students who did not use it, instead of focusing on the before-after change, which could be unspectacular.

Helped with all tests until the finals, which required the administrators to blank the devices beforehand.
We had a popular program written in assembly (the ones written in BASIC always had a tell of something being different) that launched a menu mimicking the TI-83's reset/delete-everything menu, to make the calculator look empty, for exactly that.


Historical precedents for something like that are hard to interpret, because the people warning about the effects on students of reading and writing, and later of calculators, were usually close to 100% right about the downside. The Greek philosophers who warned that making people able to read and write would shatter people's memory were quite right (it just took until paper got cheap enough, in the 18th-19th century, to fully happen); we did lose a lot versus school systems that still used mental arithmetic and the abacus for math, and there is a cost.

The upside of reading and writing, or of computers for math, tends to be underrated too.
 
The research explores the impact of ChatGPT usage among university students. Study 1 developed a reliable scale to measure usage, while Study 2 identified factors influencing it, such as academic workload, time pressure, and sensitivity to rewards. The study found that students under more stress were more likely to use ChatGPT, but those motivated by rewards were less inclined. However, frequent use of ChatGPT was linked to increased procrastination, memory loss, and lower academic performance, suggesting potential negative consequences despite its convenience.
A similar research study could be performed on students who study frequently vs those who cram the night before, and you would likely get similar results.
ChatGPT and AI aren't the problems here; behavior and methodology are.
 
Old guy speaketh....

You know, this view of leveraging "AI" is how we felt about people that couldn't answer questions without "Google" (web index searching).

Slippery slope.... we used to have "to know" just using our own brains, or working together with multiple people's brains.

Don't get me wrong. I'm guilty as well. My memory isn't as jam packed as it once was. However, I haven't lost the ability to code things up without aid. But, when stumped, I Google or prompt an LLM.

So, I'm a tweener on this.

I am so happy we don't live in the bad old days when your inquisitive mind could come up with a question and it would just have to go unanswered, as you could get a bullshit answer from an adult who probably didn't know either, or you could look up a very short and inadequate answer in an encyclopedia (if you were rich enough to have a set at home), or traipse down to a local library and look it up in their encyclopedia, and when their answer was inadequate, spend the rest of the day flipping through god-awful card catalogues and having to read a full book just to answer a short question.

(image: a library card catalogue)


I don't miss these god awful things at all. I would have loved to participate in a card catalog burning ceremony at the end of their reign, once they were replaced by searchable databases.

And while I am nostalgic for doing searches on serial unix terminals (they actually really worked, gave you everything you needed and nothing you didn't), being able to look things up on the internet from home, and later from anywhere on a phone, has been transformative, and I would never desire to go back to the before times.

Imagine thinking of a question like "I wonder what the longest bridge in the world is" or "I wonder how long jaguars live in the wild" or "I wonder what the top speed of the Lamborghini Countach was" or "I wonder what the top flight ceiling of a Boeing 777 is" or "I wonder what the maximum intercept range of a RIM-66 SM2 missile is" and then just having to go about your day without knowing. I can't go back to that dark, dark time.

Throughout my high school and college career (mid '90s to early 2000s), all the good instructors realized you no longer had to waste time and effort memorizing facts you could just instantly look up in the future, and had things like open-book tests. The important part was that you actually understood the subject enough to use it, not that you memorized a damn book. I totally agreed with this approach to learning.

To this day I have not memorized the formulas for heat transfer from thermodynamics, but I sure as hell remember the concepts, and if I need to do some heat transfer calculations, I can just look those old formulas up in my textbooks I still have in my bookshelf right behind me (or more likely just google them) and just get right to it.
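For instance, once the formula is looked up, applying it takes seconds. Here is a minimal sketch using Fourier's law for steady one-dimensional conduction through a flat wall; the conductivity is a typical handbook value and the wall dimensions are made up for illustration:

```python
# Fourier's law for steady 1-D conduction through a flat slab: q = k * A * dT / L

def conduction_heat_rate(k: float, area_m2: float, delta_t_k: float, thickness_m: float) -> float:
    """Heat rate in watts through a slab with thermal conductivity k (W/(m*K))."""
    return k * area_m2 * delta_t_k / thickness_m

k_brick = 0.72   # W/(m*K), a common handbook value for brick
q = conduction_heat_rate(k_brick, area_m2=10.0, delta_t_k=20.0, thickness_m=0.2)
print(f"Heat loss through the wall: {q:.0f} W")   # 720 W
```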

But having things auto-summarized and done for you by AI is just one step too far. And - of course - it is having repercussions.

It's just one step along the way to us becoming the useless humans from the Wall-E film.

(image: the chair-bound humans from Wall-E)


I foresee a future not too far from now where humanity goes from being mostly useless consumers of entertainment and fashion to completely useless consumers of entertainment and fashion.

Most of us already put style ahead of substance, and the "AI revolution" (if you want to call it that) is only going to make this way, way worse.

This is why I have been completely avoiding any and all "language models" or "personal assistants". I know it is convenient, and convenience usually trumps all other things, and I don't want to be tempted to be spoonfed untrusted information and gradually degrade into a useless person. I refuse.

I think there are valuable things AI and machine learning can be used for in early stage scientific settings to sift through massive amounts of data looking for patterns. Things which would otherwise be impossible or infeasible, but this mass pushing of AI language models for consumer consumption is a horrible horrible idea, and I wish it could be stopped. Like, completely removed as an option, for the benefit of society.
 
This is why I have been completely avoiding any and all "language models" or "personal assistants". I know it is convenient, and convenience usually trumps all other things, and I don't want to be tempted to be spoonfed untrusted information and gradually degrade into a useless person. I refuse.
I have been evaluating various models to see if I can streamline my workflow, and have come to a few conclusions:

1) It ain't for me. Pity, as I expected to be able to reduce some drudgery, not increase it.

2) This stuff is dangerous in the same way modern edible substances are. Those ultra-processed texts are going to damage mental health (both at the personal and the societal level) to a level similar to how our food industries have been fattening us for the slaughter.

3) Evangelists of this sort of "AI" rank slightly below the marketing division of the Sirius Cybernetics Corporation in my eyes.
 
all the good instructors realized you no longer had to waste time and effort memorizing facts you could just instantly look up in the future,
Not sure they were right (those who pushed it far). There is a difference between wasting time and effort memorizing exact, not-that-useful facts, and the fact that thought comes from knowledge: you need to know what needs to be looked up. In 2025, every fact that exists that you could learn is easy to instantly look up; does that mean there is no value in knowing any factual knowledge? You look at a plan, and there is value in feeling something is not right because of factual knowledge, without having to go look up the structural strength of every material (of course, exact-decimal type learning is irrelevant). Same for a surgeon: yes, everything he has to learn is easy to look up, but he cannot afford to do it during the operation; he needs perfect knowledge of the anatomy he does not see behind the tissues.


I imagine people still using a message board all lived through the web search engine + Wikipedia school panic of the late '90s, and expressions of the type: where did you read that, the Internet!? Wikipedia!? The funny thing is, like always, they were not wrong; for all the errors and bias of the old dictionaries, encyclopedias, etc., there was value in the manual labor, with no algorithm knowing what you like.

A lot of the AI issues we can easily think of were true for GPS versus learning cities and using maps, for having books, for being able to write instead of memorizing everything. The people who talked about the dangers of writing, or the dangers of GPS, were all right, of course: a lot of people have bought a charger at a corner store in a panic because their GPS went down and they were lost without it, and our memory is absurdly bad versus the people who lived before paper got cheap enough to be used for notes around the 18th century; they used to have classes just about developing memory techniques and tricks.

Calculators versus people with abacuses and memorized sine, square-root, and multiplication tables in their heads: the people who said calculators would make people worse at doing math in their heads were absolutely right, and about the consequences of that as well, but they did tend to underestimate the power of Excel and MATLAB for society. Same for the power of the written word and a literate population.

It is hard to imagine everyone having a powerful lawyer in their pocket that takes the time to understand them and look for precedents, for the regular citizen dealing with a powerful government or corporation; people having a personal high-level accountant, not just the rich; people having doctors that follow them all their life, not just the rich; small groups of people, without having to answer to their financiers and distributors, able to engage in producing the cultural products that are the most popular and resonant because they cost the most to make, like blockbuster movies and video games...

Obviously, people who already have really good humans doing all of that for them will not even see the point of a giant AI advance, like the rich in Downton Abbey who scoff at the idea of central heating, electric light, or a gas oven; their servants, who wake up two hours before them to heat the house, cut wood, and clean the chimney, do it perfectly fine. People with their own family accountant, family doctor, family lawyer, and personal assistant at work will not understand the peasants who do not have those things. And everyone in politics, or with a powerful voice, is one of those people.

Hard to tell. There has been a lot of debate in recent decades about whether humans engaging in farming, which started it all, was a good idea; whether seeking some security was our downfall; whether wheat domesticated humans and not the other way around, and we should have stayed hunter-gatherers forever. There are no solutions, only compromises.
 
Social media and AI are completely ruining how people think. Externalizing what makes us human is extremely BAD. Yes, it can help solve problems, but we're given things that try to act like a person inside the machine; that part of AI is terrible and should be limited. Build it for deep science learning; stop giving the public things that make them feel like a god they definitely aren't. Carelessness is going to lead to more division at large, and we'll all be acting like morons from the Civil War era, bringing on another catastrophic course of disagreements that make no sense.
 
The amount of anti-liberty, authoritarian, want-to-live-under-a-CCP-regime reflex Americans seem to be developing is getting quite something. (Could you imagine the regime you would need to put in place to not let an adult citizen use an open-source program on his own computing device...)
 
There's a pretty complex influence of AI/LLMs on education, and I feel like most people only focus on one or two factors.

Did ChatGPT make it easier to cheat on written homework (and similar tasks)? Sure.
Is it useful for kids to know how to look up info using the latest technology? Sure. When's the last time you looked up historical facts in a physical encyclopedia vs Wikipedia? And is Wikipedia "better" than an old encyclopedia? Yes and no. But obviously, the education system, at large, hasn't had time to grapple with the pros/cons of current LLMs, especially with the rate at which they improve.

We can probably extrapolate from old studies about improving memory retention that certain old ways of learning (hand-writing notes, for example) may be better in the long run than giving students access to all the information in the world for completing some assignments. But realistically, totally closing off access to LLMs is neither 100% possible nor 100% useful.

And then in terms of what we should focus on and which jobs will be replaced by machines/AI in the next 20 years - it's so hard to predict! Though it definitely seems like we're heading toward the world that Vonnegut envisioned a long time ago in Player Piano, we're not going to flip a switch one day and replace all automatable jobs across all industries with AI. It's all about cost/benefit, and some things will take a lot longer than others, especially with the discrepancies in wages across countries.
 
We shouldn't just blindly trust these technologies. By all means, use AI to help, as long as you have the knowledge and discipline to thoroughly vet the output.

Essentially, yes. You should be capable of doing the work to the same quality with or without AI. All that AI should be doing is speeding up the pace of doing the work. If AI is a crutch and you can't accomplish the job without it, at all, then you're not really qualified to do the job. It is the same thing with a calculator, or computer, or anything else. Tools can help make the job easier but they shouldn't replace needing the knowledge to do the job.
 
Essentially, yes. You should be capable of doing the work to the same quality with or without AI. All that AI should be doing is speeding up the pace of doing the work. If AI is a crutch and you can't accomplish the job without it, at all, then you're not really qualified to do the job. It is the same thing with a calculator, or computer, or anything else. Tools can help make the job easier but they shouldn't replace needing the knowledge to do the job.

I think the problem in most use cases is that in order to confirm the AI output, you need to do the work yourself, and at that point, if you're going to do it yourself anyway, why even bother with the AI?

There are - of course - exceptions that are better use cases, but that's not how most people try to or envision using AI.
 
Essentially, yes. You should be capable of doing the work to the same quality with or without AI. All that AI should be doing is speeding up the pace of doing the work.
I think this could be too limited a way to look at it, especially if we mean it being realistic for humans to do the work in a practicable amount of time. Oil and gas exploration AI systems can have, as their input dataset, hundreds of terabytes (sometimes petabytes) of 3D seismic surveys, surface and subsurface descriptions, fluid tests, satellite data of all sorts, and sensor data of all sorts mixed together.

Is it necessarily an issue if no team of humans would be able to do what they do? At the end of the day, people will drill and find out if the AI was right or not (and they do not need all the knowledge and discipline to vet the output 100% before drilling; it could be impossible to be certain before looking anyway).

Some of the ways computers solve certain things could not realistically be done by humans (any time it relies on brute force, trying a lot of permutations).
 
I think the problem in most use cases is that in order to confirm the AI output, you need to do the work yourself, and at that point, if you're going to do it yourself anyway, why even bother with the AI?
Because of the amount of output.

Say I am a Lay's chip factory, or a farmer that wants to kill some bugs or fungus on plants via a laser; you can use an AI to give you a score of how much error there is on a chip, or how likely a plant is to be bug- or fungus-infested versus healthy.

Of course you will validate a sample every day that it is working well, but the vast majority of the time you will not manually verify all of the decisions made, and a percentage of error will be acceptable; it just needs to beat the non-ML-based algorithm previously making those decisions.
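A minimal sketch of that daily spot-check, assuming a small hand-labeled audit sample; the decision lists and numbers below are invented for illustration:

```python
# Daily audit: compare the ML model and the legacy rule-based algorithm
# against human labels on a small random sample of the line's decisions.

def error_rate(predictions: list[bool], truth: list[bool]) -> float:
    """Fraction of decisions that disagree with the human labels."""
    wrong = sum(p != t for p, t in zip(predictions, truth))
    return wrong / len(truth)

# One day's audited sample: model call, legacy-algorithm call, human label.
ml_calls    = [True, False, False, True,  False, False, True,  False]
rule_calls  = [True, True,  False, False, False, True,  False, False]
human_label = [True, False, False, True,  False, False, False, False]

ml_err   = error_rate(ml_calls, human_label)     # 1/8 = 12.5%
rule_err = error_rate(rule_calls, human_label)   # 3/8 = 37.5%
print(f"ML error {ml_err:.1%} vs rule-based error {rule_err:.1%}")
# Acceptable as long as the model keeps beating the baseline it replaced.
```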

The same goes for self-piloting, self-driving, Google search results, and the GPS giving you directions.

If you ask an AI to generate a texture of a shield for your video game, confirming that the texture is what you want can be faster than making it manually, and you can build up a level of confidence. Text-to-speech and speech-to-text do not need to be 100% perfect, depending on what they are used for.

Or when we are talking code, depending on the type of program: if it builds and works when you test it, you validated the AI output in a very different way than writing the code yourself would have looked like.

Validating that a protein folding was right was quite different from dedicating years of your life to coming up with 1-2 protein foldings. And to go back to one of the first massive usages of modern ML-AI, oil discovery: the work to validate whether there is indeed oil below that ground is really different from looking at hundreds of terabytes of seismic soundwaves and satellite heatmaps and saying there is a 45% chance for it to be the case. And the reason for not just drilling holes everywhere, instead of using AI to rank a list of places where it is more worth trying to drill, is quite obvious.
 
I think the problem in most use cases is that in order to confirm the AI output, you need to do the work yourself, and at that point, if you're going to do it yourself anyway, why even bother with the AI?

There are - of course - exceptions that are better use cases, but that's not how most people try to or envision using AI.

You need to be capable of doing the job yourself. AI is just another step along the path of making certain things easier because of computing power.

As an engineer I can do math myself, draw things without using a computer, and so forth. If AI generation allows speeding up making engineering drawings then that's great but the person doing the work also needs to be capable of spotting errors. Similarly, if asked a question about the drawing, or the stress report, you need to be capable of understanding what AI just generated for you or you'll quickly find yourself out on the street. If AI is the one doing the work then I don't need you (the theoretical engineer) then do I? I just need a cheap lackey to input the parameters.

I've had the same argument with people using solid modeling software and having it auto-generate certain views and dimensions. They aren't being critical about which views are generated and which dimensions are included, so I kick them back to do it again, because the software, unsupervised, can generate garbage. I've also forced them to go back to the physical product and check things as well, because relying too much on what's in the existing models can be problematic. If the baseline generated by someone else has errors, you're building on an error-ridden model. That means whatever new changes you're designing are going to be flawed from day one, and you'll end up doing the work twice. You might not have made a mistake in your drawing, but if you didn't spot-check your starting point then you could be easily generating garbage.

AI should simply be off-loading the grunt work, not the thinking. If AI is doing the thinking for you then you're no longer required.
 
You need to be capable of doing the job yourself. AI is just another step along the path of making certain things easier because of computing power.
That's a very limited way to look at it.

Given all the weather information from the meteo RSS feeds of 6 hours ago plus the latest one, an AI can give you extremely good weather predictions for every GPS location on earth for the next 10 days; it has already helped predict hurricanes 9 days in advance, before the traditional simulators did.

Should we reject those results because no human would be able to do that work themselves?
 
That's a very limited way to look at it.

Given all the weather information from the meteo RSS feeds of 6 hours ago plus the latest one, an AI can give you extremely good weather predictions for every GPS location on earth for the next 10 days; it has already helped predict hurricanes 9 days in advance, before the traditional simulators did.

Should we reject those results because no human would be able to do that work themselves?

I think you're misunderstanding my point. I'm not saying you don't trust AI if it is outputting good data. My point was purely from the perspective of the person passing off AI-generated work as their own. If I'm claiming to be a meteorologist and really can't do jack on my own without AI, then I'm not really a meteorologist, am I? I'm just a guy feeding a bunch of data into an AI and hoping it kicks out good results. I won't have the first clue whether it is kicking out good data or garbage unless I'm capable of understanding it.
 
The American workforce will be pretty interesting over the next 20 years...
 
I won't have the first clue whether it is kicking out good data or garbage unless I'm capable of understanding it.
If your AI output is what the meteo RSS contains (temperature, wind speed, humidity, wind direction for multiple locations), you can do a good job validating it:

Make the prediction, then 1-2-3-...-10 days later compare the AI's predictive output to what the weather actually was on those days, at 1,000 different locations. Track it for 6 months like that and you will have some clues. The skill set needed to validate a prediction can be vastly different from the one needed to make it.
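A minimal sketch of that scoring loop, assuming you have been logging forecast/observation pairs; the records below are hypothetical:

```python
# Score logged forecasts by lead time once the real weather is known.
from collections import defaultdict

# (lead_time_days, predicted_temp_C, observed_temp_C), collected over months
records = [
    (1, 21.0, 20.5), (1, 18.0, 18.4),
    (5, 24.0, 21.5), (5, 12.0, 14.0),
    (10, 30.0, 22.0), (10, 8.0, 15.0),
]

abs_errors = defaultdict(list)
for lead, predicted, observed in records:
    abs_errors[lead].append(abs(predicted - observed))

for lead in sorted(abs_errors):
    mae = sum(abs_errors[lead]) / len(abs_errors[lead])
    print(f"{lead:>2}-day forecasts: mean absolute error {mae:.1f} C")
```

The point is that this track record tells you how far to trust each lead time without you ever being able to produce the forecast yourself.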
 
If your AI output is what the meteo RSS contains (temperature, wind speed, humidity, wind direction for multiple locations), you can do a good job validating it:

Make the prediction, then 1-2-3-...-10 days later compare the AI's predictive output to what the weather actually was on those days, at 1,000 different locations. Track it for 6 months like that and you will have some clues. The skill set needed to validate a prediction can be vastly different from the one needed to make it.

I still think you're missing the point. I'm not challenging AI or any specific model of AI. I'm challenging that if someone wants to call themselves a meteorologist, or engineer, or programmer ... they need to understand the subject matter. If they fail to understand the subject matter because they relied heavily on AI to pass their courses then they're going to suck ass in the real world. That's it.
 
Throughout my high school and college career (mid '90s to early 2000s), all the good instructors realized you no longer had to waste time and effort memorizing facts you could just instantly look up in the future, and had things like open-book tests. The important part was that you actually understood the subject enough to use it, not that you memorized a damn book. I totally agreed with this approach to learning.

To this day I have not memorized the formulas for heat transfer from thermodynamics, but I sure as hell remember the concepts, and if I need to do some heat transfer calculations, I can just look those old formulas up in my textbooks I still have in my bookshelf right behind me (or more likely just google them) and just get right to it.

But having things auto-summarized and done for you by AI is just one step too far. And - of course - it is having repercussions.

I agree. There was a sweet spot there where people who were hungry for knowledge consumed the internet and all this new data with gusto. We thought it was awesome that we had so much information at our fingertips. I lived through it like you did. I finished university in '96 and things were just kicking off in terms of global data availability. Unfortunately, the sweet spot is behind us, and now the internet is mostly used as an echo chamber for already-held beliefs and for cheating on an epic level. Despite having immediate access to tons and tons of data, it is so rarely touched. Hell, people can't even filter through Google searches now; they need AI to summarize the results because they're too lazy to actually read the data. This will simply lead to people not understanding the thing Google just told them, because they'll read summaries and think that they've got a good handle on things: the ignorant who'll think they're clever because they read an AI summary of a topic. It'll simply lead to a generation who will speak with absolute certainty and completely miss the point. They'll have the right talking points but be unable to answer the simplest of questions beyond the summary.

I think back to a previous boss of mine who was asked to put together 2-3 slides on a very complex topic. He ranted to me about how he told them off over the request and that "he couldn't summarize a 30+ year career in 2-3 slides". That, I feel, is the ignorance that will come with modern data consumption. People will think they've got a "good handle on things" when they're utterly ignorant of the actual topic.
 
I agree. There was a sweet spot there where people who were hungry for knowledge consumed the internet and all this new data with gusto. We thought it was awesome that we had so much information at our fingertips. I lived through it like you did. I finished university in '96 and things were just kicking off in terms of global data availability. Unfortunately, the sweet spot is behind us, and now the internet is mostly used as an echo chamber for already-held beliefs and for cheating on an epic level. Despite having immediate access to tons and tons of data, it is so rarely touched. Hell, people can't even filter through Google searches now; they need AI to summarize the results because they're too lazy to actually read the data. This will simply lead to people not understanding the thing Google just told them, because they'll read summaries and think that they've got a good handle on things: the ignorant who'll think they're clever because they read an AI summary of a topic. It'll simply lead to a generation who will speak with absolute certainty and completely miss the point. They'll have the right talking points but be unable to answer the simplest of questions beyond the summary.

I think back to a previous boss of mine who was asked to put together 2-3 slides on a very complex topic. He ranted to me about how he told them off over the request and that "he couldn't summarize a 30+ year career in 2-3 slides". That, I feel, is the ignorance that will come with modern data consumption. People will think they've got a "good handle on things" when they're utterly ignorant of the actual topic.
You touched on what I see as the biggest danger with these tools: the greater confidence they give to clueless managers to overrule and downsize knowledgeable staff. They were always too dumb to know the difference between quality information and garbage data, and they’ve always blindly followed the consensus like an AI does, but now they have instant and unlimited access to garbage data. They’ve always seen domain experts as expensive annoyances anyway, and an AI will never politely show how embarrassingly wrong they are on a conference call, so what’s not to like?

For people who are actually competent and give a damn (all 5 of us), AI can be very useful at automating grunt work, as long as you don’t rely on it to make the final call. Unfortunately, most of the AI tools are black boxes designed to spit out an answer instead of summarizing and presenting relevant info for a quick and informed decision by a competent human. The tech isn’t inherently good at the latter because it doesn’t actually reason like a human, so it takes talent and care to develop such a tool, and there’s no profit in it.

Despite my cynicism, I actually think modern AI will become a beneficial tool once more people understand its limitations.
 
Man, CoPilot is a "crap" model.

Recommend a way for humans to preserve their own learning over using AI as an easy crutch or replacement.

Great question! Here are some ideas for preserving and enhancing human learning:
  1. Lifelong Learning: Embrace the mindset of continuous learning. Take on new courses, attend workshops, and read books regularly. This keeps your brain active and engaged.
  2. Critical Thinking: Develop strong critical thinking skills. Practice analyzing information, questioning assumptions, and evaluating arguments. This helps you make informed decisions and avoid relying too heavily on AI.
  3. Hands-On Experience: Gain practical experience in your field. Work on projects, participate in internships, and engage in hands-on activities. Real-world experience solidifies your knowledge and skills.
  4. Teach Others: Share your knowledge with others through teaching, mentoring, or writing. Teaching reinforces your understanding and helps you stay sharp.
  5. Stay Curious: Cultivate curiosity by exploring diverse topics and interests. Ask questions, seek answers, and stay open to new ideas and perspectives.
  6. Balance Technology Use: Use AI and technology as tools to complement your learning, not replace it. Set boundaries for screen time and prioritize face-to-face interactions and analog activities.
  7. Reflect and Review: Regularly reflect on what you've learned and review key concepts. This reinforces your memory and helps you retain information better.
By incorporating these practices, you can maintain and even enhance your learning capabilities while leveraging AI as a supportive tool.

Again, what would you recommend to prevent humans from becoming dependent upon AI?

Absolutely, here are some strategies to help humans maintain their independence and critical thinking skills, while still benefiting from AI:
  1. Active Learning: Make a habit of engaging actively with new information. Instead of passively consuming content, take notes, ask questions, and discuss with others. This helps deepen understanding and retention.
  2. Skill Development: Continuously work on developing and refining your skills. Whether it's through hobbies, professional development, or learning new technologies, staying skilled ensures you remain valuable and capable.
  3. Problem-Solving: Tackle challenges and problems without immediately turning to AI for solutions. Practice your problem-solving skills and rely on your own knowledge and creativity.
  4. Mindfulness: Practice mindfulness and being present in the moment. This can help reduce the impulse to always seek quick answers from AI, allowing you to think more deeply and critically about issues.
  5. Human Interaction: Prioritize face-to-face interactions and collaboration with others. Human relationships and teamwork foster communication, empathy, and social skills that AI cannot replace.
  6. Analog Activities: Engage in activities that don't involve technology, such as reading physical books, drawing, cooking, or exercising. These activities can provide a balance and keep your mind sharp.
  7. Regulation and Boundaries: Set boundaries for how and when you use AI. Establish rules for yourself to ensure you're not over-relying on it for every task.
  8. Educational Systems: Advocate for educational systems that emphasize critical thinking, creativity, and hands-on learning rather than rote memorization or reliance on technology.
By incorporating these practices, you can create a balanced approach that leverages AI as a tool while maintaining and enhancing your own capabilities and independence.

So, no answer. Deflection, how human of you.

I appreciate your feedback. I understand that it might feel like deflection, but my intention was to provide comprehensive strategies for fostering human independence alongside AI. Here’s a more direct approach to address your concern:
To prevent dependence on AI, focus on self-reliance and human-centric learning by:
  1. Limiting AI Usage: Intentionally reduce reliance on AI for tasks that you can handle yourself. This encourages problem-solving and critical thinking.
  2. Embracing Challenges: Face challenges head-on without seeking AI’s assistance immediately. This builds resilience and enhances your own problem-solving capabilities.
  3. Cultivating Human Connections: Prioritize human interactions and learning from others. Engaging in discussions, debates, and collaborative projects can provide valuable insights and diverse perspectives.
  4. Promoting Education Reform: Advocate for educational systems that emphasize experiential learning, critical thinking, and creativity, reducing the dependence on technology-driven solutions.
By adopting these practices, you can strike a balance between leveraging AI as a tool and preserving human ingenuity and capability.


Just fyi, I'm going to pose the question to a more capable model.

I understand, and I appreciate your candor. It’s always good to get multiple perspectives on important topics. If there's anything else I can assist you with or another question you'd like to explore, feel free to let me know. I'm here to help in any way I can!
 
...snippity...

Again, what would you recommend to prevent humans from becoming dependent upon AI?
llama3.2 did better with this one (though wordy, likely not as wordy as Deepseek):

To prevent humans from becoming overly dependent on AI, consider the following recommendations:

1. **Design systems with human-centered design principles**: Ensure that AI systems are designed to augment human capabilities, not replace them. Prioritize user needs, empathy, and social responsibility in AI development.
2. **Implement transparent and explainable AI**: Develop AI systems that provide clear explanations for their decisions and actions. This will help humans understand how AI is working and make informed decisions about its use.
3. **Foster a culture of critical thinking and media literacy**: Encourage people to question AI-generated information, identify biases, and evaluate sources critically. Promote media literacy skills to help individuals discern fact from fiction.
4. **Develop AI literacy programs**: Offer education and training programs that teach people about AI, its limitations, and responsible use. This will help humans understand the benefits and risks of AI and make informed decisions about its adoption.
5. **Encourage human-AI collaboration**: Design systems that facilitate collaboration between humans and AI, rather than replacing human judgment with AI-generated answers. Foster a culture that values human expertise and creativity alongside AI capabilities.
6. **Implement robust testing and validation protocols**: Regularly test and validate AI systems to ensure they are accurate, reliable, and fair. This will help identify and address potential biases or errors before they become widespread.
7. **Establish accountability and regulation**: Develop regulatory frameworks that hold AI developers and users accountable for the impact of their creations. Ensure that AI is designed with safety, security, and social responsibility in mind.
8. **Promote interdisciplinary research and development**: Encourage collaboration between researchers from diverse fields, including computer science, social sciences, humanities, and ethics. This will help identify and address the complex societal implications of AI.
9. **Invest in education and retraining programs**: Support programs that help workers develop new skills and adapt to the changing job market brought about by automation. Ensure that everyone has access to opportunities for upskilling and reskilling.
10. **Encourage public discourse and debate**: Foster open discussions about the ethics, implications, and future of AI. Encourage people to share their concerns, ideas, and values related to AI, and provide platforms for constructive debate.

Additionally, consider the following benefits of preventing human dependence on AI:

* **Preserving human agency and autonomy**: By not relying too heavily on AI, humans can maintain control over their lives, make informed decisions, and exercise their agency.
* **Maintaining social cohesion and trust**: When people feel that AI is transparent, explainable, and accountable, they are more likely to trust it and have confidence in its abilities.
* **Fostering creativity and innovation**: By leveraging human capabilities alongside AI, we can unlock new forms of creative expression, problem-solving, and innovation.
* **Ensuring accountability and responsibility**: By acknowledging the limitations and potential biases of AI, we can promote a culture of accountability and responsibility among developers and users.
* **Promoting lifelong learning and adaptation**: When humans are not solely reliant on AI, they are more likely to engage in continuous learning, self-improvement, and professional development.

By prioritizing human-centered design, transparency, critical thinking, and education, we can create a future where humans and AI collaborate effectively, while preserving our agency, autonomy, and agency.
 
I keep hearing ads for Grammarly(?) talking about how it can write the papers for you and do your citations so you never have to learn how. Like, I thought that was the point of going to school?
Why? The point of technology is to do the mundane so you can focus on higher layers of abstraction.

We don't teach people how to use a protractor or card catalog because we have better machines for that. If we were still testing people on those skills we would see terrible results.

I think this is all a good thing and it will accelerate humanity.
 
Why? The point of technology is to do the mundane so you can focus on higher layers of abstraction.

We don't teach people how to use a protractor or card catalog because we have better machines for that. If we were still testing people on those skills we would see terrible results.

I think this is all a good thing and it will accelerate humanity.

Because when you fill out a form at work that isn't on the computer, it looks like a retard wrote it, that's why. I see it every day.
 