
Critical Sudo Vulnerabilities Let Local Users Gain Root Access on Linux, Impacting Major Distros

MrGuvernment

Another massive open source exploit left in place for years... Remember the OpenSSL bug that was around for 10 years before it was discovered...

This is why, when people preach "open source is more secure because people are reading over the code all the time," it falls flat on its face...

Most people who use open source couldn't tell you what most of the code does anyways..

Note - I have run Linux as my primary home OS for years now (in case anyone wants to think I am a Windows fanboy).

https://thehackernews.com/2025/07/critical-sudo-vulnerabilities-let-local.html

Cybersecurity researchers have disclosed two security flaws in the Sudo command-line utility for Linux and Unix-like operating systems that could enable local attackers to escalate their privileges to root on susceptible machines.

A brief description of the vulnerabilities is below -

  • CVE-2025-32462 (CVSS score: 2.8) - Sudo before 1.9.17p1, when used with a sudoers file that specifies a host that is neither the current host nor ALL, allows listed users to execute commands on unintended machines
  • CVE-2025-32463 (CVSS score: 9.3) - Sudo before 1.9.17p1 allows local users to obtain root access because "/etc/nsswitch.conf" from a user-controlled directory is used with the --chroot option
Sudo is a command-line tool that allows low-privileged users to run commands as another user, such as the superuser. By executing instructions with sudo, the idea is to enforce the principle of least privilege, permitting users to carry out administrative actions without the need for elevated permissions.

The command is configured through a file called "/etc/sudoers," which determines "who can run what commands as what users on what machines and can also control special things such as whether you need a password for particular commands."
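For illustration, a sudoers rule has roughly this shape (made-up usernames, hosts, and commands; check your own /etc/sudoers with visudo rather than trusting this sketch):

alice   buildhost = (root) /usr/bin/systemctl restart nginx
%admins ALL       = (ALL)  ALL

The second column is the host list, and that host field is exactly what CVE-2025-32462 is about: rules meant for some other host being honored on the local machine.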

Stratascale researcher Rich Mirch, who is credited with discovering and reporting the flaws, said CVE-2025-32462 has managed to slip through the cracks for over 12 years. It is rooted in Sudo's "-h" (host) option, which makes it possible to list a user's sudo privileges for a different host. The feature was enabled in September 2013.

However, the identified bug made it possible for any command allowed for the remote host to be run on the local machine as well, when running the Sudo command with the host option referencing an unrelated remote host.

"This primarily affects sites that use a common sudoers file that is distributed to multiple machines," Sudo project maintainer Todd C. Miller said in an advisory. "Sites that use LDAP-based sudoers (including SSSD) are similarly impacted."

CVE-2025-32463, on the other hand, leverages Sudo's "-R" (chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file. It's also a critical-severity flaw.

"The default Sudo configuration is vulnerable," Mirch said. "Although the vulnerability involves the Sudo chroot feature, it does not require any Sudo rules to be defined for the user. As a result, any local unprivileged user could potentially escalate privileges to root if a vulnerable version is installed."

In other words, the flaw permits an attacker to trick sudo into loading an arbitrary shared library by creating an "/etc/nsswitch.conf" configuration file under the user-specified root directory and potentially run malicious commands with elevated privileges.

Miller said the chroot option will be removed completely from a future release of Sudo and that supporting a user-specified root directory is "error-prone."
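A quick heuristic, if you're curious whether your local sudo build even ships the chroot feature (my own sketch, not from the advisory; presence of the flag alone does not prove the bug, and the version check further down is the better signal):

import subprocess

# Ask sudo for its help text; --help does not prompt for a password.
help_text = subprocess.run(["sudo", "--help"], capture_output=True, text=True).stdout

if "--chroot" in help_text:
    print("This sudo build still advertises the chroot option (-R/--chroot).")
else:
    print("No chroot option advertised by this sudo build.")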

Following responsible disclosure on April 1, 2025, the vulnerabilities have been addressed in Sudo version 1.9.17p1 released late last month. Advisories have also been issued by various Linux distributions, since Sudo comes installed on many of them -

Users are advised to apply the necessary fixes and ensure that the Linux desktop distributions are updated with the latest packages.
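If you want a rough self-check against the fixed release, something like this works (a sketch only: distros frequently backport fixes without bumping the upstream version string, so treat "older than 1.9.17p1" as "go read your distro's advisory," not as proof you're vulnerable):

import re
import subprocess

FIXED = (1, 9, 17, 1)  # 1.9.17p1

def parse(version):
    # "1.9.17p1" -> (1, 9, 17, 1); "1.9.16" -> (1, 9, 16, 0)
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:p(\d+))?", version)
    if not m:
        raise ValueError("unrecognized sudo version: " + version)
    major, minor, patch, plevel = m.groups()
    return (int(major), int(minor), int(patch), int(plevel or 0))

# The first line of "sudo --version" looks like: "Sudo version 1.9.17p1"
out = subprocess.run(["sudo", "--version"], capture_output=True, text=True).stdout
ver = out.splitlines()[0].split()[-1]

if parse(ver) >= FIXED:
    print("sudo " + ver + ": at or past the upstream fix")
else:
    print("sudo " + ver + ": predates 1.9.17p1 upstream -- check your distro's advisory")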
 
I noticed 3 or 4 days ago Cachy/arch updated sudo to 1.9.17.p1-1.1.
I noticed as sudo updates are pretty rare. Guess that explains that.
Hopefully the slow distros push the new version in a timely manner.
 
This is why, when people preach "open source is more secure because people are reading over the code all the time," it falls flat on its face...
To be fair, more secure doesn't mean immune. And it isn't a blanket statement, as if making something open source is automatically more secure. It depends on several factors, like active oversight, etc. The transparency brought by open source is usually a plus; caveats do apply.
 
To be fair, more secure doesn't mean immune. And it isn't a blanket statement, as if making something open source is automatically more secure. It depends on several factors, like active oversight, etc. The transparency brought by open source is usually a plus; caveats do apply.
The idea here is that once a vulnerability is found, it can quickly be patched. Windows has had known vulnerabilities that took a while to be patched. If you think 10 years is bad, then wait till you hear about the 20-year-old unpatched Windows vulnerability. MrGuvernment just got hard thinking he finally has proof that open source isn't better when it very much is.

"A Google security researcher has just disclosed details of a 20-year-old unpatched high-severity vulnerability affecting all versions of Microsoft Windows, back from Windows XP to the latest Windows 10."
 
The idea here is that once a vulnerability is found, it can quickly be patched. Windows has had known vulnerabilities that took a while to be patched. If you think 10 years is bad, then wait till you hear about the 20-year-old unpatched Windows vulnerability. MrGuvernment just got hard thinking he finally has proof that open source isn't better when it very much is.

"A Google security researcher has just disclosed details of a 20-year-old unpatched high-severity vulnerability affecting all versions of Microsoft Windows, back from Windows XP to the latest Windows 10."
It's true: when this one is described as "slipping through the cracks," they are saying no one really realized it existed. It wasn't detected 12 years ago and it's just being fixed now. No one was exploiting it, no one was worried about it. When discovered, it was quickly fixed. That isn't a major issue. All vulnerabilities exist for some amount of time until someone takes notice. Sometimes that is a few days, sometimes it's 12 years.

The difference with open source is that when it's discovered, it gets fixed as quickly as possible. No one is doing a cost vs. risk analysis and choosing to just say F it. Which is exactly what MS does now and then.
 
It's true: when this one is described as "slipping through the cracks," they are saying no one really realized it existed. It wasn't detected 12 years ago and it's just being fixed now. No one was exploiting it, no one was worried about it. When discovered, it was quickly fixed. That isn't a major issue. All vulnerabilities exist for some amount of time until someone takes notice. Sometimes that is a few days, sometimes it's 12 years.

The difference with open source is that when it's discovered, it gets fixed as quickly as possible. No one is doing a cost vs. risk analysis and choosing to just say F it. Which is exactly what MS does now and then.
I'm going to give a couple of examples of this very situation. One is Pegasus, the Israeli-developed spyware that was infiltrating iPhones without anyone knowing. It was eventually figured out by Kaspersky, one of whose employees said that if iOS weren't closed source, the vulnerability would have been discovered sooner: "Because of this, threats often go undetected by the device users." People knew something was up for years, but didn't know how it was done. Another example is from the Dolphin developer delroth, who had some very interesting thoughts on his experience with drivers. Nvidia's were the best, and at the time they were closed source drivers; he never had any problems with Nvidia's drivers. Mesa was good, and better than AMD's and Intel's drivers at the time. The main thing was how quickly the developers worked to correct bugs in the drivers once reported: "More than that, all of the bugs we found in Mesa were promptly fixed after we reported them through the correct channel." Whereas AMD at the time was basically ignoring them.

Everything has bugs, but the main difference is how quickly those bugs get found and fixed. Didn't take long for a Microsoft employee to figure out something was off with XZ Utils and the offending code was quickly found and fixed.
That is like pointing at DukenukemX as conclusive evidence for the switch II being a failure.
(attached image: "try not to be malaka")
 
Neither of these is a memory problem, so rewriting in Rust wouldn't have helped.

(just in case anybody was seriously thinking about this)
 
The idea here is that once a vulnerability is found, it can quickly be patched. Windows has had known vulnerabilities that took a while to be patched. If you think 10 years is bad, then wait till you hear about the 20-year-old unpatched Windows vulnerability. MrGuvernment just got hard thinking he finally has proof that open source isn't better when it very much is.

"A Google security researcher has just disclosed details of a 20-year-old unpatched high-severity vulnerability affecting all versions of Microsoft Windows, back from Windows XP to the latest Windows 10."
lol

hardly got a hardon, again I run Linux as my main OS, and out of the box it is far above Windows for security, and when did I state open source isn't better than closed source, I didn't...so don't put words in my mouth.

My point was more specific to people who preach that open source is more secure because it is open source, when we know that is not always true. There has always been, for as long as I can remember, across the internet this perception that people sit around all night and day reading over open source code to make sure it is secure and exploits are caught and fixed instantly!
 
lol

hardly got a hardon, again I run Linux as my main OS, and out of the box it is far above Windows for security, and when did I state open source isn't better than closed source, I didn't...so don't put words in my mouth.

My point was more specific to people who preach that open source is more secure because it is open source, when we know that is not always true. There has always been, for as long as I can remember, across the internet this perception that people sit around all night and day reading over open source code to make sure it is secure and exploits are caught and fixed instantly!
I seem to mostly hear that from people saying that's not the case, but then again I stay out of reddit and campy forums...for the most part.
 
I seem to mostly hear that from people saying that's not the case, but then again I stay out of reddit and campy forums...for the most part.
It tends to be similar groups who still spout that Apple is secure and flawless, you can't be hacked, and you don't need AV/security products on Apple devices.

Just old beliefs that still get spread around.
 
I seem to mostly hear that from people saying that's not the case, but then again I stay out of reddit and campy forums...for the most part.
You hear this from mostly people on Reddit that you stay out of... for the most part? I think that makes you a Reddit user.


lol

hardly got a hardon, again I run Linux as my main OS, and out of the box it is far above Windows for security, and when did I state open source isn't better than closed source, I didn't...so don't put words in my mouth.

My point was more specific to people who preach that open source is more secure because it is open source, when we know that is not always true. There has always been, for as long as I can remember, across the internet this perception that people sit around all night and day reading over open source code to make sure it is secure and exploits are caught and fixed instantly!
One of the many reasons it's more secure is because any bad actors can at least be found out. There is a reason why Linux is used in servers and why Android is considered more secure than iOS. Especially when you consider the Pegasus situation. Nobody knows what's going on with closed source software. Is there a backdoor for the government? We know the UK government requires it. Are Apple and Google collecting data without your knowledge? It's all blind faith.

View: https://youtu.be/HIkBIfst8oA?si=3alPzumqvDK1RSao
 
You hear this from mostly people on Reddit that you stay out of... for the most part? I think that makes you a Reddit user.


One of the many reasons it's more secure is because any bad actors can at least be found out. There is a reason why Linux is used in servers and why Android is considered more secure than iOS. Especially when you consider the Pegasus situation. Nobody knows what's going on with closed source software. Is there a backdoor for the government? We know the UK government requires it. Are Apple and Google collecting data without your knowledge? It's all blind faith.

View: https://youtu.be/HIkBIfst8oA?si=3alPzumqvDK1RSao

No, I (mostly) don't go to reddit, so mostly I hear what MrGov said.
 
You hear this from mostly people on Reddit that you stay out of... for the most part? I think that makes you a Reddit user.

One of the many reasons it's more secure is because any bad actors can at least be found out. There is a reason why Linux is used in servers and why Android is considered more secure than iOS. Especially when you consider the Pegasus situation. Nobody knows what's going on with closed source software. Is there a backdoor for the government? We know the UK government requires it. Are Apple and Google collecting data without your knowledge? It's all blind faith.

View: https://youtu.be/HIkBIfst8oA?si=3alPzumqvDK1RSao

Yes, I use Reddit, there are useful sub-reddits that have great info, while 99% of reddit is a toxic waste dump of people who can not think past the bandwagon they are stuck on :D

Bad actors can at least be found out, yes, but again, a bug for 10 years...OpenSSL was 10 years...my point still stands, while Linux is more secure out of the box, it does not have people reading over every line of code 24/7 to make sure an update didn't add in some new bug.

https://www.linkedin.com/posts/lawr...p&rcm=ACoAABZ9brcBG8QVh6hvd7PDPCaJlOvGyxe-FGM

This kind of sums it up also...

 
One of the many reasons it's more secure is because any bad actors can at least be found out. There is a reason why Linux is used in servers and why Android is considered more secure than iOS. Especially when you consider the Pegasus situation. Nobody knows what's going on with closed source software.
Android is not considered more secure than iOS, quite the opposite (rightly or wrongly), and that demand was not about the mobile OS but about the servers.

Android is a very broad term. How many people consider their smart TV, TV stick, or Android-based smart fridge more secure than an up-to-date iPhone for doing their banking? And it is not like there are no confirmed cases of Pegasus working on Android devices.
 
It tends to be similar groups who still spout that Apple is secure and flawless, you can't be hacked, and you don't need AV/security products on Apple devices.

Just old beliefs that still get spread around.

It isn't a question of needing or not needing.

The question is whether the additional attack surface from a large software package like a so-called security package is overall the larger risk. A/V companies do very stupid things. And even if they didn't they would still present a large attack surface.

(attached image: Tavis Ormandy on Symantec unpacking malware in the kernel, 2016)
 
Yes, I use Reddit, there are useful sub-reddits that have great info, while 99% of reddit is a toxic waste dump of people who can not think past the bandwagon they are stuck on :D
Don't worry, I use Reddit too. I'm not one of those people who think Reddit users are all worthless human beings. Just some... most of them.
Bad actors can at least be found out, yes, but again, a bug for 10 years...OpenSSL was 10 years...my point still stands, while Linux is more secure out of the box, it does not have people reading over every line of code 24/7 to make sure an update didn't add in some new bug.
Don't ignore the 20 year old Windows bug. Keep in mind that police have tools to get into phones because we don't entirely have control over this. Android is open source, but unless you can replace your OS with another then the attack vector still remains. If we treat things like a black box then we shouldn't be surprised if someone else has control over our hardware. Open source just makes things less like a black box.
What's funny is that this upsets people because we know who worked on it. Do we know who worked on the command prompt for Windows? Do we know who made Notepad for Windows? Do you know who made MSDOS? Tim Paterson took six weeks to make MSDOS, and he got paid $25,000. Todd Miller maintaining sudo seems less of a problem when you look at software historically. You'd want a team of people working on something as important as MSDOS and sudo, but money.
Android is not considered more secure than iOS, quite the opposite (rightly or wrongly), and that demand was not about the mobile OS but about the servers.
Yes it is and it's an example of closed vs open. Not stock Android but nobody really uses stock Android. Especially when you talk about GrapheneOS.

View: https://youtu.be/-hlRB2izres?si=NuvDFxQmcNU2sUyQ
Android is a very broad term. How many people consider their smart TV, TV stick, or Android-based smart fridge more secure than an up-to-date iPhone for doing their banking? And it is not like there are no confirmed cases of Pegasus working on Android devices.
Pegasus is less likely to affect Android compared to iOS. "Android’s reputation for securing its fragmented ecosystem is not good—the widely held view is that iPhone’s are much safer. But you can buy an Android and lock it down fairly easily." Google actually took the time to stop Pegasus and informed users, whereas Apple had no idea how it was happening for years. You can't figure out which parts of iOS are vulnerable to attacks since the OS hides that. You also can't look at source code to see where the vulnerability is.
 
Yes it is and it's an example of closed vs open. Not stock Android but nobody really uses stock Android. Especially when you talk about GrapheneOS.
No, it is not. A lot of people refuse to plug their smart TV into their home network because they fear it. Android is a vast family of products that ranges from very little trusted to more trusted. But most people trust iOS more than they trust Android security-wise (which, being about perception, is not that interesting a metric). I, for one, trust my iPhone more than my Android smart TV: I do banking/stock trading on one and would hesitate on the other. There is a lot that can happen in between some main branch of Android and an actual Android device, versus iOS on an iPhone.

Pegasus is less likely to affect Android compared to iOS
Who would know such a thing....

. You also can't look at source code to see where the vulnerability is.
Which tends to be an overrated factor (versus issues like the little in-person vetting of contributors that some projects have). People do look at iOS assembly a lot to find patterns, but people who put a lot of effort into looking at other people's code for vulnerabilities, for free, do not tend to be the type to tell others if they find one.
 
No, it is not. A lot of people refuse to plug their smart TV into their home network because they fear it.
That's a different situation, and one where these types of devices are more cancer than malware.
Android is a vast family of products that ranges from very little trusted to more trusted. But most people trust iOS more than they trust Android security-wise
They shouldn't.
(which, being about perception, is not that interesting a metric). I, for one, trust my iPhone more than my Android smart TV,
I would too, because it's a smart TV. Wouldn't matter what OS it's running, because its job is to sell you junk while collecting data. As far as I know, I can't jailbreak them and replace it with LineageOS or something else.
Who would know such a thing....
Kaspersky would. They found Pegasus after all. The consensus is that iOS has better security out of the box than most Android devices, but quick tweaks will make Android superior.
"Due to the closed nature of iOS, there are no (and cannot be any) standard operating-system tools for detecting and removing this spyware on infected smartphones."

"We believe that the main reason for this incident is the proprietary nature of iOS. This operating system is a “black box”, in which spyware like Triangulation can hide for years. Detecting and analyzing such threats is made all the more difficult by Apple’s monopoly of research tools – making it a perfect haven for spyware."

  1. Android OS Phones


    • Pro: Highly configurable; you can fully control your privacy settings.
    • Con: Lack of standardization means weak "out of the box" security.
    • Tip: Best if you are comfortable with adjusting security settings and tools.

  2. Apple (iOS) Phones


    • Pro: Consistency and reliability; you know what you are getting.
    • Con: Not invulnerable to malware; heavily dependent on Apple security practice. Also, while Apple products are generally priced higher than the Android, they don't guarantee 100% security and are still vulnerable to malware and hacking.
    • Tip: Probably the simplest choice for "pretty good" security.
Which tends to be an overrated factor (versus issues like the little in-person vetting of contributors that some projects have). People do look at iOS assembly a lot to find patterns, but people who put a lot of effort into looking at other people's code for vulnerabilities, for free, do not tend to be the type to tell others if they find one.
Usually when a problem is discovered it's not the code they look at first, but the symptoms of a problem, and then they look at the code. Recently, Linus Torvalds did find problems in the code first and he had... choice words for the person. There are examples where closed source software has also had code injected and the company wasn't aware of it. The infamous SolarWinds Orion Platform is one such example, as it's closed source and the attackers were able to inject code by gaining access to the networks. CCleaner also had a similar problem back in 2017. Even Apple had this same problem with XcodeGhost in 2015. Because this was Xcode, it ended up in many apps in the App Store.

If free code is dangerous then why does everyone use Linux? Why does Microsoft use Linux for Azure and not their own Windows OS? Even Apple uses Linux for their servers. macOS uses Darwin, which is open source. Besides the obvious answer that it's free, it also has lots of eyes on it. DeepSeek is open source because if it were closed source, nobody would use it for fear of hidden Chinese malware. How would we know if DeepSeek AI is sending data back to China, besides looking at data packets? Open source creates trust. That's why Google releases source code for their web browser and Android OS: because they want companies to trust them... and use their stuff. Most web browsers are Chrome-based for this reason, and this does work to Google's advantage. Google was able to get every Chrome-based browser to switch over to Manifest V3. While Apple forces browsers on iOS to use WebKit, and you're expected to like it.


View: https://youtu.be/Zu42F-XMNC4?si=isLoEoG4iPyZ_xvt
 
Yes it is and it's an example of closed vs open. Not stock Android but nobody really uses stock Android. Especially when you talk about GrapheneOS.

View: https://youtu.be/-hlRB2izres?si=NuvDFxQmcNU2sUyQ


On a side note, what madman uses gesture controls on Android!? Android's gesture controls are utter crap. Hell, I hate iPhone controls too, but Android's glued-on equivalent is worthless and was added only because they wanted to follow a trend. The 3-button system, which Android was designed for from the ground up, works exactly as you expect without guesswork, and some of Linus's issues would not have existed.
 
Don't worry, I use Reddit too. I'm not one of those people who think Reddit users are all worthless human beings. Just some... most of them.

Don't ignore the 20 year old Windows bug. Keep in mind that police have tools to get into phones because we don't entirely have control over this. Android is open source, but unless you can replace your OS with another then the attack vector still remains. If we treat things like a black box then we shouldn't be surprised if someone else has control over our hardware. Open source just makes things less like a black box.

What's funny is that this upsets people because we know who worked on it. Do we know who worked on the command prompt for Windows? Do we know who made Notepad for Windows? Do you know who made MSDOS? Tim Paterson took six weeks to make MSDOS, and he got paid $25,000. Todd Miller maintaining sudo seems less of a problem when you look at software historically. You'd want a team of people working on something as important as MSDOS and sudo, but money.

Yes it is and it's an example of closed vs open. Not stock Android but nobody really uses stock Android. Especially when you talk about GrapheneOS.

View: https://youtu.be/-hlRB2izres?si=NuvDFxQmcNU2sUyQ

Pegasus is less likely to affect Android compared to iOS. "Android’s reputation for securing its fragmented ecosystem is not good—the widely held view is that iPhone’s are much safer. But you can buy an Android and lock it down fairly easily." Google actually took the time to stop Pegasus and informed users, whereas Apple had no idea how it was happening for years. You can't figure out which parts of iOS are vulnerable to attacks since the OS hides that. You also can't look at source code to see where the vulnerability is.

I am not ignoring a 20-year-old Windows bug, but this discussion is not about Linux vs Windows, and never was for me. It is about the false trust people have thinking Linux is so super secure because it has eyes on the code 24/7, as do other open source projects, which is not the case.

If that were the case, Linux would never have bugs or exploits that go on for years and years...

Microsoft is a whole other world, we know they do not care about security, they release an OS full of holes, then try to sell you their security products to protect those holes (business world).

Apple tried the same when it came out that their own knowledge articles (which they quickly removed all traces of once it went public) told people to install up to 2 AV products to be protected, but then spun it as "that is to protect you from spreading infections to Windows users" crap. Meanwhile Apple products, Safari and such, were for years the first ones to be exploited at Pwn2Own and other hacker events...
 
It isn't a question of needing or not needing.

The question is whether the additional attack surface from a large software package like a so-called security package is overall the larger risk. A/V companies do very stupid things. And even if they didn't they would still present a large attack surface.
Certainly, any additional software on a system adds potential for exploit, but for the average user, that protection, added attack surface and all, is a better bet than using no protection at all...
 
Usually when a problem is discovered it's not the code they look at first, but the symptoms of a problem, and then they look at the code.
Exactly what I mean: way more often than not, when an issue is found in an open source project, it was not found by looking at the code; it was found by using or testing a program that uses it and noticing something strange, exactly like it would have happened for a closed library.
Recently, Linus Torvalds did find problems in the code first and he had... choice words for the person.
That is more akin to a code review, like what happens in a closed source project, than to the "people looking at the code" that people mean when they talk about open source. I.e., the same would have happened here in a closed source project, and a lot of the Linux kernel's strength comes from the quality of those maintainers, who could be the same people inside a company.

If free code is dangerous then why does everyone use Linux?
I am not sure why/how there is a need for that kind of hyperbole. There is a difference between free/open code and Android (can I look at all the code running on every Android device? Of course not; my Amazon Fire Stick is Android, and is it 100% free code?)

Wouldn't matter what OS it's running, because its job is to sell you junk while collecting data.
Which would tend to be the job of a lot of android devices connected to the internet.

They would be the first to tell us that they do not know what they do not know (and that it is by definition impossible to know how much Pegasus achieves, and that it will change every year). Do we really think Iran's leadership, scientists, Hezbollah, etc. were not incredibly well tracked/spied on over the last few years? And that their issue was being too much Apple-to-Apple ecosystem users?
 
On a side note, what madman uses gesture controls on Android!? Android's gesture controls are utter crap. Hell, I hate iPhone controls too, but Android's glued-on equivalent is worthless and was added only because they wanted to follow a trend. The 3-button system, which Android was designed for from the ground up, works exactly as you expect without guesswork, and some of Linus's issues would not have existed.
I've been using gestures on Android since like Android 10. So much better than the buttons.
 
lol

hardly got a hardon, again I run Linux as my main OS, and out of the box it is far above Windows for security, and when did I state open source isn't better than closed source, I didn't...so don't put words in my mouth.

My point was more specific to people who preach that open source is more secure because it is open source, when we know that is not always true. There has always been, for as long as I can remember, across the internet this perception that people sit around all night and day reading over open source code to make sure it is secure and exploits are caught and fixed instantly!
Both open source and closed source have bugs.
One way to evaluate which is better would be to compare the ratio of users to ransomware payments (count or dollars) on one platform versus the other over a period of time.
i.e. if Ubuntu has 10,000,000 users worldwide and $1,000,000 of ransomware payments versus Windows 10 with 1,000,000,000 users worldwide and $100,000,000,000 of ransomware payments over the same time frame, guess who wins?

Question may still be, does the number of users = the number of hackers? ...or is there a big shift at a certain point?

...but I would still like to see the outcome of my hypothetical evaluation above.
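Just to spell out the arithmetic on those made-up figures (they're hypothetical numbers from the post above, not real data):

# Ransomware dollars per user for the hypothetical figures above.
ubuntu_per_user = 1_000_000 / 10_000_000                # $0.10 per user
windows_per_user = 100_000_000_000 / 1_000_000_000      # $100.00 per user
print("Ubuntu: $%.2f/user vs Windows 10: $%.2f/user" % (ubuntu_per_user, windows_per_user))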
 
Which tends to be an overrated factor (versus issues like the little in-person vetting of contributors that some projects have). People do look at iOS assembly a lot to find patterns, but people who put a lot of effort into looking at other people's code for vulnerabilities, for free, do not tend to be the type to tell others if they find one.
Not just that, but security issues that are obvious in source code rarely happen because, well, programmers and reviewers look at that source, notice them, and fix them before they ever go out. Usually security issues these days are non-obvious, which is why they are able to survive. It is an interaction that nobody foresaw, which means looking at the code isn't helpful; you have to beat on the final product and see what happens.

It always amazes me the number of amateurs who think that if they looked at the source for something bugs would just jump out at them like a glaring highlight.

One way to evaluate which is better would be to compare the ratio of users to ransomware payments (count or dollars) on one platform versus the other over a period of time.
Nah, ransomware isn't very useful when it comes to looking at security, as it rarely attacks technical security holes; it relies on the user running a program they shouldn't. Also there's plenty of it that never tries to get admin/root, because why bother? On 99.9% of the systems out there, the user running it is the only user (or at least primary user) of the system. Encrypt their files and you've done all you need to do. Their data is what they care about, not the OS. So even a highly secure OS does nothing to help; for that you need application whitelisting.
 
Both open source and closed source have bugs.
One way to evaluate which is better would be to compare the ratio of users to ransomware payments (count or dollars) on one platform versus the other over a period of time.
i.e. if Ubuntu has 10,000,000 users worldwide and $1,000,000 of ransomware payments versus Windows 10 with 1,000,000,000 users worldwide and $100,000,000,000 of ransomware payments over the same time frame, guess who wins?

Question may still be, does the number of users = the number of hackers? ...or is there a big shift at a certain point?

...but I would still like to see the outcome of my hypothetical evaluation above.

I've dealt with a ransomware attack.

The initial attack vector wasn't the result of a vulnerability. The attack was not particularly sophisticated. A LOT of these attacks happen for really dumb and pathetic reasons.

The initial vector was getting on a server that had RDP open to the internet. Leadership refused to let IT kill off this shit ass 200 year old terminal server or even attempt to secure it further (like RDP Gateway with MFA) in any way because it would "complicate" things for the idiots who refused to stop using it.

Eventually, a bad actor got on it. After some time, they managed to finagle their way into the local administrator account with a shitty password. This old ass VM didn't have some of the things that were enforced via GPO everywhere else, like LAPS, denying logins to certain groups, etc. So, they still couldn't get anywhere on the network, but they had enough privileges to harvest the creds of whoever logged into it.

Eventually some dipshit logged into it as a domain admin for some reason beyond earthly logic. This would have been impossible and would have been denied on almost any other server, but alas... instant privilege escalation to domain admin. And lucky for the attackers, the person whose credentials they just stole also set their VMware administrator password to the same thing because they didn't like using their password manager.

The attackers tried to encrypt both data on the VMs by logging into them, and then the actual data on the VMware hosts themselves. This was completely hit or miss and they did not get everything before the attack was stopped. The actual encryptor (and decryptor) was basically just a janky python script.

Recovery wasn't difficult - the problem is the threat of them releasing data.
 
I've dealt with a ransomware attack.

The initial attack vector wasn't the result of a vulnerability. The attack was not particularly sophisticated. A LOT of these attacks happen for really dumb and pathetic reasons.
The largest one we got hit with wasn't a vulnerability either. They e-mailed an executable to the Dean who opened it and ran it. It proceeded to then encrypt everything it had access to, didn't try for admin, which meant it encrypted the administrative data share, since he had access to that. Of course that was on a file server with snapshots so I just reverted it and we went on with life.

Most of them are pretty simple like this, yours was comparatively sophisticated, though still not that sophisticated.

Eventually some dipshit logged into it as a domain admin for some reason beyond earthly logic. This would have been impossible and would have been denied on almost any other server, but alas... instant privilege escalation to domain admin. And lucky for the attackers, the person whose credentials they just stole also set their VMware administrator password to the same thing because they didn't like using their password manager.
This is something a lot of people don't seem to get about APT type attacks: the real threat is the actor getting a password to a high level account and this is usually NOT done through vulnerabilities, nor do Linux/MacOS have any inherent mitigations against this. This is the kind of thing that requires things like just-in-time-administration and just-enough-administration, separation of administrative accounts for different purposes, random local admin account passwords, "clean keyboard" admin systems, etc, etc. It requires, more or less, an information security bureaucracy.

You can have the most technically secure, locked down, Linux systems in the world and if the root password on all of them is the same and an attacker can get you to enter it, you are screwed. It is real, real common to try to get in to a user system, do something to generate a helpdesk call, and capture that account. If there's a proper information security policy and the account is limited, then they get nothing. But if that password is the admin password to that and a ton of other systems, well they then can spread around the org.
 
Exactly what I mean: way more often than not, when an issue is found in an open source project, it was not found by looking at the code; it was found by using or testing a program that uses it and noticing something strange, exactly like it would have happened for a closed library.
In Pegasus's case, people knew data was getting stolen but didn't know how or why for years. The same goes for all the other examples I gave. With open source you can immediately look at the code and figure out what was going on. In the XZ Utils situation, it was a Microsoft employee who knew something was up, and because XZ Utils is open source, he was able to track down when and what version started the problem. From there, we knew who the bad actor was. You cannot do this with closed source.
I am not sure why/how there is a need for that kind of hyperbole. There is a difference between free/open code and Android (can I look at all the code running on every Android device? Of course not; my Amazon Fire Stick is Android, and is it 100% free code?)
Because you said, "but people who put a lot of effort into looking at other people's code for vulnerabilities, for free, do not tend to be the type to tell others if they find one." If your code were on display for would-be bad actors to take advantage of, then why use open source at all? You make it sound like it's a very bad idea.
Which would tend to be the job of a lot of android devices connected to the internet.
This is more of a problem with Android itself than with open source. Android as an OS doesn't always end up in devices that freely allow its users to replace it. A lot of those devices do end up with very outdated Android versions. This is why I plan to build my own car stereo with Crankshaft, which turns RPis into Android car stereos. I've installed an Android head unit into nearly all my cars and they end up with severely outdated Android versions because the manufacturer never updates them. I've got a couple of RPi 3s lying around doing nothing.
They would be the first to tell us that they do not know what they do not know (and that it is by definition impossible to know how much Pegasus achieves, and that it will change every year).
Isn't that because iOS is locked down?
Do we really think Iran's leadership, scientists, Hezbollah, etc. were not incredibly well tracked/spied on over the last few years?
Is this an excuse or something?
And that their issue was being too much Apple-to-Apple ecosystem users?
I'm not worried about them. Let them worry about them. I worry about me.
 
Because you said, "but people who put a lot of effort into looking at other people's code for vulnerabilities, for free, do not tend to be the type to tell others if they find one." If your code were on display for would-be bad actors to take advantage of, then why use open source at all? You make it sound like it's a very bad idea.
No, that is just an obvious "downside," and a comment on the type of people who actually look at code for no particular reason. As for the example above: no, we do not know who the bad actor was for XZ Utils; it was a fake/anonymous account doing it. With closed source we would maybe not have known (if there was a trial, maybe), but the closed source company would have; they would have found it exactly the same way the Microsoft employee did after receiving the ticket.

This is more of a problem with Android itself than with open source.
Of course; we were talking about the statement that Android is considered more secure than iOS. A lot of Android devices run a lot of non-open-source (or non-up-to-date, as you say) code; it was really not just an open vs. closed source talking point. The reason iOS is considered more secure by most is the tight hardware/update control and the high cost of all the devices that run it.

Isn't that because iOS is locked down?
That is because no one can know what they do not know. No one would ever say "I know that up-to-date Android on a Samsung phone is not something even the best Mossad team can listen to." How would you even start? And certainly not just by reading code: there are millions of lines of it, and even if it were possible, what kind of hubris would someone need to have to ever state something like that?

Is this an excuse or something?
Excuse? What do you mean? I am just stating that the people in the Android world were compromised a lot by Pegasus-like technology in recent years.

I worry about me.
I do not think you have to worry about Pegasus.
 