Open letter to Microsoft :D

I hate 10 just as much as when it first came out, but MS is pushing for games to be 10-only. In some cases the games are made with DX12 only, so that's a must anyway.
I understand people hating it, since MS makes it so easy to hate. But yeah, it would just be easier to switch over, learn how to make it more like 7, and get rid of the bloat, telemetry, adware, and so forth.

True, and I would also say it's all the more reason people should expand their horizons and current options; there are plenty of choices out there. There are benefits and drawbacks to almost all of them. If people want to use older software/technology, they have to deal with poor support and more potential security risks, but that may not matter to them for what they are doing. That is their own choice. Going to newer stuff may mean fewer options, could mean more information gathering, could mean ads, etc. It is a matter of tradeoffs and matching solutions to requirements.

Personally I think a lot of people just make choices based on FUD, rather than analyzing their requirements and the best solutions for them. But customers are customers; they will choose how they want to choose.
 
This. I mean, it's not hard to get a gray-market LTSC on these forums or eBay. If someone wants to debate the ethics of it, just remember how ethical it is to mine your data.
 
I think there are two things that are going to be interesting to watch in the future with the Windows 7 people who don't want to upgrade.

1 - Windows 7 support officially ends at the end of this year. Using an unsupported OS is pretty frowned upon here. If you opened a thread saying you were using Vista or XP, you'd get pages of backlash.

2 - Driver support. Skylake and Ryzen CPUs and their respective chipsets are officially unsupported by Windows 7. Skylake is now what... 3 generations old? Ryzen is going on its 3rd generation. I am building a 9900K system this weekend and there are no Windows 7 drivers for my chipset, making it impossible to install and use Windows 7 on it. What happens when all the people who refuse to move to Windows 10 upgrade or buy new hardware?


It doesn't really matter if you like Windows 10 or not; Microsoft has no plans for a new OS. There will not be a Windows 11. Your options are to either use an unsupported Windows 7 (or 8.1 if you really want to delay it), use Windows 10, use Linux, or use OS X.

I have an Intel i3-6100 Skylake (in my backup machine) running Windows 7 on a Gigabyte GA-H170M-D3H motherboard. I think you're referring to the CPUs that came out AFTER Skylake.

I did read that Microsoft was working on a new OS, though I can't remember on which site. I didn't bother saving the link, since the article said Microsoft itself doesn't know whether it will release it or not. I think they called it "Core OS" and it was supposedly going to be used across all platforms (phone, tablet, PC...).

To be clear, I'm asking Microsoft to add to their NEXT OS a feature where you could click on an icon and have a Win7 desktop. You'd have the option of switching desktop layouts.
 
I was speaking about the example. The situations are completely different; I even alluded to a few ways they differ. I am sorry you cannot grasp the difference.
I'm sorry, I was not the one having issues with the example; I understood the poster just fine. But nice try at "turning the tables". :rolleyes:
 
Obviously not, because the situations were completely different. :rolleyes:

As I pointed out.

Vaccine: Information hidden from the public, no one was aware, leaked afterwards.
Telemetry: Microsoft openly admits collecting data, provides information on how the data is being mined and used.

Vaccine: People now don't want to get vaccines, which could lead to their deaths.
Telemetry: People don't want to use Microsoft, which leads to... them using something else.

And those are just the main differences, so yeah, completely different.
 
Last edited:
chockomonkey, while #1 is a PITA indeed, it can be stopped easily in at least Win10 Pro and above. "No matter what you set in the settings"... is wrong, unless you only meant the Settings app.
#2 is too small a price to pay, as others have said. Annoying - sure, stupid - sure, but once I set things up I don't bother with those double control panels anymore, or at most about once a year.
That said, of course there is some degradation in UX, but overall I have no drama, and the new Start menu is better for me, even if I initially had mixed feelings. After all, what matters is whether you can do your work as before. I can. Updates disabled or postponed for at least a few months, and for at least the last 2 years it seems OK, with no drama.
Wait, are you saying there's another Windows Update UI behind the one in Settings? I could have sworn they removed the Win7 UI.
 
I have used both and prefer Windows 7. A Win7 build with new technology would destroy Win10's market share overnight. I actually like being in charge of my computer, instead of my computer telling me how it's gonna be.

A windows 7 with new technology would be Windows 10 :D:D:rolleyes::rolleyes:
 
^^^ Wasn't this the point of Windows 8? One OS to rule them all?

And for those who claim Win7 has telemetry: only if you installed the specific updates that included it.

The really sad thing is that it would only take a few simple steps to make Win10 far less irksome. 1. Lose the telemetry. 2. Give the user more control over updates. 3. Quit installing crap without the user's OK. 4. Improve the user settings menu. 5. Quit acting like the computer belongs to Microsoft.
 
I really don't get why everyone seems to have so much love for Win7. I hated Win7... wasn't a big fan of 8 either... or 10.

Really, why are you guys so in love with an OS that forces you to use a 20-year-old file system well known to F up all the time? One that has no proper inode system, forces the OS update system to reboot all the darn time, and takes 10x longer than it should to update things. Because, again, no inode system means overwriting and having to replace and delete older files, etc. There is a reason Windows needs double the free space on a drive to perform an update... if an update is a GB, it needs 2 GB of space to install. (I know we all have lots of space... and most of us are on SSDs and not worried about fragmentation.) Still, it's a terrible choice for an OS in 2019.

Do all you Windows guys not have Android phones or iPhones? Notice how updates download and install a lot faster. The phone OSes get it right.

Never mind all the annoyances of telemetry... and not really having full control over basic OS stuff.

If you're going to ask MS for anything for Windows, really... ask them to dump Windows. I'm being serious... trash it all and start over. My suggestion would be to take a Linux back end (or BSD, as Apple has done) and create a "Windows" desktop environment. They can keep things like DirectX closed source and Windows-DE-only if they like, so they could still try to lock gamers into Windows. They can even keep the Windows APIs closed source so software still only runs on their Windows DE. But for all the back-end stuff... they would both save the development money and see the quality go way up. They'd also instantly clean up their driver mess (all drivers would just go to the kernel team, and MS wouldn't have to worry about signing anything anymore; more savings).

End users would notice no difference. It would still look like Windows... it would just be 10x more secure, much less prone to viruses and malware (as long as they enforce a proper account privilege setup)... and GBs of updates would take seconds to install once they were downloaded. Would a Windows OS that shared some of the best bits of macOS really be a bad thing?
 
Because it was super easy to use and it worked great? No Metro. It had choices the user could make? I guess you do not like freedom of choice?
 
I do, which is why I choose not to use MS products. Having a slightly more open OS isn't what I'm after. Asking MS to remove things like telemetry at this point is just pointless. MS makes more money now selling web ads than they do selling Windows. MS ads sell, despite MS being nowhere close to a leader in search, because they can micro-target ads. No telemetry... no micro-targeting. It's that simple. Telemetry isn't going anywhere... which means they sure as heck can't give people direct control of much anymore... lest they lose those sweet, sweet Russian ad dollars.

If you want freedom... choose Linux proper.
 
If my Rift and games all ran under Linux, I would be running Linux faster than hell!
 
It's ridiculous to blame the FS for Windows' requirement to restart after some updates. Period.

Also, comparing Android/iPhone updates being faster than Windows updates is very, very ridiculous. No need to explain why.

The difference for end users, if they used Linux behind the scenes with a "Windows DE", could be insignificant today too - most users won't complain much about basic things like web surfing, whether they use Windows or some Linux distro. Once they start depending on deeper compatibility - even for "basic" browsing, when they need to install some component like an electronic signature, or some pro software that works only on Windows, or... hundreds of other such "small" things - things become not so indifferent. Otherwise they could use any of today's Linux distros, like Ubuntu or Mint. I use Mint for tests and it is okay for basic things, with no further learning needed.
 
If my Rift and games all ran under Linux, I would be running Linux faster than hell!

I hear ya, I do. More people need to pressure game hardware and software devs to support Linux directly, or at the very least use open standards.

https://www.protondb.com/

We are getting there... though I think you may have bought the wrong VR set. I think the DK1 and DK2 can work, but the CV1 is still a no-go.

Plenty of people are using HTC headsets under Linux with Steam Play... games like Superhot VR are gold status via Proton.
 
It's ridiculous to blame the FS for Windows' requirement to restart after some updates. Period.


But it's the file system's fault. Full stop. lol

Linux and BSD... and every *nix OS (which, yes, includes ChromeOS, Android, and iOS) update almost instantly... because they use a proper inode system and file systems that support it. *nix operating systems simply download the new files... and then change the inode link. No deleting old files... no complicated partial file overwrites (which is something MS 100% still does like it's 1990).

You know what happens on a Linux system that loses power halfway through an update? The answer is, 99.9% of the time, nothing. Restart and complete the update. No BS "don't turn off your computer" messages... or blue screens for 20 minutes while the OS appends GBs of files.

As is proper, *nix simply points the file table to the new files... and continues on with its work.

As for your point about web browsing... and a need for Windows? Odd choice of argument. lol There is nothing Chrome on Windows does that Chrome on Linux doesn't do. If anyone is still relying on anything MS-specific to do with the web... they are morons; even MS says so.
 
Apparently you never needed anything more complicated: some service providers still require a Windows OS and browser, some even IE-only. That's why I talked about basic usage, lol. One way or another, you often end up being dependent on some Windows software, format, API, or both. It's not about what you or I like, or about arguing in some forum over who is a moron. Web browsing was just one example. As I said, basic browsing can be done anywhere. We have here some public administration services that still work better on Windows machines; they use cryptographic devices and electronic signatures and other means to access some services. Just a small example, but there are others.
"*nix operating systems simply download the new files... and then change the inode link. No deleting old files"
- If you download a new file while the old one is still there, then at some point you have both file contents on disk. If the download fails halfway through for some reason, you don't lose the "old" file, right. While the download was still going OK, you had half of the new file on disk, plus the full old file.
Reallocating an inode or deleting an old file is an implementation detail. If Windows could break the "hard" link between the process in memory and the file on disk (at least as a starting point), then we'd expect restartless updates. For desktops, a restart after some updates is not so big a deal. For servers it is, but with clustered setups and backups it's also not a very big deal after all. And not all updates are critical or relevant for every installation. We have some servers not restarted (or updated) for more than a year; not a big deal, most updates are irrelevant in the concrete case. When there is a critical exploit and it affects some machines, then we update. So far we have had no drama over restarts once in a blue moon. Yes, the frequent need to restart when you update Windows is one of its flaws; no one could possibly argue about that.
Windows is a VERY complex OS, full of legacy from 30+ years on the desktop, with all the flaws that result. And of course political burden all over the place; this is inevitable.
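For what it's worth, the "download the new file next to the old one, then swap the names" dance both sides are describing is a standard crash-safe pattern on any FS with atomic rename. A minimal Python sketch (the helper name and paths are mine, purely illustrative, not anything Windows Update or apt actually ships):

import os
import tempfile

def safe_update(path, new_bytes):
    # Stage the new contents in the same directory (so the same volume),
    # which is why both versions briefly occupy disk space at once.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_bytes)
            f.flush()
            os.fsync(f.fileno())  # make the new contents durable before the swap
        os.replace(tmp, path)     # atomic rename: you see the old file or the new, never half
    except BaseException:
        os.unlink(tmp)            # a failed download leaves the old file intact
        raise

If power dies before the os.replace(), the old file is untouched; after it, the new one is fully in place.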
 
I think you're describing yourself. Plenty of people around [H] don't use Windows anymore... or use Windows for nothing other than games. I doubt anyone would call my usage, web or otherwise, basic.

Talking about web stuff... ya, last I checked the vast majority of the net is still running on Linux servers, including all of Microsoft's own cloud services. I have written front ends for call centers and medical databases and other tools I frankly hardly understood. (I know a guy who does lots of the back-end big-database stuff, and he has hired me a few times to redo front-end-type things he just didn't have the time to do.) 100% Linux solutions... are many of the users of those things running Windows? I'm sure... but it's hardly required. The companies that coded ActiveX crap and other MS web junk standards have mostly gotten rid of that crap by now... and even MS at this point says using that stuff is cancer. My friend who does a lot of back-end work isn't always tasked with replacing the front ends... but the ones I have done for him were mostly for companies that were canning stuff that leaned on the old MS crap. (To be completely frank... MS's terrible web-standards history and their attempted hijacking of the internet is always on my mind when I use anything MS. It may bias me a bit at times... but in general I think I would still see MS as pure evil anyway. lol)

As for inodes vs. MS's B-tree file system, I don't think you really understand the difference. Do some reading and illuminate yourself.
https://en.wikipedia.org/wiki/Inode

To sum up why it's a big deal that Linux and *nix in general use inodes and MS doesn't: inodes are not files, they are metadata that references files... they point to files, but the file can change while the inode number remains unchanged. If I have an inode for X library... even if I move that file to a completely different drive, the inode number stays the same; software using that inode doesn't know the difference if the file system moves the file's location (which is why file systems like ext4 and btrfs and others can self-defrag... the file system can rearrange data any way it likes). So on a *nix system, if a package manager needs to update X library from 1.1 to 1.2... it downloads 1.2 and simply takes the inode number for 1.1 and changes what it's pointing at (updating the metadata of course). MS... and all its FAT-derived file systems are not capable of that, as the metadata is directly tied to the file itself. This allows running software to update libraries on the fly. Extending that... the major enterprise distros, RHEL and SLES, have both added hot patching of kernels recently as well. Inodes make that possible... hot kernel patching is not something that is even technically possible on a Windows server. (Not advocating that as a great solution for big iron... still, with recent versions of RHEL and SLES, in theory a server could run forever and always be on the latest kernel.)

There is very little need to reboot Linux, but that is hardly the only advantage. The difference between inodes and FAT/NTFS B-tree metadata is why a small update in Windows can sometimes take multiple minutes and sometimes much longer (everyone here has had a 30+ minute Windows update at some point)... while I can update hundreds of packages on my Linux install in a couple of seconds.
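And the "couple of seconds" bit is mostly because installing a package ends in renames, which are metadata edits rather than data copies. A rough sketch you can run yourself (assumes the temp dir lives on a single volume; sizes and names are arbitrary):

import os
import shutil
import tempfile
import time

d = tempfile.mkdtemp()                          # scratch dir, assumed one volume
src = os.path.join(d, "big.bin")
with open(src, "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))       # 64 MB of junk data

t0 = time.perf_counter()
shutil.copy(src, os.path.join(d, "copy.bin"))   # moves every byte
t1 = time.perf_counter()
os.replace(src, os.path.join(d, "moved.bin"))   # rewrites directory metadata only
t2 = time.perf_counter()

print(f"copy  : {t1 - t0:.4f} s")
print(f"rename: {t2 - t1:.6f} s")               # typically orders of magnitude faster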
 
Still, an implementation detail. You have an image on the disk (a file) and a (running) process in memory. Processes in Windows are tied to their image on disk; some of their functionality (may) depend on reading from the image on disk dynamically while executing. How does Linux deal with that? When you replace the image on disk, the running process in memory should be unloaded and reloaded fully, and this has millions of implications for dependent processes and possibly open files/shares/network activities... how does this happen in Linux? (Just asking, I don't know how this is dealt with there.)

As to people and developers: I was talking about the client side. What developers use on the back end is not the point here. ActiveX was just one example; I hate it too, and if I have to use it, I use virtual machines. There are many other possible activities involving client-side USB devices and drivers that just plain do not work on anything other than Windows, and I find this embarrassing too. I haven't used IE or Edge either, since the XP era. FWIW I use open source or free software whenever possible, which happens to be mostly multi-platform (which is often embarrassing too, because it is mostly Java-based and terribly slow compared to a native binary), but the argument here is as pointless as a flame war, because there are many factors keeping some or most Windows users from switching successfully to Linux without dual-booting etc. I don't argue whether NTFS is better or worse than existing *nix FSes. NTFS is good enough for what it does now, and it's in its 6th iteration now, I think.

As to updates sometimes taking 30 minutes: you can't be serious that this relates to the FS. Yesterday I updated one Win2019 and two Win10s, 1803 and 1809. Updates checked, installed in 2-3 minutes, and a one-minute restart. I will admit WU was much faster with these than with the older Win2016 I also updated yesterday (though it hadn't been updated for a few months). Windows 7 would update for 10-24 hours a few years back, when they had problems with WU after Win10 came out. That was no indication that Win7's FS was flawed to the point that Win7 updated for nearly 24 hours.
 
If you have used Linux, you know that updates take seconds... unless you're building from source or something.

No flames, just sharing some info. I do consider NTFS a dumpster fire... and frankly I doubt MS has the talent to replace it anymore. The folks that designed the initial NTFS mostly lifted what was good about HPFS, developed by IBM for the MS/IBM OS/2 project... the few actual MS brains on NTFS, people like Gary Kimura, are long retired now. MS has done their best to keep it current, but its age shows; it's missing plenty of features that most other OS file systems take for granted. Not even getting into things like Apple's new file system with copy-on-write a la ZFS and btrfs (admittedly Linux's copy-on-write btrfs has its issues). Anyway, not to jump down that rabbit hole.

Linux uses inodes. Think of it this way... regardless of what file system Linux is using, it keeps a record of all its files. These records are inodes. Most file systems (like ext4 / btrfs and others) don't assign inodes at file creation... the OS links inodes to files. XFS is one file system that works a bit differently... it was designed way back by Silicon Graphics (its current development is maintained by Red Hat, I believe); anyway, it has the ability to spawn new inodes dynamically. Some Linux file systems can actually run out of inodes... it would never happen on a desktop, but it can on, say, an email server with hundreds of email users; in such cases XFS is one of the better options, as it's less likely to run out of inodes and it can more easily do something called inline storage (storing small files actually inside the inode, creating instant lookup for very small files). It's why, for instance, the enterprise-class SUSE distro defaults to XFS for data partitions: it has some speed advantages when using many, many smaller files, and is unlikely to run out of inodes even with massive storage solutions and millions of files.

I don't mean to ramble on; the bottom line is the OS keeps an inode number, which is a table that stores metadata and file soft and hard links. So say inode #3470061 contains metadata for a Linux library file; it would also have a soft link to the file itself. If I start, say, an email client that uses that library, it calls a library from /usr/share, that soft link calls inode #3470061, and it's directed to a file location. Now if my package manager starts and needs to update that library... it says, OK, it's linked to inode #3470061; the package manager downloads the new version of the file and changes the metadata and file soft link relating to that inode. If my mail client was running at the time, it would just happily go on using the older version the inode pointed it to when it called it. When I restart that email client and it requests the library, the system again points it to #3470061, only it now points to the new metadata and file. This is why most package managers have auto-clean functions... and some people suggest going in and manually triggering package cache cleanups now and then.

So going back to Windows: this is partly why Windows doesn't have as neat an upgrade process. NTFS and the Windows system have an identification number for files, but that number isn't static like an inode. If you delete a file and create a new one, even with the same name, it is going to have a different ident number. There is no way to update a library that is in use. (There is a reason why MS still suggests you shut everything down when running updates... and why some updates require a reboot and make you stare at the update progress before Windows boots to its desktop.) MS is simply not capable of updating on the fly the way the *nix-derived operating systems are. It also means updating a lot more info for any one file they change... as MS has things being called from multiple settings files, etc. For reasons of sanity MS will use different file names... meaning all those configs need to be changed. (Solve one problem, create another. Perhaps MS could one day convert most libraries to some form of soft-link lookup, but it would be a complete overhaul and still not quite as clean.) Linux of course has lots of config files as well... but at the end of the day they all point to soft links, say in /usr/lib, so by simply updating the inode, they are basically just changing where a soft link is pointing. So when software calls something like /usr/lib/libGL.so... /usr/lib/libGL.so is nothing more than a soft link to the inode. [Not related, but this is one source of a lot of confusion for newer advanced Linux users... when they see one distro keeps this file in X location and another keeps it in Y, often those locations are nothing but soft links that pull the same file, which is also stored in the correct place. So it may seem Red Hat or Ubuntu or whoever is not following the FHS for file locations, but it turns out their "odd" location is nothing but a soft link to the proper one... that confusion goes further when someone, say, updates a .conf file in one location and wonders why the one in the other location saved the updated info as well. It's just the same file with more than one soft link.]
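The /usr/lib/libGL.so soft-link flip is easy to poke at yourself; here's a small Python sketch of that update pattern (Linux/macOS; the file names are made up stand-ins):

import os
import tempfile

d = tempfile.mkdtemp()
for ver in ("1.0", "1.1"):
    with open(os.path.join(d, f"libGL.so.{ver}"), "w") as f:
        f.write(f"libGL {ver}")        # stand-ins for two library versions

link = os.path.join(d, "libGL.so")
os.symlink("libGL.so.1.0", link)       # the stable name points at the old version
print(os.readlink(link))               # libGL.so.1.0

tmp = os.path.join(d, ".libGL.so.tmp")
os.symlink("libGL.so.1.1", tmp)        # build the new link under a temp name
os.replace(tmp, link)                  # atomically retarget the stable name
print(os.readlink(link))               # libGL.so.1.1

Software that already loaded the 1.0 file keeps using it; anything that resolves libGL.so from now on gets 1.1.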
 
I really don't get why everyone seems to have so much love for Win7. I hated Win7... wasn't a big fan of 8 either... or 10.


I don't love Windows 7, or any Microsoft product for that matter. Personally, for me the honeymoon was over the moment Microsoft rolled out Product Activation back on Windows XP. The only version of Windows I could ever say I loved was Windows 2000, but even that I used for less than a year due to its poor game compatibility at the time. So there's no reason for you to think we all love Win7. It's just a gateway to video games to me, and that's all.
 
Windows 8.1 would be all good, except for the little behind-the-scenes move of Microsoft forcing manufacturers to cease driver development for Windows 8.1. No conspiracy here at all.
 
Ah yes, about the 2 GB needed for a 1 GB update... what's the problem with that? I mean, why do they require double the space? Maybe they roll out their updates in the form of encrypted archives which they need to deflate locally?! I doubt it has to do with the FS's implementation.
As to updating Linux... come on, seconds to update... that might fly with someone who has never touched anything but Windows.
For what it's used for, NTFS is perfectly fine. Windows is used on ATMs, on CAD machines, in the enterprise... if it were so deadly inferior, everyone would just use Linux everywhere; it's free, after all.
 
Yes, the same people that chose IBM for years because no one ever got fired for choosing IBM. I'm sure they have selected the best product for the job at all times. :)

As for the enterprise... ya, MS has been losing that market for more than a few years now. They have lost servers almost completely at this point. The enterprise desktop market is not a hands-down MS domain anymore. The only real space they have left is the consumer desktop market. I guess paying off OEMs for years has its advantages.

And yes, MS's update system issues have a lot to do with their inferior 25-year-old file system.

I don't feel like digging, but if you look around you can find plenty of current (anonymous posts) and ex-MS employees detailing just how terrible the NTFS code is. The issue for MS has been constant now for 10+ years... they lose out on the best talent to companies like Google, Amazon, and the other newer, "cooler to a kid out of school" tech giants. So their new hires are mostly less-than... and their old brains from back in the 80s/90s have all been retiring. Like I posted earlier, there were 5 MS employees that developed NTFS 1.0... they are all gone. A couple of them were well-regarded computer science professors... they are all rich off their MS shares and long gone.

As one anonymous MS developer posted a few years back... the new kids at MS never touch the old code. One, they get no glory for slightly improving code written by folks retired for 20 years. They don't get a pat on the back for improving code that isn't from their dept. And when they are working on what they have been tasked with, the best way to get a promotion (and to deal with the ancient spaghetti code whose writers have been gone so long their managers don't even remember them) is to write new features. It's why PowerShell is a thing instead of cmd.exe having been improved. It's why they created ReFS by basically taking NTFS, stripping it, and trying to rebuild it. It's been a complete failure.

I won't go find the post from the MS dev that basically said all that... and ya, that dev proved who he was. Anyone that has used anything MS in the last 10 or so years knows it's true. Just think of anything from MS you use... do they improve anything anymore, or just shovel as many new features into the code as possible?

As for you continuing to doubt the Linux update process... ya, seconds to update. If you don't believe me, go install any of the major distros. I'm not talking about download times. Sure, it might take you a while to download a GB of updates, or a few seconds, depending on your connection. I'm talking about the installing time once the download is done. Yes, seconds are all it takes, unless your package manager is building from source or something.
 
Talking about PowerShell... I kinda hate it :) . But that's me.
Yes, NTFS is old. Yes, there is a great deal of legacy, and yes, MS is a big elephant-like company with 30 years of development on Windows and such. So it's maybe a miracle Windows is still so potent and works better with every version ;) . As long as all the software I/we use works well, and better on Windows than on Linux with all the Wine BS where most of our software works partially or erratically, it does not matter how technologically behind NTFS is :) . If it works, then it's good.
I still didn't get any answer about how NTFS is at fault when you have to download something new first, check it, deflate it, etc. before you go ahead and replace the working copy. All I got was company gossip about how the old NTFS guys retired and how new devs don't like touching old code (I know that very fact from first-person experience, being a dev myself).
Giving cmd as an example is not that good. cmd is sufficiently simple and small that even rewriting it from scratch is nothing for a company like MS. It's more about consistency and compatibility. Once you've developed a desktop OS for more than 30 years, used by billions, you start to know how all these things correlate.
Again, I don't argue whether NTFS is perfect or better than any other FS. It's used on the most popular OS, where the most abuse is done by users and all kinds of software, and as such I dare say it's doing its job more than adequately :) .
Once all my stuff works out of the box in Linux as well as it does in Windows, with all the consequences and compatibilities, I'll use it without a blink. But I expect more than a sheer choice of 1234 DEs and some central repository where I install/update apps with two clicks. Truth be told, as a power user that is a negligible advantage for me; it's of almost zero significance.
 
I did explain why NTFS is at fault - by explaining how every other OS does things properly.

*nix uses inodes; Windows and NTFS don't use or support them. That is the solution to speedy updates. It's pretty much that simple.

Inodes mean the OS can attach ONE number to a library... and no matter how many times you download a brand-new version of that file, the OS and file system still see it as the same index number (inode). There is zero need to change every OS and software config file that points to it. (Linux is also capable of symbolically linking files across partitions and file systems if required.)

If a Linux program uses, say, OpenGL and calls libGL.so... the OS points it to whichever version is the newest. Microsoft spends minutes-plus "configuring updates", which in most cases means scrolling through its registry looking for all the configurations it needs to update. Even if MS names a new version of a library the exact same thing... and even if the planets align and the file size is identical, the old file and the new file will be identified differently by the file system... making for more metadata to update. Linux (and *nix) on the other hand simply copies the new file and changes the inode metadata, which takes a fraction of a second. (Anyone that has used Linux has watched their updater scroll through a couple hundred packages in a few seconds, as all it's really doing is updating the links to the packages... most *nix file systems such as ext4, XFS, ZFS, etc. will from there reorganize their internal data stores as required; they almost all use allocate-on-flush as well as other methods to write the new data in an organized, unfragmented way. I know fragmentation isn't as big a concern these days with the growth of flash storage; still, fragmentation is an issue with NTFS and its MBR method of file allocation.)

Windows simply can't have instant updates unless MS creates their own version of an inode system or comes up with some novel new solution no one else has thought of yet. Instant updates the way macOS / iOS / Android / Linux / BSD / Unix / Solaris and every other *nix OS I'm forgetting do them are simply a no-go on an MS NT-based OS running NTFS or ReFS.

And I won't even start on NTFS's complete lack of modern features. It's old and it's put in its time. It's just obvious that MS has no current plans to update their file system. Their ReFS experiment was half-arsed and painful to watch. Apple has done exactly what MS should be able to do... they released a brand-new file system with all the modern stuff one would expect, such as copy-on-write, snapshots, single-inode writes, checksum parity lookups, etc., etc.... and they rolled it out to ALL their products with no issues. All the kinds of storage tech that Windows users have to get by running FreeNAS, or by buying a consumer NAS unit running some flavor of Unix or Linux with ZFS or btrfs. Whenever I hear a poor Windows user recount their painful backup stories I want to cry... man, snapshot and move on.
 
Last edited:
That still doesn't explain the issues with updates, I'm sorry.
On NTFS you delete a file instantly, by marking the space unused, etc. On NTFS you move files instantly within the same volume. A process that refers to c:\123.xyz would refer to the same file after it's updated. You update (normally) by deleting the old file and moving the new one into its location, which is instantaneous. That's actually why you need the "second" space: for the new file being downloaded first from the internet (and possibly deflated/decompressed).
There are other issues with the core of how Windows works and is written that sometimes make the updating process slow. And if it's slow, it isn't connected with the FS, because you can clearly see that during the periods of waiting, the HDD/SSD light is not lit for more than 1% of the whole period. There are other "bottlenecks" here.
Last month I did many test updates on Win Servers and Win10s, most of them not updated for half a year, a year, or more. The Win10s updated in 3 minutes straight, from pushing Check For Updates to the login screen.
 
I guess you have the golden special version then.
I'm not blind; just look at these forums and see all the issues with Windows updates. MS themselves say some of the Win10 feature updates can take up to 4 hours to install.

As for inodes vs. NT / MBT... I'm going to give up. If you want to understand it better... google away and read up on what an inode is; there is plenty of good info out there.
 
Maybe it's the golden special version not being polluted with crap software, or maybe my gigabit internet connection :) . The point was this, and I think most people got it right.
I know the forums and the complaints. As I said before, a few years ago I had a Win7 machine that at some point updated overnight and in the morning was still updating or checking for updates. They had problems with the Win7 updating process/servers back then, not a problem with the FS.
I have not argued about inodes and NTFS deficiencies or anything. I argued against your claim that NTFS being old (and missing features or inodes) was the cause of very slow Windows updates. You missed the point in my posts many times. I'm not an MS attorney, but I challenge the opponent in a discussion when there is a clear fallacy going on.
 
Fallacy??? Haha, what fallacy? Saying that MS is incapable of updating running libraries, and that they have to shut down anything touching those libraries before they update them (which is due to the lack of an inode addressing system)? Or that their kernel/file system combo is not capable of simply updating a system file link? That is true... no fallacy there. NTFS is not capable of replacing a file by changing a soft link in an inlined metadata file... *nix does that with inlined inodes all the time, and it's nearly instant. NTFS is forced to edit MBR tables, which is far slower, and it doesn't work the same way internally, so their update process needs to update a bunch of other system-linked stuff; it's chugging through that mess of a registry config table for darn near every single update, large or small.

That is why Linux, macOS, Unix, and everything spun from them can instantly replace a library or any other system file on a hot system. Windows is incapable of it due to a lack of such features. There really isn't anything to argue about. That is simply the way it is.
 
You are shifting the point, again. My point was about you claiming NTFS was the inherent (or main) cause of very slow WU because it didn't have inodes. NTFS has a similar concept to inodes, and while I'm not too savvy in filesystems, it looks like almost the same paradigm, just with a different implementation of course. Still, this doesn't explain how the FS would slow down an update process so much, when there are other deficiencies in the OS that are to blame, but not the FS.
Yes, there wasn't anything to argue about when it comes to NTFS being the culprit in slow updates. You kept explaining the tech behind Linux's FSes and inodes, alright. And I'm sincere in this - could you explain in detail the NTFS implementation of file IDs (unique within the volume only, like inodes)? Not that it would matter in the current discussion about slow WU.
Edit: inode is more of a *nix term, used in their FSes. You cannot expect MS implementations (similar or not) to use the same terms.
 
Not sure why, but I'll try one more time.

Windows:
File 1
aaaa.dll
10,000 kbytes
version 1.0

File 2
aaaa.dll
10,200 kbytes
version 1.1

Windows has just downloaded a new DLL... I don't care if it had to uncompress it or just copy it, etc.; it doesn't matter. File 1 will have a location in the MBT... let's say it's 00001.
Windows will move the new file to the correct location, and the MBT will assign it its own number, say 00002... but it has the same name. So it needs to remove the old file. NTFS CANNOT HAVE the same file in the same location. It doesn't matter that it has a different MBT location or anything else; it has the same name, which can't be. So it needs to remove the old file first. So nothing can be using it. It has to be closed down to be replaced. If not, bad things will happen... so Windows will often do this after a reboot. The other way around is to assign it a different name... which would mean editing config files. (Solve one problem, create another.)

*nix:
File 1
LibXXX.so
10,000 kbytes
version 1.0
INODE number 00012

File 2
LibXXX.so
10,200 kbytes
version 1.1
Inode number #00012 (Linux can have this file anywhere and assign it the same inode number, replacing the old file... but not deleting it, just changing what the file table is pointing at)

Linux copies File 2 to a location. Linux updates the inode pointing at said file... with the new location.
Done.
Any software that was using the old file will continue running with the old file. No big deal; the old file is still there and was never touched by the update. If you start a new piece of software that calls that same 00012 inode, it will get the new version. If you close your program and restart it... it will use the new version.
Thus you can update all you like... and continue using your old software. When you restart that software, it picks up the new links.

I hope that makes it more clear... I detail how Linux does it because *nix solved this problem around 1978, when the fathers of Unix, Ritchie and Thompson, detailed the first inode system in a Bell Labs tech journal. Why MS didn't go with an inode system or some variation beats me... it's the KISS solution to the problem of updating running systems.
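You can watch that walkthrough happen for real with a few lines of Python (Linux/macOS; the file names are made up). One nit you can check while you're at it: with the usual rename-style replacement, the *name* ends up attached to a brand-new inode, and it's the old inode that quietly lives on for whoever still has it open:

import os
import tempfile

d = tempfile.mkdtemp()
lib = os.path.join(d, "LibXXX.so")

with open(lib, "w") as f:              # "File 1", version 1.0
    f.write("version 1.0")
print("inode before:", os.stat(lib).st_ino)

fd = os.open(lib, os.O_RDONLY)         # a "running program" holding the library open

staged = os.path.join(d, "LibXXX.so.new")
with open(staged, "w") as f:           # "File 2", version 1.1, staged alongside
    f.write("version 1.1")
os.replace(staged, lib)                # atomic: the name now resolves to 1.1

print("inode after :", os.stat(lib).st_ino)          # a different inode number
print("new opens see  :", open(lib).read())          # version 1.1
print("old handle sees:", os.read(fd, 64).decode())  # still version 1.0
os.close(fd)                           # old inode's space is freed once nothing uses it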
 
If you don't mind, I will continue to develop this sub-thread.
I'm mostly not arguing about how both FSes technically do almost the same things to achieve a given goal. inode is mostly a *nix term; you cannot expect MS to use the same term for its implementations. An inode is the Linux way of storing metadata about FS objects. NTFS uses its own metadata structures, similar to inodes, obviously. Btw, you keep writing about MBT... what is that in NTFS?
At the new file's download time, you claim both files (the new and the old on the FS) have the same inode number (but located in different locations)? I.e., two different files in the FS with the same inode identifier?! The file at this time is just a new file on the FS. You say NTFS cannot have the same file in the same location. Of course; filename-wise, this is what a FS exists to take care of. These are different files with different file IDs (unique FS-wise). Remember, at the time the new file is downloaded, to the FS these are totally different file objects in different locations (directories).
An excerpt about inodes.
-----------
An inode is denoted by the phrase "file serial number", defined as a per-file system unique identifier for a file.[8] That file serial number, together with the device ID of the device containing the file, uniquely identify the file within the whole system
-----------
So, just asking: are inodes unique numbers for different files, or not?
Even if they're not unique, per your saying above, the old file would have to disappear if you're gonna use the new file. I don't believe in *nix you could have two files with identical names in the same directory either.
After all that, it seems to me you have to conduct pretty much the same operations on both FS types to update a file. The most important part is that any process using the file must be made aware of the change, and this is where the problems arise. If it's just a DLL (dynamically loaded), you can make the processes using it aware, then unload it, replace the file, notify the processes... etc. If the file is a program, it is tied to the process in memory, and that's where the problem is - not in the capability of the FS to quickly replace a file. Deleting a file FIRST and then moving the new file into its location is nearly instantaneous, I repeat. On *nix you have to delete the old file too, after the update. How this is done and in what succession is beside the point here. You have processes that depend on a given version of the file, period. How you make other processes handle the new file, and most often the new APIs and ways of interacting with the new file, is what matters.
Remark: on Windows you can postpone the reboot indefinitely after an update, and until that time the system will just use the old version. It's not NTFS's fault if something updates slowly; that was the point. Again.

Edit:
By the way, I found this excerpt, please comment:
----
File IDs do not change unless the file is deleted. So they are unchanged on move within the same volume and defrag.
----
At least this was a good chance to read something new about NTFS and FSes in general :) .
 
Inodes are static.
Everything in *nix is a file, including hardware, etc. *nix allows for proper hard linking.

So say I have on my disk a file with inode 00001. That can be a file, a directory, or a mouse. I can have that inode 00001 hard-linked in 5 different places if I like. It is the same file. So /usr/share/xxx.file could be the same as /home/share/xxx.file, or even /home/share/someothername.file; if they are all the same inode, they are all the same bit of information. I mentioned it much earlier, but this part throws most Windows users, who are used to a filename always referencing unique data.

So Linux updates by copying the new data... and simply taking the old inode # to reference the new data. So if I have 20 hard links in my system, they are all updated instantly to the new file.

It's hard for you to grasp because clearly you haven't ever really used anything other than Windows. You're hardly alone there.

With Windows, yes, file IDs stay on a file... but the OS can't take a new file and assign it the old file's ID. It needs to create a brand-new ID for a new file. This is partly why appending files is still used: it's easier to write more data to the file and keep the file ID. This is one thing that leads to fragmentation. Linux rarely bothers appending files; there is little need. Copy the new data, give the old inode the new location.

Defragging isn't something you need to worry about under Linux, for two reasons. One: the inode system. The hundreds to thousands of small system files are rarely appended, so they're not often being sliced up and saved all over a drive. Two: NTFS, like FAT before it, writes data in a sequential way - just like the little picture on the defrag screen shows you, data at the start, empty space at the end. ext4 and most other Unix file systems write new files at equidistant spaces between existing files. So it starts at the beginning, then writes toward the end, then in between, then in between those, etc.

If you think of a file system writing a CD... FAT would have started at the innermost bit and just kept writing outward. This is fine if nothing is ever going to move. But if you move, delete, or append, you run out of space and the file system has to write to the end.
NTFS made this a little better by leaving larger gaps between files, allowing more room for each file to grow... but moving and deleting still lead to gaps and holes (fragmentation).
ext4 and most *nix FSes write in a scattered pattern: one file at the start, one at the end, one in the middle, then to the spaces in the middle of that data... etc., etc.

Anyway, we are way off topic, I'm sure... and my point still holds: inodes are a superior method of identifying files and hardware on a system. They allow updates to hot-patch anything (even the kernel itself now, with newer enterprise Linux methods). Windows doesn't treat everything like a file... and NTFS doesn't allow the reuse of ident locations. Which forces developers (including the MS update team) to get creative and do crazy things like backing metadata up to hidden areas... so it can try to recopy old metadata over new file overwrites, or just append a ton of files, which is both slow and causes fragmentation.
 
Even if they're not unique, per your saying above, the old file would have to disappear if you're gonna use the new file. I don't believe in *nix you could have two files with identical names in the same directory either.

Just to be clear: a directory is just a file. There is no file-within-a-file silliness. A directory gets an inode just like a file does.

Yes, you can have multiple files with the exact same name in the same "folder".

Yes, it leads to lots of confusion for people not used to Unix. It also seems to confuse Linux/Unix users at times as well, lol... love this one: the first poster details the "issue" and the fix, and even shows an example of the same file name in a location... the second poster then posts that you can't have that. lol. It's not really complicated, but you have to be willing to do some reading and try to understand what is happening. (Just talking in general there... a lot of Linux users have no real inkling of how inodes work.) [EDIT: OK, they both posted semi-wrong... there is no need for a space at the end of the file name; it can be the same file name... the inode is different, so it's a different file. Their fix is correct anyway: find the inode number... and although it's true that GNU tools in general stop you from creating the same file name, it is still technically possible to have 2 identical names.]

https://unix.stackexchange.com/ques...ith-same-name-need-to-delete-one-but-not-both

You can't, however, have one inode pointing to 2 places. You can have the same inode 1000 times in the same directory with 1000 different file names. They will all have the same data. Edit one and they all change, because it's the SAME bit of information at the end of the inode.
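Quick way to see that in action in Python (Linux/macOS; the file names are made up):

import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "xxx.file")
b = os.path.join(d, "someothername.file")

with open(a, "w") as f:
    f.write("same data")
os.link(a, b)                                  # a second directory entry, same inode

print(os.stat(a).st_ino == os.stat(b).st_ino)  # True: one inode, two names
print(os.stat(a).st_nlink)                     # 2: the inode counts its names

with open(a, "a") as f:                        # edit through one name...
    f.write(", edited")
print(open(b).read())                          # ...and the other shows it: same file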
 