OS imaging and redeployment over network?

Deluded

I don't even know where to put this. In ages past, I used to be level 2 in a corp, and once I got out I promptly forgot everything.

Now I'm in automotive electronics and I just secured a lease on an office.

So in anticipation of the move to the new office, I am wondering: is it possible to set up a server to do OS imaging and boot an OS from the network every time?

Here's my use case: the software I use to program vehicle electronics covers JTAG, BDM, circuit board modification, EEPROM read/write, flash read/write, testing control modules on the bench...

The software itself is pretty undemanding in terms of system requirements, but the main issue lies in its inability to coexist with any other software. Basically, the drivers for a given hardware interface will interfere with the drivers of another package, forcing me to either reinstall the whole thing from scratch, keep separate HDDs each with an OS containing only that specific software and driver set, or keep separate laptops dedicated just to that software/hardware interface combo.

I have over 15 devices that I use on a (mostly) daily basis. You can see where I'm going with this. I have about 6 or 7 laptops in rotation, and it is horribly inefficient, not to mention very annoying, and it contributes to my current office clutter.

So, my question is: is it possible to have a server host images of all the different OSes containing the exact software that I use, and then simply deploy them over the network via PXE?

The images would be roughly 30-40 GB each, and I could use either a gigabit or 10 GbE network to make the transfer seamless.

My goal is to reduce the clutter of all those separate HDDs and laptops and simply use one laptop/desktop hooked up to the server, deploying and redeploying all sorts of images over the network so I can do my work, then shutting down and using PXE to boot into another OS for another piece of software.

Would really appreciate a reality check here, or am I smoking a whole bunch of crack?
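
For reference, the PXE side of this is often handled with dnsmasq running in proxy-DHCP/TFTP mode alongside the existing router. A minimal sketch, assuming a Linux box on the server and a pxelinux setup under /srv/tftp; the subnet, paths, and filenames are placeholders, not a tested config:

```python
# Sketch: write a minimal dnsmasq config for proxy-DHCP PXE + TFTP.
# Assumes dnsmasq is installed; addresses and paths below are placeholders.
PXE_CONF = """\
# DNS disabled; this instance only does PXE/TFTP
port=0
# proxy mode: the existing router keeps handing out IP addresses
dhcp-range=192.168.1.0,proxy
# boot program handed to legacy-BIOS clients
dhcp-boot=pxelinux.0
pxe-service=x86PC,"Network boot",pxelinux
enable-tftp
tftp-root=/srv/tftp
"""

with open("/etc/dnsmasq.d/pxe.conf", "w") as f:
    f.write(PXE_CONF)
print("wrote /etc/dnsmasq.d/pxe.conf; restart dnsmasq to apply")
```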
 
2nd on the VM thought. Different VM for each software setup. No waiting for a new OS load via network.
Same base OS, different sets of other stuff... kind of wish Microsoft would flesh out their Windows container tech so that you could address a problem like this, supposing that these applications can function behind a hypervisor.
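
To make that concrete: if the apps do run behind a hypervisor, you can stamp out one VM per tool from a single clean base image. A rough sketch driving VirtualBox's VBoxManage CLI from Python; the VM and tool names here are made up:

```python
import subprocess

# Hypothetical base VM (clean Windows install) and one clone per flashing tool.
BASE_VM = "Win7-Base"
TOOLS = ["jtag-tool", "bdm-tool", "eeprom-reader"]

for tool in TOOLS:
    # Full clone so each tool's drivers live on a completely separate virtual disk.
    subprocess.run(
        ["VBoxManage", "clonevm", BASE_VM, "--name", f"{BASE_VM}-{tool}",
         "--register", "--mode", "machine"],
        check=True,
    )
    # Snapshot the pristine state so a botched driver install is a one-command rollback.
    subprocess.run(
        ["VBoxManage", "snapshot", f"{BASE_VM}-{tool}", "take", "clean-install"],
        check=True,
    )
```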
 
Hmmmm, very good idea. I would have to test it extensively; I'm not sure a VM is capable of direct hardware passthrough. Some of the devices I work with are really horribly coded, most of them rely on ancient FTDI serial COM drivers, and I'm not sure the VM hypervisor is intelligent enough to do a 1:1 passthrough without muddling it up. Some of the software was designed back in the XP days, and while it does work in W10, W10 changed its driver handling, so among those OSes I run W7, or W10 with driver signature enforcement disabled and everything else disabled too (including internet -- I'm stupid, but not that stupid).

Another reason I have this concern is that a lot of the EEPROM and flash programming I do has to be an exact 1:1 copy, checksum-corrected and flashed over equipment of dubious quality. Otherwise the modules installed into a car will not operate correctly. I am not sure about the variability introduced through a VM... I will definitely have to test this when I get the office, set up the server and all those fun bits.
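
One way to take some of the fear out of that test: whatever path the flash takes, read the chip back out afterwards and compare hashes. A minimal sketch, assuming your existing tool can dump the module back to a file; the file names are hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a dump file in chunks so large EEPROM/flash images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the image you flashed, and a fresh dump read back off the module.
source = sha256_of("ecu_image.bin")
readback = sha256_of("ecu_readback.bin")
print("1:1 verified" if source == readback else "MISMATCH - do not install this module!")
```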

I also tried BootIt Bare Metal -- multiple OSes could be installed on one drive, with each OS ignorant of the others installed. It worked brilliantly for my use... until some poorly coded software that was supposed to write its first 1 MB of data to a serial COM port accidentally wrote it over every single boot sector of the OSes managed by BootIt. Took me about 8 hours to recover.

Doesn't running the VM use the server's resources, as opposed to the laptop/desktop's CPU/RAM/GPU?
 
You'd be booting up a VM, or restoring its state, same basic thing as hibernation on the host OS. Main annoyance is that the VM will be slower to use, but if they're lightweight, you don't abuse RAM too much, and you don't use more than one at a time, it shouldn't be an issue. Look for minimal installs or even scripts to keep the part of the OS loaded into RAM reduced, and so on. But yeah, these would run on the laptop; they just don't need a whole lot of resources themselves once booted (and saved) as VMs. Obviously there's some optimization you could do in terms of hardware, but theoretically you could reduce the setup to one laptop, and you could back up the VMs to use them elsewhere, since the hypervisor (be that VirtualBox or even Hyper-V) doesn't change, so the 'hardware' that each VM sees doesn't change.

Now, passthrough of serial interfaces should work fairly well, but that's likely to be your pain point. Can't say that there's really a good way to fix that.
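
One cheap way to validate the passthrough before trusting it with a real module: jumper the adapter's TX pin to RX and run a loopback test from inside the guest. A minimal pySerial sketch; the COM port is a placeholder for whatever the passed-through FTDI adapter shows up as:

```python
import os
import serial  # pySerial: pip install pyserial

# Hypothetical port; whatever the passed-through adapter enumerates as in the guest.
PORT = "COM3"

with serial.Serial(PORT, baudrate=9600, timeout=2) as ser:
    payload = os.urandom(64)          # random bytes so a stuck buffer can't fake a pass
    ser.write(payload)
    echoed = ser.read(len(payload))   # with TX jumpered to RX this should come straight back
    assert echoed == payload, "loopback mismatch: passthrough is mangling data"
    print("serial passthrough looks clean")
```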
 
I've dealt with some older/shoddy/direct type of software like this, and while I've not dealt with it in a VM before, I would expect the VM to potentially introduce something that causes a hard-to-find bug in the finished product, or worse. :(

I think your multi-boot solution was the best, but as you mentioned, a rogue operation messed that up pretty quickly. Well, if restoring that setup was quick, would that mess-up be an issue? Here's where I'm going with this--you're going to spend a lot of time booting each of the 'systems' off the network on a regular basis. Plus, you have to set them all up, which is yet another new thing to learn. Neither of those is optimized for time.

But you've already set up the multi-boot before. Yes, it took 8 hours, but it worked. I would then just use Clonezilla to clone that to a NAS unit so you can quickly re-image your computer's boot drive if it goes wacky again. And if you update the setup with a new 'system', you can then clone this new setup to the NAS and you've got both your new and old setup. Plus, you can back up the files on the NAS and you're set that way too. And you can write the clone to any of the multiple laptops you use and have every tool available simultaneously on every laptop, just in case you need to do something like run the same program on two systems side by side. Couple this with a KVM and you could have a single console with multiple systems behind it, all able to run the software you need; if any of them borks out, you just set it to re-image, switch to a different system, and keep working.
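
Clonezilla is the friendlier way to do the actual cloning, but as a bare-bones stand-in for the same clone-to-NAS idea (this is not Clonezilla itself), a raw dd-over-ssh pipeline run from a live USB looks roughly like this; device, host, and path are all made up:

```python
import subprocess

# Hypothetical names: the laptop's boot disk and a NAS reachable over SSH.
DISK = "/dev/sda"
NAS = "user@nas.local"
PATH = "/volume1/images/multiboot.img.gz"

# Read the whole disk, compress in transit, land the image on the NAS.
# Restore is the same pipe reversed: ssh NAS 'cat PATH' | gunzip | dd of=/dev/sda
pipeline = f"dd if={DISK} bs=4M status=progress | gzip | ssh {NAS} 'cat > {PATH}'"
subprocess.run(["bash", "-c", pipeline], check=True)
```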

Another idea would be to take things a bit further: instead of PXE booting an image across the network, use PXE to boot from an iSCSI 'drive' hosted on the NAS, which would then be usable like any other drive, removing the abstraction layer of the PXE 'boot image'.

Now, I've not done any of these except work with Clonezilla and a plain-Jane NAS, so for more details on how exactly to do these, someone more knowledgeable than me will have to chime in. :D
 
I run multiple vehicle reprogramming packages at work for module reprogramming, and some of the VCI drivers and software refuse to play nice with each other at the worst possible times. Got tired of it and set up each manufacturer's software on a separate partition to avoid further conflicts. I choose the boot partition at startup depending on which vehicle needs a module reflash. Just name each partition for the vendor it applies to. Pain to set up, but it avoids bricking expensive replacement modules.
 
What about installing for "current user only" and having different user accounts for each program? Would that work?

I tried that in the past, but the device drivers were global, meaning they were installed for all users, not just the specific user.

Good thought though!

Most of the software I use has very low RAM requirements. I think the most demanding one needs 1 GB of RAM, so if I max out the server's RAM capacity and allocate 4 GB to each VM, it should run OK.

But yes, you're correct that the serial passthrough is going to be a pain in the butt. I will have to run multiple tests through a VM and through a native install and double-check.

I'm not gonna lie, I definitely thought about doing this, at least up to making a Clonezilla backup of the entire drive periodically and restoring it when the system goes crazy. And that's definitely going to be on the table, I just think it's inelegant. The BootIt software, in this case, is weird as balls... let me give you an example. I had 9 OSes side by side on a 2 TB HDD, so that meant about 10 partitions, with the first one dedicated to BootIt and the remaining ones dedicated to each OS. And yes, it did work pretty well: I was able to enter each separate OS, and all OSes only saw the exact space allocated to them; they were blind to the other OSes installed in other partitions. Disk Management basically said there was empty space on both sides of the partition.

But the funky part is that Clonezilla definitely does not see it this way; to Clonezilla, it's one solid chunk of data. So the whole 2 TB HDD has to be cloned and reimaged as a unit if something goes wrong. BootIt has a backup and imaging feature too, but I never managed to get it working. The image could be cloned, but it could not be backed up to an external drive without data corruption, and restoring it either destroyed Windows or the programs. I didn't play around with it much, and Clonezilla was better/faster/simpler to use. Plus, you know, my server definitely does not have that much storage in it (8 TB currently, more planned in the future).

But I definitely do like the idea of separate VMs -- assuming the passthrough works -- since that means I can restore only that specific instance and not the entire 2 TB HDD.

Out of curiosity, which manufacturer software do you work with? I've found that Ford and GM J2534 work pretty well and are application- and device-agnostic, but JLR, MJDS and FJDS HATE each other (not surprising, but still). BMW ISTA takes up so much space and requires 80-100 GB of free storage that it's just easier to keep it separate, same as MB, and the less said about VIDA, the better. Ugh.

How did you do the partitioning? Do all the OSes see each other? Or did you do it via BootIt like I did?
 
So what I do with my Clonezilla images is use maximum compression, which ends up being much faster to restore since there's less to transfer, and the decompression is pretty much as fast as the drive can write--which happens to be contiguous sectors, so it's as fast as the drive/interface can go. :)

And you can take this to the extreme by having NVMe or PCIe storage where you keep a local copy of the image--at 2 GB/sec it would only take about 15 minutes to restore and be back up and running. And even better, you could literally just keep a second drive ready to go, and when the first one borks, just swap it over in the BIOS and be back up and running. Then reimage the first one overnight.
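
The napkin math behind that estimate, with the sizes assumed above:

```python
# Rough restore time: a 2 TB raw image written at 2 GB/s over NVMe.
image_gb = 2000        # 2 TB, in drive-marketing decimal units
write_gb_per_s = 2
minutes = image_gb / write_gb_per_s / 60
print(f"~{minutes:.0f} minutes")   # ~17 minutes, same ballpark as the 15 above
```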

This whole muck-a-muck does sound a bit inelegant, but from what I've found, sometimes the KISS principle is the easiest way to go, even though it takes some manual work to keep it going.

If you can get VMs to work, that would be great, as you can have a base VM and the working one would only be a copy of the original. Then if something goes wrong, no biggie, just create a new copy again. That would be a solid way of working too.

I still like the idea of just having a massive number of machines and an 8-port KVM. :D I did that for many years. :love:
 
Question: Do these specialized software packages require a dongle in order to function? In a past life, we kept a few old machines around because the software required a dongle or serial port connectivity.
I'd wonder if you could get a powerful system, then put the software into VMs and pass through ports/dongles. For this, I would suggest VMware Workstation over a Microsoft solution like Hyper-V.
There are adapters out there to do serial over USB, serial over LAN, etc.

I figure something like VMware Workstation ought to allow passthrough as needed and be simple to deploy, install and manage. Once the VMs are built/established, there wouldn't be a need to boot into another OS; you could have multiple VMs running with different devices passed through, instead of booting to a specific OS each time you need to switch to one of the other environments. Once the VMs are built, I'd suggest making a copy and putting it onto an external drive or two. If a VM gets hosed, you can pull it off the external and let it rip.
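
Since a Workstation VM is just a folder of files (.vmx, .vmdk, etc.), the copy-to-external step can be as dumb as archiving the folder while the VM is powered off. A sketch with hypothetical paths:

```python
import shutil
from datetime import date

# Hypothetical paths: the VM's folder and a mounted external drive.
VM_DIR = r"C:\VMs\Win7-jtag-tool"
EXTERNAL = r"E:\vm-backups"

# Zip the whole VM folder (vmx, vmdk, nvram, ...) under a dated name.
# If the working copy gets hosed, extract this archive and re-open the .vmx.
archive = shutil.make_archive(
    f"{EXTERNAL}\\Win7-jtag-tool-{date.today()}", "zip", VM_DIR
)
print("backed up to", archive)
```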

With VMware Workstation, you can also pass local drive(s) from the Windows host as a shared drive, which is very useful if you need to move files around.

I have at least 17 VMs I bounce between regularly and love the shit out of VMware Workstation for my workflow.
 
Well, all that will be tested at the end of the month when I move into the new office. I just didn't want to have the same sort of clutter that I do in my current place.

That's why I've been looking at turning my old gaming computer into a NAS, but my office is big enough to stuff a server rack in it, and if I'm going to do a server rack, might as well do the whole shebang... UPS, NAS, VMs, IP cameras, pfSense, Pi-hole, the whole works.

Done serial port passthrough to a Windows 10 VM with VMware Workstation. Worked without issues; only had to dedicate that COM port to the VM I assigned it to.

That is awesome to hear. Was yours a dedicated serial port, or a passthrough?

I might want to pick your brain when I set up the office, because that is almost exactly my ideal scenario. And yes, most of my devices act as a security dongle themselves, and some have a separate USB dongle. Without those, the software will either not boot or simply refuse to function.

If the dongle passthrough works and the serial passthrough also works (as confirmed by shockey), then all that's left is to beef the hell out of the server to the level that it can do everything I mentioned above and run VMs as well.

Luckily the interface software is pretty light on system requirements, so no need for crazy CPU clocks or RAM.
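
For the dongles specifically, most hypervisors can pin a USB device to a VM by vendor/product ID so it auto-attaches whenever it's plugged in. A sketch using VirtualBox's VBoxManage for consistency with the earlier example (VMware Workstation has the equivalent in the VM's USB settings); the VM name and product ID are placeholders, though 0403 is FTDI's real vendor ID:

```python
import subprocess

# Hypothetical VM name and dongle IDs; check Device Manager / lsusb for real values.
VM = "Win7-eeprom-reader"
VENDOR_ID = "0403"   # FTDI's actual vendor ID, common on these adapters
PRODUCT_ID = "6001"  # placeholder product ID

# Add a USB filter so this dongle is captured by the VM whenever it's plugged in.
subprocess.run(
    ["VBoxManage", "usbfilter", "add", "0",
     "--target", VM, "--name", "security-dongle",
     "--vendorid", VENDOR_ID, "--productid", PRODUCT_ID],
    check=True,
)
```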
 
Hey, VIDA is my wife's name, LOL. Just got rid of a nightmare S80 that's been camped in my bay for some time. IDS didn't use to be so friendly; the VCM2 driver used to be a pain when trying to use my CarDAQ-Plus 2 for TDS and ISTA D. The CarDAQ unit has no issue sharing a partition with VIDA and ISTA P. I used a Super Grub bootloader to set it up a couple of years back and don't use it for anything else. Biggest pain in the arse is when ISTA requires updating; it always seems to be a crap shoot, with the first in line following that misery. The latest VIDA update was virtually painless, praise the lord.
 
I'd build a decent workstation with a decent amount of RAM and put the VMs on NVMe (or MS Storage Spaces with an SSD tier). You may find for your workflow that two or three stations fit the bill. My work provided me with a Dell SFF with an i7-8700, 32 GB RAM, and about 2 TB of SSD. This little box is impressive! I often hear it screaming from under my desk when doing intensive things.

I'd happily answer any questions you may have; perhaps I'll learn a thing or two :)
 
Well, I do have my most recent gaming PC that I've retired -- a 4960X and 24 GB of RAM, I think. Should be enough. Just need a few things, and ain't I just lucky that PSUs are in short supply nationally?

Well, I still have two more weeks to putter about and then come back and ask you guys for more details.
 
If you need native hardware access, there's always the VHD boot option.

https://devblogs.microsoft.com/cesardelatorre/booting-natively-windows-10-from-a-vhdx-drive-file/
https://docs.microsoft.com/en-us/wi...oot--add-a-virtual-hard-disk-to-the-boot-menu

Not sure how many entries you could have on the boot menu, but it might be worth a shot. Create a base VHDX on your host machine, then set it up/update it right up until the specialization step. Then shut it down, boot the host OS, and copy the file as many times as you need to. Set each copy up individually with the required specializations.
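
The boot-menu step in those links boils down to a few bcdedit calls per VHDX copy. A sketch of that step, run as admin, with hypothetical paths and simplified GUID handling:

```python
import re
import subprocess

# Hypothetical entry: one VHDX per toolchain, all living under C:\vhds.
VHD_SPEC = r"vhd=[C:]\vhds\jtag-tool.vhdx"
NAME = "Win10 - JTAG tool"

# Copy the current boot entry; bcdedit prints the new entry's GUID.
out = subprocess.run(
    ["bcdedit", "/copy", "{current}", "/d", NAME],
    check=True, capture_output=True, text=True,
).stdout
guid = re.search(r"\{[0-9a-fA-F-]+\}", out).group(0)

# Point the copy's device/osdevice at the VHDX so it native-boots from the file.
for field in ("device", "osdevice"):
    subprocess.run(["bcdedit", "/set", guid, field, VHD_SPEC], check=True)
print(f"added boot entry {NAME} -> {VHD_SPEC}")
```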
 
Well, since current PCs seem to have dropped the idea of physical COM ports, and using a serial cable now requires a USB device with a COM port assigned via driver, it 'should' be easy enough to load the USB serial driver into your VMs and point the software at the USB COM port. Granted, I have only used it one at a time, on a laptop without serial ports, to connect to network console ports. Or, there are network-attached appliances out there (console servers like Uplogix or Lantronix) that provide multiple serial ports; it would take some scripting for a PC to connect to the appropriate serial device. (Where I work we use HPNA to connect to many Cisco devices to grab or change configs. I'm not on the HPNA side of things, but it works pretty well and it can be automated.)
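
Worth noting that pySerial can talk to those console-server ports directly over RFC 2217, so the "scripting to connect to the appropriate serial device" can be pretty thin. A sketch with a made-up hostname and port map:

```python
import serial  # pySerial; serial_for_url handles rfc2217:// targets

# Hypothetical console server: each physical serial port exposed on its own TCP port.
PORTS = {
    "jtag-tool": "rfc2217://console-srv.local:2001",
    "bdm-tool": "rfc2217://console-srv.local:2002",
}

def open_device(name: str) -> serial.Serial:
    """Open a named serial device on the console server as if it were a local COM port."""
    return serial.serial_for_url(PORTS[name], baudrate=9600, timeout=2)

with open_device("jtag-tool") as ser:
    ser.write(b"\r")        # poke the device
    print(ser.read(64))     # whatever banner/prompt it answers with
```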
 