What to think about if a computer interpreted a byte as something other than 8 bits?

In one of the UNIX/Linux classes I took, I overheard a student of my past instructor say something about how they're trying to come up with a way for computers to read a byte as something other than 8 bits. Maybe it was none of my business, but I overheard it. I hope it's not true, because I still haven't been able to master reading binary manually from what my teacher tried to teach us in computer basics back in the 7th or 8th grade. My mom and a friend talked me into throwing out the folder with the work from that class, and I followed through with it, which is part of why I struggled later with mastering computers: I would have needed that material to reference if I was going to better understand subnetting or anything more advanced, whether at the technical college I went to or anywhere else.

How will this affect how computers understand data, and why would it be necessary, considering computers have always read 1 byte as 8 bits? Also, why would they change this when it just complicates things for computers further than they need to be and doesn't seem necessary? I can subnet and interpret binary; it's just extremely painful and time-consuming for me to do so when there are probably, or definitely, easier ways to understand or interpret the data on the screen.
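For what it's worth, you don't have to grind through the binary by hand; the computer can show its work. Here is a minimal sketch in C that prints an address, a mask, and the resulting network in binary and dotted decimal (the 192.168.1.77/26 address below is just a made-up example):

#include <stdio.h>
#include <stdint.h>

/* Print a 32-bit value as four 8-bit groups of binary digits. */
static void print_binary(uint32_t value)
{
    for (int bit = 31; bit >= 0; bit--) {
        putchar(((value >> bit) & 1u) ? '1' : '0');
        if (bit % 8 == 0 && bit != 0)
            putchar('.');
    }
    putchar('\n');
}

int main(void)
{
    /* Example address 192.168.1.77 with a /26 prefix (made-up values). */
    uint32_t address = (192u << 24) | (168u << 16) | (1u << 8) | 77u;
    int prefix = 26;

    uint32_t mask = 0xFFFFFFFFu << (32 - prefix); /* 26 ones, 6 zeros */
    uint32_t network = address & mask;            /* network address  */

    printf("address: "); print_binary(address);
    printf("mask:    "); print_binary(mask);
    printf("network: "); print_binary(network);

    printf("network in dotted decimal: %u.%u.%u.%u/%d\n",
           (unsigned)((network >> 24) & 0xFF), (unsigned)((network >> 16) & 0xFF),
           (unsigned)((network >> 8) & 0xFF), (unsigned)(network & 0xFF), prefix);
    return 0;
}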
 
Why would you want to read binary manually? Ugh.

The rest of that is damn hard to parse. What is your question now?
 
A byte is almost always 8 bits; that's the common definition of a byte.
There have been systems with a different byte size (6-bit and 9-bit bytes showed up on some older machines), but that's unusual.

Different CPUs and systems can process data in larger chunks (the "word size") than one byte at a time, though.
The larger registers (16-bit, 32-bit, 64-bit) are generally made up of smaller sub-registers.

The x86 16-bit register "AX" is made up of the 8-bit registers AL and AH (the low and high bytes).

More info:

http://www.eecg.toronto.edu/~amza/www.mindsec.com/files/x86regs.html

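To make that concrete, here's a minimal sketch in C: CHAR_BIT from <limits.h> reports how many bits this platform's byte (a char) has, and the AH/AL pairing can be imitated with shifts. The 0x12/0x34 values are just made-up examples:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte (char) on this platform.
       The C standard only guarantees it is at least 8; on virtually all
       modern hardware it is exactly 8. */
    printf("bits per byte here: %d\n", CHAR_BIT);

    /* Mimic the x86 AX = AH:AL layout: a 16-bit value built from
       a high byte and a low byte (example values). */
    uint8_t  ah = 0x12;                      /* high byte            */
    uint8_t  al = 0x34;                      /* low byte             */
    uint16_t ax = (uint16_t)((ah << 8) | al); /* combine into 16 bits */

    printf("AH=0x%02X AL=0x%02X -> AX=0x%04X\n", ah, al, ax);
    return 0;
}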
 
I think that person may be talking more about endianness than bitness, and possibly about qubits (quantum computing)? Or is this more about word addressing vs. byte addressing? Dunno, just the little thoughts that come to mind...
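If endianness is what they meant, it's easy to check on your own machine. A minimal sketch in C that stores a known 32-bit pattern and looks at the order of its bytes in memory:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Little-endian machines (like x86) put the least significant byte
       first in memory; big-endian machines put the most significant
       byte first. */
    uint32_t value = 0x01020304u;
    unsigned char bytes[sizeof value];
    memcpy(bytes, &value, sizeof value);

    printf("bytes in memory: %02X %02X %02X %02X\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    printf("this machine is %s-endian\n",
           bytes[0] == 0x04 ? "little" : "big");
    return 0;
}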
 
How will this affect how computers understand data, and why would it be necessary, considering computers have always read 1 byte as 8 bits?
It won't affect anything; they had a hypothetical discussion.

If there were a need for such a change, it would be a regular topic of discussion.
A computer with a different byte architecture would see slow adoption unless it offered a large performance boost, which could then be used to emulate older architectures and run their software.
Otherwise there would be almost nothing to run on it.

But given that a byte-size change alone doesn't imply a performance advantage for the same amount of silicon, it's just a discussion piece.
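One reason adoption would be so slow: a huge amount of existing C code simply assumes 8-bit bytes, sometimes explicitly. A small illustrative sketch (the pack4 helper is just a made-up example of the kind of code that bakes the assumption in):

#include <limits.h>
#include <stdint.h>

/* A guard like this (C11) stops the build on any machine whose
   byte is not 8 bits wide. */
_Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");

/* The kind of code that quietly relies on that assumption:
   packing four 8-bit values into one 32-bit word. */
uint32_t pack4(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
           ((uint32_t)c << 8)  |  (uint32_t)d;
}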
 
A byte is almost always 8 bits; that's the common definition of a byte.
There have been systems with a different byte size (6-bit and 9-bit bytes showed up on some older machines), but that's unusual.

Different CPUs and systems can process data in larger chunks (the "word size") than one byte at a time, though.
The larger registers (16-bit, 32-bit, 64-bit) are generally made up of smaller sub-registers.

The x86 16-bit register "AX" is made up of the 8-bit registers AL and AH (the low and high bytes).

More info:

http://www.eecg.toronto.edu/~amza/www.mindsec.com/files/x86regs.html


Your answer says it best: if a computer interpreted a byte as anything other than 8 bits, it would be unusual, and it was unusual on whatever systems have done it in the past.
 
It won't affect anything; they had a hypothetical discussion.

If there were a need for such a change, it would be a regular topic of discussion.
A computer with a different byte architecture would see slow adoption unless it offered a large performance boost, which could then be used to emulate older architectures and run their software.
Otherwise there would be almost nothing to run on it.

But given that a byte-size change alone doesn't imply a performance advantage for the same amount of silicon, it's just a discussion piece.

I'm not saying there is a need for a change; as I told Spartacus in my response to him, such a change or standard would be unusual now, or was unusual if it occurred in the past. I was just wondering whether this change had been considered before or is still being considered.
 
Undoubtedly to both.
Whether it is implemented outside of a lab is another game though.
 