Why is memory size always represented in bytes?

Discussion in 'Off Topic' started by ChatIndia, Apr 30, 2013.

  1. ChatIndia

    In an 8-bit microprocessor, it makes sense for each memory location to be 8 bits wide, because the processing capability and (most likely) the number of data lines is 8. But in a 32-bit processor we could make each memory location 32 bits wide. Then it would be possible to interface 2[SUP]32[/SUP] × 32 bits = 16 GB of RAM instead of just 4 GB. Yet manufacturers make RAM where each memory location holds only 8 bits. Why?
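    The arithmetic behind the 4 GB vs. 16 GB figures can be checked with a quick sketch (Python, binary units assumed):

    ```python
    # Capacity reachable through a 32-bit address bus depends on the
    # width of each addressable location, not just the address count.
    ADDRESSES = 2 ** 32               # a 32-bit address bus reaches 2^32 locations

    byte_addressable = ADDRESSES * 1  # 1 byte (8 bits) per location
    word_addressable = ADDRESSES * 4  # 4 bytes (32 bits) per location

    GiB = 2 ** 30
    print(byte_addressable // GiB)    # 4  -> 4 GB with byte-wide locations
    print(word_addressable // GiB)    # 16 -> 16 GB with 32-bit-wide locations
    ```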
     
  2. essellar

    Bytes are now just a convenient measure of size/capacity outside of the world of embedded computers (which are still often 8-bit). And it still matters when you talk about the "endianness" and structure of stored data. The actual reading/writing/addressing is done by the "word", the width of which depends on the system architecture. DDR3, for instance, is designed around a 64-bit word, although that would need to be chunked into two 32-bit pieces for local consumption if you are running a 32-bit processor or OS.

    The address space is not directly tied to the memory width; the number of address lines can be very different from the number of data lines. That said, the unit of information is still a byte. Spreading the same information over a 32-bit word would mean that addressing would need to be word-wise rather than byte-wise, so you'd need four times the amount of memory to get the same granularity of addressing, with three-quarters of that space being essentially wasted. (The actual wastage would be somewhat less if backwards compatibility were thrown away; instruction words and next-address locations would still be 75% wasteful, but contiguous binary data would still be "packed". That would require new instruction sets, a new OS, and new versions of programs that wouldn't be directly compatible with older versions.)
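    A toy calculation of the wastage argument above (Python; the working-set size is hypothetical, and the 4× figure assumes one addressable unit per byte of data):

    ```python
    # If addressing is word-wise (32-bit words) but software still needs to
    # reach individual bytes, each byte must occupy its own word.
    data_bytes = 1_000_000               # hypothetical working set: 1 MB of byte data

    byte_wise_memory = data_bytes * 1    # byte-wise: one byte per address
    word_wise_memory = data_bytes * 4    # word-wise: one 32-bit word per byte

    print(word_wise_memory // byte_wise_memory)               # 4    -> four times the memory
    print((word_wise_memory - byte_wise_memory)
          / word_wise_memory)                                 # 0.75 -> three-quarters wasted
    ```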

    Modern versions of the Big Three OSs are 64-bit, designed to run on the current 64-bit PC architectures. That gives you a RAM address space, in the case of Win64 (which is limited to 44-bit addressing), of 16TB. That's probably enough for the next couple of years, at least, and you'd be hard-pressed to find a non-server motherboard that supports more than 32GB. At the moment, there are very few use cases for even that much data in active volatile memory, and most of the cases that do exist involve reducing latency in huge ("big data") databases. In almost all other cases, it's a matter of having several applications on the go, each with its own data set (and with most of them now periodically saving recovery/undo data to disk, you pretty much need a fast solid-state drive to keep the system from bogging down as it reads, writes and modifies gigabytes of TEMP files). Going significantly bigger while staying performant would mean finding a fast NVRAM technology so that nothing needs to be committed to archival storage until you're finished with it.
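    The 16TB figure follows directly from 44 address bits (quick check in Python, binary units assumed):

    ```python
    # 44 usable address bits -> 2^44 addressable bytes = 16 TiB.
    address_bits = 44
    TiB = 2 ** 40
    print((2 ** address_bits) // TiB)  # 16
    ```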
     