Conservative on memory, huh?
Quick: How much memory does a 1000 element array of Bytes occupy?
Doing some math, it seems like 1000 x 8 bits = 8000 bits = 1000 bytes, right?
Right?
No! Wrong!
It occupies 1000 x 32 bits = 32000 bits = 4000 bytes!
How come? Isn't Byte an 8 bit Integer? So it should occupy only one byte per element right? What the hell is it doing occupying 32 bits?
To find out, compile this code using the VB1 compiler in your Cerebral Cortex:
Dim b As Byte = 100
Console.WriteLine(b.GetType().ToString())
What did you get? Of course, the expected System.Byte...
Now, compile this:
Dim b As Byte = 100
Console.WriteLine((b - 10).GetType().ToString())
Now, what did you expect? System.Byte, right? After all, b is a System.Byte, so an operation on a Byte should still be a Byte, right?
Right?
You know the drill: Wrong!
If you compile it in a real VB compiler, you'll get System.Int32! Yes, System.Int32. Not System.Byte... How? Why?
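By the way, you can see this promotion bite you in everyday code. Here's a quick sketch (assuming Option Strict On; the module and variable names are just for illustration):

Option Strict On

Module PromotionDemo
    Sub Main()
        Dim b As Byte = 100
        ' This next line won't even compile under Option Strict On:
        ' b - 10 produces an Integer, and narrowing it back to a Byte
        ' needs an explicit conversion.
        ' Dim oops As Byte = b - 10
        Dim ok As Byte = CByte(b - 10)
        Console.WriteLine(ok.GetType().ToString()) ' System.Byte, but only because we converted
    End Sub
End Module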
Now, try replacing the Byte in the above example with a Short, and you'll still get the same results...
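In other words, a sketch of the Short case:

Dim s As Short = 100
Console.WriteLine((s - 10).GetType().ToString()) ' prints System.Int32, not System.Int16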
But, compile this:
Dim l As Long = 1000
Console.WriteLine((l - 10).GetType().ToString())
Now, what did you expect this time? System.Int32 again?
Nope: this time you really do get a System.Int64. How? Why?
Friggin Reason
Digging through ECMA-335, the technical spec for .NET, I find this gem:
The CLI only operates on the numeric types int32 (4-byte signed integers), int64 (8-byte signed integers), native int (native-size integers), and F (native-size floating-point numbers). However, the CIL instruction set allows additional data types to be implemented:
So? This means that the only true signed Integers in .NET are System.Int32 and System.Int64!
But, what about the others? System.Int16! System.Byte! How?
They're just an illusion! Here's another excerpt from ECMA-335:
Convert instructions that yield short integer values actually leave an int32 (32-bit) value on the stack, but it is guaranteed that only the low bits have meaning (i.e., the more significant bits are all zero for the unsigned conversions or a sign extension for the signed conversions).
So this means that, for example, when you store the value 100 in a Byte, you might expect just 01100100, but what actually ends up on the stack is 00000000000000000000000001100100! All the bits except the rightmost 8 will always be zero for a Byte. So, while you can only ever use 8 bits, 32 bits are taken up. 24 bits wasted!
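If you want to see that 32-bit pattern for yourself, here's a quick one-liner sketch (Convert.ToString with base 2, padded out to 32 digits):

' Prints 00000000000000000000000001100100
Console.WriteLine(Convert.ToString(100, 2).PadLeft(32, "0"c))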
Moral of the Story
The moral of the story: don't use Bytes and Shorts solely to "conserve" memory, because you only make matters worse. You end up wasting memory, not saving it.
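If you want to check what an array of Bytes really costs on your own machine, here's a rough sketch you can run (the module name is just for illustration; GC.GetTotalMemory is only approximate, and the figure includes the array's object header):

Module ByteArrayCheck
    Sub Main()
        Dim before As Long = GC.GetTotalMemory(True)
        Dim bytes(999) As Byte ' 1000 elements
        Dim after As Long = GC.GetTotalMemory(True)
        Console.WriteLine("Approximate bytes allocated: " & (after - before))
        GC.KeepAlive(bytes) ' keep the array alive until after the second measurement
    End Sub
End Module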
2 Comments:
Very well written :)
I don't get the point...
.NET uses Int32 or bigger for its operations, yeah, that's fine. It also seems pretty logical. What would (b + 255) give if b were a byte? An overflow...?
How did you get to the conclusion that an array of 1000 bytes takes 4000 bytes in memory? (My memory profiler disagrees with you, by the way.)
--
Quentin Pouplard
http://myoedev.blogspot.com