
A quick question for the math mavens and tech geeks.



N2NH
11-14-2012, 03:58 AM
If you are using scientific notation for binary, would you use x2ˣ rather than x10ˣ? :help:

For example, decimal 11 = 1011 in binary.

Now, would 1011 be expressed in binary (base 2) scientific notation as 1.011 x 2³? :chin:

n2ize
11-14-2012, 06:25 AM
In a nutshell, yes. In computer science one often comes across this style of scientific notation when representing floating point numbers in binary format. Say, for example, we have the number 42.56. We would store that in binary as follows: split the number at the decimal point, so we have 42 and 0.56. Converting each to binary,

42 = 101010
0.56 = 0.100011110101... (it's a repeating binary fraction)

So we can represent it as

42.56 = 101010.100011110101... which can be stored in "binary scientific notation" form as 1.01010100011110101... x 2¹⁰¹ (some computers may prefer to store it this way)

So yeah, in binary, powers of 2 can indeed move the binary point along.

Disclaimer: my conversions to binary may be wrong; I did them in my head on the fly, so I may have a few of the 1's and 0's wrong. But the idea of shifting the point is the same.

Also, 2¹⁰¹ = 2⁵... the binary value 101 = decimal value 5.
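
If you want to watch the same split / convert / normalize steps happen mechanically, here is a rough Python sketch (my own throwaway illustration, nothing standard; frac_to_binary is just a helper name I made up) that redoes the 42.56 example:

def frac_to_binary(frac, bits=20):
    # Convert a fraction in [0, 1) to a string of binary digits by
    # repeated doubling; floating point round-off means the later
    # digits are only approximate.
    digits = []
    for _ in range(bits):
        frac *= 2
        bit = int(frac)
        digits.append(str(bit))
        frac -= bit
    return "".join(digits)

value = 42.56
int_part = int(value)                     # 42
frac_part = value - int_part              # ~0.56

int_bits = bin(int_part)[2:]              # '101010'
frac_bits = frac_to_binary(frac_part)     # '10001111010111000010...' (approx.)
print(value, "~", int_bits + "." + frac_bits)

# Normalize (assuming value >= 1): shift the binary point left until
# only one bit is left in front of it, counting the shifts as the exponent.
exponent = len(int_bits) - 1              # 5 decimal, i.e. 101 in binary
mantissa = int_bits[0] + "." + int_bits[1:] + frac_bits
print(value, "~", mantissa, "x 2^" + bin(exponent)[2:], "(binary exponent)")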

NA4BH
11-14-2012, 06:02 PM
NERDS

W1GUH
11-14-2012, 09:34 PM
Yea, internally a "floating point" quantity is stored with an exponent, a fraction (or mantissa), and a sign. Further, the fraction is almost always "normalized" by shifting it left or right, if necessary, until there is a single leading 1 bit, and the exponent is adjusted to compensate for that shift. That allows the maximum precision to be stored. Reading that over, I see I've left out a lot of stuff and that might be a little confusing. The Wiki article linked below can fill in the details.

But, unless you're designing a hardware floating point processor or writing low-level code for floating point calculations, it's best to forget about all that -- it's just not really useful for a human and will drive you absolutely nuts til you're used to it.

For more detail, you can look at the Wiki article for IEEE floating point stuff (http://en.wikipedia.org/wiki/IEEE_floating_point).
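
If you're curious what those three fields look like for an actual number, here is a minimal Python sketch (my own illustration, not from the Wiki article; it only uses the standard struct module) that pulls the sign, exponent, and fraction out of a 64-bit double:

import struct

def ieee754_fields(x):
    # Reinterpret the 8 bytes of a double as a 64-bit unsigned integer,
    # then mask off the sign (1 bit), exponent (11 bits), and fraction (52 bits).
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # biased by 1023
    fraction = bits & ((1 << 52) - 1)        # the leading 1 is implicit, not stored
    return sign, exponent, fraction

sign, exp, frac = ieee754_fields(42.56)
print("sign    :", sign)
print("exponent:", exp, "(unbiased:", exp - 1023, ")")   # 1028 -> 5
print("fraction:", format(frac, "052b"))
# So 42.56 is stored as (-1)^sign * 1.fraction * 2^(exponent - 1023).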

kf0rt
11-14-2012, 09:35 PM
NERDS

GEEKS

N2NH
11-15-2012, 05:52 AM
Thank you John and Paul. And to the cheering section...

http://remocoon.mnsi.net/Raspberry17.jpg
:lol:

n2ize
11-15-2012, 11:47 AM
Yea, internally a "floating point" quantity is stored with an exponent, a fraction (or mantissa), and a sign. Further, the fraction is almost always "normalized" by shifting it left or right, if necessary, until there is a single leading 1 bit, and the exponent is adjusted to compensate for that shift. That allows the maximum precision to be stored. Reading that over, I see I've left out a lot of stuff and that might be a little confusing. The Wiki article linked below can fill in the details.

But, unless you're designing a hardware floating point processor or writing low-level code for floating point calculations, it's best to forget about all that -- it's just not really useful for a human and will drive you absolutely nuts til you're used to it.

For more detail, you can look at the Wiki article for IEEE floating point stuff (http://en.wikipedia.org/wiki/IEEE_floating_point).
Yeah, I played around with floating point in binary when I was doing assembly programming. While the arithmetic is not particularly daunting, it still isn't intuitive, and performing operations on those numbers feels awkward, at least until it becomes second nature. We humans are generally better with base 10 because, well, that's what we were taught from day one. The day a 1 or 2 year old finds himself in front of a TV set and starts hearing Big Bird singing 1-2-3-...-10, his training in base 10 has begun. Imagine Big Bird singing 0, 1, 10, 11, etc. instead. Of course, this is all still possible since Romney never got elected and thus didn't get to shut down PBS. ;)

Hexadecimal (base 16) would be great... I mean 0 1 2 3 4 5 6 7 8 9 A B C D E F... ;)