### G-15 Binary-to-Decimal Conversion

On a machine with binary arithmetic, conversion of numbers from decimal form to binary is easy; conversely, on a machine with decimal arithmetic, it is conversion to decimal that is easy. The Bendix G-15 had binary arithmetic internally, but most input and output was done in decimal. [1] The Bendix Computer Division [2] provided standard subroutines for single- or double-precision (7 or 14 digits) [3] conversion in each direction for both integers and fractions.

An integer whose magnitude is less than 10^7 can be exactly represented in either form, so conversion could be exact. However, this is not true of fractions. There is no finite binary fraction that exactly equals decimal 0.1, and no 7-digit decimal fraction that exactly equals hexadecimal 0.01 (1/256). Some rounding was inevitable, but it was highly desirable that converting a decimal fraction to binary and then converting it back would yield the original decimal value.
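
The round-trip property can be illustrated with exact rational arithmetic. This is a Python sketch using `fractions.Fraction`; the 28-bit fraction width is my own assumption for illustration, not taken from G-15 documentation:

```python
from fractions import Fraction

# Decimal 0.1 has no finite binary expansion, so any fixed-width
# binary fraction can only approximate it.
tenth = Fraction(1, 10)
BITS = 28  # assumed fraction width, for illustration only
approx = Fraction(round(tenth * 2**BITS), 2**BITS)  # nearest binary fraction
assert approx != tenth                              # the approximation is inexact

# Rounding the binary value back to 7 decimal digits recovers the
# original decimal fraction exactly -- the desired round-trip property.
back = Fraction(round(approx * 10**7), 10**7)
assert back == tenth
```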

Conversion of a binary fraction to decimal was quite straightforward using available machine operations. The first decimal digit of the fraction could be obtained by multiplying the fraction (in binary) by u, the G-15's hex digit for ten, and extracting the integer part of the result. [4] The remaining digits could be computed in the same way from the remaining fractional parts. Since each multiplier was just a four-bit number, each multiplication required only eight word times. [5]
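
The digit-peeling loop described above can be sketched in modern terms. This is Python, with exact rationals standing in for the G-15's fixed-point hardware; the function name is mine, not Bendix's:

```python
from fractions import Fraction

def binary_fraction_to_decimal_digits(frac, ndigits=7):
    """frac is a binary fraction in [0, 1); returns its first decimal digits."""
    digits = []
    for _ in range(ndigits):
        frac *= 10        # the G-15 multiplied by hex digit u (ten)
        d = int(frac)     # integer part of the product = next decimal digit
        digits.append(d)
        frac -= d         # keep only the fractional part for the next round
    return digits

# Binary 0.1 (one half) is decimal 0.5000000:
assert binary_fraction_to_decimal_digits(Fraction(1, 2)) == [5, 0, 0, 0, 0, 0, 0]
# Hexadecimal 0.01 (1/256) is decimal 0.00390625, so its first seven
# digits are an inexact truncation:
assert binary_fraction_to_decimal_digits(Fraction(1, 256)) == [0, 0, 3, 9, 0, 6, 2]
```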

It might seem that it would be even easier to convert integers than fractions, but it wasn't. A digit-at-a-time strategy would have required division, to extract the low-order digit as the integer mod 10, and then further divisions of the subsequent quotients. But such divisions could not be short-cut like the multiplications, since the full precision results would be needed. Furthermore, the G-15's division algorithm did not provide any quick way to access the remainder. The solution adopted by the Bendix programmers was to divide the integer by the binary value corresponding to 10,000,000 (or to 100,000,000,000,000)--thereby converting it to a binary fraction--and then use the appropriate fraction conversion routine. [6]
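
The Bendix strategy can be sketched the same way. Here `Fraction` division is exact, whereas the G-15 produced a rounded binary quotient; per note [3], the extra bits in the binary representation left enough precision for the digits to come out right:

```python
from fractions import Fraction

def integer_to_decimal_digits(n, ndigits=7):
    """Convert a binary integer 0 <= n < 10**ndigits to decimal digits."""
    # Divide by 10**ndigits, turning the integer into a binary fraction...
    frac = Fraction(n, 10**ndigits)
    # ...then reuse the multiply-by-ten fraction-conversion loop.
    digits = []
    for _ in range(ndigits):
        frac *= 10
        d = int(frac)
        digits.append(d)
        frac -= d
    return digits

assert integer_to_decimal_digits(1234567) == [1, 2, 3, 4, 5, 6, 7]
assert integer_to_decimal_digits(42) == [0, 0, 0, 0, 0, 4, 2]
```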

Here's an anecdote about testing a binary-to-decimal conversion routine.

Notes:

[1] Actually in a form of *binary coded decimal* (BCD), using hexadecimal digits restricted to the range 0-9. I.e., the decimal number 1,234,567 would be encoded as the hexadecimal number 1234567, equivalent to the binary number 0001 0010 0011 0100 0101 0110 0111 (equal to the decimal number 19,088,743).

[2] Coincidentally also abbreviated BCD.

[3] Note that 9,999,999 decimal is only 98967z in hex, so there were more than four "extra" bits in the single-precision binary representation of seven-digit decimal numbers, and more than eight for double-precision representation of fourteen digits.

[4] The G-15 had a special command for extracting bits from the product register, surely designed just for this purpose.

[5] And since a four-bit multiply shifted the multiplier register right just four bits, the entire sequence of seven multipliers could be loaded as the single word, represented in hex as uuuuuuu.

[6] And I never learned of a superior approach.

