

Floating Point Quality: Less Floaty, More Pointed

Years ago I sat next to the Numerics Test Team at Apple Computer. I teased them one day about how they had it easy: no user interface to worry about; a stateless world; perfectly predictable outcomes. The test lead just heaved a sigh and launched into a rant about how numerics testing is actually rather complicated and brimming with unexpected ambiguities. Apparently, there are many ways to interpret the IEEE floating point standard, and learned people are not in agreement about how to do it. Implementing floating point arithmetic on a digital platform is a matter of tradeoffs between accuracy and performance. And don’t get them started on HP… apparently HP calculators had certain calculation bugs that the scientific community had grown used to. So the Apple guys had to duplicate those bugs in order to be considered “correct.”

Among the reasons why floating point is a problem for digital systems is that digital arithmetic is discrete and finite, whereas real numbers often are not. As my colleague Alan Jorgensen says, “This problem arises because computers do not represent some real numbers accurately. Just as we need a special notation to record one divided by three as a decimal fraction (0.33333…), computers do not accurately represent one divided by ten. This has caused serious financial problems and, in at least one documented instance, death.”
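To make that concrete, here is a quick check in Python (any language using IEEE 754 binary doubles behaves the same way): the double closest to one tenth is not exactly one tenth, and the error becomes visible as soon as it accumulates.

    from decimal import Decimal

    # The exact value of the IEEE 754 double nearest to 1/10 is slightly
    # larger than one tenth.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # The representation error surfaces once it accumulates.
    print(sum([0.1] * 10))      # 0.9999999999999999
    print(0.1 + 0.2 == 0.3)     # False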

Anyway, Alan just patented a process that addresses this problem “by computing two limits (bounds) containing the represented real number that are carried through successive calculations. When the result is no longer sufficiently accurate the result is so marked, as are further calculations using that value. It is fail-safe and performs in real time. It can operate in conjunction with existing hardware and software. Conversions between existing standardized floating point and this new bounded floating point format are simple operations.”
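The bound-carrying idea is in the same family as interval arithmetic (see the comments below). Purely to illustrate the general concept in that quotation, and not the patented bounded floating point format itself, here is a toy sketch in Python: each value carries a lower and an upper bound, the bounds are widened outward at every step, and a result is flagged once they drift too far apart.

    import math

    class Bounded:
        # Toy bound-carrying value: the true real number lies in [lo, hi].
        # Bounds are widened outward after every operation, and a result is
        # flagged once its width exceeds a tolerance. A sketch of the general
        # idea only, not the patented bounded floating point format.
        TOLERANCE = 1e-9

        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        @classmethod
        def from_float(cls, x):
            return cls(math.nextafter(x, -math.inf), math.nextafter(x, math.inf))

        def __add__(self, other):
            return Bounded(math.nextafter(self.lo + other.lo, -math.inf),
                           math.nextafter(self.hi + other.hi, math.inf))

        def __sub__(self, other):
            return Bounded(math.nextafter(self.lo - other.hi, -math.inf),
                           math.nextafter(self.hi - other.lo, math.inf))

        def __repr__(self):
            flag = " UNRELIABLE" if (self.hi - self.lo) > self.TOLERANCE else ""
            return f"[{self.lo}, {self.hi}]{flag}"

    # Add one tenth a million times, then subtract the exact expected total:
    # the width of the surviving interval shows how little of that small
    # difference can be trusted.
    total = Bounded.from_float(0.0)
    for _ in range(1_000_000):
        total = total + Bounded.from_float(0.1)
    print(total - Bounded.from_float(100_000.0))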

If you are working with systems that must do extremely accurate and safe floating point calculations, you might want to check out the patent.

Filed Under: Quality

Comments

  1. Matt Heusser says

    June 20, 2017 at 6:16 pm

    I have worked on some systems that used floating point numbers when the numbers really mattered. In our case it was dollars.

    Our best trick was to use integers and printf in a way that inserted a period between the 1000’s and 100’s place.
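    A rough sketch of that trick in Python, purely for illustration (the helper and the two-decimal example are hypothetical, not from Matt’s system): keep money as an integer count of the smallest unit, do all arithmetic in integers, and insert the decimal point only when formatting.

        def format_money(amount_in_smallest_unit: int, decimals: int = 2) -> str:
            # The stored value stays an integer; the decimal point exists only
            # in the formatted string.
            sign = "-" if amount_in_smallest_unit < 0 else ""
            whole, frac = divmod(abs(amount_in_smallest_unit), 10 ** decimals)
            return f"{sign}{whole}.{frac:0{decimals}d}"

        subtotal_cents = 19_99
        tax_cents = (subtotal_cents * 8 + 50) // 100   # 8% tax, rounded, still integers
        print(format_money(subtotal_cents + tax_cents))  # 21.59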

    • Alan Jorgensen says

      June 24, 2017 at 8:12 am

      Matt,
      They used that method to compute time in the Patriot missile system, keeping track of time as an integer count of tenths of a second. The problem came when they converted that value to floating point for positioning calculations after accumulating time for 100 hours. The result had insufficient precision to accurately place the Scud missile, and the Patriot missile missed.
      Alan
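      For scale, here is a rough reconstruction of the widely reported numbers behind this incident; the fixed-point resolution and the Scud speed below are assumptions drawn from public accounts (such as GAO report IMTEC-92-26), not from the comment above.

          from fractions import Fraction

          # 1/10 truncated to the fixed-point resolution the public accounts
          # describe (about 2**-23), an error of roughly 9.5e-8 per tick.
          stored_tenth = Fraction(int(Fraction(1, 10) * 2**23), 2**23)
          error_per_tick = Fraction(1, 10) - stored_tenth

          ticks = 100 * 60 * 60 * 10              # tenth-second ticks in 100 hours
          drift = float(error_per_tick * ticks)
          print(f"accumulated clock drift: {drift:.3f} s")    # ~0.34 s

          scud_speed = 1676                       # m/s, a commonly quoted figure
          print(f"resulting tracking error: {drift * scud_speed:.0f} m")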

      • Alexander Pushkarev says

        July 7, 2017 at 5:08 am

        Why was it necessary to convert it to float in the first place? Would the issue have been eliminated if they had used only decimal, since the missile most likely has a precision of about 0.1 m anyway?

        I see great value in the suggested approach anyway, but I am also interested in alternatives.

    • Alan Jorgensen says

      June 26, 2017 at 5:07 am

      As an additional note, that is the reason that the financial community uses the COBOL language. By default, calculations are performed as scaled integers.
      Alan

  2. Alan Jorgensen says

    June 24, 2017 at 8:07 am

    If you have additional questions about this patent and the need for it, please email me at alan at bounded floating point dot com.

  3. Malcolm Chambers says

    June 25, 2017 at 7:42 pm

    The solution that I used in my IBM PL/1 and COBOL days was fixed decimal, for currency in particular, with intermediate results held to an increased precision. So a dollar value might be packed decimal 5.2 (5 digits before the decimal point and 2 after), but if I was multiplying or dividing two packed decimal numbers of that format, the intermediate result was held as 10.4, adding the precisions together, and then converted “safely” according to accounting rules to the lower precision. This worked easily in COBOL but was harder to construct in PL/1, as the compiler was liable to create its own intermediate result. And particular care always had to be taken around dividing a small number by a large number to ensure correctness.
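    The same discipline can be sketched with Python’s decimal module, which is only an analogy to the packed decimal types described above (the values are illustrative): compute the intermediate at full precision, then round back to two places under an explicit accounting rule.

        from decimal import Decimal, ROUND_HALF_UP

        price = Decimal("123.45")      # a 5.2-style currency value
        rate = Decimal("0.0775")       # e.g. a tax or interest rate

        intermediate = price * rate    # held at full precision: 9.567375
        rounded = intermediate.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
        print(intermediate, rounded)   # 9.567375 9.57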

    Inventory management was interesting where pack sizes had to be managed: after all, if you are filling packs from individual units you cannot have 0.1 left over in the count of individual units. So multiplication and division were all modular arithmetic.
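    In Python terms, that pack-filling arithmetic is just integer division with a remainder (illustrative numbers):

        units, pack_size = 1_234, 24
        full_packs, loose_units = divmod(units, pack_size)   # no fractional units
        print(full_packs, loose_units)    # 51 10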

    The other problem was time. Computers don’t handle time well at all, as per the example of the fatality. The process control software that I worked on at a similar point in time was all based on ticks since midnight, with a great deal of processing to synchronize all the controllers at midnight.

  4. bogdan192 says

    June 29, 2017 at 10:52 pm

    Hi James. The link to the patent images is not loading any images for me within a reasonable amount of time. Tested on Mozilla Firefox version 54.0 (32-bit) on Windows 10 Enterprise (64-bit).
    It does, however, seem to work on Microsoft Edge.

    Is there any way to notify the authority managing the site of this issue?

    [James’ Reply: Your guess is as good as mine.]

  5. Oliver Heimlich says

    January 20, 2018 at 3:13 pm

    The technique described in the patent is widely known as “interval arithmetic”.

    https://en.wikipedia.org/wiki/Interval_arithmetic
    http://standards.ieee.org/findstds/standard/1788-2015.html
    http://standards.ieee.org/findstds/standard/1788.1-2017.html

  6. W. Heisenberg says

    January 21, 2018 at 12:03 pm

    oooops:

    https://insidehpc.com/2018/01/decades-old-floating-point-error-problem-solved/#comment-124515

    John Gustafson says:
    January 17, 2018 at 6:50 pm

    Absolutely amazing that the US Patent Office would grant a patent for an idea I first publicly presented in 2013, and published in a very well-received book (The End of Error: Unum Arithmetic) in February 2015. All three forms of unum arithmetic are open source and free of patent restrictions (MIT Open Source license). For Jorgensen to claim to be the inventor of this concept is pretty outrageous.

  7. Elis Mains says

    May 1, 2018 at 2:23 pm

    W. Heisenberg & Gustafson, if you have a real challenge, why not take it up with the US Patent Office? It’s free.

