I should mention that some folks reject this on a deeper level: they don't think that infinity is a legitimate mathematical construct, or only allow some limited meanings of it.
That's fair. If you reject infinite objects, the entire debate becomes moot. You still don't prove that 0.9999... ≠ 1, but you toss out the notion of infinite decimal expansions in the first place, so the riddle goes away.
The gotcha is that infinity is useful. For example, you probably want some form of a number system that allows π to exist. But algebraically, we only know how to construct π using an infinite process, and the real number line we put it on is usually constructed using infinite sequences too.
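For concreteness, one familiar example of such an infinite process is the Leibniz series, which only pins down π in the limit:

$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$$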
So, you need an alternative to infinity that still makes calculus work, makes irrational numbers work, and so on. That's tough: many mathematicians have tried, and what they came up with just wasn't useful or elegant enough.
Thank you, these kinds of thought exercises are very interesting. The reality is that while an infinite series of any kind might exist mathematically, reality forces us to approximate it. We do not possess infinite containers to hold such objects while we process them, whether in human memory, digital storage, or otherwise. Hence even the simplest computations become corrupted by forced approximation. The best example is primality and very large prime numbers (millions of digits), where limits, rounding, and approximation cause problems. At the atomic scale of precision, using more than 42 digits of pi becomes meaningless: that is already enough to calculate the diameter of the known universe to within less than the size of a hydrogen atom. High-precision calculations beyond 75 digits would seem to be an arbitrary waste of resources.
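A rough back-of-the-envelope sketch of that kind of claim, assuming approximate values for the observable universe's diameter (~8.8e26 m) and a hydrogen atom (~1e-10 m): it compares the circumference computed from truncated pi against a higher-precision reference, using the mpmath library.

```python
# Back-of-the-envelope check: how many digits of pi matter at cosmic scale?
# Assumed (approximate) values: observable-universe diameter ~8.8e26 m,
# hydrogen-atom diameter ~1e-10 m.
from mpmath import mp, mpf, pi

mp.dps = 100                      # work well above the precisions we test
pi_ref = +pi                      # force evaluation of pi at 100 digits
diameter = mpf("8.8e26")          # metres (assumed)
hydrogen = mpf("1e-10")           # metres (assumed)

reference = pi_ref * diameter     # "exact" circumference for comparison

for digits in (10, 20, 30, 40, 42, 50):
    pi_truncated = mpf(mp.nstr(pi_ref, digits))   # pi rounded to N digits
    error = abs(reference - pi_truncated * diameter)
    verdict = "smaller than" if error < hydrogen else "larger than"
    print(f"{digits:2d} digits: error ~ {mp.nstr(error, 3)} m "
          f"({verdict} a hydrogen atom)")
```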
I have wondered (ever since I picked it up from a book on the subject - something like _Limits: A Transition to Calculus_) whether an approach where the "limit" idea is primitive might be pedagogically more effective than standard presentations. Nonstandard analysis proper I've been told drives the mathematicians crazy (not sure why other than tradition). I paid attention to this long ago because I was interested in the philosophical literature on supertasks and have also run into debates over continuity in the philosophy of physics literature. It is revisionist, but I think we still haven't settled (except in "mathworld", maybe) our understanding of continua. Leibniz warned us! :) Some of the last bits of that further clicked into place when I saw a Mathologer video on "limits at infinity". I realized that once again mathematical Platonism had crept into the teaching I'd received! (Mathologer is unusual - he's a mathematician who is *not* a Platonist in this sense.)
0.9999… = 1 irks me, for some reason. But since 0.333… = 1 ÷ 3, it follows that 0.999… must be 1, and on top of that, our intuition is shown to be wrong.
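Spelled out, that argument is just:

$$0.333\ldots = \tfrac{1}{3} \quad\Rightarrow\quad 3 \times 0.333\ldots = 0.999\ldots = 3 \times \tfrac{1}{3} = 1.$$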
Related political implications https://open.substack.com/pub/nathanormond/p/if-freedom-is-freedom-to-say-22-equals?r=1v1mzp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Pedagogically, there is an important step to take before going to the proof: figure out whether your audience associates number-ness with syntax or with semantics, and how they move between the two.
What sets people on fire is the intuition violation, not the proof. People easily slip into thinking in syntax. You can pass most basic math in school with just syntactic rule manipulation, and that forms the intuition.
If you ask what (1 - 0.999...) is and demand an answer within a second, someone always says 0.111... It's a cognitive shortcut. Writing it as (1.000... - 0.999...) can produce a different quick answer.
When you compute a limit using hyper-reals, aren't you supposed to take its "standard part", and so .999... = st(1-infinitesimal) = 1? Otherwise the limits of functions like f(x)=x^2 would be different in hyper-real and real analysis.
Right, that's the operation I'm trying to explain here. Saying "you're supposed to" is not a convincing answer, saying "as you can see, there's no other way to map it to reals" is hopefully a bit better.
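For instance, the usual hyperreal computation of the derivative of f(x) = x² already uses exactly that step: with ε an infinitesimal,

$$f'(x) = \operatorname{st}\!\left(\frac{(x+\varepsilon)^2 - x^2}{\varepsilon}\right) = \operatorname{st}(2x + \varepsilon) = 2x,$$

and dropping the st(·) would leave an answer that isn't a real number at all.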
I am an engineer.
0.9999... is close *enough* to 1.
Is 'context' important? e.g. 0.9999 might be a tolerance necessary for a cylinder to expand in a combustion engine, but is useless when trying to store gases. A vacuum of 0.0001 contains 100 times more molecules than a vacuum of 0.000001, which might contaminate the outcome.
You missed the "...".
That's a cute way to look at it. I'm comfortable with the notion that 1.0 and 0.999… are just two ways of spelling the same value. For one, it removes an apparent exception to the idea that, while it's possible to exactly name a real number, it's impossible to name the *next* (or previous) real number. There's no successor function for the reals. (Another way of saying they're uncountably infinite.) I used to have to include the exception that, given 0.999…, one can name 1.0. But it's not really an exception if those are the same number. Every real is an isolated island.
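A one-line way to see the "no next real" point: for any two distinct reals,

$$a < b \;\Rightarrow\; a < \frac{a+b}{2} < b,$$

so no candidate "successor" of a can ever be the closest real above it.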
Model it as an interactive game, where a disbelieving challenger chooses a small epsilon and you are always able to respond to the challenge by (using a formula with epsilon now fixed and finite) finding the delta that gets you closer to the target than epsilon. It makes "for all ... there exists" much more intuitively logical.
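A minimal sketch of that game for 0.999… specifically (the names here are just illustrative): the challenger hands over an epsilon, and we answer with how many 9s are needed so the finite truncation lands within epsilon of 1.

```python
import math

def respond(epsilon: float) -> int:
    """Challenger picks epsilon > 0; we return N such that
    1 - 0.999...9 (N nines) = 10**-N < epsilon."""
    if epsilon <= 0:
        raise ValueError("epsilon must be positive")
    return math.ceil(-math.log10(epsilon)) + 1  # +1 keeps the bound strict

for eps in (0.5, 1e-3, 1e-12):
    n = respond(eps)
    print(f"epsilon={eps}: {n} nines suffice, since 10**-{n} = {10.0**-n:g} < {eps}")
```

The challenger never gets to pick "infinitely small"; epsilon is always a fixed finite number, which is exactly what makes the "for all epsilon, there exists N" statement checkable.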
Calling it the hyperreals always felt weird to me.
I prefer "reals 14 Max" or "reals 11 enterprise edition"
Thanks, I "borrowed" your joke when expanding the article!
What you call "hyperreal analysis", I knew it as "non-standard analysis". This is to say, there is a difference between 0.999... and 1, but not in the real line. However, the argument stating that 10 x 0.999... = 9.999...90 is interesting. It's a Hilbert grand hotel where you accomodate the new guest 0.