Here's a paradox, due to UCI philosopher of science Jeff Barrett and his coauthor Frank Arntzenius, that I think most mathematicians will instantly understand but that greatly puzzles everybody else I've tried telling it to.
You start with an infinite pile of dollar bills in front of you. The bills carry integer serial numbers, running sequentially from number one at the top of the pile downward, but they all have the same value. You're going to make an infinite sequence of moves, with the goal of collecting as much money as possible.
On move n, you have two choices: you can either take a single bill from the top of the pile, or you can take a larger number of bills, 2^(n+1) of them (4 on the first move, 8 on the second, 16 on the third, and so on). But whenever you take the larger number of bills, you then have to burn the lowest-numbered bill you hold, so your net gain is a little smaller: 3, 7, 15, etc.
The question is: what is your optimal strategy?
Clearly, at each step, taking the larger number of bills maximizes your profit.
But if you do this every time (or even infinitely many times), you will end up with nothing: every bill you take will eventually be burned, because the burns happen in serial-number order, so bill number m is gone by the mth time you take the larger option. So it's better just to take one bill at each step, ending with all the bills in your hand and nothing burned.
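To make this concrete, here's a minimal simulation sketch, mine rather than anything from Barrett and Arntzenius, comparing the two pure strategies over a finite number of moves; the assumption that bills come off the pile in serial-number order, and the names greedy and modest, are my own.

```python
# Minimal sketch (not from the original paper): compare the greedy strategy
# ("always take 2^(n+1) bills, then burn the lowest-numbered bill held") with
# the modest strategy ("take one bill per move") over finitely many moves.
# Assumes bills come off the top of the pile in serial-number order 1, 2, 3, ...

def greedy(moves):
    held = set()
    next_serial = 1
    for n in range(1, moves + 1):
        take = 2 ** (n + 1)                     # the larger option on move n
        held.update(range(next_serial, next_serial + take))
        next_serial += take
        held.remove(min(held))                  # burn the lowest-numbered bill held
    return held

def modest(moves):
    # One bill per move; nothing ever gets burned.
    return set(range(1, moves + 1))

for moves in (5, 10, 15):
    g = greedy(moves)
    print(moves, len(g), min(g), len(modest(moves)))
# The greedy pile is far bigger at every finite stage, but min(g) == moves + 1:
# bill m is burned on the mth move, so in the infinite game nothing survives.
```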
There's been a bunch of follow-up work trying to explain why this isn't a paradox at all (and really it isn't; the mathematics of it is perfectly clear). But it seems genuinely confusing to a lot of people. And Barrett uses it to argue a more serious point: when we try to define rational decision-making, we have to look at the whole context of the world in which the decisions are made and how they fit together; it doesn't work to call each individual decision rational on its own merely because it maximizes utility.
Maybe one way to explain it to non-mathematicians is to vary the problem slightly. This time, at each stage you have to destroy all the money you've got so far, but you are then given the next 10^n dollar bills. Shouldn't that be even better than the 2^(n+1)-1 deal? After all, at any finite stage you've got more. This version makes it more obvious that if all you get to keep is undestroyed bills, then what matters is not how much money you have at any finite stage, but how much money you have that will never be destroyed.
Suppose you find an amazing investment that will increase your money by 20% per year above inflation. What should you do? One argument says that you should spend as little as possible, because whatever you spend now prevents you from spending roughly twice as much if you wait just four years (1.2^4 is about 2.07). But if you stick to that policy, you'll end up never enjoying the money, or perhaps spending it only when you're too old to enjoy it properly, once the expectation of being dead soon begins to kick in.
I'm not claiming it's the same puzzle, but it feels somewhat similar in spirit.
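For what it's worth, the "twice as much in four years" figure is just compound growth at 20% per year; a quick illustrative check (my numbers, nothing more):

```python
# Purchasing power of one unspent dollar after t years of 20% real growth.
# Waiting four years roughly doubles it, so deferring any particular purchase
# always looks better -- yet the limiting policy defers forever and buys nothing.
for years in (4, 8, 12, 20):
    print(years, round(1.2 ** years, 2))   # 2.07, 4.3, 8.92, 38.34
```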
Just to be clear, the random bill is not chosen from the latest batch of ten, but from all the bills you have received so far. That is an important detail.
(At the end there I used some lemma I'm too lazy to formalize, much less prove.)
Then, since we have countably many events ("the nth bill survives"), each with probability zero, the expected number of events that occur is zero, and hence with probability 1 none of them occurs.
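If it helps, here is a numerical sketch of that hand-waved lemma, under my reading of the variant: ten bills arrive at each stage, and one bill chosen uniformly at random from everything you currently hold is destroyed. Each fixed bill's survival probability is then a product of factors 1 - 1/N_k with N_k growing linearly, and such a product tends to zero.

```python
# Numerical sketch of the lemma (my reading of the setup): ten bills arrive at
# each stage, and one currently-held bill is destroyed uniformly at random.
# The probability that a bill from the first batch survives the first K stages
# is the product over k = 1..K of (1 - 1/N_k), where N_k = 9k + 1 is the number
# of bills held when stage k's burn happens.  Since sum(1/N_k) diverges, the
# product tends to zero (roughly like K**(-1/9)).

def survival_probability(stages):
    p = 1.0
    for k in range(1, stages + 1):
        held_before_burn = 9 * k + 1    # 10 arrive per stage, one burned per stage
        p *= 1 - 1 / held_before_burn
    return p

for stages in (10, 100, 10_000, 1_000_000):
    print(stages, survival_probability(stages))
# The expected number of surviving bills is the sum over bills of these
# vanishing survival probabilities, i.e. zero, which is the step being waved at.
```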
We can generalize and dramatize your problem by imagining humanity trying to take Heaven by storm. On the nth day, f(n) people are born, but Lucifer picks one person at random from the total population and sends them to Hell. Anyone who lives eternally makes it to Heaven.
Puzzle 1. How fast does f(n) need to grow for one person to make it to Heaven with some nonzero probability?
Puzzle 2. How fast does f(n) need to grow for at least one person to make it to Heaven with probability 1?
Puzzle 3. How fast does f(n) need to grow for there to be a nonzero probability of infinitely many people making it to Heaven?
Puzzle 4. How fast does f(n) need to grow for the probability of infinitely many people making it to Heaven to equal 1?
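For anyone who wants to poke at Puzzle 1 numerically before solving it, here's an exploratory sketch under my own modelling assumptions (each day the births happen first, then Lucifer picks uniformly from everyone still alive); the partial products below are the chance that a day-1 person is still around after a given number of days.

```python
# Exploratory numerics for Puzzle 1 (a sketch, not a solution).  Assumption:
# on day n the f(n) births happen first, then Lucifer picks uniformly from the
# P(n) = f(1) + ... + f(n) - (n - 1) people still alive.  A day-1 person then
# survives the first `days` days with probability prod_{n <= days} (1 - 1/P(n)),
# which has a nonzero limit exactly when sum(1/P(n)) converges.

def survival(f, days):
    born = 0        # total people born so far
    removed = 0     # total sent to Hell so far
    p = 1.0
    for n in range(1, days + 1):
        born += f(n)
        p *= 1 - 1 / (born - removed)
        removed += 1
    return p

for name, f in [("f(n) = 2", lambda n: 2),
                ("f(n) = 2n", lambda n: 2 * n),
                ("f(n) = n^2 + 1", lambda n: n * n + 1)]:
    print(name, [round(survival(f, d), 4) for d in (10, 1_000, 100_000)])
# With constant f the partial products keep shrinking toward 0 (sum 1/P(n) is
# harmonic-like); with f(n) = 2n or faster, P(n) grows quadratically, the sum
# converges, and the products appear to level off at a positive value.
```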