The cost of the number 9
So we see the real issue isn't whether you can count on 100% uptime, but whether the downtime your "100% available" system will inevitably have actually costs you all that much.
Are you serving personal pictures on a home DSL line? If so, 99% uptime is probably for you. What's the real cost of a few days of unavailability per year?
Are you serving data commercially? If so, the cost of anything more than maybe 99.9% uptime may not be worth it. (That's just under 9 hours of downtime per year.) Think about the freebie web server at your local ISP. If it's down for a couple of afternoons per year, is anybody going to complain much?
Are you serving financial records for a state government? If so, the cost of anything more than maybe 99.99% uptime may not be worth it. (That's just under 1 hour of downtime per year.)
Are you processing Visa cash transactions for entire nations? If so, anything more than 99.999% uptime may not be worth it. (That's about 5 minutes of downtime per year.)
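The figures above are simple arithmetic: each extra nine cuts the allowed downtime by a factor of ten. A quick sketch (using a 365.25-day year, so the numbers differ slightly from the rounded values in the text):

```python
def downtime_per_year_minutes(nines: int) -> float:
    """Minutes of allowed downtime per year at the given number of nines.

    E.g. nines=3 means 99.9% uptime, i.e. a 10**-3 fraction of the year down.
    """
    minutes_per_year = 365.25 * 24 * 60   # 525,960 minutes
    return minutes_per_year * 10 ** -nines

for n in range(2, 6):
    availability = 100 * (1 - 10 ** -n)
    print(f"{availability:.3f}% uptime -> "
          f"{downtime_per_year_minutes(n):,.1f} minutes of downtime/year")
```

Running this shows the ten-fold shrinkage: roughly 3.7 days at two nines, 8.8 hours at three, 53 minutes at four, and 5.3 minutes at five.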
Each of these "nines" costs exponentially more. A home computer running the latest consumer-grade O/S can generally maintain 2 nines without too much difficulty. A basic server running a server O/S (e.g., Linux) can generally sustain close to 3 nines without difficulty: when there's a problem, you can drive to the local colo and reboot the server, and keeping a spare server handy plus reliable backups means you can recover in less than 8 hours or so. It gets pretty spendy at 4 nines: 99.99% gives you just under an hour per year, which means you are hosting a fully redundant cluster with lots of realtime "auto-recover" options. And 99.999% uptime is insanely expensive. Not only are you fully redundant, but you are actually watching each individual process to ensure that it completes, even if the hardware or process dedicated to it fails.
5 nines, along with high performance, can be ridiculously expensive.
So how much money you should spend on uptime depends on how much downtime really costs you.