Okay, from a practical perspective: that's NEVER going to happen. Over-the-air OS updates? Yeah. Between the iPads and iPhones that have been sold, that's somewhere north of 50 million devices trying to download a several-hundred-megabyte file all at the same time. AT&T and Verizon would be brought to their knees for WEEKS.
Huh? 50 million is the worldwide number. AT&T/VZ != The world's carriers. (Yet? Ha.)
I think the U.S. numbers were closer to 20 million last I checked?
Distributed file servers and updating 20 million devices are child's play for a real sysadmin/engineering team. AT&T and VZ don't want to pay for Apple's infrastructure, however, so it'd be up to Apple to plunk down some of those billions in cash they're sitting on.
I'm not going to give them a pass on something they could deploy with a thousand or two Linux boxes and rsync. Seriously.
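To make that concrete, here's a minimal sketch of the kind of push script I mean - the mirror hostnames and file paths are made up for illustration, but the rsync flags are standard:

    #!/usr/bin/env python3
    # Rough sketch: push one firmware image from an origin box to a fleet
    # of edge mirrors with rsync over SSH. Hostnames/paths are hypothetical.
    import subprocess

    IMAGE = "/srv/updates/ios-update.ipsw"  # the several-hundred-MB payload
    MIRRORS = [f"mirror{n:04d}.example.net" for n in range(1, 2001)]

    def push(host: str) -> bool:
        """rsync the image; --partial lets an interrupted copy resume."""
        result = subprocess.run(
            ["rsync", "-az", "--partial", IMAGE, f"{host}:/srv/updates/"]
        )
        return result.returncode == 0

    # Serial loop for clarity; in practice you'd fan this out in parallel.
    failed = [h for h in MIRRORS if not push(h)]
    print(f"{len(MIRRORS) - len(failed)} mirrors updated, {len(failed)} to retry")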
They just don't have effective political ways to deploy *inside* the carriers' networks. If they did, the carrier backbones are more than adequate to do mass distribution internally.
Mass distribution is child's play for production-level sysadmins, really. If Apple can stream on-demand TV to my house over wireline, they can obviously build the servers to handle iOS distribution.
They just have to move it much, much closer to the end user, and that means getting the carriers to play ball. That's the hard part, not the bandwidth. Proper QoS-style service design in the OS could be set up to only do such downloading when a particular cell site isn't at max capacity serving customer content, too. That'd require the carriers to integrate the servers properly. "Live" data first, update data goes to the back of the school bus, priority-wise. Simple.
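As a toy illustration of that "back of the school bus" rule - site_utilization() and send_update_chunk() are hypothetical stand-ins for whatever telemetry and transport the carrier actually exposes:

    #!/usr/bin/env python3
    # Toy sketch of QoS-deferred update delivery: update chunks only move
    # when the cell site has spare capacity. All names are hypothetical.
    import random
    import time

    BUSY_THRESHOLD = 0.80  # above this, the site is busy serving "live" traffic

    def site_utilization() -> float:
        """Stand-in for real cell-site load telemetry."""
        return random.random()

    def send_update_chunk(chunk_id: int) -> None:
        """Stand-in for actually pushing one chunk to a device."""
        print(f"sent chunk {chunk_id}")

    def drain_update_queue(total_chunks: int) -> None:
        chunk = 0
        while chunk < total_chunks:
            if site_utilization() < BUSY_THRESHOLD:
                send_update_chunk(chunk)  # site has headroom: updates may flow
                chunk += 1
            else:
                time.sleep(1)             # live traffic first; updates wait

    drain_update_queue(total_chunks=10)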
It's easily doable if the right folks sat down at a drawing board. It's not even slightly difficult other than scale, and with virtualization they could thump the scalability problems in the head. I was doing this type of stuff in the 90s with nothing but SSH, a custom CD-ROM to boot messed-up systems (no PXE back then), and a pair of hands at the site to push the power button. (The company wouldn't spring for Wake-on-LAN cards, which are commonplace now, as are "lights-out" management tools on even the cheapest pizza-box servers.)
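For flavor, the "nothing but SSH" part of that 90s playbook boils down to a loop like this - the hostnames and the command are placeholders:

    #!/usr/bin/env python3
    # The 90s playbook, minus the boot CD: fan one command out over SSH.
    # Hostnames and the command shown are placeholders.
    import subprocess

    HOSTS = ["box01.example.net", "box02.example.net"]

    for host in HOSTS:
        # BatchMode=yes fails fast instead of hanging on a password prompt
        subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, "uptime"],
            check=False,
        )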
You can own an iPad without owning a computer, and plenty of people do. And you can take it to the Apple Store for OS updates if you feel the need. (I'm still running iOS 3.0.1 on my iPhone, and it works just fine, thankyouverymuch.) In Tom's situation, he's got a computer that he can use to sync and back up, and his lack of internet is temporary... so I just don't see it as a big problem.
Now, if Tom decides that web and e-mail aren't all he wants to do and that he wants to watch movies on the iPad (I bet he's got a TV in the motor home for that, though), then yeah, he's going to need to find himself a WiFi hotspot to download feature-length movies. That wasn't part of his stated "mission requirements", though.
Agreed that Tom has better options. That was my point. I felt your post made it seem like the iPad was a good option. I disagreed. The iPad alone would make an awful "one device" customer experience without Apple leaning on the wireline carriers for bandwidth.
I've been in one long meeting with the exec who heads AT&T's Linux team. The guy is bright, and as much as the world loves to bash AT&T (me included), their Linux team could take this on - but they'd need $ from both Apple and AT&T to pull it off. And the VZ Unix folks are really strong, too.
They'd all just need leadership to say "Get it done," and I bet it could be ramped up in less than a year - six months with the right executive buy-in.
The economic reality is, Apple likes the mobile devices to pull people into the stores to consider buying a Mac. Paying big ca$h to install servers at any carrier isn't even on their radar, I bet.
Technology-wise, though, the problem is a very simple one to tackle. Put the servers out as close as possible to the "last mile" and build them to be remotely bare-metal recoverable, so a central office (CO) tech can simply swap parts.
This is already tried-and-true server tech. Heck, nowadays companies like Expedia (I've seen Expedia's rack farms) just let multiple boxes die in place and load-balance around them until the traffic is about to overwhelm that farm, then turn up an entire new rack of pizza boxes and take the first rack out of service for a top-to-bottom rebuild: OS reloads plus hardware swaps/maintenance. A few days later, with no drama, they're ready to add the original rack back to the load-balanced server farm, which now has double the bandwidth. Then they alternate racks until load/demand is high enough for new hardware and/or another rack.
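Here's roughly that rotation expressed as code - a sketch only, with made-up rack names and a fake load reading standing in for whatever the real load balancer reports:

    #!/usr/bin/env python3
    # Sketch of the rack rotation described above: when the active farm
    # nears its limit, bring a fresh rack into the pool and pull the
    # oldest rack out for rebuild. All names are made up.
    from collections import deque

    TURN_UP_AT = 0.85  # farm utilization that triggers turning up a new rack

    active = deque(["rack-A", "rack-B"])  # serving traffic, oldest first
    spares = deque(["rack-C", "rack-D"])  # rebuilt and ready to go

    def farm_utilization() -> float:
        """Stand-in for the load balancer's aggregate load reading."""
        return 0.9  # pretend we're near the limit

    def rotate() -> None:
        if farm_utilization() >= TURN_UP_AT and spares:
            fresh = spares.popleft()
            active.append(fresh)      # new rack joins the pool first...
            stale = active.popleft()  # ...then the oldest rack drains out
            print(f"turned up {fresh}; {stale} pulled for rebuild")
            # after rebuild and hardware swaps, ops re-adds `stale` to spares

    rotate()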
Scaling this up to more sites is pretty easy to do, and it's actually easier from a power/cooling standpoint. Three to five pizza boxes per central office is nothing. Managing the logistics for parts replacement is harder than designing the distributed file servers.
I know! They're probably just waiting for the 19" rack-mount kits and -48 VDC power supplies for the Mac Minis.