As I said a while back, when this thread died down - you may be right; I'm still not sure about that.
But the fact remains that processing large numbers of models is a very real need, one you can't elegantly work around. I discussed the issue with a colleague who is a Rails expert - it turns out Rails has this very feature.
Rails 2.3 added a new method, find_in_batches.
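Something like this, as a minimal sketch - Post is a stand-in model and process is a hypothetical helper, so adjust to taste:

```ruby
# Loads records 1,000 at a time instead of instantiating the whole
# result set at once, so memory use stays flat.
Post.find_in_batches(:batch_size => 1000) do |posts|
  posts.each { |post| process(post) }
end
```

There's also find_each, a thin wrapper around find_in_batches that yields one record at a time instead of an array.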
And long before they did that, there was a third-party plugin (still maintained) that adds this feature.
The third-party plugin is particularly interesting, because the author claims it can actually handle a custom ORDER BY.
But the basic approach is the same - the record set is processed in smaller batches, which avoids the memory burden.
In the standard Rails implementation, they achieve this by forcing your query to sort by primary keys, so that, for example, if you select posts along with their tags, each set of tags arrives in order with its related post, e.g. ORDER BY post.id, tag.id.
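Under the hood this is just keyset pagination on the primary key. Roughly the following - a sketch of the idea in 2.3-era finder syntax, not the actual Rails internals (process is again a hypothetical helper):

```ruby
# Fetch the next 1,000 rows after the last id we saw, ordered by
# primary key, and repeat until the table runs out.
last_id = 0
loop do
  batch = Post.find(:all,
                    :conditions => ["posts.id > ?", last_id],
                    :order      => "posts.id ASC",
                    :limit      => 1000)
  break if batch.empty?
  batch.each { |post| process(post) }
  last_id = batch.last.id
end
```

Because each query seeks past the last primary key instead of using OFFSET, the database never has to scan and discard the rows from earlier batches.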
Turns out, in 99% of cases this limitation doesn't get in the way of anything you're trying to do, because when you're reading a very large number of records, it's usually during a batch operation of some sort.
Still, the third-party implementation would be worth a peek - if the plugin does what the author claims, that would be even better. For example, I often have clients who request a CSV export of some data, and they usually expect that if the paged HTML view they're using is ordered by customer name, the export comes out ordered the same way.
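I haven't read the plugin's source, so I won't guess at its API, but the usual trick for ordered batches is keyset pagination on the sort column with the primary key as a tie-breaker. A hedged sketch of that technique - Customer and the CSV layout are illustrative, not anything from the plugin:

```ruby
require 'csv'

# Export customers ordered by name, in batches of 1,000.
# The row-value comparison (name, id) > (?, ?) works on MySQL and
# PostgreSQL; the id tie-breaker keeps duplicate names stable
# across batch boundaries.
CSV.open("customers.csv", "w") do |csv|
  csv << ["id", "name"]
  last_name, last_id = "", 0
  loop do
    batch = Customer.find(:all,
      :conditions => ["(name, id) > (?, ?)", last_name, last_id],
      :order      => "name ASC, id ASC",
      :limit      => 1000)
    break if batch.empty?
    batch.each { |c| csv << [c.id, c.name] }
    last_name, last_id = batch.last.name, batch.last.id
  end
end
```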