The guide states:
This is a bottleneck I believe could be removed.
When parsing a URL, rather than walking through all of the CUrlRule instances and trying to resolve each of them, how about resolving it just the first time and caching the result?
Depending on the volume of unique URLs in a given application, this feature would need to be scalable. That is to say, an application with 100,000 unique URLs can't practically cache the results in 100,000 separate files, but 2,000 files with 50 URLs cached in each would probably be acceptable. The overhead of loading an array of 50 URL hashes is minimal.
For a growing application with an increasing number of complex rules, this design eliminates the growing bottleneck: lookup would be near-constant, so application performance would not degrade with every added rule.
To be truly scalable, the application developer would need to configure an estimated number of unique URLs, or simply the number of file segments to use for caching. The segmentation of resolved URLs into cache files could then be tuned to support an extremely large number of URLs and an arbitrary number of rules.
What do you think?