CUrlManager performance

The guide states that using URL rules degrades application performance, because when parsing an incoming request CUrlManager tries each rule in turn until one applies.

This is a bottleneck I believe could be removed.

When parsing a URL, rather than walking through all of the CUrlRule instances and trying to resolve each of them, how about doing the full resolution only the first time and caching the result?

Depending on the volume of unique URLs in a given application, this feature would need to be scalable. That is to say, an application with 100,000 unique URLs can’t practically cache the results in 100,000 files - but 2,000 files with 50 URLs cached in each file would probably be acceptable. The overhead of loading an array with 50 URL hashes is minimal.
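Roughly what I have in mind, as a sketch in plain PHP - the helper names, the crc32 choice, and the bucket file layout are all my own assumptions, not anything CUrlManager does today:

```php
<?php
// Hypothetical helpers: map a request path to one of N cache files ("buckets"),
// then look the resolved route up inside that file.

function cacheFileForUrl($pathInfo, $segmentCount, $cacheDir)
{
    // crc32 is a cheap, stable hash; modulo picks one of N buckets.
    $bucket = abs(crc32($pathInfo)) % $segmentCount;
    return $cacheDir . '/urlcache_' . $bucket . '.php';
}

function lookupResolvedRoute($pathInfo, $segmentCount, $cacheDir)
{
    $file = cacheFileForUrl($pathInfo, $segmentCount, $cacheDir);
    if (!is_file($file)) {
        return null;
    }
    // Each bucket file would simply return array('post/123' => 'post/view', ...).
    $map = include $file;
    return isset($map[$pathInfo]) ? $map[$pathInfo] : null;
}
```

With 2,000 buckets, each include only loads a small array, so the cost of a cache hit stays roughly flat no matter how many URLs have been cached.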

For a growing application with an increasing number of complex rules, this design eliminates the bottleneck: lookup time would be near-constant, so application performance would not degrade with every added rule.

In order to be truly scalable, the application developer would need to configure an estimated number of unique URLs, or simply the number of file segments to use for caching - the segmentation of resolved URLs into cache files could then be tuned to support an extremely large number of URLs and an arbitrary number of rules.
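That knob could be a property on a custom URL manager, something like the following - again purely hypothetical; `CachedUrlManager`, `cacheSegments`, and the `lookupResolvedRoute()` helper from the sketch above are invented names, not part of Yii:

```php
<?php
// Hypothetical extension point, not existing Yii code.
class CachedUrlManager extends CUrlManager
{
    // Number of cache files to spread resolved URLs across;
    // tune this to the expected number of unique URLs.
    public $cacheSegments = 2000;

    public function parseUrl($request)
    {
        $pathInfo = $request->getPathInfo();
        $route = lookupResolvedRoute($pathInfo, $this->cacheSegments, Yii::app()->runtimePath);
        if ($route !== null) {
            return $route;                       // cache hit: skip the rule loop entirely
        }
        $route = parent::parseUrl($request);     // fall back to normal rule matching
        // ...store $route in the right bucket file here; a real implementation
        // would also need to restore any GET parameters bound by the rule...
        return $route;
    }
}
```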

What do you think?

Overhead from URL parsing is negligible in my opinion. Think about YouTube caching a million video URLs :D It wouldn't make much sense. There are many other things that can squeeze out some notable requests-per-second.

http://www.yiiframework.com/doc/api/1.1.0/CUrlManager#cacheID-detail
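For reference, pointing the URL manager at a cache component is just application config. Note this caches the compiled rules rather than resolved URLs, and the component IDs below are only the usual defaults:

```php
<?php
// protected/config/main.php (excerpt)
return array(
    'components' => array(
        // Any cache component works; CFileCache needs no extra setup.
        'cache' => array(
            'class' => 'CFileCache',
        ),
        'urlManager' => array(
            'urlFormat' => 'path',
            'cacheID'   => 'cache',   // default; set to false to disable rule caching
            'rules'     => array(
                'post/<id:\d+>' => 'post/view',
            ),
        ),
    ),
);
```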

Have you done tests to see how slow it actually is? Run tests with 0 URL rules, 10, 50, and 100 and see what it actually does. Maybe it's not that bad.
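Something like this would give rough numbers - it only approximates the rule loop with sequential regex matches rather than exercising CUrlManager itself:

```php
<?php
// Rough micro-benchmark: how long does it take to try N patterns in order?
// Stand-in for CUrlManager's rule loop; not Yii code.

function benchmarkRules($ruleCount, $iterations = 10000)
{
    $patterns = array();
    for ($i = 0; $i < $ruleCount; $i++) {
        $patterns[] = '#^section' . $i . '/(\d+)$#';
    }
    // Worst case: the request only matches the last rule.
    $path = 'section' . max(0, $ruleCount - 1) . '/42';

    $start = microtime(true);
    for ($n = 0; $n < $iterations; $n++) {
        foreach ($patterns as $p) {
            if (preg_match($p, $path)) {
                break;
            }
        }
    }
    return (microtime(true) - $start) / $iterations * 1000; // ms per parse
}

foreach (array(0, 10, 50, 100) as $count) {
    printf("%4d rules: %.4f ms per parse\n", $count, benchmarkRules($count));
}
```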

I guess I’m trying to solve a problem that doesn’t really need to be solved at the moment.

Guess I’ll save my concerns for the day when I actually build a site with hundreds of routes :wink: