I have a production Yii application deployed on two servers, each with the complete app, sharing a common database. In front of them is a load balancer that is supposed to keep sessions sticky to one server by using a cookie to keep things in order.
What I am seeing is a few messages in the Yii application log where a particular asset is missing. After looking at the asset folders on both servers, each has three directories: two are the same on both, and one differs on each.
I am guessing that either the website user is not allowing cookies, or something stays cached in the browser long after the load balancer cookie expires (and the user gets a new random server).
In trying to provide a good user experience, is there a correct way to make the asset folders have the same name (or force the same name) in the event the load balancer cookie expires?
I’m not exactly sure how Yii comes up with the name of the folder, but I’m guessing I can’t just copy the differently named folders across to each server to make sure they always exist for the user?
Set up each application to publish assets to a different directory:
application on server1 publishes to the /assets1/ directory
application on server2 publishes to the /assets2/ directory
the load balancer forwards /assetsX/YYYYY requests to server X.
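In Yii 1.x this could be done through the `assetManager` application component; a minimal config sketch, assuming the stock CAssetManager and a directory under the webroot (the /assets1 name is just this thread's example, and the paths would need adjusting to the actual layout):

```php
// protected/config/main.php on server 1 (server 2 would use /assets2).
// Sketch only: assumes Yii 1.x CAssetManager and a standard app layout.
return array(
    // ...
    'components' => array(
        'assetManager' => array(
            'basePath' => dirname(__FILE__) . '/../../assets1', // filesystem dir, must be web-writable
            'baseUrl'  => '/assets1',                           // URL prefix the load balancer can route on
        ),
        // ...
    ),
);
```

With distinct URL prefixes per server, the balancer only needs a path-based routing rule, not session awareness.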
Another solution could be to publish assets to an NFS volume accessed from both servers. With this option, remember also to publish assets based only on the directory (assuming the asset location on both servers is the same), or every server will still publish its own copy of each asset and generate different URLs in the HTML…
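For the shared-volume option, both servers would point the asset manager at the mount; a config sketch, assuming a hypothetical NFS mount at /mnt/shared exported to both instances:

```php
// Same assetManager config on both servers. Sketch only:
// /mnt/shared/assets is a hypothetical NFS mount point and must be
// writable by the web server user. The web server also needs an alias
// or symlink so the /assets URL maps onto this shared directory.
'components' => array(
    'assetManager' => array(
        'basePath' => '/mnt/shared/assets', // shared filesystem location
        'baseUrl'  => '/assets',            // identical URL from both servers
    ),
),
```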
That could work, but it seems like a lot of work to maintain. One issue is that I’m not sure the load balancer supports smart features like that. It’s Amazon’s AWS, and I’m not sure what type of application rules can be created.
What I would be more interested in hearing about is how these asset IDs are created, and why one would differ from the other when the code is the same across both servers. Maybe there is some simple way to ensure the generated IDs are the same for the same resources.
I increased the ‘stickiness’ of the load balancer to an instance by extending the life of the cookie that keeps track of it. Will see if that helps…
Thanks for the ideas, hopefully I won’t have to mess with some of them
Now, if the application is in a different location on each server, or the modification time differs, the asset names will be different. (If I recall correctly, Yii 1.x hashes the source directory path together with its modification time to produce that folder name.)
Next, assets are published by the PHP scripts during request processing.
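Since the folder name depends on the source files' mtime, and git sets mtimes to checkout time (so each server's deploy gets different timestamps), one workaround is to normalize mtimes as a deploy step on every server. A sketch, demonstrated on a throwaway directory; the real deploy root (e.g. /var/www/app) and the timestamp are placeholders to pick yourself:

```shell
# Demonstration on a temp directory; in practice run the `find` line
# against your deploy root on each server after every git deploy.
APP_ROOT=$(mktemp -d)
echo 'console.log(1);' > "$APP_ROOT/script.js"

# Give all files one fixed mtime so an mtime-based hash produces
# identical asset folder names on both servers.
find "$APP_ROOT" -type f -exec touch -t 202301010000 {} +

date -r "$APP_ROOT/script.js" +%Y%m%d%H%M   # -> 202301010000
```

The downside is that a fixed stamp defeats mtime-based cache busting, so the timestamp should be bumped on each release (the commit date is a natural choice).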
Consider this case: you have a fresh install with an empty assets folder on two servers. A client connects to server 1 and requests some page. During rendering, assets are published (for example, the page requires some JS files to render a widget). But those files are published only on server 1. If a request for such an asset is redirected to server 2 by the load balancer, you will get a 404.
So you need either some kind of sticky routing (if the page request was sent to server 1, then all resources for that page must come from the same server), or you need to synchronize the assets directory somehow (for example, a shared NFS volume), or pre-populate it with all needed assets on every server (which could be very hard to achieve and maintain across subsequent deployments).
That’s what I’m doing right now: I set up the load balancer to use its own session cookie. The only thing I can think of is that possibly the user does not accept cookies, and not much can be done about that. I also made the life of the session longer to see if that would help.
The app is installed on two identical AWS instances and I’m using git to deploy the code base. So I’m not sure of all the magic Yii is doing with the asset naming, but two out of the three are the same. Might be worth a try to do a clean install on each and see what the asset IDs do. Would be nice if they were all the same, but I guess I can’t expect that!
Well, I extended the lifetime of the session and I think it helped, but if a user can’t accept a cookie it’s still going to be an issue.
I did find out that the other assets were not real; they were coming from my git release, which was picking them up from dev. Once I excluded them, I only had the one asset, for jQuery (it looked like it, anyway). I ‘hack’-solved the issue by copying each asset directory to the other machine. Not a great way to go, but it keeps things quiet. Again, a hack, but it gets the noise out of the log and keeps the end user running fine.