Strange behaviour when using HTTP/1.1


I have 5 VIPs pointing to the same 2 real servers and use HTTP/1.1 Host headers to differentiate the sites on the web servers. When one of the sites is down for maintenance, the health check takes the entire real server down, even for the other 4 sites. It comes back up as soon as the health check runs against another site, then goes down again when the check reaches the site under maintenance, and so it continues until all sites are back up. I have disabled drain-stop on real-server failure, so the sites misbehave, to say the least.

On the start page it says I have 10 real servers, but when I click details it only shows the 2.

Can I use SubVSs so that only the current site is affected by maintenance, or how can I configure this without assigning a different IP for each site on each real server?

7 comments

Christian Scheller Official comment

Are you saying your VIPs are the real servers' IP addresses? That is a misconfiguration and cannot work.

 

Please assign VIPs from the same subnet. If this is not possible, enable "Remote Real Servers" under Network Options.

 

Mark Deegan

Hello Ted,

In the real-server health check you can specify a URL in the HTTP string; this would look like "/owa", for instance. That makes the real-server check look for a page within the website, as opposed to just checking whether the service answers at all. You can also specify the host, e.g. "mail.domain.com", in the Host field of the HTTP/1.1 settings. See attached screenshot.
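As a rough illustration (a sketch, not the LoadMaster's actual implementation), the per-site check described above boils down to a plain HTTP/1.1 HEAD request with the site's hostname in the Host header; "/owa" and "mail.domain.com" are just the example values from this thread:

```python
# Sketch of the raw request a per-site HTTP/1.1 health check would send
# to a real server. Host and path are example values, not real settings.
def build_health_check(host: str, path: str = "/owa") -> bytes:
    """Build an HTTP/1.1 HEAD request targeting one site on a shared server."""
    return (
        f"HEAD {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"        # the Host header selects the site binding
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

print(build_health_check("mail.domain.com").decode())
```

Because the Host header selects the IIS site binding, the same real-server IP can answer differently for each hostname, which is what makes a per-site check meaningful.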

Regards,

Mark

ted.molin

I have done this on each one of the VIPs. They each point to the exact same IPs for the 2 real servers, but they all have different hostnames in the HTTP/1.1 Host entry.

When VIP #1 fails for some reason, the load balancer takes the real server's IP offline, which affects all the other VIPs.

ted.molin

Each VIP has its own IP, so there are 5 IPs for the VIPs (x.x.x.1-5) on the load balancer, but only 2 real servers (on different IPs, x.x.x.101 and x.x.x.102) behind it. Each site on those 2 servers has its own hostname binding in IIS; call them 1.example.com through 5.example.com. Each VIP is set up as Mark describes (HTTP protocol, HTTP method HEAD, Use HTTP/1.1) with the site's hostname binding in the HTTP/1.1 Host field: for site 1 that's 1.example.com, for site 2 it's 2.example.com, and so on. All 5 VIPs have the same 2 real servers.

Now if 1.example.com goes down on real server 1, the load balancer lists that real server as down even though 2.example.com works as expected. Every time the health check on VIP #1 probes real server 1, it marks real server 1 as down for all of the VIPs. When the health check on VIP #2 runs, it marks real server 1 as up for all VIPs, including for site 1, and so it goes on until site 1 is up again. If 1.example.com goes down, for example for maintenance, it takes all the other VIPs with it whenever the LB checks the health of that site; they come back up again when the LB checks the other VIPs' health.
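The flapping described above can be sketched as a toy simulation (example hostnames from this thread, not LoadMaster internals): if the balancer keeps one health flag per real-server IP, every per-site check overwrites that single flag, whereas one flag per (site, IP) pair keeps only the maintenance site down.

```python
# Simulate repeated per-site health checks against one real server whose
# site 1 is down for maintenance. One shared flag per real-server IP
# flaps up and down; one flag per (site, IP) pair stays consistent.
site_up = {f"{n}.example.com": (n != 1) for n in range(1, 6)}  # site 1 down
rs_ip = "x.x.x.101"

shared_health = {}    # one flag per real-server IP (the behaviour observed)
per_site_health = {}  # one flag per (site, real-server IP) pair
flaps = 0
prev = None
for _round in range(3):                      # the checks keep cycling
    for host, up in site_up.items():
        shared_health[rs_ip] = up            # each check overwrites the flag
        per_site_health[(host, rs_ip)] = up  # isolated per-site state
        if prev is not None and prev != up:
            flaps += 1                       # server flipped up or down again
        prev = up

print("shared flag flapped", flaps, "times over 3 rounds of checks")
print("sites down when tracked per site:",
      [h for (h, _), up in per_site_health.items() if not up])
```

Over three rounds of checks the shared flag flips five times, while the per-site view simply shows 1.example.com down the whole time, which is the behaviour you would want.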

ted.molin

Do you understand the problem now? I can explain further or somehow show you what I mean if you don't!

 

Christian Scheller

In this case it would be wise to split the services up. This way they won't affect each other.

 

ted.molin

Well, they shouldn't have to, since they are different sites.

I did a workaround now and assigned 8 different IPs to the two web servers, 4 each. Still using the same hostname bindings as before, I just pointed the VIPs at the different real-server IPs, and now it works. If 1.example.com is down or unavailable, it doesn't affect the other sites. Same server, just a different IP.

One VIP's health check shouldn't affect the health of the other VIPs, even if the real-server IP is the same!