Summary: Zoning on FC Switches!

From: Ayaz Anjum <>
Date: Mon May 23 2005 - 23:56:54 EDT
Thanks to all who took the time to reply to my query. The common
recommendation was to zone the switch with one initiator and one target per
server, since HBAs are known to affect other servers in their zones in the
event of hardware or driver problems. However, I am still curious whether
creating so many zones on the FC switch has any adverse effects, in terms of
performance or otherwise.



We create a zone for each server.  If you have a zone for each OS then
just by sight you don't know which WWN is associated with which server,
only that it is supposed to be of a certain OS.  With a zone for each
server, you know that a particular WWN is supposed to be of a particular
zone (and name your zone something meaningful to help).  Besides, isn't
zoning by OS counter to one of the main functions of zones - what happens
when you have a Windows server that you don't want to be able to see other
Windows servers' shares?
In most cases, you want single-initiator zones, meaning you can have
multiple targets, but only one initiator per zone. Which means if you have
two HBAs per host, you are looking at two zones per host. Some will also
argue for single-target zones, but I don't like this. Having a target exist
in multiple zones I can live with; an HBA in multiple zones, on the other
hand, seems silly to me.
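To make the arithmetic concrete, here is a minimal sketch of the
single-initiator rule as a zone generator: one zone per HBA, each containing
that HBA plus every storage target. All names and WWNs below are hypothetical
examples, not anything from a real fabric.

```python
# Sketch: single-initiator zoning -- one zone per HBA, all storage
# targets as members of each zone. A host with two HBAs gets two zones.

def single_initiator_zones(host_hbas, storage_ports):
    """host_hbas: {hostname: [hba_wwn, ...]}
    storage_ports: [target_wwn, ...]
    Returns {zone_name: [initiator_wwn, target_wwn, ...]}."""
    zones = {}
    for host, hbas in host_hbas.items():
        for i, hba in enumerate(hbas):
            # Exactly one initiator per zone; targets may repeat across zones.
            zones[f"z_{host}_hba{i}"] = [hba] + list(storage_ports)
    return zones

# Hypothetical example: one host, two HBAs, two array ports.
hosts = {"web01": ["10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"]}
targets = ["50:06:01:60:10:60:08:24", "50:06:01:68:10:60:08:24"]
zones = single_initiator_zones(hosts, targets)
for name, members in sorted(zones.items()):
    print(name, "->", " ".join(members))
```

Note how the target ports appear in both zones (which the poster is fine
with) while each HBA appears in exactly one.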

Now, there will be others (like a large online auction I can think of) who
don't like zones at all and just have one giant zone. This (to me) is simply
insane: state-change updates (RSCNs) will flood the zone. But they like it,
even though it takes hosts down sometimes. To each their own.

Also keep in mind that how this is all done is up to each switch vendor, so
you need to find out what they recommend as "best practice" (I think Brocade
has a few books out, for example) and also find out what your storage vendor
wants (EMC, for example, will demand single-initiator zones, and will try to
talk you into single-target-and-initiator zones.)

hope this makes sense

I have always done it by host. Each host has a zone that is the
computer and the disk or tape it talks to. It is the least confusing
and safest, but you end up with a lot of zones.
I use the one-zone-per-server approach and that works well for me. I overlap
the storage ports between Windows and Solaris systems with no problems.

I think as far as Sun is concerned that's fine. The main thing is to not
have any Windows boxen in the same zone as a Solaris server.

This approach is logical and neat to use. I normally do the following.

create an alias for the server ports.

servername -> 4,12 2,10

then an alias for the storage (dual controllers, dual pathed)

storagename -> 1,2 3,12 1,5 3,10

Then I create a zone named after the server.

zone_servername -> servername, storagename

That works well for me and is easy to follow. Also, if you stuff up a zone,
then you only break one server.
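On a Brocade switch, the alias-and-zone steps above might look roughly like
this in the Fabric OS CLI (a sketch only: the alias and zone names and the
domain,port pairs are the examples from above, "my_cfg" is a hypothetical
config name, and other vendors' syntax will differ):

```
alicreate "servername", "4,12; 2,10"
alicreate "storagename", "1,2; 3,12; 1,5; 3,10"
zonecreate "zone_servername", "servername; storagename"
cfgadd "my_cfg", "zone_servername"
cfgsave
cfgenable "my_cfg"
```

cfgadd assumes a zoning configuration called my_cfg already exists; the first
time around you would use cfgcreate instead.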

Hope this is useful. I have around 25 servers and 6 disk arrays in my fabric.

I create a zone for each individual HBA (each host has 2, going to different
switches, so effectively one zone per host).

I would recommend the above unless licensing is such to make it cost
prohibitive (e.g. if your switch uses a license per zone or some such).

The advantage is that if there is any chatter from one HBA, it is isolated
from the rest of the system.  This could be due to HW problems on the HBA or
a poor implementation of the HBA driver software (I've heard that some brands
of HBAs are "chatty").

Best practice is one zone per HBA.

sunmanagers mailing list
Received on Tue May 24 00:03:42 2005

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:43:47 EST