Business Strategy Brief - iSCSI vs FC (Fibre Channel) SANs
When we propose a SAN infrastructure for growing small businesses, our project coordinators usually present at least two fabric options. The most popular storage interconnects today are iSCSI, Fibre Channel, and SAS. While SAS is fast and easy to manage, we usually don’t consider it a true “SAN” fabric due to distance and replication limitations across sites (though it is a great solution for replacing U320 SCSI direct attached storage). Fibre Channel over Ethernet (FCoE) is another option but is usually reserved for converging IP and SAN traffic in dense blade environments or transporting storage across extended links.
This leaves native FC and iSCSI as our most popular proposals for any storage needs requiring connectivity or replication at distances greater than 30 ft. We would like to start by explaining that 1Gb iSCSI is without question the least expensive SAN fabric. Any quality small business switch and NIC from D-Link, NetGear, Cisco / Linksys, etc. can be used to create a SAN. Existing CAT5 wiring is adequate as well, so there are no expensive SFPs or optical cables to purchase. Performance is adequate for light database-driven applications, Exchange mail servers, SAN boots for 1-3 servers, and Hyper-V Cluster Shared Volumes (CSVs). Due to the low cost and ease of integration of 1Gb iSCSI, we almost universally recommend such a fabric to small businesses building their first SANs unless we expect larger I/O loads. The problem with 1Gb iSCSI is that performance drops off drastically during moderate I/O loads. 1Gb iSCSI is comparable to 1Gb FC, delivering about 100 MB/s transfer speeds under optimal conditions.
[1Gb/s = about 1000 Mb/s divided by 8 bits per byte = 125 MB/s; subtract roughly 10-20% overhead for frame and packet encapsulation (more on this later) -> ~100-112 MB/s]
To put this in perspective, a single Seagate Cheetah 15K SAS disk drive can produce about 100 MB/s of data during sequential read operations. Combine 10 of these drives into an EMC VNXe3100 or EMC VNXe3300 storage shelf and it is very possible to pump out close to a gigabyte of data per second (again assuming best-case sequential loads). Emerging solid-state disk drives can further saturate the 1Gb iSCSI link under real-world random I/O operations.
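The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The 10% overhead figure, per-drive throughput, and drive count are the assumptions from the text, not measured values:

```python
# Back-of-the-envelope throughput estimate for a 1Gb iSCSI link
# versus an aggregate of 15K SAS drives (figures from the text).

LINK_GBPS = 1.0    # nominal 1Gb iSCSI link speed
OVERHEAD = 0.10    # rough TCP/IP + Ethernet encapsulation overhead
DRIVE_MBPS = 100   # one Seagate Cheetah 15K, sequential reads
DRIVES = 10        # drives in the hypothetical VNXe shelf

raw_mbps = LINK_GBPS * 1000 / 8             # 125 MB/s before overhead
effective_mbps = raw_mbps * (1 - OVERHEAD)  # ~112 MB/s best case on the wire
aggregate_mbps = DRIVE_MBPS * DRIVES        # ~1000 MB/s from the shelf

print(f"1Gb iSCSI usable:        ~{effective_mbps:.0f} MB/s")
print(f"10-drive shelf:          ~{aggregate_mbps} MB/s")
print(f"Links needed to keep up: {aggregate_mbps / effective_mbps:.1f}")
```

Even under the generous 10% overhead assumption, a single 10-drive shelf can outrun roughly nine 1Gb links during sequential reads, which is the performance cliff described above.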
So far, we just want you to take away the fact that 1Gb iSCSI is a great solution for small businesses and branch offices. 1Gb iSCSI SANs will easily fit into a 10GbE or 40GbE infrastructure down the road, especially if copper 10GbE over twisted-pair wiring with RJ45 connectors (aka 10GBASE-T) becomes mainstream. HP P2000 G3 and P4300 G2 SANs already have upgrade paths to convert 1Gb iSCSI controllers into 10GbE controllers.
However, as your small business turns into a medium business, you will need SAN fabrics operating at speeds greater than 1Gbps. As of 2012, 8Gb FC and 10Gb iSCSI represent the fastest (readily available) fabric speeds. On the "low end," 4Gb FC is still available. 2Gb FC HBAs and switches have been discontinued by Emulex and QLogic.
Customers needing storage connectivity at speeds > 1Gbps usually contact us first about iSCSI options and are hesitant to even entertain an FC proposal. These customers cite cost and future-proofing as their objections to an FC fabric. While Storage Networks deploys a large number of iSCSI SANs, as they are the most flexible and cost-effective solution for many applications, we would like to spend some time dispelling myths about Fibre Channel, because we are confident that native Fibre Channel SANs will hold a significant market share right beside 10GbE iSCSI for at least another decade.
Myth: Fibre Channel has been replaced by iSCSI
Fibre Channel is not dead, nor is it dying. In September of 2010, the FCIA (Fibre Channel Industry Association) announced ratification of the 16Gb FC specification. 2012 will be the year 16Gb FC products start entering the mainstream market while the FCIA outlines plans for a 32Gb FC specification. Furthermore, Fibre Channel standards have traditionally required 3-generation backward compatibility. In other words, today's 8Gb FC investment should be supported until 128Gb FC devices are deployed. Therefore, you don't have to worry about the other extreme: FC developing too quickly and leaving today's 8Gb investments in the dust.
The biggest threat to Fibre Channel is FCoE (Fibre Channel over Ethernet), not iSCSI! FCoE is about getting FC traffic to flow over Ethernet wire and switches (making a converged fabric). Mechanisms for attaching FC HBAs and storage arrays to “FCoE Converged Network Adapters” already exist in Cisco and Brocade switches.
One source for the Fibre Channel obsolescence myth may be the decision of storage vendors such as HP and EMC to move away from Fibre Channel disk drives in favor of SAS disk drives. Since the late 1990s, disk drives have shipped with dual-port 40-pin SCA Fibre Channel connectors, so it seemed natural to use such drives in arrays with FC backends. However, since the disk drives communicate with controllers via the SCSI protocol just as SAS disk drives do, there is no inherent reason to attach drives via Fibre Channel. In order to drive costs down, disk drive manufacturers such as Seagate and Hitachi have worked with HP and EMC to qualify SAS drives for use in storage enclosures and JBODs. After all, SAS disk drives can be used in desktops, workstations, and servers, whereas FC drives were usually reserved for storage arrays only; the economies of scale of mass production help push prices down.
As of 2012, 8Gb FC frontend connectivity is shipping with the most popular storage products, including HP's P2000 G3 and EMC's VNX5100 and VNX5500 arrays. We have even heard rumors that the EMC VNXe (entry-level) SANs may be getting FC host connectivity! Fibre Channel is still quite alive and well.
Myth: Fibre Channel is more expensive than iSCSI
Until the introduction of 10GbE, comparing Fibre Channel to iSCSI was like comparing apples to oranges. Sure, if you compare the cost of deploying a 1Gb iSCSI array such as the VNXe3100 / VNXe3300 against an 8Gb HP P2000 G3 or EMC VNX5100, the cost of the Fibre Channel solution will be higher, since just about all businesses already have 1Gb-capable switches and NICs installed. However, 8Gb FC should be compared to 10Gb iSCSI (10GbE), which is quite possible to do now.
Assuming that your business currently has neither 10GbE devices nor any 8Gb FC devices, here are some cost estimates for building a SAN (all based on MSRP, so the prices are inflated):
| Component | 8Gb FC | 10GbE |
| --- | --- | --- |
| Switch | QLogic SANbox 3810 8-port: $3,000 | Cisco Nexus 5010 20-port: $17,250* |
| CNA / HBA | QLogic QLE2560 single port: $1,450 | QLogic QLE8240 single port: $1,450 |
| SFP+ | SFP8-SW-1PK: $300 | QLogic 10Gb SFP+ SR optical transceiver: $900** |
| 30m cable | $75 (OM3 optical cable) | $75 (OM3 optical cable) |
| Storage | HP P2000 G3 FC dual controller, no drives: $9,950 | HP P2000 G3 dual controller, no drives: $12,150 |
** Using 10GbE over twisted-pair Ethernet wire (10GBASE-T) may be a game changer here since SFPs won't be necessary. 10GbE requires CAT6 cable at a minimum, and CAT6A is preferred; some environments will require shielded twisted pair (STP). Copper-only switches are very expensive and not readily available, however. The NetGear XSM7224S, for example, only has four RJ45 connectors, and the Cisco Nexus 5010 has no RJ45 connectors at all.
The point we are trying to make here is that 8Gb FC is NOT more expensive than 10GbE, and in some cases it is in fact cheaper. As 10GBASE-T emerges and (most likely) drives the cost of 10Gb iSCSI down, 16Gb FC will have entered the mainstream and will provide faster speeds to justify the potentially higher cost due to optical (SFP) requirements.
Myth: Adding Fibre Channel storage increases management complexity
FC management complexity is a matter of opinion. Most Storage Networks engineers think that keeping storage (SAN) traffic on separate hardware and fabrics makes it easier to reason about your IT infrastructure. IP storage can be segregated into VLANs and/or separate physical LANs, but there is something to be said for having physically distinct HBAs and switches distinguishing traffic types. Other engineers argue that they "already understand Ethernet and TCP/IP," so there is no learning curve involved with implementing iSCSI. We usually counter that iSCSI management is an art: defining QoS, optimal Ethernet frame sizes, VLAN configurations, etc., is vital for squeezing the best performance out of iSCSI. Is this art easier to master than learning how to implement FC? We don't have an answer for that. In short, we usually find that IT gurus used to dealing with storage (FC, U320 SCSI, SAS, etc.) prefer FC SANs, while IT gurus who have spent their careers keeping corporate LANs and WANs in tip-top shape, and who now face the concept of converging storage and IP traffic, prefer working with iSCSI.
Here are some of the facts about FC that you must consider:
On the protocols: Fibre Channel was built from the ground up as a STORAGE protocol. There are fewer layers of abstraction: SCSI blocks are transmitted directly in FC frames. iSCSI was developed on top of the successful TCP/IP stack: SCSI blocks are encapsulated in TCP segments, then IP packets, and finally Ethernet frames. TCP/IP encapsulation adds an entire level of management complexity to fine-tuning iSCSI performance, hence the "art" we spoke of above.
On LUN Masking and Zoning: Both iSCSI and FC employ the concept of LUN masking, or associating LUNs on a target with defined initiators. However, Fibre Channel supports zoning, whereas IP storage has no directly comparable mechanism.
Fibre Channel SAN fabric management is (in many of our engineers' opinions) easier. Fibre Channel targets and initiators can be zoned by physical port on a switch (hard zoning) or by WWN (soft zoning). Securing storage system access can be as simple as telling the switch to "zone" two ports together and not allow any other entity (port, WWN, etc.) to access the targets and initiators of the defined zone. Vendors such as Brocade allow "aliases" to abstract away nitty-gritty details such as port numbers and WWNs, making zoning even easier.
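As a rough illustration of how compact soft zoning can be, here is what zoning one host HBA to one array port might look like on a Brocade Fabric OS switch. The alias names and WWNs are made up for this example, and exact syntax varies by FOS release:

```
alicreate "host1_hba",  "10:00:00:00:c9:12:34:56"
alicreate "array_ctlA", "50:06:01:60:12:34:56:78"
zonecreate "host1_array", "host1_hba; array_ctlA"
cfgcreate "prod_cfg", "host1_array"
cfgenable "prod_cfg"
```

Once the configuration is enabled, only the two aliased WWNs can see each other; no subnets or IP addresses are involved.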
The closest iSCSI equivalent to zoning is a VLAN. Most storage systems have multiple Ethernet ports that can be assigned to separate VLANs. Typically you are limited to one VLAN per Ethernet port, so a four-port storage device maxes out at about four VLANs, whereas FC allows for hundreds of zones. Creating VLANs also involves defining subnets, assigning IP addresses, etc., all of which are unnecessary with FC zoning.
On LUN Access: We are going to be as bold as to state that accessing Fibre Channel LUNs is simpler than accessing iSCSI LUNs. iSCSI storage requires initiator software. While the initiator software is usually bundled with the OS free of charge, it is still another piece to add to the puzzle of management. Unless you are running iSNS (Internet Storage Name Service), you must enter the iSCSI IQN (iSCSI Qualified Name) or EUI (Extended Unique Identifier) of the target containing your storage in the initiator software before you can see your LUNs from a server. With iSCSI name servers, vendors are working on auto-discovery to eliminate this manual task, but the process has not yet been perfected, and initiator software is still required. Fibre Channel SANs, on the other hand, do not require any special software for LUN discovery on a server. Once a target and initiator are zoned, and a LUN is masked, the initiator will see all of the LUNs available on a target that are masked to its WWN. This also makes FC SANs easier to boot from (see: http://www.storagenetworks.com/writeups/EMC/san-boot/emc-boot-from-san.php). Note: you can boot from SAN with iSCSI, but the process of doing so varies among HBA / CNA manufacturers.
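For comparison, here is roughly what the manual discovery and login steps described above look like with the Linux open-iscsi initiator. The portal IP and IQN below are placeholders, not real devices:

```
# Ask the array's iSCSI portal which targets it advertises
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to a discovered target by its IQN to expose its LUNs
iscsiadm -m node -T iqn.1992-04.com.example:array1.target0 \
         -p 192.168.10.50:3260 --login
```

These are the extra moving parts (software, portal addresses, IQNs) that a zoned-and-masked FC LUN simply does not need.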
Myth: FC is slower than iSCSI as of 2012
This is another apples-to-oranges comparison. 8Gb FC is often compared with 10GbE, but 16Gb FC is emerging, and it will be faster than iSCSI over 10GbE. Some camps even go as far as to say that 8Gb FC is faster than 10GbE. Since 10GbE is still emerging, we have not dealt with a sufficient number of switches and HBAs to make any concrete comparisons.
Here are the facts at play, however, which directly relate to the speeds:
iSCSI relies on TCP/IP. As such, all SCSI commands and data must be encapsulated in TCP segments and then IP packets before being shipped from target to initiator and vice versa. TCP operates at layer 4 of the OSI network model; IP operates at layer 3; packets are further encapsulated and transported in layer 2 Ethernet frames. FC natively operates at a lower protocol level, relying on FC "frames" to carry SCSI blocks rather than packets and segments. In fact, Fibre Channel can even operate over the same Ethernet wire at layer 2 with FCoE, but that is no longer iSCSI.
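As a rough illustration of that encapsulation cost, the sketch below computes how much of each Ethernet frame is left for payload after the TCP/IP headers. It uses the standard minimum header sizes (no TCP/IP options) and ignores the iSCSI PDU header, which is amortized across frames:

```python
# Share of each on-the-wire Ethernet frame available to the TCP payload,
# using standard minimum header sizes (no TCP or IP options).

ETH_OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap
IP_HEADER = 20
TCP_HEADER = 20

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-the-wire bytes that carry TCP payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

print(f"standard 1500-byte MTU: {payload_efficiency(1500):.1%}")
print(f"jumbo 9000-byte MTU:    {payload_efficiency(9000):.1%}")
```

This is one reason jumbo frames are a standard knob in iSCSI tuning: larger frames amortize the fixed per-frame headers over more payload.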
iSCSI Bottleneck 1: If you don't have a dedicated iSCSI HBA, the server CPU must perform the TCP/IP encapsulation. If you have an HBA such as the QLogic QLE8240 mentioned above, a TOE (TCP/IP Offload Engine) chip on the HBA takes care of this so the server does not have to. This is why we did not choose the cheapest 10GbE network adapter in our cost comparison. Depending on the speed of the server CPU and/or the quality of the chosen NIC, performance degradation due to TCP/IP encapsulation can be significant.
iSCSI Bottleneck 2: TCP/IP is not a lossless transport, meaning that if segments are dropped, they must be retransmitted (unlike UDP, which simply drops them). The way TCP/IP handles network congestion and buffer overloads does not align well with how applications and drivers traditionally work with storage. If a network connection becomes congested, TCP/IP segments are dropped, and TCP/IP and iSCSI will request that the storage controller or server resend them. The segments, containing the SCSI blocks to be read or written, now arrive out of order, and TCP must re-order them on the receiving end, adding to the overhead. Fibre Channel, on the other hand, does not allow out-of-order delivery because it is a lossless protocol.
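A toy sketch of that reordering cost: the receiver must buffer segments that arrive out of order and can only hand data up to the SCSI layer once the sequence is contiguous again. The sequence numbers and payload labels here are made up for illustration:

```python
# Toy in-order reassembly: segments arriving out of order are buffered
# until the missing piece shows up, delaying delivery to the upper layer.

def reassemble(segments):
    """Deliver (seq, data) pairs to the upper layer in sequence order."""
    buffer = {}
    next_seq = 0
    delivered = []
    for seq, data in segments:
        buffer[seq] = data
        # Drain every contiguous segment now available
        while next_seq in buffer:
            delivered.append(buffer.pop(next_seq))
            next_seq += 1
    return delivered

# Segment 1 was dropped and retransmitted, so it arrives last;
# segments 2 and 3 must sit in the buffer until it does.
arrivals = [(0, "blk0"), (2, "blk2"), (3, "blk3"), (1, "blk1")]
print(reassemble(arrivals))  # blocks come out in order: blk0..blk3
```

The buffering and bookkeeping shown here is work a TOE chip or the host CPU must do on every retransmission; FC's lossless, in-order delivery avoids it entirely.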
Further Considerations
Other Reasons to select Fibre Channel (FC)
Some people think Fibre Channel SANs are more secure than IP-based storage simply because the fabric is more difficult to connect to. FC HBAs do not come installed in laptops and PCs, so it is unlikely that an unauthorized employee or contractor can easily attach his or her personal computer to your storage and steal data. An iSCSI SAN, however, may be accessible via any Wi-Fi or Ethernet port in a building IF the network is not properly segregated, either physically or with VLANs. iSCSI does offer many security mechanisms, though, including CHAP, IPsec, ACLs, etc., so security is not a concern if you know what you are doing.
Other reasons to select iSCSI
A major disadvantage of FC SANs is the inability to accommodate unified storage, unless the storage system has both FC and Ethernet ports. Unified storage systems (such as the EMC VNXe3100 and EMC VNXe3300) offer file-level storage (CIFS and NFS) in addition to block-level storage (iSCSI). Unified storage eliminates the need for a dedicated file server on your network (which might otherwise be attached to the FC SAN). Further, the EMC VNXe3100 and EMC VNXe3300 offer advanced file server features like compression and de-duplication right on the storage system to further improve storage efficiency. File-level protocols MUST be accessed via TCP/IP, over the same RJ45 (twisted-pair) connectors that you use for iSCSI access. We are not aware of any storage systems that currently offer FC host connectivity as well as IP connectivity AND unified storage.
(Interestingly, the HP P2000 G3 SANs ship with both iSCSI and FC ports but do not have file level storage so they are not unified.)