Saturday, November 1, 2008

Five new developments in storage infrastructure solutions

First there was Ethernet. Then, there was IP over Ethernet. Next came the mixed use of Ethernet, IP, and the SCSI command set (iSCSI) to simplify storage and to bring down the cost and complexity of storage. Today, iSCSI and Fibre Channel are fighting it out in all but the largest enterprises, and both have their pros and cons. Even though these are the two primary contenders in today’s block-level shared storage market, there are some other alternatives. The line is continuing to blur between these solutions as new initiatives are brought to market. Let’s take a look at some new developments in storage infrastructure solutions.

Faster Fibre Channel

Two Gbps and 4 Gbps Fibre Channel are very common in the marketplace, and manufacturers are just now beginning to demonstrate 8 Gbps Fibre Channel gear. There are also standards in the works for Fibre Channel running at 10 Gbps and 20 Gbps. This venerable technology continues to improve to meet the increasingly robust storage needs of the enterprise. In some cases, Fibre Channel solutions on the market rival iSCSI solutions from a price perspective (e.g., the Dell/EMC AX150) for simple solutions. However, faster Fibre Channel still has the same skill-set hurdles to overcome: just about every network administrator knows IP, but Fibre Channel skills are a different matter.

iSCSI over 10G Ethernet

iSCSI has become a technology that deserves short-list status… and at a gigabit per second, no less. Many iSCSI naysayers point to its slower interlink speed as a reason that it won’t stack up to Fibre Channel. However, iSCSI solutions are now on the cusp of moving to 10 Gbps Ethernet, meaning that iSCSI’s link speed could surpass even the fastest Fibre Channel solutions on the market. Of course, iSCSI still has IP’s overhead and latency, so we’ll see how well 10 Gbps Ethernet performs in real-world scenarios when compared to 8 Gbps Fibre Channel.

Further, 10 Gbps Ethernet gear is still extremely expensive, so, for the foreseeable future, 10 Gbps-based iSCSI solutions probably won’t fit the budgets of many organizations considering iSCSI as a primary storage solution. All this said, interlink speed is not necessarily the primary driver for replacement storage infrastructure in the enterprise. Performance boosts are often achieved by adding more disk spindles to the infrastructure or by moving to faster disk drives (e.g., from SATA to 15K RPM SAS or Fibre Channel).
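To put the raw link speeds in perspective, here's a back-of-the-envelope sketch comparing usable payload throughput for 8 Gbps Fibre Channel and 10 Gbps Ethernet iSCSI. The encoding efficiencies reflect the standard 8b/10b and 64b/66b line codings, but the protocol-overhead percentages are rough assumptions for illustration, not measured values:

```python
# Illustrative comparison of raw link speed versus usable payload throughput.
# Overhead figures are assumptions for the sake of the example.

def usable_throughput_mbps(line_rate_gbps, encoding_efficiency, protocol_overhead):
    """Return approximate payload throughput in Mbytes/s."""
    raw_mbytes = line_rate_gbps * 1000 / 8            # line rate in Mbytes/s
    return raw_mbytes * encoding_efficiency * (1 - protocol_overhead)

# 8 Gbps Fibre Channel: 8b/10b encoding (80% efficient), light FC framing overhead
fc_8g = usable_throughput_mbps(8, 0.80, 0.02)

# 10 Gbps Ethernet iSCSI: 64b/66b encoding (~97% efficient),
# plus an assumed ~8% TCP/IP/iSCSI header and processing overhead
iscsi_10g = usable_throughput_mbps(10, 64 / 66, 0.08)

print(f"8 Gbps FC    ~ {fc_8g:.0f} Mbytes/s")
print(f"10GbE iSCSI  ~ {iscsi_10g:.0f} Mbytes/s")
```

Even with IP's overhead penciled in, 10GbE iSCSI comes out ahead on paper; real-world latency, as noted above, is another matter.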

Fibre Channel-over-IP (FCIP)

Fibre Channel-over-IP (FCIP) is a method by which geographically distributed Fibre Channel-based SANs can be interconnected with one another. In short, FCIP is designed to extend the reach of Fibre Channel networks over wide distances.

Internet Fibre Channel Protocol (iFCP)

Internet Fibre Channel Protocol (iFCP) is an effort to bring an IP-based infrastructure to the Fibre Channel world. Much of the cost of Fibre Channel is necessary infrastructure, such as dedicated host bus adapters (HBAs) and switches. These components can, on a per-port basis, add thousands of dollars to the cost of connecting a server to the storage infrastructure. In contrast, transmitting Fibre Channel commands over an IP network would drive down infrastructure costs in a major way, requiring only Gigabit Ethernet connections, which are already found on most servers. Further, even high-density Gigabit Ethernet switches cost only a couple thousand dollars. The main drawback to this proposal is the limitation to 1 Gbps Ethernet; although 10 Gbps gear is available, using it would negate some of the cost benefit. On the plus side, iFCP (even on 10 Gbps Ethernet) would open Fibre Channel solutions to administrators with IP-based skill sets. iFCP was ratified by the Internet Engineering Task Force in late 2002/early 2003.

ATA-over-Ethernet (AoE)

ATA-over-Ethernet (AoE) hasn’t enjoyed the popularity of iSCSI, but this isn’t due to any technical hurdles. The AoE specification is completely open and only eight pages in length. AoE doesn’t have the overhead of IP that iSCSI does, since it runs directly on top of Ethernet. Of course, this generally limits AoE’s use to single locations, since raw Ethernet can’t be routed. You can find more about AoE in one of my previous posts.

Summary

The future of storage is wide open. Between iSCSI, Fibre Channel, and even AoE, solutions abound for organizations of any size, and as the lines blur between some of these technologies, cost becomes less of an issue across the board.

Intel open sources Fibre Channel over Ethernet package

Intel has released a software package that is intended to encourage the development of Fibre Channel over Ethernet (FCoE) products for the Linux operating system.

If you are scratching your head about what FCoE is, here’s an excerpt from Network World:

FCoE is a proposed specification that allows Fibre Channel storage-area-network (SAN) traffic to run over Ethernet. Consolidating LAN and SAN traffic onto a single fabric is said to simplify network infrastructure in the data center.

Linux developers can test and modify the FCoE software stack as part of the released package.

First 8Gbps Fibre Channel products are out

It appears that the first 8Gbps Fibre Channel storage networking products are out, as reported by The Register. Still, the sentiment is that this new technology is unlikely to do much to stem the drift to iSCSI over 10Gbps Ethernet, although it might slow it somewhat.

The main advantage of 8Gbps is that it not only uses the same infrastructure as earlier generations of Fibre Channel, but it is also backwards compatible with them. Slap the new 8Gbps devices onto an existing SAN, and they should interoperate without any issues, automatically running at the highest speed supported by both ends of the channel.
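That "highest speed supported by both ends" behavior can be sketched in a few lines. This is a toy model, not the actual Fibre Channel speed-negotiation protocol; the function name and advertised speed lists are hypothetical:

```python
# Toy sketch of link-speed fallback: two ports settle on the fastest
# speed (in Gbps) that both of them support.

def negotiate_speed(port_a_speeds, port_b_speeds):
    """Return the highest link speed common to both ports."""
    common = set(port_a_speeds) & set(port_b_speeds)
    if not common:
        raise ValueError("no common supported speed")
    return max(common)

# A new 8Gbps HBA plugged into an older 4Gbps switch port falls back to 4:
print(negotiate_speed([2, 4, 8], [1, 2, 4]))
```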

Interestingly, most SAN users have not even reached the limits of 2Gbps technology, never mind 4Gbps, according to Enterprise Strategy Group analyst Brian Garrett.

However, he hastened to add:

Administrators of infrastructure applications like disk-to-disk replication and vertical business applications like video post-production are already asking for higher performance storage networks.

The backwards compatibility of 8Gbps Fibre Channel will be warmly embraced in data centers, video production houses, and other application environments where performance counts.

The first 8Gbps products are promised by Emulex and QLogic, and according to the companies, they should be available for just a 10 to 20 percent price premium over existing 4Gbps products. While Brocade and Cisco have not yet made any announcements, I’m sure that they aren’t too far behind.

Fibre Channel and Ethernet starting to converge

Ethernet, the longtime standard for LAN traffic, is seeing another upgrade on the horizon, with 10 Gbps Ethernet beginning to explode onto the market. The speed upgrade will help Ethernet and Fibre Channel, the longtime standard for SAN traffic, converge onto one high speed network, linking servers in large farms to the storage arrays that store their data. Intel has just released “barely out of Alpha” code for Fibre Channel over Ethernet (FCoE) for Linux, though only for a specific release and configuration.

Cisco has seen sales of 10 Gbps Ethernet ports triple since they entered the market in the second quarter of 2007. The strong sales indicate that there is still plenty of demand for increased bandwidth in the data center. The ability to send Fibre Channel packets over Ethernet will help to reduce the number of data centers that have to maintain two architectures — one for storage and another for servers.

Until recently, I didn’t think that I would have 10 Gbps Ethernet in my shop, because we are so small. However, if I can use Ethernet to access my NAS and iSCSI boxes rather than Fibre Channel, I can see us bypassing SAN technology altogether in favor of technology that fits in better with what we are doing already. Do you see 10 Gbps in your data center in the near future?

Shared block-level storage continues to become more accessible

I considered entitling this post “Is storage becoming commoditized?” but the technical definition of “commodity” doesn’t quite fit the bill. My question is this: Is the market for shared block-level storage continuing to become more accessible to a wider variety of customers? Personally, I think it is, and this is a good thing. With one exception, for most of my career, I’ve worked for fairly small organizations. A few years ago, the idea of a Fibre Channel-based SAN didn’t even get raised because of the cost and complexity of such a solution. It was RAID all the way in most servers. For some servers, even RAID wasn’t considered due to cost. Remember, RAID and SCSI drives used to be expensive!

Now, though, the storage market has exploded. With the introduction of iSCSI and new breeds of Fibre Channel being offered, it seems like there is something for everyone and at every price range. Here are some examples:

Dell AX150 array, Fibre Channel, 6TB raw, dual processors, refurbished, not scalable, 10 hosts max: $7,500.
EqualLogic PS400E, iSCSI, 10.5TB raw, fully redundant, scalable, unlimited hosts: $60K - $65K.
Overland ULTAMUS RAID 4800, Fibre Channel (4Gb), 18TB raw, redundant: $42K.
Left Hand Networks NSM 160, iSCSI, 2TB raw, redundant with three units (6TB): estimated at roughly $40K.
Nexsan SATABoy, Fibre Channel/iSCSI, 7TB raw: $18K.
Nexsan SATABeast, Fibre Channel/iSCSI, 42TB raw: $55K.
Please don’t use these prices for your budget. I Googled for this information, so some of it may be out of date. The point of this exercise, however, is to demonstrate that choices and prices are all over the map. If you need shared storage for 2 or 3 servers and are on a super-tight budget, buy the Dell AX150. If money isn’t an object and you want “best of breed” iSCSI, go for the EqualLogic PS400E. If your storage needs are a little more modest and budget is somewhat important, look to the SATABoy. For kicks, take a look at the SATABeast specs, too. At $55K for 42TB, it leads the price/TB comparison among the new arrays here and supports both Fibre Channel (4Gb no less) and iSCSI.
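Running the per-terabyte arithmetic on the list above makes the spread obvious. Same caveats as the prices themselves: these figures may be out of date, the midpoint is used where a range was given, and the LeftHand number is the rough estimate from the list:

```python
# Price-per-raw-TB comparison of the arrays listed above.
# Prices are the (possibly outdated) figures quoted in the post.

arrays = {
    "Dell AX150 (refurbished)": (7_500, 6),
    "EqualLogic PS400E":        (62_500, 10.5),  # midpoint of $60K-$65K
    "Overland ULTAMUS 4800":    (42_000, 18),
    "LeftHand NSM 160 x3":      (40_000, 6),     # rough estimate
    "Nexsan SATABoy":           (18_000, 7),
    "Nexsan SATABeast":         (55_000, 42),
}

# Sort cheapest-per-TB first and print
for name, (price, tb) in sorted(arrays.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:26s} ${price / tb:>7,.0f} per raw TB")
```

Interestingly, only the refurbished AX150 undercuts the SATABeast per raw terabyte; everything else costs roughly two to five times as much per TB.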

Every time I look, there is something new to consider in the storage space. Sure, not all of the new options have whizbang new features, but they are certainly providing additional choice at prices that are all over the map. As a result, although the storage market is becoming a little more complex to navigate, there is incredible opportunity for customers of almost any size to take part in the shared block-level storage game.

iSCSI anyone?

iSCSI is a technology that seems to have been cropping up a lot recently; while visiting a conference on the topic of data protection and compliance, I heard iSCSI being pushed as ‘the next big thing’ in storage.

So what is iSCSI? iSCSI is a protocol defined by the Internet Engineering Task Force (IETF) which enables SCSI commands to be encapsulated in TCP/IP traffic, thus allowing access to remote storage over low cost IP networks.
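The key insight is that a SCSI command is just a small, well-defined byte string, so it travels over TCP as easily as any other payload. The sketch below builds a standard SCSI READ(10) command descriptor block; the full iSCSI PDU framing defined in RFC 3720 (a 48-byte basic header segment and data segments around the CDB) is omitted here for brevity:

```python
import struct

# A SCSI READ(10) CDB is 10 bytes: opcode, flags, a 32-bit logical block
# address, a group number, a 16-bit transfer length, and a control byte.
# iSCSI wraps exactly this kind of byte string in a PDU and sends it over TCP.

def build_read10_cdb(lba, num_blocks):
    """Build a 10-byte SCSI READ(10) command descriptor block."""
    return struct.pack(
        ">BBIBHB",
        0x28,        # READ(10) opcode
        0,           # flags
        lba,         # 32-bit logical block address, big-endian
        0,           # group number
        num_blocks,  # 16-bit transfer length in blocks
        0,           # control byte
    )

cdb = build_read10_cdb(lba=2048, num_blocks=8)
print(cdb.hex())
```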

What advantages would using an iSCSI Storage Area Network (SAN) give to your organisation over using Direct Attached Storage (DAS) or a Fibre Channel SAN?

iSCSI is cost effective, allowing use of low cost Ethernet rather than expensive Fibre architecture.
Traditionally expensive SCSI controllers and SCSI disks no longer need to be used in each server, reducing overall cost.
Many iSCSI arrays enable the use of cheaper SATA disks without losing hardware RAID functionality.
The iSCSI storage protocol is endorsed by Microsoft, IBM, and Cisco, making it an industry standard.
Administrative/maintenance costs are reduced.
Increased utilisation of storage resources.
Expansion of storage space without downtime.
Easy server upgrades without the need for data migration.
Improved data backup/redundancy.
You’ll notice that I mentioned reduced administrative costs; I was very interested to find this document prepared by Adaptec on the cost advantages of an iSCSI SAN over DAS or a Fibre Channel SAN, most notably the Total Cost of Ownership analysis, which states that one administrator can manage 980GB of DAS storage, whereas the same administrator could manage 4800GB of SAN storage. Quite an increase!

Isn’t there going to be a bandwidth issue with all of this data flying around? This is a question I had, but I found the answers in this very informative ‘iSCSI Technology Brief’ from Westek UK. Direct attached U320 SCSI gives a theoretical data transfer rate of 320Mbytes/s; on a standard Gigabit network, iSCSI will provide around 120Mbytes/s; and Fibre Channel provides up to 200Mbytes/s, but at considerable cost. 120Mbytes/s is probably fast enough for all but the most demanding applications. All connectivity between the iSCSI storage and your servers would be on a dedicated Ethernet network, therefore not interfering with your standard network traffic (and vice versa). If this isn’t enough, 10Gbit copper Ethernet is now pushing its way onto the market and costs are falling; this would give a possible 1Gbyte/s of throughput!

Most iSCSI devices I have seen offer the ability to take ‘snapshots’; a snapshot saves only the changes made to the file system since the previous snapshot, meaning you won’t need to set aside huge amounts of storage while still maintaining the ability to roll back to a previous state after a disaster (data corruption/deletion). Snapshots take only a few seconds to perform (compared to hours for a traditional image) and can be scheduled for regular, automatic creation.
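The reason snapshots are so cheap is that only blocks changed after the snapshot consume extra space. Here's a toy copy-on-write model of that idea; real arrays do this at the block layer rather than with Python dicts, so treat this strictly as an illustration:

```python
# Toy copy-on-write snapshot model: a volume is a dict of
# block-number -> contents, and each snapshot stores only the original
# contents of blocks overwritten after it was taken.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshots = []

    def snapshot(self):
        # Taking a snapshot costs nothing up front
        self.snapshots.append({})

    def write(self, block_no, data):
        if self.snapshots and block_no not in self.snapshots[-1]:
            # Preserve the original contents before the first overwrite
            self.snapshots[-1][block_no] = self.blocks.get(block_no)
        self.blocks[block_no] = data

vol = Volume({n: b"old" for n in range(1000)})
vol.snapshot()
vol.write(7, b"new")
vol.write(7, b"newer")  # same block changed twice: still one saved copy

# Out of a 1000-block volume, the snapshot holds just one preserved block
print(len(vol.snapshots[-1]))
```

This is also why a snapshot completes in seconds while a full image takes hours: nothing is copied at snapshot time, only as blocks are subsequently overwritten.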

I have recently been asked to look at consolidating our storage, and iSCSI looks like an innovative, well supported, and cost effective way of doing this. The Power iSCSI range from Westek UK looks very promising, with the option of 10Gbit connectivity, hardware RAID6 (offsetting reliability concerns due to SATA disks), plus an option of real-time replication and fail-over between two units.

Have you deployed an iSCSI-based SAN within your organisation? Do you know of any other iSCSI appliance providers offering innovative features? Maybe you decided to go with Fibre Channel instead? What kind of data transfer rates do you require for your storage? Do you feel modern SATA disks provide good enough performance and reliability, or are expensive SCSI disks still worth the premium?