Conventional storage resides on magnetic platters, where mechanical heads must physically "seek" the requested data. This mechanical movement is the weakest link in the input/output chain. Typical latency (the time between a request for data and its delivery) is around 5 or 6 milliseconds for even the very best RAID arrays.
On RAM SSD storage, data is not stored on traditional hard disks; it is stored on DDR RAM memory chips. The result is 14 microseconds of latency, roughly 250 times faster than RAID.
Other companies brag about their latency times, but those numbers are useless unless they translate into genuinely faster data delivery. The truth is in the IOPS: input/output transactions (I/Os) per second. Truly low latency should be backed up by extreme IOPS performance.
A storage device can only handle so many IOPS, regardless of how small or large each I/O is. Conventional hard disks are capable of, at most, about 300 IOPS, and the best RAID arrays of approximately 10,000 IOPS. Even the slowest solid state drive, by comparison, is capable of 100,000 random IOPS. Because there are no moving parts, performance is high whether data is accessed sequentially or randomly. This matters, since real-world workloads are typically more random than sequential. Beware of any storage device whose specifications do not list random IOPS. Database applications benefit most from high random IOPS.
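As a rough, illustrative sketch (not from the original text): latency alone caps the IOPS a device can deliver to a single, serial request stream, since each request must finish before the next begins. The latency figures below are the article's own; real devices exceed this ceiling when multiple requests are kept in flight.

```python
# Back-of-the-envelope IOPS estimate from device latency.
# Assumes strictly serial I/O (queue depth 1), so IOPS <= 1 / latency.

def serial_iops(latency_seconds: float) -> int:
    """Upper bound on IOPS with one outstanding request at a time."""
    return int(1.0 / latency_seconds)

disk_iops = serial_iops(5e-3)   # ~5 ms per access on a hard disk
ram_iops = serial_iops(14e-6)   # ~14 us on a RAM-based SSD

print(disk_iops)  # 200
print(ram_iops)   # 71428
```

The serial estimate lines up with the hard-disk figure above (a few hundred IOPS); solid state devices reach their much higher random-IOPS numbers by also servicing many requests concurrently.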
The super-fast performance of the solid state drive enables it to fully saturate today's high-bandwidth interfaces. For this reason, the appliance is equipped with up to eight Fibre Channel ports and a high-speed backplane. This ensures that high-bandwidth applications are fed as much data as the physical interfaces allow: up to 3 GB/sec from a single storage appliance. These ports can provide a single, high-bandwidth link to a demanding application, or they can serve separate functions for maximum efficiency. 1, 2, or 4 Gb Fibre Channel interfaces are available, as are 4x InfiniBand interfaces, and the solid state drive is certified as interoperable with a wide variety of Fibre Channel host bus adapters and InfiniBand host channel adapters from industry leaders. High bandwidth is essential for applications such as non-linear video editing and video-on-demand.
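A quick sanity check on the aggregate figure (my arithmetic, not the article's): 4 Gb Fibre Channel uses 8b/10b encoding, so each link carries roughly 400 MB/s of payload, a common rule of thumb; exact throughput depends on protocol overhead.

```python
# Rough aggregate bandwidth for the eight-port Fibre Channel
# configuration described above.

PORTS = 8
MB_PER_PORT = 400  # approximate payload bandwidth of one 4 Gb FC port

total_mb = PORTS * MB_PER_PORT
print(total_mb)  # 3200 MB/s, consistent with the ~3 GB/sec claim
```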
Lower I/O Wait Time
I/O wait time occurs when processors sit idle, waiting on storage to complete their I/O requests. To some degree, I/O wait time occurs in every system; typical applications, however, do not stress hard drives enough for users to notice.
A demanding enterprise application (OLTP, data warehousing, video-on-demand) can easily become busy enough that servers spend up to 60% of their time waiting on storage. When servers wait on storage for data, users wait on servers.
The incredibly low latency of the SSD means that I/O wait time can be all but eliminated. CPUs that were once underutilized can run at full capacity, increasing the performance of the applications they host.
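To see how much time a Linux server actually spends waiting on storage, the kernel's aggregate "cpu" line in /proc/stat reports time in clock ticks per state (user, nice, system, idle, iowait, ...), with iowait as the fifth field per proc(5). A minimal sketch, using invented sample values:

```python
# Sketch: I/O-wait percentage between two Linux /proc/stat samples.
# The "cpu" line lists cumulative clock ticks per state; iowait is
# the fifth numeric field (index 4). Sample numbers are invented.

def iowait_percent(before: str, after: str) -> float:
    """Percent of elapsed CPU time spent waiting on I/O."""
    b = [int(x) for x in before.split()[1:]]
    a = [int(x) for x in after.split()[1:]]
    deltas = [x - y for x, y in zip(a, b)]
    return 100.0 * deltas[4] / sum(deltas)

sample_t0 = "cpu  1000 0 500 8000 500 0 0 0"
sample_t1 = "cpu  1200 0 600 8600 1100 0 0 0"
print(round(iowait_percent(sample_t0, sample_t1), 1))  # 40.0
```

On a real system the two samples would come from reading /proc/stat a few seconds apart; a sustained reading anywhere near the 60% figure above signals a storage bottleneck.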
Improved Server Efficiency
When slow, conventional storage holds back the potential of expensive processors and servers, efficiency is reduced and money is wasted. Conversely, introducing a blazing fast solid state disk fully utilizes those servers, resulting in maximized ROI. If your data only travels as fast as the slowest point in the network, then removing that bottleneck results in efficiency gains throughout the system.
The drive towards server consolidation means squeezing every last drop of performance out of the remaining servers. If an SSD can improve server efficiency, then that efficiency increase can lead to server consolidation without performance loss! This is especially true in "server-bloated" environments, where the problem of I/O wait time was not solved by adding additional servers or processing power.
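The consolidation arithmetic is simple (an illustrative sketch using the 60% wait figure cited earlier, not a sizing method from the article): if each server spends a given fraction of its time idle on I/O, only the remainder of its capacity does useful work, so removing the wait lets proportionally fewer servers carry the same load.

```python
# Illustrative consolidation estimate: servers needed once I/O wait
# is removed, assuming load scales with useful (non-waiting) capacity.

def servers_after_consolidation(current_servers: int, wait_percent: int) -> int:
    useful = 100 - wait_percent            # % of capacity doing real work
    # integer ceiling division: same total work, fully utilized servers
    return -(-current_servers * useful // 100)

print(servers_after_consolidation(10, 60))  # 4
```

Ten servers waiting on storage 60% of the time deliver only four servers' worth of work; with the bottleneck gone, four well-fed servers could, in principle, keep up.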
More Concurrent Users
When a solid state disk is installed, it typically takes the pressure off whatever resource was being thrashed in its place (a RAID array, server system memory, etc.). Those resources are freed up for other applications and tasks. In the case of query-based applications, this can translate into more concurrent users receiving their data at higher speeds than ever before. Conventional thinking suggests that adding concurrent users requires more servers. The SSD allows you to scale concurrent users by improving server efficiency.
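Little's Law (L = λ × W) makes the concurrency argument concrete: the number of requests in flight equals throughput times response time. The numbers below are illustrative only, not from the article.

```python
# Little's Law: in-flight requests L = throughput (lambda) * response
# time (W). Faster storage shrinks W, so the same in-flight capacity
# supports far more requests per second.

def concurrent_requests(throughput_per_s: int, response_ms: int) -> float:
    """Average number of requests in flight (Little's Law)."""
    return throughput_per_s * response_ms / 1000

print(concurrent_requests(2000, 50))  # 100.0
print(concurrent_requests(2000, 5))   # 10.0
```

At 2,000 queries per second and 50 ms responses, 100 requests are in flight; cut the response time to 5 ms and the same workload occupies only 10 slots, leaving room for ten times the users on unchanged hardware.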
Faster Response Times
Solid state disks are famous for consistently decreasing the response times of demanding applications. Without mechanical storage devices to slow performance, users and applications get data at the speeds they demand. At the core of any enterprise is a critical database; whether it is queried by employees, customers, or other servers, everyone benefits from faster response times.
In many environments, particularly OLTP, customer satisfaction is the first priority. Eliminating I/O bottlenecks with a solid state disk can improve the performance of all hardware that depends on that data. Whether the application is e-commerce, OLTP, hot-file storage, or any other use, higher performance, faster response, and greater transaction throughput mean more satisfied users.
Financial, telecom, and e-commerce industries know the value of increased transactions per second. In those industries, every additional transaction the hardware can carry out directly affects the bottom line, and it is easy to see how the solid state disk quickly pays for itself. The same logic applies to virtually any mission-critical application that needs low-latency storage to reach its potential. Compare the cost of an SSD to alternative solutions for a 2x to 10x gain in application performance, and the choice becomes easy.