But I'd like to know the stability of such a configuration and what the real-life chances of killing the SSDs would be!
Last year I set up RAID 0 on two 9-year-old 80 GB HDDs I had lying around, just for ♥♥♥♥♥ and giggles. But one of them died like a day after I set them up in RAID 0, LOL!
Thanks, and is his RAID config on SSDs or HDDs?
Yeah, it should be. I never did; I bought 1 large SSD...
But I guess I'll try this out and post my experience soon!
Yeah, same problem here =P I bought mine on eBay.
Standard levels
Main article: Standard RAID levels
[Image caption: Storage servers with 24 hard disk drives and built-in hardware RAID controllers supporting various RAID levels.]
A number of standard schemes have evolved. These are called levels. Originally, there were five RAID levels, but many variations have evolved, notably several nested levels and many non-standard levels (mostly proprietary). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard:[16][17]
RAID 0
RAID 0 consists of striping, without mirroring or parity. The capacity of a RAID 0 volume is the sum of the capacities of the disks in the set, the same as with a spanned volume. There is no added redundancy for handling disk failures, just as with a spanned volume. Thus, failure of one disk causes the loss of the entire RAID 0 volume, with reduced possibilities of data recovery when compared to a broken spanned volume. Striping distributes the contents of files roughly equally among all disks in the set, which makes concurrent read or write operations on the multiple disks almost inevitable and results in performance improvements. The concurrent operations make the throughput of most read and write operations equal to the throughput of one disk multiplied by the number of disks. Increased throughput is the big benefit of RAID 0 versus spanned volume.[11]
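The round-robin block layout described above can be sketched in a few lines. This is an illustrative model only, not any controller's actual format; real implementations stripe in larger chunks (e.g. 64 KiB), but the mapping idea is the same:

```python
# Illustrative sketch: mapping logical block numbers to (disk, block)
# positions in a RAID 0 stripe set. Not a real controller's layout.
def raid0_locate(block: int, num_disks: int) -> tuple[int, int]:
    """Return (disk_index, block_on_disk) for a logical block."""
    return block % num_disks, block // num_disks

# Blocks 0..5 on a 2-disk stripe set alternate between the disks,
# which is why sequential reads/writes hit both disks concurrently:
layout = [raid0_locate(b, 2) for b in range(6)]
# -> [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```

Because consecutive blocks land on different disks, losing either disk removes every other block of every large file, which is why recovery from a broken RAID 0 set is so much harder than from a broken spanned volume.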
RAID 1
RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two (or more) drives, thereby producing a "mirrored set" of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.[11]
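A toy model of the mirroring behavior described above (purely illustrative; the `Mirror` class is invented for this sketch and is not a real driver or API):

```python
# Toy sketch of RAID 1: every write goes to all healthy drives, and any
# surviving drive can serve a read. Invented for illustration only.
class Mirror:
    def __init__(self, num_drives: int):
        self.drives = [dict() for _ in range(num_drives)]  # block -> data
        self.alive = [True] * num_drives

    def write(self, block: int, data: bytes) -> None:
        for drive, ok in zip(self.drives, self.alive):
            if ok:
                drive[block] = data      # identical copy on each drive

    def read(self, block: int) -> bytes:
        for drive, ok in zip(self.drives, self.alive):
            if ok:                       # any healthy drive can answer
                return drive[block]
        raise IOError("all drives failed")

m = Mirror(2)
m.write(0, b"payload")
m.alive[0] = False                       # one drive dies
assert m.read(0) == b"payload"           # data is still readable
```

The write path shows why write throughput tracks the slowest member (every drive must be updated), while the read path shows why the array keeps working until the last drive fails.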
RAID 2
RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive.[11] This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2),[18] as of 2014 it is not used by any of the commercially available systems.[19]
RAID 3
RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive.[11] Although implementations exist,[20] RAID 3 is not commonly used in practice.
RAID 4
RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP.[21] The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read/write I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read/write operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.[2]
RAID 5
RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks.[11] RAID 5 is seriously affected by the general trends regarding array rebuild time and the chance of drive failure during rebuild.[22] Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array. In August 2012, Dell posted an advisory against the use of RAID 5 in any configuration on Dell EqualLogic arrays and RAID 50 with "Class 2 7200 RPM drives of 1 TB and higher capacity" for business-critical data.[23]
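The "subsequent reads can be calculated from the distributed parity" part is just XOR. A minimal sketch (illustrative only; it ignores how the parity block rotates across drives from stripe to stripe):

```python
# Minimal sketch of RAID 5-style XOR parity: with the parity block,
# any single missing data block can be recomputed from the survivors.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
parity = xor_blocks(data)            # stored on the stripe's parity drive

# Drive holding data[1] fails; rebuild its block from the survivors:
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == b"BBBB"
```

This also makes the rebuild cost concrete: reconstructing one failed drive requires reading every corresponding block from all remaining drives, which is exactly the window in which a second failure destroys the array.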
RAID 6
RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced.[11] With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.[24] RAID 10 also minimizes these problems.[25]
https://en.wikipedia.org/wiki/RAID
Yes, but RAID means what? REDUNDANT array of inexpensive disks.
There is 0 REDUNDANCY in RAID0.
Also, when running RAID with parity on SSDs, you can kiss your WLC (wear-leveling count) goodbye :)
If you want to do proper RAID you want to MIX disk brands. For stripe sets you probably want the same brands and models.
You do NOT need stripe sets to game on an SSD. That is just a waste of SSDs.
Keep in mind, the Samsung (or indeed any other brand's) low-capacity models (250 GiB and lower) have a much lower WLC count than their 500+ GiB models. On top of that, EVO models also have only about half the WLC of their PRO counterparts. I know because I use both models.
Do you even have a backup policy? If your stripe set fails, you lose your data. Half of it is stored on each of the drives, so if the set fails you have to rebuild it, and that will tear into your WLC again (and the 250 GiB models have a lower WLC, and being EVO, lower still).
You are just going to end up with a higher chance of failure by striping two bottom-end consumer SSDs than by just using them normally.
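That "higher chance of failure" is easy to put numbers on: a stripe set survives only if every member drive survives. A back-of-the-envelope sketch, where the 5% per-drive failure rate is an assumed, illustrative figure, not a measured one:

```python
# Back-of-the-envelope sketch: a RAID 0 set fails if ANY member fails,
# so its failure probability is higher than a single drive's.
def stripe_failure_prob(p_drive_fail: float, n: int) -> float:
    """Failure probability of an n-drive stripe set, assuming
    independent drives each failing with probability p_drive_fail."""
    p_all_survive = (1 - p_drive_fail) ** n
    return 1 - p_all_survive

single = 0.05   # ASSUMED 5% per-drive failure rate, for illustration
print(round(stripe_failure_prob(single, 1), 4))   # 0.05   (one drive)
print(round(stripe_failure_prob(single, 2), 4))   # 0.0975 (2-drive stripe)
```

So under this toy model a two-drive stripe set is nearly twice as likely to lose all its data as a single drive, and the gap grows with every drive you add.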
So your plan is to buy 500 GiB of storage and only get 250 GiB of it because you are on a tight budget? Well, penny wise, pound foolish, as we say: you are paying for 500 GiB, getting 250 GiB, and taking on a higher chance of failure.
You should watch your WLC via Samsung Magician's SMART data. I recommend buying high-capacity SSDs for the higher WLC count, and PRO models if you intend to keep churning through game installs, as that will eat into it a lot. That is, if you want to use them for many years.
For gamers I don't see why not, as long as you know the drawbacks. You should always have your OS drive set up so that if it is lost, you aren't actually losing anything. All your actually important stuff should be housed on other backup drives.
To be honest, if 500 GB SSDs are too expensive but 250 GB ones are not, just get one 250 GB SSD for your OS and be done with it. With an SSD there really is no point in RAID; you won't see any real speed increase.
Thanks a lot for your well-detailed input. Those points you mention, about disk failure and premature death, are exactly what I was looking for as an answer.
And while I do have an external backup drive, the data, apart from the OS (which is also backed up appropriately), isn't that valuable to me!
But thanks again for your detailed input.
@Bad-Motha:
A 500 GB SSD in my country is around $220-240 (after conversion), whereas I can get 250 GB ones for as low as $75 (or even $65-70 if there's a sale going on), so the price difference is massive for some odd reason!
But all said and done, this is not something I plan to keep set up long term. It's more of an experiment to see how it goes.
Feel free to add more input if you'd like, or if you have any personal experience with setting up RAID arrays on SSDs, I would totally appreciate you sharing it!
Maybe share a few websites you can use for India and I'll take a look.
Even still, a single 250 GB would be fine. Stick your OS, drivers, and apps on that.
More than enough room. Get rid of the hibernation settings/file, lock in a lower pagefile, and manually clear out System Restore points every once in a while and you should be good. Then set up your game clients to install games to a secondary HDD.
2 SSDs on 1 OS = better lifespan for the SSDs =P fewer GB written to each drive, and 2x the speed.
If you bought 1 large SSD and it died, you would lose your data anyway.
I had 3 HDDs in RAID 0 that worked for over 5 years without any problems.