Introduction to Solid State Drives

An SSD (Solid State Drive) is a storage drive based entirely on flash memory. It is therefore much closer to a USB stick than to an HDD (which has platters and heads).

All SSDs are much faster than an HDD, or even than a RAID 0 array of several HDDs; the latest models can even saturate the SATA III bus. Moreover, thanks to the absence of mechanical parts, they are very shock resistant and produce no vibration. On top of that, they have very low power consumption compared to HDDs and generate far less heat.

In short, an SSD is made up of a series of NAND memory modules connected to a controller (usually a very fast dedicated processor) that takes care of managing them (we will talk about this later).

On the other hand, at least for the moment they have a very high cost per GB (even if prices are constantly decreasing) and the largest “affordable” size is 512 GB. An SSD works in a completely different way from an HDD: data-writing mechanisms come into play that need to be understood in order to make the best use of the drive.

Is it worth switching from HDD to SSD?

Absolutely. Thanks to the speed already mentioned, the system gains an unmatched fluidity and responsiveness. This is also true for older systems, as confirmed by an article published here on Hardware Upgrade.

Partition Alignment

Traditional mechanical disks are divided into physical sectors, so the operating system and all its components operate according to this sector logic. A solid state drive has a completely different logical (and, as already seen, physical) structure, but it is still addressed with the same sector logic.

Alignment is necessary to make sure that a logical sector starts exactly at the beginning of a physical page of the SSD: without it, the boundaries of the logical sector and of the physical page would not match, the sector would spill over into the next page, and every write would involve an extra operation (cleaning two blocks instead of one), drastically reducing write speed (up to 50% in theory).
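
Just to make the arithmetic concrete, here is a small sketch of my own (the 4 KiB page size and the offsets are assumptions, not data from this guide): a partition is aligned when its starting offset is an exact multiple of the NAND page size. On Windows the starting offset can be read, for example, with “wmic partition get Name, StartingOffset”.

```python
# Minimal sketch: check whether a partition's starting offset falls exactly
# on a NAND page boundary. 4096 bytes is an assumed page size; real drives vary.

PAGE_SIZE = 4096

def is_aligned(start_offset_bytes: int, page_size: int = PAGE_SIZE) -> bool:
    """True if the partition starts exactly at the beginning of a page."""
    return start_offset_bytes % page_size == 0

# Hypothetical offsets: 1 MiB (used by modern installers) vs. the old
# CHS-style offset of 63 sectors * 512 bytes.
for offset in (1_048_576, 32_256):
    print(offset, "->", "aligned" if is_aligned(offset) else "misaligned")
```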

Partition alignment is also necessary for proper operation on the most modern hard disk drives (so-called “Advanced Format” drives).

There are many guides on the web that explain how to align a partition using very simple tools, but note that modern operating systems and the latest releases of the aforementioned partitioning tools align partitions automatically.

Data Retention

Data Retention is the ability of the drive to retain data, once disconnected from the power supply, for a period of time before it deteriorates and becomes unreadable (in this case the cells will have lost their data once the SSD is powered back on, but will still work fine for new writes). Hard drives have an average data retention of about 10-15 years, while for SSDs it is unfortunately much lower and varies greatly with the level of cell wear. The useful life of the cells (measured in write cycles) is in fact defined assuming a minimum data retention of about 12 months. If you exceed the maximum number of cycles specified for the SSD, it will probably keep working normally but will lose data within a few weeks once disconnected. As usual this is a very simplified picture: many other factors not considered here (average operating temperature first of all) also come into play.

It should be remembered, however, that this problem is solved simply by powering the drive on from time to time: by restoring power to the cells, the data (if not already lost) are “refreshed”, unless the memories are so worn that they can no longer hold it. In light of this, I advise against using SSDs for backups if you plan to leave them disconnected for a long time.

Garbage Collection and TRIM: what are they?

Flash memories store data inside cells grouped in units called pages, but they are not able to overwrite data in place as hard drives do. It is only possible to erase an entire group of pages at once (called a block). If the user or the operating system deletes a file, the corresponding pages are marked as invalid, but they cannot be erased individually because the block also contains pages that are still valid. Periodically, the SSD must therefore perform an operation called Garbage Collection: the valid content of the block is copied to another block, skipping the invalid pages, so that the original block can be erased and its space freed. Some SSD controllers try to do this during idle time (Background or Idle Garbage Collection) so that subsequent writes are faster. However, the operating system does not normally tell the SSD which files have been deleted, and the SSD can only infer this when, at some point, the operating system instructs it to write to an address that already contains data. Background Garbage Collection therefore comes at the cost of higher drive wear, because it can cause the SSD to move data that has already been deleted without the SSD being aware of it yet.
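
To make the copy-and-erase idea concrete, here is a toy sketch of my own (a made-up block of just four pages, not anything from a real controller): the still-valid pages are rewritten elsewhere, the invalid ones are dropped, and the whole block can then be erased.

```python
# Toy model of garbage collection on a single flash block.
# A real block contains far more pages; four are used here for brevity.

def garbage_collect(block):
    """Copy the still-valid pages into a fresh block so the old one can be erased.
    Returns the new block and the number of extra page writes performed."""
    valid_pages = [page for page in block if page["valid"]]
    extra_writes = len(valid_pages)  # these rewrites are pure write amplification
    new_block = valid_pages + [None] * (len(block) - len(valid_pages))
    return new_block, extra_writes

# One block: two live pages and two pages belonging to a file deleted by the OS
# (without TRIM, the SSD does not yet know they are stale).
block = [
    {"data": "A", "valid": True},
    {"data": "B", "valid": False},
    {"data": "C", "valid": True},
    {"data": "D", "valid": False},
]
new_block, extra = garbage_collect(block)
print(f"{extra} valid pages had to be rewritten just to free the block")
```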

Modern operating systems (Windows 7, Linux from kernel 2.6.33, and recent versions of Mac OS X) support TRIM, a SATA command with which the OS tells the SSD which LBA addresses were occupied by a file at the moment the file is deleted, making Garbage Collection more efficient. (Source: Notebookitalia)

To verify that TRIM is actually working, I refer you to a post by our user Tennic.
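
As a quick self-check (my own addition, not part of Tennic's post): on Windows, the fsutil utility reports whether delete notifications are enabled. The sketch below simply wraps that command; “DisableDeleteNotify = 0” in the output means TRIM is active.

```python
# Minimal sketch (Windows only): query the OS for the TRIM/delete-notification
# setting by invoking "fsutil behavior query DisableDeleteNotify".
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # "DisableDeleteNotify = 0" means TRIM is enabled
```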

The writes actually performed on the NAND are generally greater than those requested by the user, because all the cell “cleaning” mechanisms described above perform extra writes (a phenomenon known as write amplification). This brings into play other factors that depend heavily on the optimization of the controller firmware and on the specific use of the SSD. By clicking here you can read a more in-depth analysis of the problem written by our user s12a.
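
By way of illustration (my own sketch, not taken from s12a's analysis), write amplification is commonly expressed as a factor: the bytes actually written to the NAND divided by the bytes the host asked to write.

```python
# Write amplification factor (WAF) = NAND writes / host writes.
# The figures in the example are made up purely for illustration.

def write_amplification(host_gb_written: float, nand_gb_written: float) -> float:
    return nand_gb_written / host_gb_written

# Hypothetical case: the user wrote 100 GB, but garbage collection and other
# housekeeping caused 130 GB of physical NAND writes -> WAF = 1.3
print(write_amplification(100, 130))
```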

Garbage Collection and TRIM are both essential for the proper functioning of an SSD.

What do I have to do to make the best use of my SSD?

First of all, we suggest doing a clean installation of Windows (if you don’t have a clean copy, go to the fourth post of this page), making sure that the BIOS is set to AHCI mode on the SATA ports before starting. Windows (from Vista onwards) automatically aligns the SSD during the first installation. You can create as many partitions as you want on the SSD, just as on a hard disk. However, I recommend formatting with the most advanced file systems (e.g. NTFS on Windows), because with them you are sure to have TRIM support.

Once the operating system and all the drivers are installed (I recommend RST in the case of Intel SATA controllers), run the WEI, Windows’ internal hardware assessment that gives each component an index from 1 to 7.9 (Control Panel > System Performance). Once the assessment is done, Windows 7 will recognize the presence of an SSD and automatically disable defragmentation. Windows 8 and 10, on the other hand, do not disable the defragmentation service, because there it actually optimizes the use of the SSD by the operating system itself: in Windows 8 and 10 features have been added that make better use of SSDs, given how widespread they have become.

Given that the drive’s endurance far exceeds expectations and would only be reached after tens of years of average PC use, I personally suggest not getting bogged down in unnecessary optimizations (such as changing parameters in the Windows registry and so on): the system will work at its best as it is!

Below I’ll give you some small tricks that will help you to save space on the SSD.

If you don’t use extremely heavy applications (such as advanced photo editing or video editing), you can reduce the paging file to 1 or 2 GB if you have 8 GB or more of RAM installed in your system.

Since boot times are in the order of 10-15 seconds, it is advisable to disable hibernation (it takes up space on the SSD equal to the amount of RAM installed in the system) and use “intelligent” sleep instead. To disable hibernation and recover the space occupied by the hiberfil.sys file, run the command “powercfg.exe /hibernate off” from the command console (cmd.exe) started with administrative privileges. To reactivate it, simply replace “off” with “on”. If you absolutely need this function (as is usually the case on notebooks), you can set the system to use only half the RAM with the command “powercfg -h size 50”, but in that case there is a greater risk that hibernation will fail if you are actively using more than 50% of the RAM. Obviously, if you have plenty of free space on the SSD, there is no problem in leaving hibernation at its default. As for sleep, I suggest setting it to kick in automatically no sooner than 20 or 30 minutes of inactivity, so as not to disturb the Garbage Collection (which, remember, works when the SSD is idle); there are no problems if you trigger it manually.

I also recommend running the “Disk Cleanup” tool (included by default in Windows) periodically, checking all the listed entries; this way you will delete temporary files that are practically useless, sometimes freeing up many GB of space.

It is possible to recover a lot of space by disabling System Restore and deleting all the associated restore points. If you really don’t want to disable it, it would be a good idea to limit the space available to it so that it is not too “intrusive”.

To make the SSD work at its best, it is advisable not to fill it beyond 75-80% if it is up to 128 GB in size, while I recommend always leaving at least 20 GB free on SSDs of 256 GB and above. The ideal would be to pair the SSD with a traditional hard disk: install the operating system and “basic” programs (web browser, Office, various plugins…) on the SSD and relegate the rest (music, videos…) to the hard disk. Many programs such as video games and professional applications (SolidWorks, Matlab…) gain little from being installed on the SSD (mainly shorter loading times) and can therefore safely be installed on the HDD.
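
As a small convenience, here is a sketch of the rule of thumb above (my own code; the thresholds simply restate the advice in this guide and are indicative, not exact):

```python
# Rule of thumb from this guide: keep drives up to 128 GB no more than ~75-80% full,
# and leave at least 20 GB free on drives of 256 GB and above.

def recommended_min_free_gb(capacity_gb: float) -> float:
    if capacity_gb <= 128:
        return capacity_gb * 0.20  # i.e. fill it to 80% at most
    return 20.0

for size in (64, 128, 256, 512):
    print(f"{size} GB drive -> keep at least {recommended_min_free_gb(size):.0f} GB free")
```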

On Windows 8.1/10, the automatic SSD optimization performs the “classic” defragmentation at regular intervals which, as already said, is totally useless on solid state drives. It seems that this behavior is intended by Microsoft to avoid excessive fragmentation of the file system, which causes problems for the operating system. However, the matter is not yet clear and we are waiting for an official statement from Microsoft. For the moment I advise against making any changes to the operating system, since this defragmentation runs only about once a month.

SSD: size and speed

The speed of an SSD depends on its size: in particular, speed increases (and significantly so) going from 128 to 256 GB, while it stays the same (or sometimes drops slightly) going from 256 to 512 GB. This happens because the individual memory modules work in parallel on the controller’s different channels (as if they were in an internal RAID 0), so the more of them there are, the higher the overall speed. Obviously this holds as long as there are free channels: usually a 256 GB SSD already uses all of the controller’s channels in parallel, which means that the controller’s maximum speed is reached. This is why there is no substantial improvement with 512 GB SSDs.
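
A deliberately simplified model of the explanation above (my own sketch; the per-die throughput, die capacity and controller ceiling are made-up round numbers):

```python
# Toy model: sequential throughput grows with the number of NAND dies working in
# parallel, until the controller/SATA III ceiling is reached.

DIE_THROUGHPUT_MBS = 70      # assumed throughput of a single die
DIE_CAPACITY_GB = 32         # assumed capacity of a single die
CONTROLLER_LIMIT_MBS = 550   # assumed controller / SATA III ceiling

def estimated_speed(capacity_gb: int) -> float:
    dies = capacity_gb // DIE_CAPACITY_GB
    return min(dies * DIE_THROUGHPUT_MBS, CONTROLLER_LIMIT_MBS)

for size in (128, 256, 512):
    print(f"{size} GB -> ~{estimated_speed(size):.0f} MB/s")
# 128 GB stays below the ceiling, 256 GB reaches it, 512 GB cannot go any further.
```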

Everything you have just read is an extremely simplified view of the matter (many other factors are involved), but it gives a good general idea of how things work.

I need to buy an SSD, which models do you recommend?

The models that we here on the forum (and not only) strongly recommend are:

  • Samsung 960/950: the latest Samsung SSD series boasts very high performance but is only available with a PCIe 3.0 x4 interface and M.2 form factor, as the SATA 3 interface would be too limiting. For all the details please refer to the Official Thread.

  • Samsung 850: the 850 series consists of two models, the Evo and the Pro. The Pro costs more than the Evo, uses innovative 3D V-NAND (memories with vertically stacked layers) and improves on the already excellent performance of the previous model (the 840 PRO). In practice this SSD’s main limitation is the speed of the SATA 3 interface, which it is able to saturate in many usage scenarios; compared to the aforementioned 840 PRO, however, it has much lower latency. The V-NAND memories used in the PRO are rated for 6,000 write cycles, compared to 3,000 for the MLC of the previous model. The EVO version, unlike the PRO, is cheaper and less performant under very heavy loads, but it uses the same type of memory and is rated for 2,000 write cycles (twice those of the 840 EVO’s TLC). Here too there is a good performance leap compared to the 840 EVO it replaces, and in some scenarios performance comes close to the 850 PRO. Through the Samsung Magician software it is possible to activate RAPID mode, which allows part of the system RAM to be used as a cache for the SSD.

  • Samsung 840: the 840 series also consists of two models, the Evo and the Pro, and is currently absolutely reliable. The Pro costs more than the Evo and has better memories (the proven MLC). The Evo in turn is cheaper and less performant because of the memories it uses, TLC (which cost less and have roughly one third the endurance of MLC). However, the Evo implements advanced features that, through Samsung’s proprietary SSD Magician software, allow the performance gap to be closed using RAPID. Warning: it is recommended to update this SSD to the latest firmware revision, which fixes a bug in the EVO models that causes a drastic drop in read speed on data that has not been modified for more than a month. UPDATE: it seems that in some cases the bug has persisted. Link to the official thread.

  • Crucial MX300: cheaper, available in larger capacities and less performant than a Samsung 850 EVO, this SSD can be seen as a good compromise for those looking for an “honest” product for an upgrade.

The recommended capacity for a good price/performance/endurance ratio is 256 GB (or higher).

The site xtremesystems.org has collected specifications, reviews and comparisons of all the SSDs produced so far (all in English). The collection work they have done is truly commendable.

Note: it is true that TLC memories last less than MLC, but we are talking (assuming an average user and an SSD filled to less than 50%) about a difference in lifespan from roughly 7-10 years for TLC to almost 30 years for MLC… In a nutshell, you will replace the SSD due to obsolescence long before the maximum write cycles set by the manufacturer are reached!
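
For the curious, a back-of-the-envelope estimate of where such lifespans come from (my own sketch; the daily writes, write amplification and cycle ratings are assumptions chosen only to match the order of magnitude quoted above):

```python
# Back-of-the-envelope endurance estimate:
# years of life = (capacity * rated P/E cycles) / (daily host writes * WAF * 365)

def endurance_years(capacity_gb, pe_cycles, host_gb_per_day=40, waf=2.0):
    total_nand_writes_gb = capacity_gb * pe_cycles
    nand_gb_per_day = host_gb_per_day * waf
    return total_nand_writes_gb / nand_gb_per_day / 365

# 256 GB drive, a heavy 40 GB of host writes per day, WAF of 2 (all assumed):
print(f"TLC (1,000 cycles): ~{endurance_years(256, 1000):.0f} years")
print(f"MLC (3,000 cycles): ~{endurance_years(256, 3000):.0f} years")
```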

Why don’t I find SSDs like the OCZ Vertex in the previous list?

On paper these products seem to perform better than the recommended ones, but they don’t. All SSDs with SandForce controllers (which we do not recommend) compress data before writing it to memory: of course, manufacturers’ benchmarks are run with highly compressible files, so these SSDs appear stratospheric, but with incompressible data they are much slower than the recommended products. In reality, with average usage only about a third of your files are compressible, and even in that scenario the recommended products are faster in daily use. In addition, many SSDs with the aforementioned SandForce controller suffer from random BSODs and freezes, and they tend to show a significant drop in performance when filled beyond 60-70%.

Newer products with the latest firmware updates seem to behave better, while Intel SSDs equipped with this controller do not exhibit any problems.

Is it worth doing a RAID 0 with SSDs?

Although it may seem strange, we do not recommend RAID 0 with SSDs. From the many users who have tried it over time, we know that the advantage of doubling the theoretical speed is practically only noticeable in benchmarks, and only in sequential transfers (which are fully exploited only in particular scenarios). Since almost all accesses are random, the increase in sequential throughput is not only practically useless, it also causes a worsening of random performance due to the greater load on the controller that manages the RAID 0. In other words, a purely numerical gain where it is not needed (sequential) is paid for with a loss where it is needed (random). Moreover, in an SSD RAID TRIM does not work (it remains active only on the latest high-end Intel chipsets) and Garbage Collection alone cannot keep the drives clean: the result is a significant drop in performance over time. In some cases (with older models, such as the Vertex 2) this drop has almost halved the initial speed within a few months, making the array useless. In addition, it is not possible to perform firmware updates under Windows or to check wear via SMART. In some cases, with particular motherboard-SSD pairings, you get random freezes and BSODs or even sudden failures of the array itself (with irreparable data loss). It must be said, however, that some users running latest-generation SSDs in RAID are perfectly happy and have no problems in daily use.

The general advice, however, is NOT to use RAID 0 with SSDs, but to buy one larger drive instead of two small ones (see the premise): for example, instead of building an array with two 128 GB SSDs, it is advisable to buy a single 256 GB drive (which should cost slightly less than the two 128 GB ones).

On tomshardware.com there are direct comparisons between a single larger SSD and a RAID of two SSDs of the same make and model. It reaches the same conclusions written above, but with the “numbers” in hand. The article is in English.
