gwwak 30 Mar at 9:35 a.m.
What are your thoughts on the 24TB Seagate Barracuda hard drive? Model No: ST24000DM001
https://www.seagate.com/ca/en/products/hard-drives/barracuda-hard-drive/

Saw it on sale at a cheap $/TB on Newegg a while back, but now it is sold out.
Showing 16-30 of 48 comments
andreasaspenberg575 31 Mar at 3:24 p.m.
it saves a lot of space. my chassis only has room for 2 HDDs and i already have one, so i would need the second one to be as large as possible.
gwwak 31 Mar at 5:09 p.m.
A lot of modern cases don't really have many bays. 2, or 3 at most, since they prioritise appearance or AIO compatibility instead. So unless you are using a dedicated NAS chassis, 4 HDDs are not practical.
Philco7a 31 Mar at 5:25 p.m.
Beware of used hard drives being sold as new. Sellers just clear the power-on hours stored on the chip on the drive's board.

https://www.tomshardware.com/pc-components/hdds/seagates-fraudulent-hard-drives-scandal-deepens-as-clues-point-at-chinese-chia-mining-farms
Last edited by Philco7a; 31 Mar at 5:28 p.m.
Schrute_Farms_B&B 31 Mar at 5:39 p.m.
Exos and Barracuda have a really great price/TB ratio. I got myself 2x24TB Exos X24 for around 340 bucks ($14 per TB) and 2x24TB Barracuda 512e as a backup for approx. the same amount of money.
Those Barracudas were even recertified ones and they have been running 24/7 for 3.5 years now without any issues. I'm also saving two bays on my NAS and ready for future upgrades. Never going below 24TB again.

I've had many drives over the years and Seagate never ceases to satisfy. I've had dying WD Red Pros and Toshiba Enterprises and whatnot, but never a Seagate. Call it a fluke or whatever, but that's just my 2 cents.
GOD RAYS ON ULTRA™ 31 Mar at 6:19 p.m.
Originally posted by gwwak:
https://www.seagate.com/ca/en/products/hard-drives/barracuda-hard-drive/

Saw it on sale at a cheap $/TB on Newegg a while back, but now it is sold out.
When it comes to data center drives, you want the helium-filled models for better efficiency. It's also better to get one with a large cache.

The Seagate lacks quality and is rather slow compared to my old Western Digital, which comes in a helium-sealed enclosure and has a large 512MB cache.

The Seagate is only rated for 190MB/s. My old Western Digital can sustain over 300MB/s and is close to SATA SSD speeds. It's the only HDD I have that can play Starfield.
Originally posted by Kobs:
Who tf needs 24 TB on a single drive....You can probably get the same 24 TB in a bank of 4 drives for a LOT less

Skyrim players.
76561198285398721 3 Apr at 3:57 a.m.
can u imagine how long it would take to do some kind of full scan or a clone on a hard drive that big
i have a pair of 4tb seagate exos hdds and it takes 30 hours to clone with the dd command in linux
andreasaspenberg575 3 Apr at 4:07 a.m.
i never clone and i do not scan that often.
76561198285398721 3 Apr at 4:40 a.m.
Originally posted by BlackBloodRum:
Originally posted by 76561198285398721:
can u imagine how long it would take to do some kind of full scan or a clone on a hard drive that big
i have a pair of 4tb seagate exos hdds and it takes 30 hours to clone with the dd command in linux
If you're on Linux then use rsync, it'll save you a ton of time if backup is your goal between the drives.
not an option rn but thanks because i didnt know about that
Last edited by 76561198285398721; 3 Apr at 4:41 a.m.
_I_ 3 Apr at 5:01 a.m.
7200rpm, with very high data density will max sata3 for seq read/writes
scanning would take time, but not as long as you would think

as long as cpu can keep up, and its reads are seq, it would be done in well under a day
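For a rough sense of scale, here's the back-of-envelope math in shell, assuming the ~190 MB/s sustained rate from the Barracuda spec quoted earlier in the thread (the real-world average will be lower, since inner tracks are slower than the rated maximum):

```shell
# Time to sequentially read a 24 TB drive end to end at ~190 MB/s.
# Decimal (base-10) units throughout, matching vendor spec sheets.
bytes=24000000000000      # 24 TB in bytes
rate=190000000            # 190 MB/s in bytes/s
echo "$((bytes / rate / 3600)) hours"    # prints "35 hours"
```

So even at the drive's best-case rate, a single full pass takes about a day and a half, which lines up with the long scan/clone times reported in this thread.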
tyl0413 6 Apr at 12:52 p.m.
I prefer my 20TB Toshibas to the Exos: a bit faster, they don't have the weird encrypted SMART like Seagate, and Seagate has worse reliability on Backblaze, but they're all fine.

Originally posted by Kobs:
Who tf needs 24 TB on a single drive....You can probably get the same 24 TB in a bank of 4 drives for a LOT less
It's actually way more expensive. 20TB drives are the best value rn and have been for a while (16TB before that), at half the $/TB compared to small drives.

Originally posted by gwwak:
A lot of modern cases don't really have many bays. 2, or 3 at most, since they prioritise appearance or AIO compatibility instead. So unless you are using a dedicated NAS chassis, 4 HDDs are not practical.
That's why you just reuse old cases if you're not into paying $300 for a Fractal or whatever.
Modern cases are a scam; just use a laptop at that point if you're going to throw away all your expansions for "looks" (functionality looks better tho)

Originally posted by 76561198285398721:
can u imagine how long it would take to do some kind of full scan or a clone on a hard drive that big
i have a pair of 4tb seagate exos hdds and it takes 30 hours to clone with the dd command in linux
Takes a few days, it's fine. 30hrs for 4TB is definitely not normal though; I guess it could be because you have a ton of tiny files, use some crap software, or have an SMR HDD if it takes that long.
PopinFRESH 6 Apr at 1:10 p.m.
Originally posted by 76561198285398721:
Originally posted by BlackBloodRum:

If you're on Linux then use rsync, it'll save you a ton of time if backup is your goal between the drives.
not an option rn but thanks because i didnt know about that
Just be aware of the differences: dd is a block-level clone (it copies each block of the disk regardless of what sits on top of it), whereas rsync is a file-level copy that moves file data from one path to another. As such, rsync is not going to produce a "clone" in the sense of a replica, because it won't copy anything on the disk outside the filesystem (e.g. the partition table, boot block, software RAID metadata, etc.).

This also means dd can be significantly faster than rsync depending on the data makeup of the disk; for example, if the filesystem holds tons of very small files, rsync may be substantially slower because the disk has to seek far more, whereas dd copies raw blocks, so its reads and writes are almost entirely sequential.

Another tip to speed up cloning via dd is to specify a larger block size, within the limits of your memory (RAM). The `bs=` argument tells dd how much data to read into its buffer before flushing it to the destination. So if you have something like 16GB of RAM you can specify `bs=8G`, and dd will read 8GB at a time and then write that 8GB out to the destination disk. This significantly increases the speed of the dd because you aren't flushing the buffer every 512 bytes. If the disk size isn't an exact multiple of the block size, dd simply performs a short read/write for the final block, so nothing is written past the end of the disk and no filesystem repair is needed afterwards.
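As a concrete sketch of the above (device names are placeholders, so the real command is left commented out; the runnable part demonstrates the same idea on throwaway files):

```shell
# Block-level clone with a larger block size. /dev/sdX and /dev/sdY are
# placeholders -- check lsblk first, dd will silently overwrite whatever
# you point it at:
#   dd if=/dev/sdX of=/dev/sdY bs=64M status=progress conv=fsync
#
# Safe demo of the same idea on throwaway files:
dd if=/dev/zero of=src.img bs=1M count=8 2>/dev/null   # 8 MiB "source disk"
dd if=src.img of=dst.img bs=4M 2>/dev/null             # clone in 4 MiB blocks
cmp src.img dst.img && echo "identical"                # prints "identical"
rm -f src.img dst.img
```

In practice a `bs=` of a few MiB is already enough to amortize the per-block syscall overhead and keep a spinning disk saturated; multi-GiB buffers mostly just consume RAM without adding throughput.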
PopinFRESH 6 Apr at 1:16 p.m.
Originally posted by tyl0413:
...
Originally posted by 76561198285398721:
can u imagine how long it would take to do some kind of full scan or a clone on a hard drive that big
i have a pair of 4tb seagate exos hdds and it takes 30 hours to clone with the dd command in linux
Takes a few days, it's fine. 30hrs for 4TB is definitely not normal though; I guess it could be because you have a ton of tiny files, use some crap software, or have an SMR HDD if it takes that long.

It is normal if they are just doing `dd if=/dev/sda of=/dev/sdb` without specifying any other options. By default dd will use a block size of 512 bytes, so it reads 512 bytes into the buffer, flushes those 512 bytes out to the destination disk, and then proceeds to the next 512 bytes. This adds a ton of overhead, both in buffer commands at the kernel level and in disk commands being sent to both disks' controllers.

Specifying a larger block size within the bounds of your available system memory will substantially increase the speed of running a block-level dd clone.
PopinFRESH 6 Apr at 1:56 p.m.
Originally posted by BlackBloodRum:
Originally posted by PopinFRESH:

It is normal if they are just doing `dd if=/dev/sda of=/dev/sdb` without specifying any other options. By default dd will use a block size of 512 bytes, so it reads 512 bytes into the buffer, flushes those 512 bytes out to the destination disk, and then proceeds to the next 512 bytes. This adds a ton of overhead, both in buffer commands at the kernel level and in disk commands being sent to both disks' controllers.

Specifying a larger block size within the bounds of your available system memory will substantially increase the speed of running a block-level dd clone.
Also be aware, rsync only copies the files it needs to. So, once your initial backup has taken place further backups will only copy files which have changed or have been newly added. It'll also stop if it encounters errors with files, unlike dd.

It will be much, much faster after the first backup than dd as you no longer need to clone/copy the whole disk.

True, if all you are trying to do is a file-level backup and not an actual clone/replica then rsync is likely a better solution in most cases.

E.g. if you have said disk partitioned with a single full-disk partition and one filesystem mounted at something like /steamlibrary, and all you want to do is back up those files periodically, then rsync is probably the better option. If you are trying to make a replica of your entire OS on a bootable disk with multiple partitions and filesystems, dd will likely be the better option unless you are comfortable rebuilding and fixing the things that an rsync will not capture in that scenario (grub/bootloader, fstab, /etc/default/ configurations, etc.).

Another thing to note: dd produces an actual block-level replica, which includes the disk's and partitions' GUIDs and the filesystem UUID(s). As a result you can't leave both the source and destination disks attached during boot, because both disks will have exactly the same identifiers; the same goes for anything that parses fstab and/or mounts by ID, since the filesystems on both disks will be identical, metadata and identifiers included.
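For the file-level case described above, a sketch of the rsync approach (the /steamlibrary and backup paths are placeholders; note the trailing slash on the source, which copies the directory's contents rather than the directory itself):

```shell
# Incremental file-level backup with rsync (paths are placeholders):
#   rsync -aHX --delete /steamlibrary/ /mnt/backup/steamlibrary/
#
#   -a        archive mode: recurse, preserve permissions/times/symlinks
#   -H        preserve hard links (not implied by -a)
#   -X        preserve extended attributes
#   --delete  drop files from the backup that were deleted in the source
#   add -n (--dry-run) first to preview what would change
#
# Safe demo on throwaway directories:
mkdir -p demo_src && echo "hello" > demo_src/a.txt
rsync -a demo_src/ demo_dst/      # first run copies everything
rsync -a demo_src/ demo_dst/      # second run transfers nothing new
cat demo_dst/a.txt                # prints "hello"
rm -rf demo_src demo_dst
```

After the initial run, only changed or new files are transferred, which is why repeat rsync backups are so much faster than re-running a full dd clone.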
76561198285398721 12 Apr at 3:00 a.m.
Originally posted by PopinFRESH:
Originally posted by tyl0413:
...

Takes a few days, it's fine. 30hrs for 4TB is definitely not normal though; I guess it could be because you have a ton of tiny files, use some crap software, or have an SMR HDD if it takes that long.

It is normal if they are just doing `dd if=/dev/sda of=/dev/sdb` without specifying any other options. By default dd will use a block size of 512 bytes, so it reads 512 bytes into the buffer, flushes those 512 bytes out to the destination disk, and then proceeds to the next 512 bytes. This adds a ton of overhead, both in buffer commands at the kernel level and in disk commands being sent to both disks' controllers.

Specifying a larger block size within the bounds of your available system memory will substantially increase the speed of running a block-level dd clone.
indeed that's what i've been doing
i was going to use rescuezilla instead for its clone function but had technical problems, not sure if that would have also been faster than dd ¯\_(ツ)_/¯