1 (edited by gedakc 2016-05-04 22:30:25)

Topic: [SOLVED] block size option for dd based operations

I could be completely wrong here, so I'm sorry if this is a stupid post.

I'm not sure if my understanding of hard-disk buffering is correct, but I believe that the larger the buffer, the faster one can read/write large amounts of data.  That being said, using small block sizes for dd-based operations would (I think) cause things to run slower.  Given that, an option to make the block size larger could increase the speed significantly.
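The effect described above is easy to measure for yourself. Below is a minimal Python sketch (not anything GParted ships) that times a chunked copy at a given block size; the file names passed in are whatever you choose to test with:

```python
import time

def time_copy(src, dst, block_size):
    """Copy src to dst in block_size chunks and return elapsed seconds."""
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(block_size)
            if not chunk:
                break
            fout.write(chunk)
    return time.perf_counter() - start
```

Running this on a large file with, say, block sizes of 512 bytes and 1 MiB will typically show the larger block size winning, because each read/write pair is a system call and small blocks mean many more of them.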

thanks

2

Re: [SOLVED] block size option for dd based operations

We don't use dd anymore, but every time we do massive read/write operations we run some benchmarking to determine an optimal block size. See http://cvs.gnome.org/viewcvs/gparted/sr … iew=markup (grep for 'finding optimal') for the algorithm.
If you see any room for improvement in this (quite basic) algorithm, I'm all ears.

3 (edited by gedakc 2008-08-07 04:33:12)

Re: [SOLVED] block size option for dd based operations

A problem was discovered in the algorithm that attempts to choose an optimum block size for moving or copying partitions.

The new, enhanced algorithm benchmarks the copy time for every candidate block size, and then selects the block size with the smallest measured copy time.

The previous algorithm would start with a small block size, time a test copy, and then try the next larger block size.  As soon as the next larger block size took longer than the previous one, the algorithm would stop and declare the previous block size the optimal size for the copy operation.  In theory this appears to be a good algorithm, but due to anomalies in the benchmark times it would stop too early and wrongly favour smaller block sizes.
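The difference between the two approaches can be sketched as follows. This is a minimal Python illustration, not GParted's actual C++ code: the cost model, the function names, and the single anomalous measurement at block size 8 are all invented for the example.

```python
def simulated_time(block_size):
    # Hypothetical cost model (not real timings): small blocks pay
    # per-call overhead, very large blocks pay a linear penalty.
    return 1.0 / block_size + 0.001 * block_size

def spiky_time(block_size):
    # Same model plus one anomalous slow measurement, the kind of
    # benchmark noise described above.
    t = simulated_time(block_size)
    return t + 1.0 if block_size == 8 else t

def greedy_optimal(sizes, time_fn):
    """Old algorithm: stop as soon as the next larger size is slower."""
    best, best_t = sizes[0], time_fn(sizes[0])
    for s in sizes[1:]:
        t = time_fn(s)
        if t >= best_t:
            return best  # declare the previous size optimal
        best, best_t = s, t
    return best

def full_scan_optimal(sizes, time_fn):
    """New algorithm: benchmark every candidate, keep the fastest."""
    return min(sizes, key=time_fn)

sizes = [2 ** i for i in range(8)]  # candidate block sizes 1 .. 128
print(greedy_optimal(sizes, spiky_time))     # the anomaly makes it stop at 4
print(full_scan_optimal(sizes, spiky_time))  # full scan still finds 32
```

One anomalous timing makes the greedy version give up several steps before the true optimum, while the full scan shrugs it off, which matches the behaviour the post describes.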

If you wish to review the algorithm, see the following URL and search for "finding optimal":
http://svn.gnome.org/viewvc/gparted/tru … iew=markup

The improved algorithm will be available in the next release of GParted (0.3.9).