I’ve solved the problem, and I’d like to share the process with you. It will be lengthy, but I hope it helps someone who needs it. I always worked on a forensic image called “image.dd”, made with TestDisk.
After the move operation was interrupted, the disk was laid out like this:
[Disk map: sector 0 ... 2048 ... ~1,900 million (~920 GiB in total), with region “C” appearing twice; the positions “T” and “U” used below were also marked on this map.]
Fortunately, there were some files on the HDD with recognizable data inside, such as XML or text files whose content and filename I both knew.
The first thing I did was locate the $MFT, near the beginning of the raw disk. It was at sector 2064, very close to sector 2048, a very common starting point for the main partition. After that, I located the entry for a known filename, read its data attribute and, after some hex-to-dec and cluster-to-sector calculations, found the theoretical position of the file. Let’s suppose this position stored in the $MFT was “T” on the map described above.
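The cluster-to-sector conversion can be sketched in the shell like this (the cluster number and geometry below are hypothetical examples, not the actual values from my disk):

```shell
# Hypothetical values: a cluster number read from a $MFT data run,
# 8 sectors per cluster (4 KiB clusters, 512-byte sectors),
# and the partition starting at sector 2048.
CLUSTER=$(( 0x4E20AF ))   # hex-to-dec conversion done by the shell
START=2048                # partition start sector (assumed)
SPC=8                     # sectors per cluster (assumed)
echo $(( START + CLUSTER * SPC ))   # prints 40963448
```

The same arithmetic, done by hand with a hex calculator, is what gave me the theoretical sector “T”.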
I used HxD to inspect the content at “T” and found nothing there, so I performed a search from that sector onward for the known data inside the file. After a few seconds, HxD located the beginning of the file about 1.7 million sectors beyond “T”. Let’s say this real position was “U” on the same map.
With the theoretical location from the $MFT entry (“T”) and the actual position of the file (“U”), I calculated the size of the leftward shift that GParted had been performing before the USB failure: 1,716,224 sectors, equivalent to exactly 838 MiB. It was reassuring to see a whole number of mebibytes.
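The sector-to-MiB conversion is easy to verify in the shell (1 MiB = 1,048,576 bytes; a zero remainder confirms the shift is aligned to a MiB boundary):

```shell
# Convert the measured shift from 512-byte sectors to MiB.
OFFSET_SECTORS=1716224
echo $(( OFFSET_SECTORS * 512 ))            # bytes: 878706688
echo $(( OFFSET_SECTORS * 512 % 1048576 ))  # remainder: 0, so an exact MiB count
echo $(( OFFSET_SECTORS * 512 / 1048576 ))  # MiB: 838
```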
I was unsure how much data had already been moved to the left, so, with a little loop in a Linux shell, I extracted the first 16 bytes of every 1 MiB block, starting at an offset of 10 GiB and finishing at 50 GiB. The code I used was something like:
for k in $(seq 671088640 65536 3355443200); do
    dd if=image.dd bs=16 skip=$k count=1 2>/dev/null | hexdump -C
done > samples.txt
The output was a text file, which I decided to analyze in Excel (I could also have done it in MySQL). I compared each sample with the one 838 cells ahead. Quickly I realized that the first duplicated sector was 41,231,296 and that, as expected, there were 838 cells of duplicated data in Excel. The sectors corresponding to those cells were in the “C” zone of the map above: duplicated indeed.
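The same comparison I did in Excel could also be scripted. Here is a sketch in awk; “samples.txt” is a hypothetical file holding one sample per line, and the approach is naive, since runs of identical blocks (e.g. zero-filled ones) would also match:

```shell
# Find the first sample line that equals the line 838 positions ahead,
# using a 838-slot ring buffer keyed by NR modulo 838.
awk 'NR > 838 && $0 == prev[NR % 838] { print "first match at line", NR - 838; exit }
     { prev[NR % 838] = $0 }' samples.txt
```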
Using HxD again, I deleted 1,716,224 sectors starting at sector 41,231,296; to be clear, I deleted one of the two “C” regions. I saved the changes, and the program took about 3 hours, because it had to rewrite about 930 GiB of data.
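For those without HxD, the same sector deletion can be done with dd (a sketch under my case’s numbers; “fixed.dd” is a placeholder output name):

```shell
# Drop the duplicated region from the image: copy everything before
# the duplicate, then append everything after it.
START=41231296   # first duplicated sector
LEN=1716224      # number of duplicated sectors to remove
dd if=image.dd of=fixed.dd bs=512 count=$START
dd if=image.dd bs=512 skip=$(( START + LEN )) >> fixed.dd
```

On a near-1 TB image this also takes hours; a larger block size (with the skip/count arithmetic adjusted to match) would speed it up.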
The structure now had no duplicated sectors:
[Disk map after the fix: sector 0 ... 2048 ... ~1,900 million (920 GiB), with no duplicated region.]
Finally, I mounted the image in Windows using OSFMount in writable (write-cache) mode, and after some verifications performed automatically by Windows 10, the partition was there, 100% readable.
I promised myself: “always, but always, make a backup before any risky action”.