March 31, 2020

Here's what came out when a trace to apod.nasa.gov finally went through:

tracert apod.nasa.gov

Tracing route to apod.nasa.gov [129.164.179.22]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms *** (home to ISP)
2 <1 ms <1 ms <1 ms *** (home to ISP)
3 6 ms 3 ms 3 ms *** (home to ISP)
4 3 ms 4 ms 3 ms *** (home to ISP)
5 7 ms 7 ms 7 ms *** (home to ISP)
6 3 ms 3 ms 4 ms 210.130.155.177
7 3 ms 5 ms 6 ms ngy007bb01.IIJ.Net [58.138.108.169]
8 122 ms 123 ms 123 ms sjc002bb12.IIJ.Net [58.138.80.45]
9 * * * Request timed out.
10 * * * Request timed out.
11 * * * Request timed out.
12 * * * Request timed out.
13 * * * Request timed out.
14 182 ms 181 ms 182 ms cn-1-49-fw-reth0.gsfc.nasa.gov [198.119.58.6]
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 186 ms 190 ms 190 ms 129.164.141.2
19 193 ms 199 ms 197 ms antwrp.gsfc.nasa.gov [129.164.179.22]

Trace complete.
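For reference, roughly the same check from Linux (for example the KNOPPIX environment in the posts below) should be possible with traceroute or mtr, if they happen to be installed — just a sketch:

traceroute apod.nasa.gov
mtr --report apod.nasa.gov    # summary view instead of the interactive screen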


@2020/03/31 22:09 | Comment(0) | Diary

March 29, 2020

NASA's APOD is (temporarily) down — first time in a while...

https://apod.nasa.gov/apod/astropix.html

Can't display the NASA page (APOD): NN Space BLOG-NN空間ブログ

2020-03-29_231820.png
@2020/03/29 23:18 | Comment(0) | Diary

March 28, 2020

Regza HDD Copy by xfsdump and xfsrestore

Once I had swapped in a dock that lets the REGZA properly recognize a 4 TB HDD, the whole job came down to this: initialize the drive on the REGZA, detach it, connect it to the PC, and run the xfsdump and xfsrestore below under KNOPPIX. The copy completed without a hitch.
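One step the log below takes for granted is that both XFS partitions are already mounted. KNOPPIX had mounted mine at /media/sdb1 (source) and /media/sdc1 (destination); if yours aren't mounted yet, something along these lines should work — the device names are just what my setup happened to use, so check yours with lsblk first:

sudo mkdir -p /media/sdb1 /media/sdc1
sudo mount -t xfs /dev/sdb1 /media/sdb1    # source: the old REGZA disk
sudo mount -t xfs /dev/sdc1 /media/sdc1    # destination: the new 4 TB disk, already initialized by the REGZA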

Contrary to what you often read online, there seems to be no need to clone the UUID (xfsdump and xfsrestore don't copy the UUID anyway).
The REGZA simply recognized the new disk as the original HDD on its own (at least on my 37Z7000).
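If you want to see the UUIDs for yourself, xfs_admin can print them (and could set one, if that were ever actually needed) — a quick sketch reusing the /dev/sdb1 and /dev/sdc1 names from the log below; adjust to whatever lsblk shows on your machine:

sudo xfs_admin -u /dev/sdb1    # UUID of the source filesystem
sudo xfs_admin -u /dev/sdc1    # UUID of the copy - it differs, and the REGZA didn't care
# only if a UUID clone were really required (it wasn't here), and only on an unmounted filesystem:
# sudo xfs_admin -U <source-uuid> /dev/sdc1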

Until recently I had been copying the data over eSATA with the old and new HDDs sitting together in a single dock, and that took more than 8 hours.
This time I used the dock (USB 3.0) I bought new for the REGZA, connected the drives to the PC separately, and the 1.5 TB of recordings finished copying in about 3 hours.
That works out to roughly 7.87 GB per minute, or about 130 MB/sec — a pretty decent speed.
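A quick sanity check of those figures, treating 1.5 TB as 1500 GB and using the 11435 seconds reported at the end of the log below (rough numbers, since 1.5 TB is itself rounded):

awk 'BEGIN { printf "%.2f GB/min, %.0f MB/s\n", 1500/(11435/60), 1500000/11435 }'
# prints roughly: 7.87 GB/min, 131 MB/s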

After attaching the copied disk to the REGZA there were a few glitches at first when previewing recorded programs, but those cleared up after a while. So: data migration complete, no real problems.

The conclusion is that the procedure hasn't changed in 10 years (well, it is a product that's more than 10 years old).
http://nnspaces.sblo.jp/article/42316179.html

What ate up my time this round: ditching the 10-year-old KNOPPIX for the latest one (the xfsdump command has to be installed afterwards), and tracking down why the 4 TB drive wasn't being recognized. If Japanese input worked normally on KNOPPIX it would be the perfect environment. I'd also like to try the same thing on Ubuntu someday.
That's all.


A record of this run's copy command:

knoppix@Microknoppix:~$ sudo xfsdump -J - /media/sdb1 |sudo xfsrestore -J -p 180 - /media/sdc1
xfsrestore: using file dump (drive_simple) strategy
xfsdump: using file dump (drive_simple) strategy
xfsrestore: version 3.1.6 (dump format 3.0)
xfsdump: version 3.1.6 (dump format 3.0)
xfsdump: level 0 dump of Microknoppix:/media/sdb1
xfsdump: dump date: Sat Mar 28 19:54:02 2020
xfsdump: session id: 7f4f5930-991f-40bd-80f0-31449a680eb7
xfsdump: session label: ""
xfsrestore: searching media for dump
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 1603201556608 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: Microknoppix
xfsrestore: mount point: /media/sdb1
xfsrestore: volume: /dev/sdb1
xfsrestore: session time: Sat Mar 28 19:54:02 2020
xfsrestore: level: 0
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: b87b48ac-6d33-4891-b53d-3cf53d224997
xfsrestore: session id: 7f4f5930-991f-40bd-80f0-31449a680eb7
xfsrestore: media id: fef30ede-94c2-4a10-aa17-2eeb2c8337dd
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 8 directories and 1945 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: status at 19:57:02: 37/1938 files restored, 1.7% complete, 180 seconds elapsed
xfsrestore: status at 20:00:02: 42/1938 files restored, 3.6% complete, 360 seconds elapsed
xfsrestore: status at 20:03:02: 48/1938 files restored, 5.0% complete, 540 seconds elapsed
xfsrestore: status at 20:06:02: 76/1938 files restored, 7.5% complete, 720 seconds elapsed
xfsrestore: status at 20:09:02: 84/1938 files restored, 8.7% complete, 900 seconds elapsed
xfsrestore: status at 20:12:02: 106/1938 files restored, 10.5% complete, 1080 seconds elapsed
xfsrestore: status at 20:15:02: 131/1938 files restored, 12.4% complete, 1260 seconds elapsed
xfsrestore: status at 20:18:02: 154/1938 files restored, 13.9% complete, 1440 seconds elapsed
xfsrestore: status at 20:21:02: 160/1938 files restored, 15.5% complete, 1620 seconds elapsed
xfsrestore: status at 20:24:02: 170/1938 files restored, 17.3% complete, 1800 seconds elapsed
xfsrestore: status at 20:27:02: 189/1938 files restored, 18.9% complete, 1980 seconds elapsed
xfsrestore: status at 20:30:02: 211/1938 files restored, 20.6% complete, 2160 seconds elapsed
xfsrestore: status at 20:33:02: 233/1938 files restored, 23.1% complete, 2340 seconds elapsed
xfsrestore: status at 20:36:02: 252/1938 files restored, 23.9% complete, 2520 seconds elapsed
xfsrestore: status at 20:39:02: 287/1938 files restored, 25.5% complete, 2700 seconds elapsed
xfsrestore: status at 20:42:02: 307/1938 files restored, 27.1% complete, 2880 seconds elapsed
xfsrestore: status at 20:45:02: 324/1938 files restored, 28.5% complete, 3060 seconds elapsed
xfsrestore: status at 20:48:02: 368/1938 files restored, 30.5% complete, 3240 seconds elapsed
xfsrestore: status at 20:51:02: 423/1938 files restored, 32.0% complete, 3420 seconds elapsed
xfsrestore: status at 20:54:02: 451/1938 files restored, 34.2% complete, 3600 seconds elapsed
xfsrestore: status at 20:57:02: 463/1938 files restored, 35.3% complete, 3780 seconds elapsed
xfsrestore: status at 21:00:02: 475/1938 files restored, 36.6% complete, 3960 seconds elapsed
xfsrestore: status at 21:03:02: 501/1938 files restored, 38.8% complete, 4140 seconds elapsed
xfsrestore: status at 21:06:02: 511/1938 files restored, 39.8% complete, 4320 seconds elapsed
xfsrestore: status at 21:09:02: 536/1938 files restored, 42.3% complete, 4500 seconds elapsed
xfsrestore: status at 21:12:02: 546/1938 files restored, 43.0% complete, 4680 seconds elapsed
xfsrestore: status at 21:15:02: 584/1938 files restored, 44.5% complete, 4860 seconds elapsed
xfsrestore: status at 21:18:02: 627/1938 files restored, 45.8% complete, 5040 seconds elapsed
xfsrestore: status at 21:21:02: 660/1938 files restored, 47.6% complete, 5220 seconds elapsed
xfsrestore: status at 21:24:02: 670/1938 files restored, 49.1% complete, 5400 seconds elapsed
xfsrestore: status at 21:27:02: 697/1938 files restored, 50.3% complete, 5580 seconds elapsed
xfsrestore: status at 21:30:02: 714/1938 files restored, 51.6% complete, 5760 seconds elapsed
xfsrestore: status at 21:33:02: 721/1938 files restored, 53.0% complete, 5940 seconds elapsed
xfsrestore: status at 21:36:02: 752/1938 files restored, 54.5% complete, 6120 seconds elapsed
xfsrestore: status at 21:39:02: 773/1938 files restored, 56.2% complete, 6300 seconds elapsed
xfsrestore: status at 21:42:02: 806/1938 files restored, 57.9% complete, 6480 seconds elapsed
xfsrestore: status at 21:45:02: 857/1938 files restored, 58.9% complete, 6660 seconds elapsed
xfsrestore: status at 21:48:02: 888/1938 files restored, 60.3% complete, 6840 seconds elapsed
xfsrestore: status at 21:51:02: 901/1938 files restored, 61.8% complete, 7020 seconds elapsed
xfsrestore: status at 21:54:02: 940/1938 files restored, 63.1% complete, 7200 seconds elapsed
xfsrestore: status at 21:57:02: 962/1938 files restored, 64.7% complete, 7380 seconds elapsed
xfsrestore: status at 22:00:02: 979/1938 files restored, 66.0% complete, 7560 seconds elapsed
xfsrestore: status at 22:03:02: 1010/1938 files restored, 67.5% complete, 7740 seconds elapsed
xfsrestore: status at 22:06:02: 1059/1938 files restored, 69.5% complete, 7920 seconds elapsed
xfsrestore: status at 22:09:02: 1069/1938 files restored, 70.6% complete, 8100 seconds elapsed
xfsrestore: status at 22:12:02: 1099/1938 files restored, 72.3% complete, 8280 seconds elapsed
xfsrestore: status at 22:15:02: 1113/1938 files restored, 74.4% complete, 8460 seconds elapsed
xfsrestore: status at 22:18:02: 1130/1938 files restored, 75.8% complete, 8640 seconds elapsed
xfsrestore: status at 22:21:02: 1161/1938 files restored, 77.8% complete, 8820 seconds elapsed
xfsrestore: status at 22:24:02: 1191/1938 files restored, 79.3% complete, 9000 seconds elapsed
xfsrestore: status at 22:27:02: 1214/1938 files restored, 81.3% complete, 9180 seconds elapsed
xfsrestore: status at 22:30:02: 1255/1938 files restored, 82.8% complete, 9360 seconds elapsed
xfsrestore: status at 22:33:02: 1279/1938 files restored, 84.4% complete, 9540 seconds elapsed
xfsrestore: status at 22:36:02: 1292/1938 files restored, 85.8% complete, 9720 seconds elapsed
xfsrestore: status at 22:39:02: 1325/1938 files restored, 87.4% complete, 9900 seconds elapsed
xfsrestore: status at 22:42:02: 1361/1938 files restored, 88.9% complete, 10080 seconds elapsed
xfsrestore: status at 22:45:02: 1408/1938 files restored, 90.6% complete, 10260 seconds elapsed
xfsrestore: status at 22:48:02: 1428/1938 files restored, 92.0% complete, 10440 seconds elapsed
xfsrestore: status at 22:51:02: 1462/1938 files restored, 93.5% complete, 10620 seconds elapsed
xfsrestore: status at 22:54:02: 1480/1938 files restored, 94.5% complete, 10800 seconds elapsed
xfsrestore: status at 22:57:02: 1504/1938 files restored, 96.4% complete, 10980 seconds elapsed
xfsrestore: status at 23:00:02: 1522/1938 files restored, 98.2% complete, 11160 seconds elapsed
xfsrestore: status at 23:03:02: 1539/1938 files restored, 99.9% complete, 11340 seconds elapsed
xfsdump: ending media file
xfsdump: media file size 1603639490016 bytes
xfsdump: dump size (non-dir files) : 1603611349248 bytes
xfsdump: dump complete: 11435 seconds elapsed
xfsdump: Dump Status: SUCCESS
xfsrestore: restore complete: 11435 seconds elapsed
xfsrestore: Restore Status: SUCCESS


knoppix@Microknoppix:~$ df -ht xfs
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 3.7T 1.5T 2.2T 41% /media/sdc1
/dev/sdb1 1.9T 1.5T 370G 81% /media/sdb1


knoppix@Microknoppix:~$ lsblk -fip
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
/dev/sdb
`-/dev/sdb1 xfs ***-6d33-**-*-3cf53***4997 369.8G 80% /media/sdb
/dev/sdc
`-/dev/sdc1 xfs ***-f139-**-*-b9ab0***b76c 2.2T 40% /media/sdc


If you need to install xfsdump:
http://nnspaces.sblo.jp/article/187305442.html
@2020/03/28 23:06 | Comment(0) | PC Setup Struggles

parted -l on my PC

knoppix@Microknoppix:~$ sudo parted -l
Model: ATA WDC WD6003FZBX-0 (scsi)
Disk /dev/sda: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 17.4kB 16.8MB 16.8MB Microsoft reserved partition msftres
2 16.8MB 1702GB 1702GB ntfs Basic data partition msftdata
3 1702GB 2017GB 315GB ntfs Basic data partition msftdata
4 2017GB 3589GB 1573GB ntfs Basic data partition msftdata
5 5896GB 6001GB 105GB ntfs Basic data partition msftdata


Model: ATA TOSHIBA MD04ACA2 (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 17.4kB 2000GB 2000GB xfs primary msftdata


Model: Netac OnlyDisk (scsi)
Disk /dev/sde: 1995MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 16.4kB 1995MB 1995MB primary fat32 boot, lba




Model: ASMT 2115 (scsi)
Disk /dev/sdc: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 17.4kB 4001GB 4001GB xfs primary msftdata


Model: Kingmax USB2.0 FlashDisk (scsi)
Disk /dev/sdd: 7986MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 1049kB 7986MB 7985MB primary fat32 boot, lba


Warning: Unable to open /dev/cloop2 read-write (Read-only file system).
/dev/cloop2 has been opened read-only.
Error: /dev/cloop2: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/cloop2: 152MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: WDS500G3X0C-00SJG0 (nvme)
Disk /dev/nvme0n1: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 1049kB 524MB 523MB ntfs Basic data partition hidden, diag
2 524MB 628MB 104MB fat32 EFI system partition boot, esp
3 628MB 645MB 16.8MB Microsoft reserved partition msftres
4 645MB 500GB 499GB ntfs Basic data partition msftdata


Warning: Unable to open /dev/cloop0 read-write (Read-only file system).
/dev/cloop0 has been opened read-only.
Error: /dev/cloop0: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/cloop0: 9686MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Warning: Unable to open /dev/cloop1 read-write (Read-only file system).
/dev/cloop1 has been opened read-only.
Error: /dev/cloop1: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/cloop1: 2317MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Samsung SSD 970 EVO Plus 1TB (nvme)
Disk /dev/nvme1n1: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 17.4kB 16.8MB 16.8MB Microsoft reserved partition msftres
2 16.8MB 900GB 900GB ntfs Basic data partition msftdata


Model: Unknown (unknown)
Disk /dev/zram0: 4295MB
Sector size (logical/physical): 4096B/4096B
Partition Table: loop
Disk Flags:

Number Start End Size File system Flags
1 0.00B 4295MB 4295MB linux-swap(v1)

Model: Patriot Memory (scsi)
Disk /dev/sde: 15.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 4129kB 15.5GB 15.5GB primary ntfs

@2020/03/28 22:13 | Comment(0) | PC Setup Struggles

The reason the Regza HDD copy failed (the 4 TB capacity wasn't recognized) appears to be that the HDD cradle was an old model

Even though I had connected the 4 TB HDD to the Regza, only 1.8 TB was recognized, so I kept reconnecting it to the PC, expanding the partition, and so on — which is why, even when the data copy on the PC succeeded, the Regza still wouldn't recognize the disk and I was stuck.

When I connected the drive to the newer cradle on the PC side, it was recognized with no problem.

Here is the HDD recognition screen from when I detached the old cradle from the Regza, brought it over to the PC, and plugged it in.
Sure enough, it only recognizes up to 1.8 TB — that explains everything.
2020-03-28_095550.png

The newer cradle: full capacity recognized.
2020-03-28_100147.png

So the root cause was that the HDD cradle I had set aside for the Regza (one of those docks that takes a bare drive) was old and its capacity detection was broken.
Which means I need to get a new cradle for the Regza...
Verification continues.
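A quick way to check whether a dock actually exposes a drive's full capacity, before formatting anything, is to compare what the kernel reports with the drive's label — a minimal sketch (the parted -l post above shows exactly this kind of output):

sudo parted -l                    # model and total size for each disk
lsblk -b -d -o NAME,SIZE,MODEL    # whole disks only, size in bytes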

⇒ With a new bare-drive dock (HDD cradle), the REGZA recognized the 4 TB drive without issue, and the subsequent copy finished faster than any previous attempt.
Regza HDD Copy by xfsdump and xfsrestore: NN Space BLOG-NN空間ブログ
@2020/03/28 10:09 | Comment(0) | PC Setup Struggles

March 25, 2020

Regza HDD Easy Repair * LOG / on KNOPPIX 8.6 english

http://www.netbuffalo.net/regza/RgzHddEasyRepair/download.html


knoppix@Microknoppix:~/Downloads$ sudo dpkg -i regza-hdd-easy-repair-1.0.deb

Target : New DISK

Execute > Accessories > Regza HDD Easy Repair ...

Phase 1 - find and verify superblock...
- reporting progress in intervals of 15 minutes
- block cache size set to 196272 entries
Phase 2 - using internal log
- zero log...
zero_log: head block 2 tail block 2
- scan filesystem freespace and inode maps...
sb_fdblocks 589634270, counted 589650654
- 22:03:16: scanning filesystem freespace - 72 of 72 allocation groups done
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- 22:03:16: scanning agi unlinked lists - 72 of 72 allocation groups done
- process known inodes and perform inode discovery...
- agno = 45
- agno = 15
- agno = 30
- agno = 60
- agno = 46
- agno = 0
- agno = 16
- agno = 47
- agno = 61
- agno = 31
- agno = 17
- agno = 32
- agno = 18
- agno = 62
- agno = 48
- agno = 33
- agno = 49
- agno = 63
- agno = 34
- agno = 64
- agno = 35
- agno = 50
- agno = 65
- agno = 36
- agno = 66
- agno = 51
- agno = 37
- agno = 67
- agno = 19
- agno = 52
- agno = 68
- agno = 20
- agno = 38
- agno = 53
- agno = 69
- agno = 21
- agno = 54
- agno = 55
- agno = 70
- agno = 56
- agno = 39
- agno = 22
- agno = 40
- agno = 71
- agno = 57
- agno = 23
- agno = 41
- agno = 24
- agno = 25
- agno = 26
- agno = 58
- agno = 59
- agno = 27
- agno = 28
- agno = 29
- agno = 42
- agno = 43
- agno = 44
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- 22:03:16: process known inodes and inode discovery - 2048 of 2048 inodes done
- process newly discovered inodes...
- 22:03:16: process newly discovered inodes - 72 of 72 allocation groups done
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- 22:03:16: setting up duplicate extent list - 72 of 72 allocation groups done
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 15
- agno = 17
- agno = 14
- agno = 16
- agno = 19
- agno = 20
- agno = 18
- agno = 23
- agno = 24
- agno = 21
- agno = 26
- agno = 22
- agno = 27
- agno = 28
- agno = 30
- agno = 31
- agno = 33
- agno = 32
- agno = 34
- agno = 35
- agno = 36
- agno = 29
- agno = 37
- agno = 39
- agno = 41
- agno = 38
- agno = 43
- agno = 25
- agno = 1
- agno = 42
- agno = 45
- agno = 48
- agno = 44
- agno = 49
- agno = 50
- agno = 51
- agno = 52
- agno = 46
- agno = 55
- agno = 53
- agno = 56
- agno = 57
- agno = 58
- agno = 59
- agno = 47
- agno = 40
- agno = 60
- agno = 62
- agno = 66
- agno = 67
- agno = 63
- agno = 69
- agno = 61
- agno = 70
- agno = 54
- agno = 68
- agno = 64
- agno = 65
- agno = 71
- 22:03:16: check for inodes claiming duplicate blocks - 2048 of 2048 inodes done
Phase 5 - rebuild AG headers and trees...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- agno = 33
- agno = 34
- agno = 35
- agno = 36
- agno = 37
- agno = 38
- agno = 39
- agno = 40
- agno = 41
- agno = 42
- agno = 43
- agno = 44
- agno = 45
- agno = 46
- agno = 47
- agno = 48
- agno = 49
- agno = 50
- agno = 51
- agno = 52
- agno = 53
- agno = 54
- agno = 55
- agno = 56
- agno = 57
- agno = 58
- agno = 59
- agno = 60
- agno = 61
- agno = 62
- agno = 63
- agno = 64
- agno = 65
- agno = 66
- agno = 67
- agno = 68
- agno = 69
- agno = 70
- agno = 71
- 22:03:16: rebuild AG headers and trees - 72 of 72 allocation groups done
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- agno = 33
- agno = 34
- agno = 35
- agno = 36
- agno = 37
- agno = 38
- agno = 39
- agno = 40
- agno = 41
- agno = 42
- agno = 43
- agno = 44
- agno = 45
- agno = 46
- agno = 47
- agno = 48
- agno = 49
- agno = 50
- agno = 51
- agno = 52
- agno = 53
- agno = 54
- agno = 55
- agno = 56
- agno = 57
- agno = 58
- agno = 59
- agno = 60
- agno = 61
- agno = 62
- agno = 63
- agno = 64
- agno = 65
- agno = 66
- agno = 67
- agno = 68
- agno = 69
- agno = 70
- agno = 71
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
- 22:03:16: verify and correct link counts - 72 of 72 allocation groups done

XFS_REPAIR Summary Wed Mar 25 22:03:17 2020

Phase Start End Duration
Phase 1: 03/25 22:03:14 03/25 22:03:14
Phase 2: 03/25 22:03:14 03/25 22:03:16 2 seconds
Phase 3: 03/25 22:03:16 03/25 22:03:16
Phase 4: 03/25 22:03:16 03/25 22:03:16
Phase 5: 03/25 22:03:16 03/25 22:03:16
Phase 6: 03/25 22:03:16 03/25 22:03:16
Phase 7: 03/25 22:03:16 03/25 22:03:16

Total run time: 2 seconds
done
DO_REPAIR_SUCCESS


-------------------------------------------------------

TARGET Original Disk
Phase 1 - find and verify superblock...
- reporting progress in intervals of 15 minutes
- block cache size set to 226080 entries
Phase 2 - using internal log
- zero log...
zero_log: head block 122271 tail block 122267
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
- scan filesystem freespace and inode maps...
sb_fdblocks 96515157, counted 96547925
- 22:06:53: scanning filesystem freespace - 32 of 32 allocation groups done
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- 22:06:53: scanning agi unlinked lists - 32 of 32 allocation groups done
- process known inodes and perform inode discovery...
- agno = 30
- agno = 15
- agno = 31
- agno = 0
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- 22:06:56: process known inodes and inode discovery - 2368 of 2368 inodes done
- process newly discovered inodes...
- 22:06:56: process newly discovered inodes - 32 of 32 allocation groups done
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- 22:06:56: setting up duplicate extent list - 32 of 32 allocation groups done
- check for inodes claiming duplicate blocks...
- agno = 1
- agno = 3
- agno = 4
- agno = 2
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 0
- 22:06:56: check for inodes claiming duplicate blocks - 2368 of 2368 inodes done
Phase 5 - rebuild AG headers and trees...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- 22:06:56: rebuild AG headers and trees - 32 of 32 allocation groups done
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
- 22:06:57: verify and correct link counts - 32 of 32 allocation groups done

XFS_REPAIR Summary Wed Mar 25 22:06:57 2020

Phase Start End Duration
Phase 1: 03/25 22:06:51 03/25 22:06:52 1 second
Phase 2: 03/25 22:06:52 03/25 22:06:53 1 second
Phase 3: 03/25 22:06:53 03/25 22:06:56 3 seconds
Phase 4: 03/25 22:06:56 03/25 22:06:56
Phase 5: 03/25 22:06:56 03/25 22:06:56
Phase 6: 03/25 22:06:56 03/25 22:06:57 1 second
Phase 7: 03/25 22:06:57 03/25 22:06:57

Total run time: 6 seconds
done
DO_REPAIR_SUCCESS

The original source disk had gotten corrupted along with everything else and the REGZA no longer recognized it, which had me stuck.

This Repair tool brought it back. And the tool worked on KNOPPIX, not just Ubuntu.
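Judging from the logs above, the tool essentially runs xfs_repair on the XFS partition (with -L in the second run, per the ALERT message). Doing the same thing by hand would presumably look something like this — /dev/sdb1 is only an assumption, use whatever lsblk shows for the REGZA disk, and note that -L throws away the metadata log, so it's a last resort:

sudo umount /dev/sdb1            # xfs_repair refuses to run on a mounted filesystem
sudo xfs_repair /dev/sdb1        # try a normal repair first
# sudo xfs_repair -L /dev/sdb1   # only if it insists the log must be zeroed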

@2020/03/25 22:08 | Comment(0) | PC Setup Struggles

March 24, 2020

Regza HDD copy / on KNOPPIX 8.6 english

http://www.4682.info/easy

knoppix@Microknoppix:~/Downloads$ cd /home/knoppix/Downloads


knoppix@Microknoppix:~/Downloads$ sudo dpkg -i regza-hdd-easy-copy-x64-1.1.deb
dpkg: error processing archive regza-hdd-easy-copy-x64-1.1.deb (--install):
package architecture (amd64) does not match system (i386)
Errors were encountered while processing:
regza-hdd-easy-copy-x64-1.1.deb
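That error just means the x64 package was downloaded onto a 32-bit KNOPPIX boot. A quick way to check the architecture before grabbing a .deb:

dpkg --print-architecture    # prints i386 here, so the non-x64 package is the one to use
uname -m                     # kernel architecture, for good measure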

knoppix@Microknoppix:~/Downloads$ sudo dpkg -i regza-hdd-easy-copy-1.1.deb

............ done!

MENU > Accessories > REGZA HDD Easy Copy.

... Now Processing

* In the end I don't know exactly where it failed, but the destination disk was never recognized.
Even though the UUID did get copied, too...
@2020/03/24 22:51 | Comment(0) | PC Setup Struggles

Adding a swap partition on USB-booted KNOPPIX 8.6

I added a swap partition on USB-booted KNOPPIX 8.6, so here's a rough memo.

1. Using System Tools > GParted (or similar), carve out a partition and format it as a swap area.

2. Enable it with sudo swapon -a (a GParted-free alternative is sketched after the log below).

knoppix@Microknoppix:~$ free
total used free shared buff/cache available
Mem: 32858024 1011952 222636 1029164 31623436 30500884
Swap: 0 0 0

knoppix@Microknoppix:~$ sudo swapon -a

knoppix@Microknoppix:~$ free
total used free shared buff/cache available
Mem: 32858024 995364 190340 960544 31672320 30586164
Swap: 13516796 0 13516796
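For reference, the same thing can be done entirely from the terminal, without GParted — a sketch, with /dev/sdXN standing in for whichever partition you created (hypothetical name):

sudo mkswap /dev/sdXN    # write a swap signature to the new partition
sudo swapon /dev/sdXN    # enable it immediately
free                     # the Swap line should now be non-zero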

Be warned that messing with the partitions of an existing disk can make things go strange when you boot back into Windows. (It did for me.)
@2020/03/24 20:14 | Comment(0) | PC Setup Struggles

Struggled with Japanese input on the English version of KNOPPIX 8.6

http://ime.baidu.jp/type/about/onlineime.php

I searched for "web IME" and found this.

It's almost perfect, except that {見つけた} gets converted to {Found} as the first candidate.

@2020/03/24 16:22 | Comment(4) | Diary

xfsdump and restore / on KNOPPIX 8.6 english

http://fibrevillage.com/storage/668-xfs-copy-command-examples
https://www.atmarkit.co.jp/ait/articles/1804/26/news045.html

knoppix@Microknoppix:/$ date
Tue 24 Mar 2020 01:46:37 PM EDT

knoppix@Microknoppix:~$ sudo xfsdump -J - /media/sdb1 |sudo xfsrestore -J -p 256 - /media/sdc1

xfsrestore: using file dump (drive_simple) strategy
xfsdump: using file dump (drive_simple) strategy

xfsrestore: version 3.1.6 (dump format 3.0)
xfsdump: version 3.1.6 (dump format 3.0)

xfsdump: level 0 dump of Microknoppix:/media/sdb1
xfsdump: dump date: Tue Mar 24 13:44:54 2020
xfsdump: session id: 6a79b2d9-bd20-47fd-98fc-9345e0c3ec1a
xfsdump: session label: ""
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 1604803675520 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: Microknoppix
xfsrestore: mount point: /media/sdb1
xfsrestore: volume: /dev/sdb1
xfsrestore: session time: Tue Mar 24 13:44:54 2020
xfsrestore: level: 0
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: b87b48ac-6d33-4891-b53d-3cf53d224997
xfsrestore: session id: 6a79b2d9-bd20-47fd-98fc-9345e0c3ec1a
xfsrestore: media id: 43a183dc-0745-4b95-a44b-389fabc6f156
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 8 directories and 1949 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: status at 13:47:00: 8/1942 files restored, 0.5% complete, 126 seconds elapsed
xfsrestore: status at 13:48:57: 22/1942 files restored, 0.9% complete, 243 seconds elapsed
xfsrestore: status at 13:51:10: 32/1942 files restored, 1.0% complete, 376 seconds elapsed
xfsrestore: status at 13:53:04: 33/1942 files restored, 1.4% complete, 490 seconds elapsed
xfsrestore: status at 13:54:54: 38/1942 files restored, 1.8% complete, 600 seconds elapsed
xfsrestore: status at 13:56:55: 38/1942 files restored, 1.8% complete, 721 seconds elapsed

... now processing !

xfsrestore: status at 21:14:54: 1458/1942 files restored, 92.8% complete, 27000 seconds elapsed
xfsrestore: status at 21:16:54: 1478/1942 files restored, 94.3% complete, 27120 seconds elapsed
xfsrestore: status at 21:18:54: 1494/1942 files restored, 95.8% complete, 27240 seconds elapsed
xfsrestore: status at 21:20:54: 1499/1942 files restored, 96.2% complete, 27360 seconds elapsed
xfsrestore: status at 21:22:54: 1521/1942 files restored, 97.4% complete, 27480 seconds elapsed
xfsrestore: status at 21:24:54: 1530/1942 files restored, 98.6% complete, 27600 seconds elapsed
xfsrestore: status at 21:26:54: 1542/1942 files restored, 99.9% complete, 27720 seconds elapsed
xfsdump: ending media file
xfsdump: media file size 1605242145056 bytes
xfsdump: dump size (non-dir files) : 1605213975552 bytes
xfsdump: dump complete: 27786 seconds elapsed
xfsdump: Dump Status: SUCCESS
xfsrestore: restore complete: 27786 seconds elapsed
xfsrestore: Restore Status: SUCCESS

@2020/03/24 13:50 | Comment(0) | Diary

install xfsdump / on KNOPPIX 8.6 english

knoppix@Microknoppix:~$ sudo apt-get update

On a fresh KNOPPIX 8.6 the install command below doesn't work as-is, so run the update above first.
...

knoppix@Microknoppix:/media/sdc1$ sudo apt-get install xfsdump
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
quota
The following NEW packages will be installed:
xfsdump
0 upgraded, 1 newly installed, 0 to remove and 456 not upgraded.
Need to get 284 kB of archives.
After this operation, 970 kB of additional disk space will be used.
Get:1 http://ftp.de.debian.org/debian stable/main i386 xfsdump i386 3.1.6+nmu2+b1 [284 kB]
Fetched 284 kB in 2s (147 kB/s)
Selecting previously unselected package xfsdump.
(Reading database ... 479212 files and directories currently installed.)
Preparing to unpack .../xfsdump_3.1.6+nmu2+b1_i386.deb ...
Unpacking xfsdump (3.1.6+nmu2+b1) ...
Setting up xfsdump (3.1.6+nmu2+b1) ...
Processing triggers for man-db (2.8.5-2) ...
Running prelink, please wait...

// this command also installs xfsrestore

http://www.4682.info/hdd
http://www.4682.info/repair
@2020/03/24 13:37 | Comment(0) | PC Setup Struggles

lsblk command memo / on KNOPPIX 8.6 english

https://www.atmarkit.co.jp/ait/articles/1802/02/news021.html

knoppix@Microknoppix:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 5.5T 0 disk
├─sda1 8:1 0 16M 0 part
├─sda2 8:2 0 1.6T 0 part
├─sda3 8:3 0 293G 0 part
├─sda4 8:4 0 1.4T 0 part
└─sda5 8:5 0 97.7G 0 part
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part /media/sdb1 ... from here
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 2.9T 0 part /media/sdc1 ... to here
sdd 8:48 1 7.4G 0 disk
└─sdd1 8:49 1 7.4G 0 part /media/sdd1
sr0 11:0 1 1024M 0 rom
cloop0 240:0 0 9G 1 disk /KNOPPIX
cloop1 240:1 0 2.2G 1 disk /KNOPPIX1
cloop2 240:2 0 144.8M 1 disk /KNOPPIX2
zram0 253:0 0 4G 0 disk [SWAP]
nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:1 0 499M 0 part
├─nvme0n1p2 259:2 0 99M 0 part
├─nvme0n1p3 259:3 0 16M 0 part
└─nvme0n1p4 259:4 0 465.2G 0 part
nvme1n1 259:5 0 931.5G 0 disk
├─nvme1n1p1 259:6 0 16M 0 part
└─nvme1n1p2 259:7 0 838.4G 0 part


knoppix@Microknoppix:~$ uname -r
5.2.5-64
knoppix@Microknoppix:~$ xfs_copy -V
xfs_copy version 4.20.0
knoppix@Microknoppix:~$ lsblk -fip
NAME FSTYPE FSUSE% MOUNTPOINT
/dev/sda
|-/dev/sda1
|-/dev/sda2 ntfs
|-/dev/sda3 ntfs
|-/dev/sda4 ntfs
`-/dev/sda5 ntfs
/dev/sdb
`-/dev/sdb1 xfs 80% /media/sdb1
/dev/sdc
`-/dev/sdc1 xfs 0% /media/sdc1
/dev/sdd
`-/dev/sdd1 vfat 58% /media/sdd1
/dev/sr0
/dev/cloop0 100% /KNOPPIX
/dev/cloop1 100% /KNOPPIX1
/dev/cloop2 100% /KNOPPIX2
/dev/zram0 [SWAP]
/dev/nvme0n1
|-/dev/nvme0n1p1 ntfs
|-/dev/nvme0n1p2 vfat
|-/dev/nvme0n1p3
`-/dev/nvme0n1p4 ntfs
/dev/nvme1n1
|-/dev/nvme1n1p1
`-/dev/nvme1n1p2 ntfs




knoppix@Microknoppix:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 16425532 52 16425480 1% /
/dev/sdd1 7781376 4511740 3269636 58% /mnt-system
tmpfs 26284032 477244 25806788 2% /ramdisk
/dev/cloop 9459128 9459128 0 100% /KNOPPIX
/dev/cloop1 2262876 2262876 0 100% /KNOPPIX1
/dev/cloop2 148074 148074 0 100% /KNOPPIX2
unionfs 26284032 477244 25806788 2% /UNIONFS
tmpfs 20480 3548 16932 18% /run
tmpfs 10240 4 10236 1% /UNIONFS/var/lock
tmpfs 102400 84 102316 1% /UNIONFS/var/log
tmpfs 2097152 4 2097148 1% /tmp
cgroup 12 0 12 0% /sys/fs/cgroup
udev 20480 0 20480 0% /dev
tmpfs 2097152 83040 2014112 4% /dev/shm
/dev/sdb1 1953383424 1567225516 386157908 81% /media/sdb1
/dev/sdc1 1759403776 34420 1759369356 1% /media/sdc1

knoppix@Microknoppix:~$ df -ht xfs
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 1.9T 1.5T 369G 81% /media/sdb1
/dev/sdc1 1.7T 34M 1.7T 1% /media/sdc1

... did some operations ...

knoppix@Microknoppix:~$ df -ht xfs
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 1.9T 1.5T 369G 81% /media/sdb1
/dev/sdc1 3.7T 1.5T 2.2T 41% /media/sdc1


I've learned the lsblk command!!
And the lsblk -fip options too!!!
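One more variation worth noting: -o lets you pick exactly which columns to show, for example:

lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT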
@2020/03/24 12:52 | Comment(0) | PC Setup Struggles

March 16, 2020

Frequently used phrases in English business emails — the "making your point" edition

Frequently used phrases in English business emails <making your point> - Pioneer of the Star

I'd like to get to the point where I actually use these.

March 14, 2020

Time to give up on the Olympics — everyone stay home. This is bad.

As of last year I was planning to go to Tokyo during the Olympic season, with the express purpose of going to watch the people who were going to watch the Olympics.

The peak-cut strategy (herd immunity strategy): the road to hell is paved with good intentions | Medium

Where is this heading, novel coronavirus?!

The article above helps you picture concrete numbers, but whether it is literally true is not the point.

I wrote this post to chide myself for having "felt like I understood" just from the shape of the graphs, without looking at the actual numbers.

That said, hospital capacity is a number you can more or less look up, but nobody can know how the peak will actually play out from here. Even so, I think we should keep in mind that the situation is bad enough that optimism is not an option.

Yes — I'm only thinking it, though. (Which is no good.)

痛いニュース(ノ∀`): National Science Foundation — "Farewell, corona: its weak point is humidity" - livedoor Blog
@2020/03/14 22:47 | Comment(2) | Diary

March 7, 2020

Finished switching to Windows 10

I've switched every environment that was still on Windows 7 over to Windows 10.

At this point I can no longer tell which machine is which.

I stopped syncing themes through my Microsoft account.

March 6, 2020

Is everyone starting to work from home having an effect?

Call quality in web meetings has gotten worse over the past few days.
@2020/03/06 22:21 | Comment(0) | Notes and Memos

March 5, 2020

Sending files back and forth over the network

I want to pipe things over the network and transfer data between all kinds of devices!

This looks like fun.
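The classic way to do this kind of thing is tar piped through netcat — a rough sketch; the host address and port are made up, and the exact nc flags vary between netcat variants:

# on the receiving machine (say 192.168.0.10)
nc -l -p 9000 | tar xvf -
# on the sending machine
tar cvf - some_dir | nc 192.168.0.10 9000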
@2020/03/05 00:29 | Comment(0) | Technology

March 3, 2020

Memo for anyone whose Explorer feels sluggish

Memo for anyone whose Explorer feels sluggish

A memo.

@2020/03/03 00:28 | Comment(0) | PC Setup Struggles

AWS security measures, the works

AWS security measures, the works [beginner to intermediate] - Speaker Deck

How to care for a GAN Cube

Daily care of GAN cubes – GANCube

Memo.
@2020/03/03 00:04 | Comment(0) | Recommended Sites

March 2, 2020

Western Digital SSD Dashboard

Model String: WDS500G3X0C-00SJG0

Software and firmware downloads | WD Support

@2020/03/02 21:57 | Comment(0) | PC Setup Struggles

March 1, 2020

Special programs for the temporary school closures

科学技術広報研究会 — special programs for the temporary school closures

A memo.



