This is a continuation of "Trying representation learning with SimSiam and CIFAR-10 on a Jetson Xavier NX — modifying the arguments in the facebookresearch GitHub code (1)".
technoxs-stacker.hatenablog.com
Table of Contents
1. Environment
Jetson Xavier NX
Ubuntu 18.04
Docker
Python 3.x
PyTorch
-> I covered setting up the PyTorch environment on the Jetson Xavier NX in the post below, so see it for reference (^^)/
technoxs-stacker.hatenablog.com
2. Code changes
2.1 Changing the main function
Comment out the multiprocessing part of the original code and change it to call main_worker directly.
```python
def main():
    args = parser.parse_args()

    if args.seed is not None:
        random.seed(args.seed)
        torch.manual_seed(args.seed)
        cudnn.deterministic = True
        warnings.warn('You have chosen to seed training. '
                      'This will turn on the CUDNN deterministic setting, '
                      'which can slow down your training considerably! '
                      'You may see unexpected behavior when restarting '
                      'from checkpoints.')

    if args.gpu is not None:
        warnings.warn('You have chosen a specific GPU. This will completely '
                      'disable data parallelism.')

    # commented out from here -------------------------------------------------
    #if args.dist_url == "env://" and args.world_size == -1:
    #    args.world_size = int(os.environ["WORLD_SIZE"])
    #args.distributed = args.world_size > 1 or args.multiprocessing_distributed
    #ngpus_per_node = torch.cuda.device_count()
    #if args.multiprocessing_distributed:
    #    # Since we have ngpus_per_node processes per node, the total world_size
    #    # needs to be adjusted accordingly
    #    args.world_size = ngpus_per_node * args.world_size
    #    # Use torch.multiprocessing.spawn to launch distributed processes: the
    #    # main_worker process function
    #    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
    #else:
    #    # Simply call main_worker function
    #    main_worker(args.gpu, ngpus_per_node, args)
    # commented out up to here ------------------------------------------------

    # added from here ---------------------------------------------------------
    ngpus_per_node = torch.cuda.device_count()
    main_worker(args.gpu, ngpus_per_node, args)
    # added up to here --------------------------------------------------------
```
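The essence of the change above is replacing the multi-process launch (`mp.spawn` starts one worker process per GPU) with a plain function call, which is all a single-GPU board like the Xavier NX needs. Here is a minimal runnable sketch of that pattern; `main_worker` and `Args` are hypothetical stand-ins for the real training worker and the parsed command-line arguments, not code from the SimSiam repository.

```python
def main_worker(gpu, ngpus_per_node, args):
    # stand-in for the real worker: in main_simsiam.py this builds the
    # model, data loaders, and runs the training loop on the given GPU
    return f"training on gpu={gpu} of {ngpus_per_node}"

# hypothetical stand-in for argparse results; --gpu 0 selects the only GPU
args = type("Args", (), {"gpu": 0})()
ngpus_per_node = 1  # the Jetson Xavier NX exposes a single GPU

# before: mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
# after:  call the worker directly in the current process
print(main_worker(args.gpu, ngpus_per_node, args))
```

With the direct call there is no inter-process setup, so anything the spawned path initialized per process (e.g. the distributed world size) no longer applies.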
* Continued in (3).
References
github.com