Kosutaroon Engineer's Trial-and-Error Room

A running memo of the things I build.

I tried tensorboardX.

tensorboardX is a dashboard tool that visualizes data-flow graphs and training-related values (loss, accuracy, etc.).
(It is the PyTorch counterpart of TensorBoard, the visualization tool used with TensorFlow.)

We will add tensorboardX to the SimSiam code we created previously.
technoxs-stacker.hatenablog.com

Contents


Abstract

・How to install tensorboardX
・Basic usage of tensorboardX

1. Requirements

Jetson Xavier NX
Ubuntu 18.04
Docker
Python 3.x
PyTorch
-> The PyTorch environment on Jetson Xavier NX is described in the article below for reference.
technoxs-stacker.hatenablog.com

2. How to install

Both packages can be installed with pip:

pip install tensorboard
pip install tensorboardX

3. Basic usage

Visualization with tensorboardX follows this flow:
3.1 Import the required modules
3.2 Define the writer
3.3 Write numerical values
3.4 Start the visualization tool

3.1 Import the required modules

# ------------------------------- 3.1
from tensorboardX import SummaryWriter
# -------------------------------

3.2 Define the writer

The first argument, logdir, specifies the directory where the output is saved.

def main_worker(gpu):
  ・
  ・
  ・
    # ------------------------------- 3.2
    # define writer
    writer = SummaryWriter(log_dir)
    # ------------------------------- 

    for epoch in range(start_epoch, epochs):
        adjust_learning_rate(optimizer, init_lr, epoch, epochs)

        # train for one epoch
        # ------------------------------- 3.2
        train(train_loader, model, criterion, optimizer, epoch, gpu, print_freq, writer)
        # -------------------------------

        save_checkpoint({
            'epoch': epoch + 1,
            'arch': arch,
            'state_dict': model.state_dict(),
            'optimizer' : optimizer.state_dict(),
        }, is_best=False, filename='checkpoint_{:04d}.pth.tar'.format(epoch))

    torch.save(model.state_dict(),
                checkpoint_dir / 'latest.pth')

3.3 Write numerical values

Pass the values you want to visualize to the writer.
This time we visualize the loss value.
The function used is add_scalar.

item: add_scalar
usage: writer.add_scalar("tag", value, step), where step sets the horizontal-axis position

Add an add_scalar call to the train function.
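One detail worth noting in the train function below: enumerate(train_loader, start=epoch * len(train_loader)) makes the batch index i a global step, so the x-axis of the loss curve keeps increasing across epochs instead of resetting to 0 each epoch. A tiny self-contained sketch of that bookkeeping (the batch and epoch counts are made up; no tensorboardX needed):

```python
num_batches = 5          # stand-in for len(train_loader)
epochs = 3

steps = []
for epoch in range(epochs):
    # start= shifts the index by the number of batches already seen
    for i, _ in enumerate(range(num_batches), start=epoch * num_batches):
        steps.append(i)

print(steps[:7])  # -> [0, 1, 2, 3, 4, 5, 6]  (monotonic across the epoch boundary)
```

Passing this monotonic i as the third argument of add_scalar gives one continuous curve over the whole training run.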

# ------------------------------- 3.3  
def train(train_loader, model, criterion, optimizer, epoch, gpu, print_freq, writer):
# ------------------------------- 
    batch_time = AverageMeter('Time', ':6.3f')
    data_time = AverageMeter('Data', ':6.3f')
    losses = AverageMeter('Loss', ':.4f')
    progress = ProgressMeter(
        len(train_loader),
        [batch_time, data_time, losses],
        prefix="Epoch: [{}]".format(epoch))

    # switch to train mode
    model.train()
    end = time.time()
    for i, (images, _) in enumerate(train_loader, start=epoch * len(train_loader)):
        # measure data loading time
        data_time.update(time.time() - end)
        images[0] = images[0].cuda(gpu, non_blocking=True)
        images[1] = images[1].cuda(gpu, non_blocking=True)

        # compute output
        p1, p2, z1, z2 = model(x1=images[0], x2=images[1])

        # compute loss
        loss = -(criterion(p1, z2).mean() + criterion(p2, z1).mean()) * 0.5
        losses.update(loss.item(), images[0].size(0))

        # compute gradient and do SGD step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()

        if i % print_freq == 0:
            progress.display(i)
        
        # ------------------------------- 3.3
        writer.add_scalar("train_loss", loss.item(), i)
        # -------------------------------

3.4 Start the visualization tool

The following command launches the TensorBoard dashboard.

tensorboard --logdir="path_to_log" --port=port_no
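For example, if the writer was given runs/demo as its log directory, a concrete invocation might look like this (the path and port here are just illustrative choices, with 6006 being TensorBoard's default port):

```shell
tensorboard --logdir=runs/demo --port=6006
# then open http://localhost:6006 in a browser to see the train_loss curve
```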
