No path specified. Models will be saved in: "AutogluonModels/ag-20231207_033914/"
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
2 unique label values: [1, 0]
If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
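If 'binary' is not the right problem type, it can be set explicitly when constructing the predictor. A minimal sketch, where `"label"` is a placeholder for your actual label column name:
```python
from autogluon.multimodal import MultiModalPredictor

# "label" is a placeholder for your actual label column name.
predictor = MultiModalPredictor(label="label", problem_type="binary")
```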
Global seed set to 0
AutoMM starts to create your model. ✨
- AutoGluon version is 0.8.2.
- Pytorch version is 1.13.1.post200.
- Model will be saved to "/root/Dropbox/MP/AutogluonModels/ag-20231207_033914".
- Validation metric is "roc_auc".
- To track the learning progress, you can open a terminal and launch Tensorboard:
```shell
# Assume you have installed tensorboard
tensorboard --logdir /root/Dropbox/MP/AutogluonModels/ag-20231207_033914
```
Enjoy your coffee, and let AutoMM do the job ☕☕☕ Learn more at https://auto.gluon.ai
1 GPUs are detected, and 1 GPUs will be used.
- GPU 0 name: NVIDIA A100-SXM4-80GB MIG 7g.80gb
- GPU 0 memory: 84.37GB/85.20GB (Free/Total)
CUDA version is 11.2.
Using 16-bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA A100-SXM4-80GB MIG 7g.80gb') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
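The warning above can be addressed by choosing a matmul precision once, before training starts; a minimal sketch:
```python
import torch

# "high" enables TensorFloat32 matmuls on Tensor Core GPUs;
# "medium" additionally allows bfloat16 internal math where supported.
torch.set_float32_matmul_precision("high")
```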
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
  | Name              | Type                | Params
----------------------------------------------------------
0 | model             | MultimodalFusionMLP | 109 M
1 | validation_metric | BinaryAUROC         | 0
2 | loss_func         | CrossEntropyLoss    | 0
----------------------------------------------------------
109 M     Trainable params
0         Non-trainable params
109 M     Total params
219.567   Total estimated model params size (MB)
Epoch 0, global step 26: 'val_roc_auc' reached 0.81683 (best 0.81683), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=0-step=26.ckpt' as top 3
Epoch 0, global step 53: 'val_roc_auc' reached 0.88538 (best 0.88538), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=0-step=53.ckpt' as top 3
Epoch 1, global step 80: 'val_roc_auc' reached 0.88780 (best 0.88780), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=1-step=80.ckpt' as top 3
Epoch 1, global step 107: 'val_roc_auc' reached 0.87404 (best 0.88780), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=1-step=107.ckpt' as top 3
Epoch 2, global step 134: 'val_roc_auc' reached 0.89212 (best 0.89212), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=2-step=134.ckpt' as top 3
Epoch 2, global step 161: 'val_roc_auc' reached 0.89695 (best 0.89695), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=2-step=161.ckpt' as top 3
Epoch 3, global step 188: 'val_roc_auc' reached 0.89258 (best 0.89695), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=3-step=188.ckpt' as top 3
Epoch 3, global step 215: 'val_roc_auc' reached 0.89418 (best 0.89695), saving model to '/root/Dropbox/MP/AutogluonModels/ag-20231207_033914/epoch=3-step=215.ckpt' as top 3
Epoch 4, global step 242: 'val_roc_auc' was not in top 3
Epoch 4, global step 269: 'val_roc_auc' was not in top 3
Epoch 5, global step 296: 'val_roc_auc' was not in top 3
Epoch 5, global step 323: 'val_roc_auc' was not in top 3
Epoch 6, global step 350: 'val_roc_auc' was not in top 3
Epoch 6, global step 377: 'val_roc_auc' was not in top 3
Epoch 7, global step 404: 'val_roc_auc' was not in top 3
Epoch 7, global step 431: 'val_roc_auc' was not in top 3
Start to fuse 3 checkpoints via the greedy soup algorithm.
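For context, greedy soup averages the weights of the saved top-k checkpoints, adding each candidate to the running average only if it improves the validation score. The sketch below is illustrative, not AutoMM's internal implementation; `ckpt_paths` and the `evaluate` callable are assumptions:
```python
import torch

def greedy_soup(ckpt_paths, evaluate):
    """Greedily average checkpoint weights.

    ckpt_paths: checkpoint files sorted by validation score, best first.
    evaluate:   callable mapping a state_dict to a validation score.
    """
    soup = torch.load(ckpt_paths[0], map_location="cpu")["state_dict"]
    best_score = evaluate(soup)
    n = 1  # number of checkpoints currently in the soup
    for path in ckpt_paths[1:]:
        candidate = torch.load(path, map_location="cpu")["state_dict"]
        # Tentatively fold the candidate into the running average.
        trial = {k: (soup[k] * n + candidate[k]) / (n + 1) for k in soup}
        score = evaluate(trial)
        if score >= best_score:  # keep the candidate only if it helps
            soup, best_score, n = trial, score, n + 1
    return soup
```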
AutoMM has created your model 🎉🎉🎉
- To load the model, use the code below:
```python
from autogluon.multimodal import MultiModalPredictor
predictor = MultiModalPredictor.load("/root/Dropbox/MP/AutogluonModels/ag-20231207_033914")
```
- You can open a terminal and launch Tensorboard to visualize the training log:
```shell
# Assume you have installed tensorboard
tensorboard --logdir /root/Dropbox/MP/AutogluonModels/ag-20231207_033914
```
- If you are not satisfied with the model, try to increase the training time,
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub: https://github.com/autogluon/autogluon
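For example, a time budget and common optimization settings can be passed to `fit`; the hyperparameter keys below follow the customization guide linked above. A minimal sketch, where `train_df` and the "label" column are placeholders for your own data:
```python
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label="label", problem_type="binary")
predictor.fit(
    train_data=train_df,   # placeholder DataFrame with a "label" column
    time_limit=3600,       # seconds; raise this to train longer
    hyperparameters={
        "optimization.max_epochs": 20,
        "optimization.learning_rate": 1.0e-4,
    },
)
```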