OpenOrca ML UI
Axolotl Training Config
All settings required to start a training run are shown by default.
Model
Pick the base model you want to fine-tune on top of.
View models on HF
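In the generated config, the model choice maps to Axolotl's `base_model` key. A minimal sketch, assuming a Llama-2 base (the model name here is only an example, not a recommendation):

```yaml
# Example base-model block; the model name is illustrative.
base_model: NousResearch/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
```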
Training Type
QLoRA
LoRA
Full Fine Tune
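Each training type corresponds to a different adapter/quantization combination in the YAML. A sketch of the three options, using Axolotl's usual keys:

```yaml
# QLoRA: train low-rank adapters on a 4-bit quantized base model.
adapter: qlora
load_in_4bit: true

# LoRA: train low-rank adapters, typically on an 8-bit base model.
# adapter: lora
# load_in_8bit: true

# Full fine-tune: omit the adapter key entirely and train all weights.
# adapter:
```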
GPU
RTX 3090
RTX 3080
RTX 3070
RTX 4070
RTX 4080
RTX 4090
A6000
A100 40GB
A100 80GB
H100
L40
A40
RTX A5000
RTX A4500
RTX A4000
GPU Quantity
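The GPU model and count do not appear in the YAML itself; they determine which launcher settings you use. For multi-GPU runs, one common option is to reference a DeepSpeed config from the YAML. A minimal sketch, assuming the ZeRO-2 config shipped in the Axolotl repo (the path is illustrative):

```yaml
# Optional multi-GPU setting: point Axolotl at a DeepSpeed config.
# The path assumes the deepspeed_configs/ folder from the Axolotl repo.
deepspeed: deepspeed_configs/zero2.json
```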
Datasets
Add Dataset
Dataset Path
The dataset you want to train on: a dataset name from Hugging Face, or a relative local path.
Features:
Prompt Type
custom
alpaca
sharegpt:chat
completion
oasst
gpteacher
reflection
explainchoice
concisechoice
summarizetldr
alpaca_chat
alpaca_chat.load_qa
alpaca_chat.load_concise
alpaca_chat.load_camel_ai
alpaca_w_system.load_open_orca
context_qa
context_qa.load_404
creative_acr.load_answer
creative_acr.load_critique
creative_acr.load_revise
pygmalion
sharegpt_simple.load_role
sharegpt_simple.load_guanaco
sharegpt_jokes
The prompt format the dataset uses. See the Axolotl Docs for more information; a sketch of the resulting YAML entry follows below.
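A minimal sketch of the dataset block this section generates, assuming an Alpaca-format dataset (the dataset path is illustrative):

```yaml
datasets:
  # Hugging Face dataset name or a relative local path.
  - path: tatsu-lab/alpaca
    # One of the prompt formats listed above.
    type: alpaca
```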
Epoch(s)
The number of complete passes the model makes over the training dataset.
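In the generated YAML this field maps to the `num_epochs` key:

```yaml
# Three full passes over the dataset.
num_epochs: 3
```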
Hyperparameters
Sequence Length
512
1024
2048
4096
8192
16384
32768
Microbatch Size
Learning Rate
Gradient Accumulation
LR Scheduler
cosine
linear
cosine_with_restarts
polynomial
constant
constant_with_warmup
inverse_sqrt
reduce_lr_on_plateau
Optimizer
paged_adamw_8bit
paged_adamw_32bit
adamw_bnb_8bit
adamw_torch
Weight Decay
Max Grad Norm
Adam Beta 1
Adam Beta 2
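Taken together, the fields above map onto a hyperparameter block like the following sketch; the values shown are common starting points, not recommendations:

```yaml
sequence_len: 2048              # Sequence Length
micro_batch_size: 2             # Microbatch Size
gradient_accumulation_steps: 4  # Gradient Accumulation
learning_rate: 0.0002           # Learning Rate
lr_scheduler: cosine            # LR Scheduler
optimizer: paged_adamw_8bit     # Optimizer
weight_decay: 0.0               # Weight Decay
max_grad_norm: 1.0              # Max Grad Norm
adam_beta1: 0.9                 # Adam Beta 1
adam_beta2: 0.999               # Adam Beta 2
```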
Weights & Biases
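Logging to Weights & Biases is configured with Axolotl's `wandb_*` keys (exact names vary slightly across versions); the project, entity, and run names below are placeholders:

```yaml
wandb_project: my-finetune  # placeholder project name
wandb_entity: my-team       # placeholder entity
wandb_name: run-1           # placeholder run name
wandb_watch:                # leave empty to disable gradient watching
wandb_log_model:            # leave empty to skip uploading checkpoints
```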
LoRA Config
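When a LoRA or QLoRA training type is selected, the adapter itself is configured with the `lora_*` keys. A sketch with typical starting values, not recommendations:

```yaml
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true  # target all linear layers
# Or list modules explicitly instead:
# lora_target_modules:
#   - q_proj
#   - v_proj
```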
Optimization
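Common memory and speed optimizations, sketched below; whether flash attention is usable depends on the GPU and installed packages:

```yaml
gradient_checkpointing: true  # trade compute for memory
flash_attention: true         # requires a supported GPU and flash-attn install
sample_packing: true          # pack short samples into full sequences
bf16: auto                    # use bfloat16 where supported
```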
Checkpoints
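Checkpoint frequency and location, sketched with typical keys (the output path is a placeholder; Axolotl expects either `saves_per_epoch` or `save_steps`, not both):

```yaml
output_dir: ./outputs/my-run  # placeholder output path
saves_per_epoch: 1            # or use save_steps instead
save_total_limit: 3           # keep only the newest checkpoints
```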
All Settings
Download YAML
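Putting the pieces together, the downloaded file looks roughly like this minimal QLoRA sketch; all names and values are illustrative:

```yaml
base_model: NousResearch/Llama-2-7b-hf
load_in_4bit: true
adapter: qlora

datasets:
  - path: tatsu-lab/alpaca
    type: alpaca

num_epochs: 3
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 0.0002
lr_scheduler: cosine
optimizer: paged_adamw_8bit

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

gradient_checkpointing: true
output_dir: ./outputs/my-run
```

A file like this is normally passed to Axolotl's trainer, e.g. `accelerate launch -m axolotl.cli.train config.yml`.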