Fix logic and update changelog
chimezie committed Feb 22, 2025
1 parent a1367e5 commit 25832a5
Showing 2 changed files with 3 additions and 1 deletion.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -11,9 +11,11 @@ Major synchronization to changes in MLX
- Configuration for DoRA fine tuning
- Summary of how to perform mlx CLI fine tuning using generated parameters
- Composable configurations
- axolotl-like configuration parameters for automatically determining values for the mlx_lm fine-tuning parameters

### Removed
- Schedule configuration (use MLX's)
- Colorize option


2 changes: 1 addition & 1 deletion src/mlx_tuning_fork/training.py
@@ -116,7 +116,7 @@ def composably_train(args, config, config_file, model, summary, tokenizer, train
     scaled_steps_per_eval = int(num_iterations * args.validation_interval_proportion)
     scaled_val_batches = int(args.validations_per_train_item * args.validation_interval_proportion * num_iterations)
     scaled_val_batches = max(1, scaled_val_batches)
-    scaled_steps_per_report = int(args.reporting_interval_proportion * num_iterations)
+    scaled_steps_per_report = max(1, int(args.reporting_interval_proportion * num_iterations))
     if args.saves_per_epoch:
         scaled_save_every = int(epoch_num_steps / args.saves_per_epoch)
     else:
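The one-line change above clamps the reporting interval to at least one step. Without the clamp, a small `reporting_interval_proportion` multiplied by a short run truncates to zero once passed through `int()`, which, depending on how the value is consumed downstream, either disables progress reporting or causes a modulo-by-zero. A minimal sketch of the arithmetic (a standalone helper with assumed names, not code from the repository):

```python
def scale_report_interval(reporting_interval_proportion: float, num_iterations: int) -> int:
    """Convert a fractional reporting interval into a whole number of steps, never below 1."""
    # int() truncates toward zero, so small proportions on short runs would
    # otherwise yield 0 -- the case the max(1, ...) guard in the commit avoids.
    return max(1, int(reporting_interval_proportion * num_iterations))

# 0.01 of a 50-iteration run is 0.5 steps: int() alone gives 0, the clamp gives 1.
assert scale_report_interval(0.01, 50) == 1
# 0.10 of a 200-iteration run is exactly 20 steps, unaffected by the clamp.
assert scale_report_interval(0.10, 200) == 20
```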
