
Change 8bit optimizer blocksize 2048->256; additional bf16 support#1365

Merged
matthewdouglas merged 9 commits into main from optim-blocksize-256 on Sep 20, 2024

Conversation

@matthewdouglas
Member

This PR stacks on top of #1360 with an update to the blocksize for the 8bit blockwise optimizers. Additionally, bf16 support is added for RMSprop, Adagrad, and Momentum.
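For context, the 8-bit blockwise optimizers quantize each optimizer state tensor in fixed-size blocks, scaling each block by its absolute maximum; shrinking the block from 2048 to 256 elements localizes outliers and tightens the quantization error. A minimal NumPy sketch of the idea (linear 8-bit codes for illustration only; the real kernels use a dynamic quantization map, and the function names here are hypothetical):

```python
import numpy as np

def blockwise_quantize(x, blocksize):
    """Quantize a 1-D float array to int8 codes with per-block absmax scaling."""
    pad = (-x.size) % blocksize                  # pad to a multiple of blocksize
    blocks = np.pad(x, (0, pad)).reshape(-1, blocksize)
    absmax = np.abs(blocks).max(axis=1, keepdims=True)
    absmax = np.where(absmax == 0, 1.0, absmax)  # avoid divide-by-zero
    codes = np.round(blocks / absmax * 127).astype(np.int8)
    return codes, absmax

def blockwise_dequantize(codes, absmax, n):
    blocks = codes.astype(np.float32) / 127.0 * absmax
    return blocks.ravel()[:n]

rng = np.random.default_rng(0)
x = rng.standard_normal(8192).astype(np.float32)

err = {}
for bs in (2048, 256):
    codes, absmax = blockwise_quantize(x, bs)
    err[bs] = np.abs(x - blockwise_dequantize(codes, absmax, x.size)).mean()

# Smaller blocks -> smaller per-block absmax -> finer quantization steps,
# so the mean absolute error at blocksize 256 sits below that at 2048.
```

The trade-off is extra memory for the per-block absmax values (one scale per 256 elements instead of one per 2048), bought back as lower quantization error.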

@TimDettmers @Titus-von-Koeller

@matthewdouglas matthewdouglas added the Enhancement New feature or request label Sep 19, 2024
@github-actions

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@TimDettmers
Collaborator

This all looks good. The only thing we might want to check is whether the error bounds in the tests have now improved. We want to keep the absolute and relative error tolerances tight, so that if there is a slight degradation, the tests fail again.
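A sketch of the kind of tolerance check being discussed. The atol/rtol values and function name here are hypothetical, not the ones in bitsandbytes' test suite; the point is that tolerances should sit just above the current error so that any regression trips the assertion:

```python
import numpy as np

# Hypothetical tolerances for illustration only; the real values live in the
# project's optimizer tests and vary per optimizer, dtype, and blocksize.
ATOL, RTOL = 1e-3, 1e-2

def assert_state_close(ref, quantized, atol=ATOL, rtol=RTOL):
    """Fail loudly if the quantized optimizer state drifts past the reference."""
    np.testing.assert_allclose(quantized, ref, atol=atol, rtol=rtol)

ref = np.linspace(-1.0, 1.0, 1024, dtype=np.float32)
good = ref + 1e-4       # drift well inside tolerance: passes
assert_state_close(ref, good)
```

Loosening the tolerances makes the tests pass trivially; tightening them after an accuracy improvement (as in this PR) locks in the gain.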

@matthewdouglas
Member Author

@TimDettmers I was able to tighten some of the tolerances.

@matthewdouglas matthewdouglas changed the base branch from ademamix to main September 20, 2024 19:52
@matthewdouglas matthewdouglas merged commit aa57bd8 into main Sep 20, 2024
matthewdouglas added a commit to matthewdouglas/bitsandbytes that referenced this pull request Oct 28, 2024
…itsandbytes-foundation#1365)

* Change 8bit optimizer blocksize 2048->256; additional bf16 support
* Update tolerances for 8bit optimizer tests
