
A pull request for #242 (greedy decoding and vectorization in attention.py) #257

Merged
neubig merged 4 commits into clab:master from emanjavacas:attn
Jan 20, 2017

Conversation

@emanjavacas
Contributor

This addresses both issues. I've kept comments with tensor dimensionality in the computation of the attention weights; just tell me if you'd prefer them removed.

@neubig
Contributor

neubig commented Jan 18, 2017

This is great, but w1dt = w1 * input_mat can be done only once at the beginning of the sentence and cached. This is a big performance win, so it'd be nice to add that as well (maybe with a comment).
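The caching suggested here can be sketched as follows. This is a minimal NumPy illustration of the idea, not the DyNet code from the PR; all dimensions and names besides `w1`, `w2`, `v`, `input_mat`, and `w1dt` are made up for the example. The point is that `w1dt` depends only on the encoder states, so it is computed once per sentence rather than at every decoder step.

```python
import numpy as np

# Illustrative dimensions (not from the PR).
HIDDEN, ATT, STATE, SEQ = 4, 3, 6, 5

rng = np.random.default_rng(0)
w1 = rng.standard_normal((ATT, HIDDEN))          # projects encoder states
w2 = rng.standard_normal((ATT, STATE))           # projects the decoder state
v = rng.standard_normal((1, ATT))                # scores each position
input_mat = rng.standard_normal((HIDDEN, SEQ))   # encoder states, one per column

# Computed ONCE at the start of the sentence and reused at every step.
w1dt = w1 @ input_mat                            # (ATT, SEQ)

def attend(decoder_state):
    """One decoder step: reuses the cached w1dt instead of recomputing it."""
    # (ATT, SEQ) + (ATT, 1): the decoder term broadcasts over all positions.
    scores = v @ np.tanh(w1dt + w2 @ decoder_state)   # (1, SEQ)
    e = np.exp(scores - scores.max())
    att_weights = e / e.sum()                         # softmax over positions
    return input_mat @ att_weights.T                  # context vector (HIDDEN, 1)

context = attend(rng.standard_normal((STATE, 1)))
```

Since the decoder calls `attend` once per output token, hoisting the `w1 @ input_mat` product out of the loop saves a matrix multiply whose cost grows with sentence length at every step.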

@emanjavacas
Contributor Author

Oh, you are right. Here is the new version.

@neubig neubig merged commit 95cac6b into clab:master Jan 20, 2017
@neubig
Contributor

neubig commented Jan 20, 2017

Looks good! Thanks.

