
Integrate LightGlueStick into glue-factory #158

Open

aubingazhib wants to merge 15 commits into cvg:main from aubingazhib:LightGlueStick

Conversation

@aubingazhib

  • Add LightGlueStick training code
  • Fix small bugs in ETH3D eval & cache loader for lines
  • Use batched LSD

@iago-suarez iago-suarez requested a review from Copilot September 2, 2025 05:35

Copilot AI left a comment


Pull Request Overview

This PR integrates LightGlueStick into glue-factory by adding training code, fixing small bugs in ETH3D evaluation and cache loading for lines, and using batched LSD for improved efficiency.

  • Adds LightGlueStick training code with pretrained model support
  • Fixes bugs in ETH3D evaluation and cache loader for line features
  • Replaces parallel LSD detection with batched LSD processing

Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 1 comment.

Summary per file:

pyproject.toml: Adds LightGlueStick dependency
gluefactory/utils/export_predictions.py: Fixes line scaling condition
gluefactory/scripts/export_megadepth.py: Adds SP-LSD-Wireframe configuration
gluefactory/models/utils/losses.py: Adds prefix parameter for multi-loss support
gluefactory/models/matchers/lightgluestick_pretrained.py: Adds pretrained LightGlueStick wrapper
gluefactory/models/matchers/lightgluestick.py: Implements full LightGlueStick training model
gluefactory/models/lines/lsd.py: Replaces parallel processing with batched LSD
gluefactory/models/cache_loader.py: Fixes line junction scaling bug
gluefactory/eval/eth3d.py: Removes unused configuration parameter
Multiple config files: Adds training/evaluation configurations
README.md: Adds documentation for LightGlueStick


    def _forward(self, data):
        return self.net(data)

    def loss(pred, data):

Copilot AI Sep 2, 2025


The loss method is missing the 'self' parameter. It should be 'def loss(self, pred, data):' to be a proper instance method.

Suggested change
def loss(pred, data):
def loss(self, pred, data):
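A minimal repro of the bug Copilot flags (the class name here is hypothetical, not from the PR): without 'self', calling the method through an instance shifts every argument by one position, so the instance lands in 'pred' and the call raises a TypeError.

```python
class Matcher:
    def loss(pred, data):  # bug: 'pred' silently receives the instance
        return pred, data

m = Matcher()
try:
    # three positional args actually reach loss(): m, then the two dicts
    m.loss({"scores": []}, {"gt": []})
except TypeError as e:
    print("TypeError:", e)
```

The error only surfaces at call time, which is why the missing parameter can survive until the training loop first invokes the loss.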

Collaborator

@iago-suarez iago-suarez left a comment


@aubingazhib, I suggest you add the code directly to the repo instead of adding a submodule.

I also left you a couple of comments about the ransac_th.

eval:
    use_lines: True
    estimator: homography_est
    ransac_th: -1  # [1., 1.5, 2., 2.5, 3.]
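Judging from the commented-out list, ransac_th: -1 appears to trigger a sweep over the candidate thresholds, with the reported metric taken at the best one. A minimal sketch of that selection (the AUC numbers below are fabricated purely for illustration):

```python
# Hypothetical sketch of what the ransac_th: -1 sweep implies:
# evaluate every candidate threshold and keep the best-scoring one.
candidate_ths = [1.0, 1.5, 2.0, 2.5, 3.0]
# made-up AUC scores standing in for real evaluation results
auc_by_th = {1.0: 62.1, 1.5: 63.4, 2.0: 63.0, 2.5: 62.5, 3.0: 61.8}
best_th = max(candidate_ths, key=lambda th: auc_by_th[th])
print(best_th)  # the fixed value the reviewer asks to hard-code
```

Pinning ransac_th to that single best value would avoid re-running RANSAC five times per evaluation, which is the speed-up the reviewer is after.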
Collaborator


Maybe we want to set this to a fixed value to make the evaluation faster. Could you please set the value that provides the best results here?

resize: 1600
eval:
    estimator: poselib
    ransac_th: -1
Collaborator


Same here.

Comment thread on gluefactory/models/lines/lsd.py (Outdated)
# LSD detection in parallel
segs = batched_lsd(image)

if b_size == 1:
Member


We don't need the if b_size == 1 check here anymore. Simply looping through all segs should also work for b_size = 1.
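A small sketch of the suggestion above, assuming batched_lsd returns one array of (x1, y1, x2, y2) segments per image (the helper name and shapes here are my assumptions, not the PR's actual code): a single loop padding per-image segments to a common length handles a batch of one with no special case.

```python
import numpy as np

def pad_segments(segs):
    """Pad a list of per-image (n_i, 4) segment arrays into one batch array."""
    n_max = max(s.shape[0] for s in segs)
    out = np.zeros((len(segs), n_max, 4), dtype=np.float32)
    for i, s in enumerate(segs):  # works identically when len(segs) == 1
        out[i, : s.shape[0]] = s
    return out

# a batch of two images with different segment counts
batch = pad_segments([np.ones((3, 4)), np.ones((1, 4))])
print(batch.shape)  # (2, 3, 4)
```

Since the loop over segs already covers the single-image case, the b_size == 1 branch can simply be deleted.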

Comment on lines +23 to +45
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
def normalize_keypoints(
kpts: torch.Tensor, size: Optional[torch.Tensor] = None
) -> torch.Tensor:
if size is None:
size = 1 + kpts.max(-2).values - kpts.min(-2).values
elif not isinstance(size, torch.Tensor):
size = torch.tensor(size, device=kpts.device, dtype=kpts.dtype)
size = size.to(kpts)
shift = size / 2
scale = size.max(-1).values / 2
kpts = (kpts - shift[..., None, :]) / scale[..., None, None]
return kpts


def rotate_half(x: torch.Tensor) -> torch.Tensor:
x = x.unflatten(-1, (-1, 2))
x1, x2 = x.unbind(dim=-1)
return torch.stack((-x2, x1), dim=-1).flatten(start_dim=-2)


def apply_cached_rotary_emb(freqs: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
return (t * freqs[0]) + (rotate_half(t) * freqs[1])
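A numpy mirror of the rotate_half / apply_cached_rotary_emb pair above, written only as a sanity check of the rotation semantics; the _np helper names are my own and not part of the PR. With freqs holding cached cosines and sines, each consecutive pair of features is rotated by the same angle, which preserves the vector norm.

```python
import numpy as np

def rotate_half_np(x: np.ndarray) -> np.ndarray:
    shape = x.shape
    pairs = x.reshape(*shape[:-1], -1, 2)  # group the last dim into 2-D pairs
    x1, x2 = pairs[..., 0], pairs[..., 1]
    return np.stack((-x2, x1), axis=-1).reshape(shape)

def apply_rotary_np(cos: np.ndarray, sin: np.ndarray, t: np.ndarray) -> np.ndarray:
    # per-pair rotation: (t1, t2) -> (t1*cos - t2*sin, t2*cos + t1*sin)
    return t * cos + rotate_half_np(t) * sin

theta = 0.3
t = np.array([1.0, 0.0, 0.0, 1.0])
out = apply_rotary_np(np.full(4, np.cos(theta)), np.full(4, np.sin(theta)), t)
# a pure rotation of each pair preserves the vector norm
assert np.isclose(np.linalg.norm(out), np.linalg.norm(t))
# applying rotate_half twice negates the input, as for two 90-degree rotations
assert np.allclose(rotate_half_np(rotate_half_np(t)), -t)
```

If this block really is byte-identical to LightGlue's, importing it keeps a single source of truth for these semantics.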
Member


If this is the exact same as in LightGlue, let's directly import from there maybe? The same applies for LearnableFourierPositionalEncoding, TokenConfidence, Attention, SelfBlock, CrossBlock, etc.

Collaborator


Agreed, let's try to minimize duplicated code.

name: gluefactory.models.matchers.lightgluestick
input_dim: 256
descriptor_dim: 256
flash: false
Member


Shouldn't it be true? Same for the MegaDepth training.

Member


Also, the mixed precision is set to False by default. Was it trained with or without it for the final checkpoint?

"checkpointed": False,
"weights": None, # either a path or the name of pretrained weights (disk, ...)
"keypoint_encoder": [32, 64, 128, 256],
"weights_from_version": "v0.1_arxiv",
Member


I guess this was copy-pasted from LightGlue and should be adapted?
