  title = {The Atomic Simulation Environment---a {{Python}} Library for Working with Atoms},
  author = {Hjorth Larsen, Ask and J{\o}rgen Mortensen, Jens and Blomqvist, Jakob and Castelli, Ivano E and Christensen, Rune and Du{\l}ak, Marcin and Friis, Jesper and Groves, Michael N and Hammer, Bj{\o}rk and Hargus, Cory and Hermes, Eric D and Jennings, Paul C and Bjerre Jensen, Peter and Kermode, James and Kitchin, John R and Leonhard Kolsbjerg, Esben and Kubal, Joseph and Kaasbjerg, Kristen and Lysgaard, Steen and Bergmann Maronsson, J{\'o}n and Maxson, Tristan and Olsen, Thomas and Pastewka, Lars and Peterson, Andrew and Rostgaard, Carsten and Schi{\o}tz, Jakob and Sch{\"u}tt, Ole and Strange, Mikkel and Thygesen, Kristian S and Vegge, Tejs and Vilhelmsen, Lasse and Walter, Michael and Zeng, Zhenhua and Jacobsen, Karsten W},
@@ -70,26 +73,21 @@ @article{Thompson-22-02
   urldate = {2022-11-08}
 }
 
-
-@misc{Simeon-23-06,
-  title = {{{TensorNet}}: {{Cartesian Tensor Representations}} for {{Efficient Learning}} of {{Molecular Potentials}}},
-  author = {Simeon, Guillem and {de Fabritiis}, Gianni},
-  year = {2023},
-  number = {arXiv:2306.06482},
-  eprint = {2306.06482},
-  primaryclass = {physics},
-  archiveprefix = {arXiv}
+@inproceedings{Simeon-23-06,
+  title = {TensorNet: Cartesian Tensor Representations for Efficient Learning of Molecular Potentials},
+  author = {Simeon, Guillem and {de Fabritiis}, Gianni},
+  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems},
-  title = {{{MatterTune}}: {{An Integrated}}, {{User-Friendly Platform}} for {{Fine-Tuning Atomistic Foundation Models}} to {{Accelerate Materials Simulation}} and {{Discovery}}},
-  author = {Kong, Lingyu and Shoghi, Nima and Hu, Guoxiang and Li, Pan and Fung, Victor},
-  year = {2025},
-  month = apr,
-  number = {arXiv:2504.10655},
-  eprint = {2504.10655},
-  primaryclass = {cond-mat},
-  doi = {10.48550/arXiv.2504.10655},
-  urldate = {2025-04-16},
-  archiveprefix = {arXiv}
+@article{Kong-25-08,
+  title = {{{MatterTune}}: An Integrated, User-Friendly Platform for Fine-Tuning Atomistic Foundation Models to Accelerate Materials Simulation and Discovery},
+  author = {Kong, Lingyu and Shoghi, Nima and Hu, Guoxiang and Li, Pan and Fung, Victor},
paper/paper.md (2 additions, 2 deletions)
@@ -51,7 +51,7 @@ The `graph-pes` package provides a **unified interface and framework** for defin
 
 A number of existing packages offer training and validation pipelines for particular ML-PES architectures, including `schnetpack`[@schutt2019schnetpack; @schutt2023schnetpack], `deepmd-kit`[@Wang-18-07; @Zeng-23-08], `nequip`[@Batzner-22-05], `mace-torch`[@Batatia-22-10], `torchmd-net`[@TorchMDNet], and `fairchem`[@fairchem].
 These frameworks focus on their associated model families and do not share a common interface for training.
-While `MatterTune`[@Kong-25-04] offers a unified interface for foundation model fine-tuning, it does not easily support training arbitrary models from scratch.
+While `MatterTune`[@Kong-25-08] offers a unified interface for foundation model fine-tuning, it does not easily support training arbitrary models from scratch.
 In contrast to these, `graph-pes` is a general, model-agnostic framework, designed to enable exact side-by-side comparisons, easy implementation of arbitrary new architectures, and standardized training and evaluation workflows.
 
 # Features and implementation
@@ -86,7 +86,7 @@ As well as training from scratch, we also support the fine-tuning of existing mo
 Under the hood, `graph-pes-train` builds upon the `PyTorch Lightning`[@Lightning] training loop, allowing the user to configure a variety of common training features and callbacks.
 We also support the use of arbitrary, user-defined components, including custom loss functions, model architectures, optimisers, and datasets.
 
-Because all models conform to the same interface, all training features can be used with any model architecture. Similarly, all downstream model uses can be written in an architecture-agnostic manner, allowing for MD, relaxations, and other scripts to be written once, and then used with any MLIP architecture, _e.g._ for extended validation beyond simple error metrics [@Morrow-23-03].
+Because all models conform to the same interface, all training features can be used with any model architecture. Similarly, all downstream model uses can be written in an architecture-agnostic manner, allowing for MD, relaxations, and other scripts to be written once, and then used with any MLIP architecture, _e.g._, for extended validation beyond simple error metrics [@Morrow-23-03].
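The architecture-agnostic downstream usage described in the changed paragraph above could look roughly like the following sketch, which drives a structural relaxation through ASE via a generic calculator. This is an illustration only: the `graph_pes.models.load_model` helper and the `ase_calculator()` adapter are assumed names rather than confirmed `graph-pes` API; only the ASE calls (`bulk`, `BFGS`) are standard.

```python
# Minimal sketch of an architecture-agnostic relaxation script.
# Assumption: graph-pes exposes some way to load a trained model and wrap it
# as an ASE calculator; `load_model` and `ase_calculator()` are hypothetical
# names used only for illustration here.
from ase.build import bulk
from ase.optimize import BFGS

from graph_pes.models import load_model  # hypothetical import path

model = load_model("path/to/any-trained-model.pt")  # MACE, NequIP, custom, ...
atoms = bulk("Cu", "fcc", a=3.7, cubic=True)        # slightly strained Cu cell
atoms.calc = model.ase_calculator()                  # hypothetical ASE adapter

BFGS(atoms).run(fmax=0.01)                           # same script for any MLIP
print(atoms.get_potential_energy())
```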
0 commit comments