This directory contains curated Jupyter notebooks demonstrating VQE, ADAPT-VQE, LR-VQE, EOM-VQE, QSE, EOM-QSE, SSVQE, VQD, QPE, VarQITE, and VarQRTE workflows using the packaged code in `vqe/`, `qpe/`, `qite/`, and `common/`.
Most notebooks are written as pure package clients: they use the repository’s public package APIs and do not redefine their own devices, QNodes, engines, caching, or plotting utilities.
A smaller number of notebooks also build exact reference spectra locally (for example via pennylane.qchem) for validation plots and energy-difference checks.
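The exact-reference pattern those validation checks rely on is just dense diagonalization of a small Hamiltonian matrix. Here is a minimal NumPy sketch with a made-up two-qubit Hamiltonian (illustrative coefficients only; the notebooks build theirs via `pennylane.qchem`):

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# Made-up two-qubit Hamiltonian (illustrative coefficients, not a real molecule)
H = 0.5 * np.kron(Z, I2) + 0.5 * np.kron(I2, Z) + 0.25 * np.kron(X, X)

# Exact reference spectrum: all eigenvalues, sorted ascending
exact_spectrum = np.linalg.eigvalsh(H)
print(exact_spectrum)  # ground-state energy first
```

The sorted eigenvalues then serve as the exact spectrum against which VQE ground energies and excited-state roots are compared.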
For the main project docs, see:
- README.md — project overview and quickstart
- USAGE.md — CLI and Python workflows
- THEORY.md — algorithms and methodology
- BENCHMARK_ROADMAP.md — recommended next benchmark / research notebooks
These notebooks are intended to serve three roles:
- conceptual introductions for learning the algorithms
- package-client examples showing recommended API usage
- benchmark / comparison workflows for reproducible experiments
Assumptions:
- the repository has been installed into the active Python environment
- package imports such as `vqe`, `qpe`, `qite`, and `common` are available
- generated outputs are written through the package's standard caching and plotting utilities
Default output locations:
- `results/vqe/`, `results/qpe/`, `results/qite/` — JSON run records
- `images/vqe/`, `images/qpe/`, `images/qite/` — saved plots
```
notebooks/
├── README_notebooks.md
│
├── benchmarks/
│   ├── comparisons/
│   ├── defaults/
│   ├── non_molecule/
│   ├── qite/
│   ├── qpe/
│   └── vqe/
│
├── getting_started/
│   ├── 01_vqe_vs_qpe_from_scratch_h2.ipynb
│   └── 11_getting_started_qrte_h2.ipynb
│
├── vqe/
│   ├── H2/
│   └── H2O/
│
└── qite/
    └── H2/
```
If you are new to the repository, begin with:
notebooks/getting_started/01_vqe_vs_qpe_from_scratch_h2.ipynb
This notebook provides compact, conceptual implementations of VQE and QPE before moving to the packaged workflows used elsewhere in the repository.
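To give a flavour of what that notebook covers, here is a NumPy-only sketch of both ideas on a made-up single-qubit Hamiltonian (not the packaged H2 problem): VQE minimizes a variational energy, while QPE reads the ground energy off an n-bit eigenphase register, so its answer is quantized.

```python
import numpy as np

# Made-up single-qubit Hamiltonian (illustrative, not the packaged H2 problem)
H = np.array([[1.0, 0.5], [0.5, -1.0]])
E_exact = np.linalg.eigvalsh(H)[0]  # exact ground energy

# --- VQE idea: minimize <psi(theta)|H|psi(theta)> over a 1-parameter ansatz ---
def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

theta, lr, eps = 0.1, 0.4, 1e-6
for _ in range(200):  # plain finite-difference gradient descent
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad
E_vqe = energy(theta)

# --- QPE idea: the ground energy appears as an eigenphase of exp(-iHt),
# read out on an n-bit register, so the estimate is quantized to 2**-n_bits ---
t, n_bits = 1.0, 8
phase = (-E_exact * t / (2 * np.pi)) % 1.0       # exact eigenphase in [0, 1)
phase_hat = round(phase * 2**n_bits) / 2**n_bits
E_qpe = -2 * np.pi * phase_hat / t               # assumes |E_exact| * t < pi (no wrap)

print(E_exact, E_vqe, E_qpe)
```

The contrast carries over to the real workflows: the VQE estimate converges smoothly toward the minimum, while the QPE estimate lands on the nearest representable phase bin.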
Fast path:

- start with `getting_started/02_getting_started_vqe_h2.ipynb` for the basic VQE API
- use `getting_started/07_getting_started_qite_h2.ipynb` for VarQITE
- use `getting_started/11_getting_started_qrte_h2.ipynb` for prepared-state VarQRTE usage
- use `benchmarks/qite/H2/Exact_QRTE_Benchmark.ipynb` when you want to validate VarQRTE against exact evolution
Path: notebooks/vqe/H2/
H2 is the main educational benchmark in this repository: small enough to run quickly, but rich enough to demonstrate ansatz choice, optimizer behaviour, geometry dependence, noise modelling, and excited-state workflows.
| Notebook | Purpose | Style |
|---|---|---|
| `QSE.ipynb` | QSE spectrum from a converged VQE reference vs exact eigenvalues | Package client |
| `EOM_QSE.ipynb` | EOM-QSE roots from a converged VQE reference vs exact eigenvalues | Package client |
| `LR_VQE.ipynb` | LR-VQE tangent-space excitations vs exact eigenvalues | Package client |
| `EOM_VQE.ipynb` | EOM-VQE full-response tangent-space excitations vs exact eigenvalues | Package client |
| `SSVQE.ipynb` | Excited-state calculation with SSVQE, including validation against exact energies | Package client |
| `VQD.ipynb` | Excited-state calculation with VQD, including validation against exact energies | Package client |
Notes:

- `QSE.ipynb`, `EOM_QSE.ipynb`, `LR_VQE.ipynb`, and `EOM_VQE.ipynb` are all post-VQE workflows built on converged noiseless VQE reference states
- `EOM_QSE.ipynb` studies a generally non-Hermitian reduced problem and uses physical-root selection heuristics
- `LR_VQE.ipynb` demonstrates the tangent-space Tamm-Dancoff approximation (TDA)
- `EOM_VQE.ipynb` demonstrates the full-response tangent-space workflow
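The shared structure behind these post-VQE methods is a small generalized eigenproblem assembled in an operator expansion basis around the reference state. A minimal NumPy sketch of the QSE case, using a made-up two-qubit Hamiltonian and the exact ground state as a stand-in for a converged VQE state:

```python
import numpy as np

# Made-up two-qubit Hamiltonian (illustrative only)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.kron(Z, I2) + np.kron(I2, Z) + 0.5 * np.kron(X, X)

# Stand-in for a converged VQE reference state (here: the exact ground state)
psi0 = np.linalg.eigh(H)[1][:, 0]

# QSE expansion basis: identity plus a few Pauli "excitation" operators
ops = [np.kron(I2, I2), np.kron(X, I2), np.kron(I2, X), np.kron(X, X)]

n = len(ops)
Hm = np.zeros((n, n), dtype=complex)  # subspace Hamiltonian matrix
Sm = np.zeros((n, n), dtype=complex)  # subspace overlap (metric) matrix
for k, Ok in enumerate(ops):
    for l, Ol in enumerate(ops):
        Hm[k, l] = psi0.conj() @ Ok.conj().T @ H @ Ol @ psi0
        Sm[k, l] = psi0.conj() @ Ok.conj().T @ Ol @ psi0

# Solve the generalized eigenproblem H c = E S c via S^(-1/2), with a rank cutoff
s, U = np.linalg.eigh(Sm)
keep = s > 1e-10
S_inv_half = U[:, keep] @ np.diag(s[keep] ** -0.5)
qse_energies = np.linalg.eigvalsh(S_inv_half.conj().T @ Hm @ S_inv_half)
print(np.sort(qse_energies))
```

EOM variants change how the subspace matrices are defined (and in `EOM_QSE.ipynb` the reduced problem is generally non-Hermitian), but the assemble-then-diagonalize pattern is the same.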
Path: notebooks/vqe/H2O/
H2O is included primarily to demonstrate a bond-angle scan workflow.
| Notebook | Purpose | Style |
|---|---|---|
| `Bond_Angle.ipynb` | Two-stage H–O–H angle scan with local refinement and geometry visualization using the package geometry-scan API | Package client |
QPE introductory material now lives in notebooks/getting_started/, while QPE benchmarking and calibration workflows live under notebooks/benchmarks/qpe/.
Path: notebooks/qite/H2/
VarQITE and VarQRTE are demonstrated on H2 as package-client workflows.
| Notebook | Purpose | Style |
|---|---|---|
| `Real_Time.ipynb` | Noiseless VarQRTE on H2 | Package client |
| `getting_started/11_getting_started_qrte_h2.ipynb` | Prepared-state VarQRTE intro on H2 | Package client |
Notes:

- if a noisy-evaluation notebook is added in the future, it should follow the repository convention used elsewhere: perform the parameter-update stage noiselessly, then evaluate the converged circuit under noise
- both VarQITE and VarQRTE are implemented as projected pure-state variational dynamics workflows; they do not optimize under mixed-state noise
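That optimize-noiselessly, evaluate-under-noise convention can be sketched in a few lines of NumPy. The Hamiltonian, channel, and noise strength below are illustrative, not the package's actual noise models:

```python
import numpy as np

# Made-up single-qubit problem (illustrative; not the package's noise models)
H = np.array([[1.0, 0.5], [0.5, -1.0]])

def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Stage 1: noiseless parameter optimization (coarse grid search for brevity)
thetas = np.linspace(-np.pi, np.pi, 2001)
theta_star = thetas[int(np.argmin([state(t) @ H @ state(t) for t in thetas]))]

# Stage 2: evaluate the *converged* circuit under a depolarizing channel,
# rho -> (1 - p) rho + p I/d, and report Tr(H rho)
psi = state(theta_star)
rho = np.outer(psi, psi.conj())
p, d = 0.05, 2
rho_noisy = (1 - p) * rho + p * np.eye(d) / d
E_noiseless = float(np.real(np.trace(H @ rho)))
E_noisy = float(np.real(np.trace(H @ rho_noisy)))
print(E_noiseless, E_noisy)  # the noisy energy is biased upward
```

Keeping the optimization noiseless isolates the noise channel's effect on the final energy from its effect on the optimizer trajectory.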
These notebooks are dedicated comparison, scan, or exact-reference workflows and now live under notebooks/benchmarks/.
Paths:
- `notebooks/benchmarks/vqe/H2/`
- `notebooks/benchmarks/vqe/H3plus/`
| Notebook | Purpose | Style |
|---|---|---|
| `benchmarks/vqe/H2/Ansatz_Comparison.ipynb` | Compare H2 VQE ansätze, including exact-reference checks | Mixed |
| `benchmarks/vqe/H2/Mapping_Comparison.ipynb` | Compare fermion-to-qubit mappings for H2 | Package client |
| `benchmarks/vqe/H2/Noise_Scan.ipynb` | Multi-seed H2 noise statistics benchmark | Package client |
| `benchmarks/vqe/H2/Noise_Robustness_Benchmark.ipynb` | Compare built-in H2 VQE noise channels under one shared multi-seed protocol and rank sensitivity | Mixed |
| `benchmarks/vqe/H2/Noisy_Ansatz_Comparison.ipynb` | Compare H2 ansatz families under noise | Package client |
| `benchmarks/vqe/H2/Noisy_Ansatz_Convergence.ipynb` | Compare noisy H2 ansatz convergence traces | Package client |
| `benchmarks/vqe/H2/Noisy_Optimizer_Comparison.ipynb` | Compare H2 optimizers under noise | Package client |
| `benchmarks/vqe/H2/Noisy_Optimizer_Convergence.ipynb` | Compare noisy H2 optimizer convergence traces | Package client |
| `benchmarks/vqe/H2/SSVQE_Comparisons.ipynb` | Noiseless SSVQE sweeps across ansatz / optimizer choices | Package client |
| `benchmarks/vqe/H2/VQD_Comparisons.ipynb` | Noiseless VQD sweeps across ansatz / optimizer choices | Package client |
| `benchmarks/vqe/H3plus/Ansatz_Comparison_Noiseless.ipynb` | Noiseless H3plus ansatz comparison for UCCS / UCCD / UCCSD | Package client |
| `benchmarks/vqe/H3plus/Ansatz_Comparison_Noisy.ipynb` | Noisy H3plus ansatz comparison for UCCS / UCCD / UCCSD | Package client |
Path: notebooks/benchmarks/qpe/H2/
| Notebook | Purpose | Style |
|---|---|---|
| `benchmarks/qpe/H2/Noisy.ipynb` | Noisy QPE distribution and multi-seed noise sweep | Package client |
| `benchmarks/qpe/H2/Calibration_Sweep.ipynb` | Diagnostic QPE calibration sweep with analytic baselines, phase-bin diagnostics, and aliasing / branch-selection checks against an exact H2 reference | Mixed |
| `benchmarks/qpe/H2/Calibration_Decision_Map.ipynb` | Convert QPE calibration results into ranked configuration tables, default-adequacy checks, and branch / dominant-bin / alias-risk diagnostics | Mixed |
Paths:
- `notebooks/benchmarks/comparisons/H2/`
- `notebooks/benchmarks/comparisons/LiH/`
| Notebook | Purpose | Style |
|---|---|---|
| `benchmarks/comparisons/H2/Cross_Method_Comparison.ipynb` | Compare VQE, QPE, and VarQITE on one shared H2 Hamiltonian and exact reference | Mixed |
| `benchmarks/comparisons/LiH/Cross_Method_Comparison.ipynb` | Extend the same cross-method comparison pattern to LiH with one shared active-space Hamiltonian, exact reference, and cache/runtime reporting | Mixed |
| `benchmarks/comparisons/H2/Reproducibility_Benchmark.ipynb` | Measure seed spread, cache timing, and noisy-vs-noiseless variance on one shared H2 problem | Mixed |
| `benchmarks/comparisons/LiH/Reproducibility_Benchmark.ipynb` | Extend the reproducibility benchmark to LiH with the shared active-space setup, seed-spread analysis, and cache-hit versus forced-rerun timing | Mixed |
| `benchmarks/comparisons/multi_molecule/Scaling_Benchmark.ipynb` | Compare runtime, qubit count, exact-energy error, and proxy-size metrics across H2, LiH, and BeH2 | Mixed |
| `benchmarks/comparisons/multi_molecule/Low_Qubit_VQE_Benchmark.ipynb` | Benchmark small-system VQE across the supported sub-12-qubit registry molecules with exact-ground error and runtime summaries | Mixed |
| `benchmarks/comparisons/multi_molecule/Hydrogen_Family_VQE_Benchmark.ipynb` | Benchmark VQE across neutral and charged small hydrogen systems through the standard molecule pipeline with registry-backed multiplicity defaults and same-electron-sector exact references | Mixed |
| `benchmarks/comparisons/multi_molecule/Registry_Coverage.ipynb` | Summarize the built-in chemistry registry with charge, multiplicity, qubit count, term count, and exact-ground reference energy using the shared registry-coverage helper | Mixed |
| `benchmarks/comparisons/multi_molecule/Atomic_Ionization_Energy_Benchmark.ipynb` | Compare neutral/cation registry atom pairs beyond the H2/LiH panel with exact ionization-energy, qubit-count, and Hamiltonian-term diagnostics | Mixed |
Path: notebooks/benchmarks/non_molecule/
| Notebook | Purpose | Style |
|---|---|---|
| `benchmarks/non_molecule/TFIM_Cross_Method_Benchmark.ipynb` | Compare exact diagonalization, VQE, VarQITE, and QPE on a transverse-field Ising chain using expert-mode Hamiltonian inputs and `ansatz_name="auto"` | Mixed |
| `benchmarks/non_molecule/Heisenberg_Chain_Benchmark.ipynb` | Compare exact diagonalization, VQE, VarQITE, and QPE on an open XXZ Heisenberg chain using expert-mode Hamiltonian inputs and `ansatz_name="auto"` | Mixed |
| `benchmarks/non_molecule/SSH_Chain_Benchmark.ipynb` | Compare exact diagonalization, VQE, VarQITE, and QPE on an open Su-Schrieffer-Heeger chain using expert-mode Hamiltonian inputs and `ansatz_name="auto"` | Mixed |
Path: notebooks/benchmarks/defaults/
| Notebook | Purpose | Style |
|---|---|---|
| `benchmarks/defaults/VQE_Default_Calibration.ipynb` | Calibrate robust VQE defaults across ansatz, optimizer, stepsize, step budget, and seeds | Mixed |
| `benchmarks/defaults/VarQITE_Default_Calibration.ipynb` | Calibrate robust VarQITE defaults across ansatz, dtau, step budget, and seeds | Mixed |
| `benchmarks/defaults/QPE_Default_Calibration.ipynb` | Calibrate baseline QPE defaults across ancillas, evolution time, Trotter depth, shots, and seeds | Mixed |
Path: notebooks/benchmarks/qite/H2/
| Notebook | Purpose | Style |
|---|---|---|
| `benchmarks/qite/H2/Exact_QRTE_Benchmark.ipynb` | Exact-vs-VarQRTE quench benchmark on H2 | Mixed |
Notes:

- `benchmarks/qite/H2/Exact_QRTE_Benchmark.ipynb` is the main small-system correctness notebook for VarQRTE: it compares the variational trajectory against exact real-time evolution of the same post-quench initial state
- benchmark notebooks are meant to complement, not replace, the smaller usage demos in `getting_started/` and the specialized algorithm notebooks that remain under `notebooks/vqe/` and `notebooks/qite/`
- the benchmark backlog is tracked in `notebooks/BENCHMARK_ROADMAP.md`
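The exact-evolution reference used by such a quench benchmark is conceptually simple: diagonalize the post-quench Hamiltonian, apply the exact propagator step by step, and check conserved quantities. A toy single-qubit sketch (made-up Hamiltonians, not the H2 problem):

```python
import numpy as np

# Toy quench (illustrative Hamiltonians): prepare the ground state of H0,
# then evolve exactly under the post-quench Hamiltonian H1
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H0 = Z
H1 = Z + 0.8 * X

psi = np.linalg.eigh(H0)[1][:, 0]  # pre-quench ground state

# Exact one-step propagator, built by diagonalizing H1
dt, n_steps = 0.05, 100
evals, V = np.linalg.eigh(H1)
U = V @ np.diag(np.exp(-1j * evals * dt)) @ V.conj().T

energies = []
for _ in range(n_steps):
    psi = U @ psi
    energies.append(float(np.real(psi.conj() @ H1 @ psi)))

# Under exact evolution the post-quench energy is conserved; a VarQRTE
# trajectory can be validated against this trace (and other observables)
print(max(energies) - min(energies))
```

Energy conservation and state-norm preservation are cheap sanity checks before comparing observable trajectories point by point.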
- Conceptual starting point
  - `getting_started/01_vqe_vs_qpe_from_scratch_h2.ipynb`
- Core VQE workflow
  - `getting_started/02_getting_started_vqe_h2.ipynb`
  - `getting_started/09_bond_scan_h2.ipynb`
- Noise studies
  - `benchmarks/comparisons/H2/Reproducibility_Benchmark.ipynb`
  - `benchmarks/vqe/H2/Noise_Scan.ipynb`
  - `benchmarks/qpe/H2/Noisy.ipynb`
- Default calibration
  - `benchmarks/defaults/VQE_Default_Calibration.ipynb`
  - `benchmarks/defaults/VarQITE_Default_Calibration.ipynb`
  - `benchmarks/defaults/QPE_Default_Calibration.ipynb`
- Excited-state methods
  - `vqe/H2/QSE.ipynb`
  - `vqe/H2/EOM_QSE.ipynb`
  - `vqe/H2/LR_VQE.ipynb`
  - `vqe/H2/EOM_VQE.ipynb`
  - `vqe/H2/SSVQE.ipynb`
  - `vqe/H2/VQD.ipynb`
- Larger systems and geometry
  - `getting_started/10_adapt_vqe_h3plus.ipynb`
  - `benchmarks/comparisons/multi_molecule/Scaling_Benchmark.ipynb`
  - `benchmarks/comparisons/multi_molecule/Registry_Coverage.ipynb`
  - `benchmarks/vqe/H3plus/Ansatz_Comparison_Noiseless.ipynb`
  - `benchmarks/vqe/H3plus/Ansatz_Comparison_Noisy.ipynb`
  - `vqe/H2O/Bond_Angle.ipynb`
- Projected dynamics
  - `getting_started/07_getting_started_qite_h2.ipynb`
  - `getting_started/11_getting_started_qrte_h2.ipynb`
  - `benchmarks/qite/H2/Exact_QRTE_Benchmark.ipynb`
  - `qite/H2/Real_Time.ipynb`
- Non-molecule model Hamiltonians
  - `benchmarks/non_molecule/TFIM_Cross_Method_Benchmark.ipynb`
  - `benchmarks/non_molecule/Heisenberg_Chain_Benchmark.ipynb`
  - `benchmarks/non_molecule/SSH_Chain_Benchmark.ipynb`
These notebooks use the same caching, naming, and output conventions as the package CLI workflows described in USAGE.md.
That means:
- repeated runs can reuse cached results when configurations match
- prebuilt-Hamiltonian benchmark runs reuse cached records using canonical Pauli-term fingerprints, reference bitstrings, resolved ansatz metadata, and solver settings
- comparison notebooks generally prefer `force=False`, while fresh-run tutorials and quench workflows keep `force=True` where recomputation is part of the notebook's purpose
- plot and JSON output naming follows the shared repository conventions
- notebook experiments are aligned with the packaged solver infrastructure rather than separate one-off code paths
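For intuition on the prebuilt-Hamiltonian cache keys, an order-independent Pauli-term fingerprint can be built by canonicalizing the term list before hashing. The function below is a hypothetical sketch only; the package's actual canonicalization and key scheme may differ:

```python
import hashlib
import json

def pauli_fingerprint(terms, ndigits=12):
    """Order-independent fingerprint for a list of (coeff, pauli_word) terms.

    Hypothetical sketch only: the packaged cache may canonicalize differently.
    """
    # Sort terms and round coefficients so equivalent Hamiltonians hash equally
    canonical = sorted((word, round(float(coeff), ndigits)) for coeff, word in terms)
    payload = json.dumps(canonical, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

# The same Hamiltonian written in a different term order yields the same key
a = [(0.5, "ZI"), (-0.25, "XX"), (0.5, "IZ")]
b = [(-0.25, "XX"), (0.5, "IZ"), (0.5, "ZI")]
print(pauli_fingerprint(a) == pauli_fingerprint(b))  # True
```

Combining such a fingerprint with the reference bitstring, resolved ansatz metadata, and solver settings gives a cache key that is stable across equivalent run configurations.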
Sid Richards
- LinkedIn: sid-richards-21374b30b
- GitHub: SidRichardsQuantum
MIT. See LICENSE.