Merged
2 changes: 2 additions & 0 deletions Cargo.lock

(Generated file; diff not rendered by default.)

1 change: 1 addition & 0 deletions Cargo.toml
@@ -21,6 +21,7 @@ hashbrown.version = "0.15.5"
num-bigint = "0.4"
num-complex = "0.4"
nalgebra = "0.33"
faer = "0.24"
numpy = "0.28"
ndarray = "0.16"
smallvec = "1.15"
1 change: 1 addition & 0 deletions crates/accelerate/Cargo.toml
@@ -26,6 +26,7 @@ qiskit-circuit.workspace = true
qiskit-transpiler.workspace = true
qiskit-quantum-info.workspace = true
nalgebra.workspace = true
faer.workspace = true

[dependencies.smallvec]
workspace = true
2 changes: 1 addition & 1 deletion crates/synthesis/Cargo.toml
@@ -26,7 +26,7 @@ num-traits.workspace = true
rand.workspace = true
rand_pcg.workspace = true
rand_distr.workspace = true
faer = "0.24"
faer.workspace = true
Member:

We don't really need this change anymore, but it doesn't hurt either: since we've agreed at this point that we'll rely on faer for all the heavy linear algebra, there is a fairly good chance we'll use it in another crate at some point.

rustworkx-core.workspace = true
rustiq-core = "0.0.11"
rsgridsynth = "0.2.0"
95 changes: 95 additions & 0 deletions crates/synthesis/src/linalg/mod.rs
@@ -11,11 +11,13 @@
// that they have been altered from the originals.

use approx::{abs_diff_eq, relative_ne};
use faer::Mat;
use faer::MatRef;
use nalgebra::{DMatrix, DMatrixView, Dim, Dyn, MatrixView, ViewStorage};
use ndarray::ArrayView2;
use ndarray::ShapeBuilder;
use num_complex::Complex64;
use qiskit_circuit::util::C_ZERO;

pub mod cos_sin_decomp;

@@ -195,6 +197,7 @@ pub fn svd_decomposition(
) -> (DMatrix<Complex64>, DMatrix<Complex64>, DMatrix<Complex64>) {
    let mat_view: DMatrixView<Complex64> = mat.as_view();
    let faer_mat: MatRef<Complex64> = nalgebra_to_faer(mat_view);

    let faer_svd = faer_mat.svd().unwrap();

    let u_faer = faer_svd.U();
@@ -222,6 +225,98 @@
    (u_na.into(), s_na, v_na)
}

/// Computes the SVD of `mat`, returning `(u, sigma, v_t)` with `mat == u * sigma * v_t`
/// (note that the third factor is already the adjoint `V†`).
pub fn svd_decomposition_faer(
    mat: MatRef<Complex64>,
) -> (Mat<Complex64>, Mat<Complex64>, Mat<Complex64>) {
    let svd = mat.svd().expect("Call to faer SVD failed");
Member:

Should we handle this and return a user-facing error, like we do for nalgebra? From what I can see in the faer docs this will be a convergence error, which I'm not sure we'd want to result in a panic.

Member Author:

I have improved the error checking as part of a4a1172; however, I did not succeed in wrapping errors produced by faer the way we wrap errors coming from Python or from circuit data (while there is a FaerError class, see https://docs.rs/faer/latest/faer/sparse/enum.FaerError.html, it does not seem to cover errors coming from, say, the SVD decomposition).


    let u = svd.U().to_owned();
    let s = svd.S();
    let v = svd.V().adjoint().to_owned();

    let sigma = Mat::from_fn(u.ncols(), v.nrows(), |i, j| {
        if i == j { s[i] } else { C_ZERO }
    });

    (u, sigma, v)
}
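The `sigma` factor above is rectangular (`u.ncols() × v.nrows()`), with the singular values on its main diagonal and zeros elsewhere. A stdlib-only sketch of the same layout, using a flat row-major `Vec<f64>` (the function name is illustrative, not part of the PR):

```rust
// Hypothetical sketch: build a rows x cols "Sigma" matrix (row-major) with
// the singular values `s` on the main diagonal, mirroring Mat::from_fn above.
fn sigma_from_singular_values(s: &[f64], rows: usize, cols: usize) -> Vec<f64> {
    let mut sigma = vec![0.0; rows * cols];
    for (i, &val) in s.iter().enumerate().take(rows.min(cols)) {
        sigma[i * cols + i] = val;
    }
    sigma
}

fn main() {
    // A 2x3 Sigma from two singular values: [[3, 0, 0], [0, 2, 0]]
    assert_eq!(
        sigma_from_singular_values(&[3.0, 2.0], 2, 3),
        vec![3.0, 0.0, 0.0, 0.0, 2.0, 0.0]
    );
}
```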

/// Returns the unitary matrix closest to `mat` in the Frobenius norm
/// (the unitary polar factor `U * V†` of its SVD).
pub fn closest_unitary_faer(mat: MatRef<Complex64>) -> Mat<Complex64> {
    let (u, _sigma, v_t) = svd_decomposition_faer(mat);
    u.as_ref() * v_t.as_ref()
}
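For reference, the standard fact this function relies on (not spelled out in the PR): given the SVD of `mat`, the unitary matrix closest to it in Frobenius norm is its unitary polar factor,

```latex
M = U \Sigma V^\dagger
\quad\Longrightarrow\quad
\operatorname*{arg\,min}_{W :\, W^\dagger W = I} \lVert M - W \rVert_F = U V^\dagger ,
```

which is exactly the product of the first and third factors returned by `svd_decomposition_faer`.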

/// Returns the square diagonal matrix with the entries of `diag` on its diagonal.
pub fn from_diagonal_faer(diag: &[Complex64]) -> Mat<Complex64> {
    let n = diag.len();
    Mat::from_fn(n, n, |i, j| if i == j { diag[i] } else { C_ZERO })
}

/// Returns the block matrix `[a, b; c, d]`.
///
/// The matrices `a`, `b`, `c`, and `d` are all assumed to be square and of the same size.
pub fn block_matrix_faer(
    a: MatRef<Complex64>,
    b: MatRef<Complex64>,
    c: MatRef<Complex64>,
    d: MatRef<Complex64>,
) -> Mat<Complex64> {
    let n = a.nrows();
    let mut block_matrix = Mat::<Complex64>::zeros(2 * n, 2 * n);
Member:

Do you need to initialize this with zero? It feels unnecessary because you should be populating every element in the matrix based on the blocks.

Member Author:

The only way I can see to avoid this (keeping the current code structure) is doing something like

 let mut block_matrix = Mat::<Complex64>::with_capacity(2 * n, 2 * n);
 unsafe { block_matrix.set_dims(2 * n, 2 * n) };

However, I semi-expect the compiler to be able to optimize this itself. I have kept the original code for now, but replaced the loops as per Shelly's suggestion. I am also not very worried about this specific code, since for now it is only used in debug assertions.

Member:

I was thinking something along those lines. I wasn't sure whether faer's Mat type has a method for creating an uninitialized matrix directly, or whether you have to do something like that; it would be inside an unsafe block either way. The other option would be to use from_fn with conditions on the index ranges.

I find it unlikely that the compiler will be able to optimize it away, because Mat::zeros() calls Mat::from_fn, which explicitly calls the closure that returns zero for every element of the array. But it is only a small performance difference, something we can explore later if it becomes a bottleneck.

    for i in 0..n {
        for j in 0..n {
            block_matrix[(i, j)] = a[(i, j)];
        }
    }
    for i in 0..n {
        for j in 0..n {
            block_matrix[(i, j + n)] = b[(i, j)];
        }
    }
    for i in 0..n {
        for j in 0..n {
            block_matrix[(i + n, j)] = c[(i, j)];
        }
    }
    for i in 0..n {
        for j in 0..n {
            block_matrix[(i + n, j + n)] = d[(i, j)];
        }
    }
    block_matrix
}
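The same block layout can be sketched with the standard library only, storing matrices as flat row-major `Vec<f64>` (names are illustrative, not part of the PR):

```rust
// Hypothetical stdlib-only sketch of the [a, b; c, d] block layout used by
// block_matrix_faer: a, b, c, d are n x n, row-major; the result is 2n x 2n.
fn block_matrix(n: usize, a: &[f64], b: &[f64], c: &[f64], d: &[f64]) -> Vec<f64> {
    let m = 2 * n;
    let mut out = vec![0.0; m * m];
    for i in 0..n {
        for j in 0..n {
            out[i * m + j] = a[i * n + j];             // top-left
            out[i * m + (j + n)] = b[i * n + j];       // top-right
            out[(i + n) * m + j] = c[i * n + j];       // bottom-left
            out[(i + n) * m + (j + n)] = d[i * n + j]; // bottom-right
        }
    }
    out
}

fn main() {
    // 1x1 blocks: [1, 2; 3, 4] flattens to [1, 2, 3, 4].
    assert_eq!(
        block_matrix(1, &[1.0], &[2.0], &[3.0], &[4.0]),
        vec![1.0, 2.0, 3.0, 4.0]
    );
}
```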

/// Verifies that the given matrix `u` is unitary by comparing `u† * u` to the identity matrix.
pub fn verify_unitary_faer(u: MatRef<Complex64>) -> bool {
    let n = u.shape().0;

    let id_mat = Mat::<Complex64>::identity(n, n);
    let uu = u.adjoint() * u;

    (uu.as_ref() - id_mat.as_ref()).norm_max() < 1e-7
}
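The same check can be illustrated without faer for the real 2×2 case, where unitary reduces to orthogonal: compute the max absolute deviation of `UᵀU` from the identity and compare it to the tolerance (a stdlib-only sketch; the function name is illustrative):

```rust
// Hypothetical sketch: u is (approximately) orthogonal iff the largest entry
// of |U^T U - I| is below the tolerance, mirroring norm_max above.
fn is_orthogonal_2x2(u: [[f64; 2]; 2], tol: f64) -> bool {
    let mut max_dev: f64 = 0.0;
    for i in 0..2 {
        for j in 0..2 {
            // (U^T U)[i][j] = sum_k U[k][i] * U[k][j]
            let entry: f64 = (0..2).map(|k| u[k][i] * u[k][j]).sum();
            let id = if i == j { 1.0 } else { 0.0 };
            max_dev = max_dev.max((entry - id).abs());
        }
    }
    max_dev < tol
}

fn main() {
    // A rotation matrix is orthogonal; a non-trivial diagonal scaling is not.
    let theta = 0.3_f64;
    let rot = [[theta.cos(), -theta.sin()], [theta.sin(), theta.cos()]];
    assert!(is_orthogonal_2x2(rot, 1e-7));
    assert!(!is_orthogonal_2x2([[1.0, 0.0], [0.0, 2.0]], 1e-7));
}
```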

/// Checks whether `mat` is the zero matrix, up to the absolute tolerance `atol`
/// (default `1e-12`).
pub fn is_zero_matrix_faer(mat: MatRef<Complex64>, atol: Option<f64>) -> bool {
    let atol = atol.unwrap_or(1e-12);
    for i in 0..mat.nrows() {
        for j in 0..mat.ncols() {
            if !abs_diff_eq!(mat[(i, j)], Complex64::ZERO, epsilon = atol) {
                return false;
            }
        }
    }
    true
}
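A stdlib-only sketch of the same element-wise tolerance test, modeling `Complex64` entries as `(re, im)` pairs (assumption: `abs_diff_eq!` on `Complex64` compares real and imaginary parts component-wise, per num-complex's approx integration):

```rust
// Hypothetical sketch: a matrix is "zero" when every entry's real and
// imaginary parts are within atol of zero.
fn is_zero_matrix(mat: &[Vec<(f64, f64)>], atol: f64) -> bool {
    mat.iter()
        .flatten()
        .all(|&(re, im)| re.abs() <= atol && im.abs() <= atol)
}

fn main() {
    // Numerical noise below atol counts as zero; 1e-3 does not.
    assert!(is_zero_matrix(&[vec![(0.0, 1e-13)], vec![(-1e-14, 0.0)]], 1e-12));
    assert!(!is_zero_matrix(&[vec![(1e-3, 0.0)]], 1e-12));
}
```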

#[cfg(test)]
mod test {
    use super::*;