001/* 002 * Licensed to the Apache Software Foundation (ASF) under one or more 003 * contributor license agreements. See the NOTICE file distributed with 004 * this work for additional information regarding copyright ownership. 005 * The ASF licenses this file to You under the Apache License, Version 2.0 006 * (the "License"); you may not use this file except in compliance with 007 * the License. You may obtain a copy of the License at 008 * 009 * http://www.apache.org/licenses/LICENSE-2.0 010 * 011 * Unless required by applicable law or agreed to in writing, software 012 * distributed under the License is distributed on an "AS IS" BASIS, 013 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 014 * See the License for the specific language governing permissions and 015 * limitations under the License. 016 */ 017package org.apache.commons.math3.linear; 018 019import org.apache.commons.math3.exception.DimensionMismatchException; 020import org.apache.commons.math3.exception.MaxCountExceededException; 021import org.apache.commons.math3.exception.NullArgumentException; 022import org.apache.commons.math3.exception.util.ExceptionContext; 023import org.apache.commons.math3.util.FastMath; 024import org.apache.commons.math3.util.IterationManager; 025import org.apache.commons.math3.util.MathUtils; 026 027/** 028 * <p> 029 * Implementation of the SYMMLQ iterative linear solver proposed by <a 030 * href="#PAIG1975">Paige and Saunders (1975)</a>. This implementation is 031 * largely based on the FORTRAN code by Pr. Michael A. Saunders, available <a 032 * href="http://www.stanford.edu/group/SOL/software/symmlq/f77/">here</a>. 033 * </p> 034 * <p> 035 * SYMMLQ is designed to solve the system of linear equations A · x = b 036 * where A is an n × n self-adjoint linear operator (defined as a 037 * {@link RealLinearOperator}), and b is a given vector. The operator A is not 038 * required to be positive definite. If A is known to be definite, the method of 039 * conjugate gradients might be preferred, since it will require about the same 040 * number of iterations as SYMMLQ but slightly less work per iteration. 041 * </p> 042 * <p> 043 * SYMMLQ is designed to solve the system (A - shift · I) · x = b, 044 * where shift is a specified scalar value. If shift and b are suitably chosen, 045 * the computed vector x may approximate an (unnormalized) eigenvector of A, as 046 * in the methods of inverse iteration and/or Rayleigh-quotient iteration. 047 * Again, the linear operator (A - shift · I) need not be positive 048 * definite (but <em>must</em> be self-adjoint). The work per iteration is very 049 * slightly less if shift = 0. 050 * </p> 051 * <h3>Preconditioning</h3> 052 * <p> 053 * Preconditioning may reduce the number of iterations required. The solver may 054 * be provided with a positive definite preconditioner 055 * M = P<sup>T</sup> · P 056 * that is known to approximate 057 * (A - shift · I)<sup>-1</sup> in some sense, where matrix-vector 058 * products of the form M · y = x can be computed efficiently. Then 059 * SYMMLQ will implicitly solve the system of equations 060 * P · (A - shift · I) · P<sup>T</sup> · 061 * x<sub>hat</sub> = P · b, i.e. 062 * A<sub>hat</sub> · x<sub>hat</sub> = b<sub>hat</sub>, 063 * where 064 * A<sub>hat</sub> = P · (A - shift · I) · P<sup>T</sup>, 065 * b<sub>hat</sub> = P · b, 066 * and return the solution 067 * x = P<sup>T</sup> · x<sub>hat</sub>. 068 * The associated residual is 069 * r<sub>hat</sub> = b<sub>hat</sub> - A<sub>hat</sub> · x<sub>hat</sub> 070 * = P · [b - (A - shift · I) · x] 071 * = P · r. 072 * </p> 073 * <p> 074 * In the case of preconditioning, the {@link IterativeLinearSolverEvent}s that 075 * this solver fires are such that 076 * {@link IterativeLinearSolverEvent#getNormOfResidual()} returns the norm of 077 * the <em>preconditioned</em>, updated residual, ||P · r||, not the norm 078 * of the <em>true</em> residual ||r||. 079 * </p> 080 * <h3><a id="stopcrit">Default stopping criterion</a></h3> 081 * <p> 082 * A default stopping criterion is implemented. The iterations stop when || rhat 083 * || ≤ δ || Ahat || || xhat ||, where xhat is the current estimate of 084 * the solution of the transformed system, rhat the current estimate of the 085 * corresponding residual, and δ a user-specified tolerance. 086 * </p> 087 * <h3>Iteration count</h3> 088 * <p> 089 * In the present context, an iteration should be understood as one evaluation 090 * of the matrix-vector product A · x. The initialization phase therefore 091 * counts as one iteration. If the user requires checks on the symmetry of A, 092 * this entails one further matrix-vector product in the initial phase. This 093 * further product is <em>not</em> accounted for in the iteration count. In 094 * other words, the number of iterations required to reach convergence will be 095 * identical, whether checks have been required or not. 096 * </p> 097 * <p> 098 * The present definition of the iteration count differs from that adopted in 099 * the original FOTRAN code, where the initialization phase was <em>not</em> 100 * taken into account. 101 * </p> 102 * <h3><a id="initguess">Initial guess of the solution</a></h3> 103 * <p> 104 * The {@code x} parameter in 105 * <ul> 106 * <li>{@link #solve(RealLinearOperator, RealVector, RealVector)},</li> 107 * <li>{@link #solve(RealLinearOperator, RealLinearOperator, RealVector, RealVector)}},</li> 108 * <li>{@link #solveInPlace(RealLinearOperator, RealVector, RealVector)},</li> 109 * <li>{@link #solveInPlace(RealLinearOperator, RealLinearOperator, RealVector, RealVector)},</li> 110 * <li>{@link #solveInPlace(RealLinearOperator, RealLinearOperator, RealVector, RealVector, boolean, double)},</li> 111 * </ul> 112 * should not be considered as an initial guess, as it is set to zero in the 113 * initial phase. If x<sub>0</sub> is known to be a good approximation to x, one 114 * should compute r<sub>0</sub> = b - A · x, solve A · dx = r0, 115 * and set x = x<sub>0</sub> + dx. 116 * </p> 117 * <h3><a id="context">Exception context</a></h3> 118 * <p> 119 * Besides standard {@link DimensionMismatchException}, this class might throw 120 * {@link NonSelfAdjointOperatorException} if the linear operator or the 121 * preconditioner are not symmetric. In this case, the {@link ExceptionContext} 122 * provides more information 123 * <ul> 124 * <li>key {@code "operator"} points to the offending linear operator, say L,</li> 125 * <li>key {@code "vector1"} points to the first offending vector, say x, 126 * <li>key {@code "vector2"} points to the second offending vector, say y, such 127 * that x<sup>T</sup> · L · y ≠ y<sup>T</sup> · L 128 * · x (within a certain accuracy).</li> 129 * </ul> 130 * </p> 131 * <p> 132 * {@link NonPositiveDefiniteOperatorException} might also be thrown in case the 133 * preconditioner is not positive definite. The relevant keys to the 134 * {@link ExceptionContext} are 135 * <ul> 136 * <li>key {@code "operator"}, which points to the offending linear operator, 137 * say L,</li> 138 * <li>key {@code "vector"}, which points to the offending vector, say x, such 139 * that x<sup>T</sup> · L · x < 0.</li> 140 * </ul> 141 * </p> 142 * <h3>References</h3> 143 * <dl> 144 * <dt><a id="PAIG1975">Paige and Saunders (1975)</a></dt> 145 * <dd>C. C. Paige and M. A. Saunders, <a 146 * href="http://www.stanford.edu/group/SOL/software/symmlq/PS75.pdf"><em> 147 * Solution of Sparse Indefinite Systems of Linear Equations</em></a>, SIAM 148 * Journal on Numerical Analysis 12(4): 617-629, 1975</dd> 149 * </dl> 150 * 151 * @since 3.0 152 */ 153public class SymmLQ 154 extends PreconditionedIterativeLinearSolver { 155 156 /* 157 * IMPLEMENTATION NOTES 158 * -------------------- 159 * The implementation follows as closely as possible the notations of Paige 160 * and Saunders (1975). Attention must be paid to the fact that some 161 * quantities which are relevant to iteration k can only be computed in 162 * iteration (k+1). Therefore, minute attention must be paid to the index of 163 * each state variable of this algorithm. 164 * 165 * 1. Preconditioning 166 * --------------- 167 * The Lanczos iterations associated with Ahat and bhat read 168 * beta[1] = ||P * b|| 169 * v[1] = P * b / beta[1] 170 * beta[k+1] * v[k+1] = Ahat * v[k] - alpha[k] * v[k] - beta[k] * v[k-1] 171 * = P * (A - shift * I) * P' * v[k] - alpha[k] * v[k] 172 * - beta[k] * v[k-1] 173 * Multiplying both sides by P', we get 174 * beta[k+1] * (P' * v)[k+1] = M * (A - shift * I) * (P' * v)[k] 175 * - alpha[k] * (P' * v)[k] 176 * - beta[k] * (P' * v[k-1]), 177 * and 178 * alpha[k+1] = v[k+1]' * Ahat * v[k+1] 179 * = v[k+1]' * P * (A - shift * I) * P' * v[k+1] 180 * = (P' * v)[k+1]' * (A - shift * I) * (P' * v)[k+1]. 181 * 182 * In other words, the Lanczos iterations are unchanged, except for the fact 183 * that we really compute (P' * v) instead of v. It can easily be checked 184 * that all other formulas are unchanged. It must be noted that P is never 185 * explicitly used, only matrix-vector products involving are invoked. 186 * 187 * 2. Accounting for the shift parameter 188 * ---------------------------------- 189 * Is trivial: each time A.operate(x) is invoked, one must subtract shift * x 190 * to the result. 191 * 192 * 3. Accounting for the goodb flag 193 * ----------------------------- 194 * When goodb is set to true, the component of xL along b is computed 195 * separately. From Paige and Saunders (1975), equation (5.9), we have 196 * wbar[k+1] = s[k] * wbar[k] - c[k] * v[k+1], 197 * wbar[1] = v[1]. 198 * Introducing wbar2[k] = wbar[k] - s[1] * ... * s[k-1] * v[1], it can 199 * easily be verified by induction that wbar2 follows the same recursive 200 * relation 201 * wbar2[k+1] = s[k] * wbar2[k] - c[k] * v[k+1], 202 * wbar2[1] = 0, 203 * and we then have 204 * w[k] = c[k] * wbar2[k] + s[k] * v[k+1] 205 * + s[1] * ... * s[k-1] * c[k] * v[1]. 206 * Introducing w2[k] = w[k] - s[1] * ... * s[k-1] * c[k] * v[1], we find, 207 * from (5.10) 208 * xL[k] = zeta[1] * w[1] + ... + zeta[k] * w[k] 209 * = zeta[1] * w2[1] + ... + zeta[k] * w2[k] 210 * + (s[1] * c[2] * zeta[2] + ... 211 * + s[1] * ... * s[k-1] * c[k] * zeta[k]) * v[1] 212 * = xL2[k] + bstep[k] * v[1], 213 * where xL2[k] is defined by 214 * xL2[0] = 0, 215 * xL2[k+1] = xL2[k] + zeta[k+1] * w2[k+1], 216 * and bstep is defined by 217 * bstep[1] = 0, 218 * bstep[k] = bstep[k-1] + s[1] * ... * s[k-1] * c[k] * zeta[k]. 219 * We also have, from (5.11) 220 * xC[k] = xL[k-1] + zbar[k] * wbar[k] 221 * = xL2[k-1] + zbar[k] * wbar2[k] 222 * + (bstep[k-1] + s[1] * ... * s[k-1] * zbar[k]) * v[1]. 223 */ 224 225 /** 226 * <p> 227 * A simple container holding the non-final variables used in the 228 * iterations. Making the current state of the solver visible from the 229 * outside is necessary, because during the iterations, {@code x} does not 230 * <em>exactly</em> hold the current estimate of the solution. Indeed, 231 * {@code x} needs in general to be moved from the LQ point to the CG point. 232 * Besides, additional upudates must be carried out in case {@code goodb} is 233 * set to {@code true}. 234 * </p> 235 * <p> 236 * In all subsequent comments, the description of the state variables refer 237 * to their value after a call to {@link #update()}. In these comments, k is 238 * the current number of evaluations of matrix-vector products. 239 * </p> 240 */ 241 private static class State { 242 /** The cubic root of {@link #MACH_PREC}. */ 243 static final double CBRT_MACH_PREC; 244 245 /** The machine precision. */ 246 static final double MACH_PREC; 247 248 /** Reference to the linear operator. */ 249 private final RealLinearOperator a; 250 251 /** Reference to the right-hand side vector. */ 252 private final RealVector b; 253 254 /** {@code true} if symmetry of matrix and conditioner must be checked. */ 255 private final boolean check; 256 257 /** 258 * The value of the custom tolerance δ for the default stopping 259 * criterion. 260 */ 261 private final double delta; 262 263 /** The value of beta[k+1]. */ 264 private double beta; 265 266 /** The value of beta[1]. */ 267 private double beta1; 268 269 /** The value of bstep[k-1]. */ 270 private double bstep; 271 272 /** The estimate of the norm of P * rC[k]. */ 273 private double cgnorm; 274 275 /** The value of dbar[k+1] = -beta[k+1] * c[k-1]. */ 276 private double dbar; 277 278 /** 279 * The value of gamma[k] * zeta[k]. Was called {@code rhs1} in the 280 * initial code. 281 */ 282 private double gammaZeta; 283 284 /** The value of gbar[k]. */ 285 private double gbar; 286 287 /** The value of max(|alpha[1]|, gamma[1], ..., gamma[k-1]). */ 288 private double gmax; 289 290 /** The value of min(|alpha[1]|, gamma[1], ..., gamma[k-1]). */ 291 private double gmin; 292 293 /** Copy of the {@code goodb} parameter. */ 294 private final boolean goodb; 295 296 /** {@code true} if the default convergence criterion is verified. */ 297 private boolean hasConverged; 298 299 /** The estimate of the norm of P * rL[k-1]. */ 300 private double lqnorm; 301 302 /** Reference to the preconditioner, M. */ 303 private final RealLinearOperator m; 304 305 /** 306 * The value of (-eps[k+1] * zeta[k-1]). Was called {@code rhs2} in the 307 * initial code. 308 */ 309 private double minusEpsZeta; 310 311 /** The value of M * b. */ 312 private final RealVector mb; 313 314 /** The value of beta[k]. */ 315 private double oldb; 316 317 /** The value of beta[k] * M^(-1) * P' * v[k]. */ 318 private RealVector r1; 319 320 /** The value of beta[k+1] * M^(-1) * P' * v[k+1]. */ 321 private RealVector r2; 322 323 /** 324 * The value of the updated, preconditioned residual P * r. This value is 325 * given by {@code min(}{@link #cgnorm}{@code , }{@link #lqnorm}{@code )}. 326 */ 327 private double rnorm; 328 329 /** Copy of the {@code shift} parameter. */ 330 private final double shift; 331 332 /** The value of s[1] * ... * s[k-1]. */ 333 private double snprod; 334 335 /** 336 * An estimate of the square of the norm of A * V[k], based on Paige and 337 * Saunders (1975), equation (3.3). 338 */ 339 private double tnorm; 340 341 /** 342 * The value of P' * wbar[k] or P' * (wbar[k] - s[1] * ... * s[k-1] * 343 * v[1]) if {@code goodb} is {@code true}. Was called {@code w} in the 344 * initial code. 345 */ 346 private RealVector wbar; 347 348 /** 349 * A reference to the vector to be updated with the solution. Contains 350 * the value of xL[k-1] if {@code goodb} is {@code false}, (xL[k-1] - 351 * bstep[k-1] * v[1]) otherwise. 352 */ 353 private final RealVector xL; 354 355 /** The value of beta[k+1] * P' * v[k+1]. */ 356 private RealVector y; 357 358 /** The value of zeta[1]^2 + ... + zeta[k-1]^2. */ 359 private double ynorm2; 360 361 /** The value of {@code b == 0} (exact floating-point equality). */ 362 private boolean bIsNull; 363 364 static { 365 MACH_PREC = FastMath.ulp(1.); 366 CBRT_MACH_PREC = FastMath.cbrt(MACH_PREC); 367 } 368 369 /** 370 * Creates and inits to k = 1 a new instance of this class. 371 * 372 * @param a the linear operator A of the system 373 * @param m the preconditioner, M (can be {@code null}) 374 * @param b the right-hand side vector 375 * @param goodb usually {@code false}, except if {@code x} is expected 376 * to contain a large multiple of {@code b} 377 * @param shift the amount to be subtracted to all diagonal elements of 378 * A 379 * @param delta the δ parameter for the default stopping criterion 380 * @param check {@code true} if self-adjointedness of both matrix and 381 * preconditioner should be checked 382 */ 383 State(final RealLinearOperator a, 384 final RealLinearOperator m, 385 final RealVector b, 386 final boolean goodb, 387 final double shift, 388 final double delta, 389 final boolean check) { 390 this.a = a; 391 this.m = m; 392 this.b = b; 393 this.xL = new ArrayRealVector(b.getDimension()); 394 this.goodb = goodb; 395 this.shift = shift; 396 this.mb = m == null ? b : m.operate(b); 397 this.hasConverged = false; 398 this.check = check; 399 this.delta = delta; 400 } 401 402 /** 403 * Performs a symmetry check on the specified linear operator, and throws an 404 * exception in case this check fails. Given a linear operator L, and a 405 * vector x, this method checks that 406 * x' · L · y = y' · L · x 407 * (within a given accuracy), where y = L · x. 408 * 409 * @param l the linear operator L 410 * @param x the candidate vector x 411 * @param y the candidate vector y = L · x 412 * @param z the vector z = L · y 413 * @throws NonSelfAdjointOperatorException when the test fails 414 */ 415 private static void checkSymmetry(final RealLinearOperator l, 416 final RealVector x, final RealVector y, final RealVector z) 417 throws NonSelfAdjointOperatorException { 418 final double s = y.dotProduct(y); 419 final double t = x.dotProduct(z); 420 final double epsa = (s + MACH_PREC) * CBRT_MACH_PREC; 421 if (FastMath.abs(s - t) > epsa) { 422 final NonSelfAdjointOperatorException e; 423 e = new NonSelfAdjointOperatorException(); 424 final ExceptionContext context = e.getContext(); 425 context.setValue(SymmLQ.OPERATOR, l); 426 context.setValue(SymmLQ.VECTOR1, x); 427 context.setValue(SymmLQ.VECTOR2, y); 428 context.setValue(SymmLQ.THRESHOLD, Double.valueOf(epsa)); 429 throw e; 430 } 431 } 432 433 /** 434 * Throws a new {@link NonPositiveDefiniteOperatorException} with 435 * appropriate context. 436 * 437 * @param l the offending linear operator 438 * @param v the offending vector 439 * @throws NonPositiveDefiniteOperatorException in any circumstances 440 */ 441 private static void throwNPDLOException(final RealLinearOperator l, 442 final RealVector v) throws NonPositiveDefiniteOperatorException { 443 final NonPositiveDefiniteOperatorException e; 444 e = new NonPositiveDefiniteOperatorException(); 445 final ExceptionContext context = e.getContext(); 446 context.setValue(OPERATOR, l); 447 context.setValue(VECTOR, v); 448 throw e; 449 } 450 451 /** 452 * A clone of the BLAS {@code DAXPY} function, which carries out the 453 * operation y ← a · x + y. This is for internal use only: no 454 * dimension checks are provided. 455 * 456 * @param a the scalar by which {@code x} is to be multiplied 457 * @param x the vector to be added to {@code y} 458 * @param y the vector to be incremented 459 */ 460 private static void daxpy(final double a, final RealVector x, 461 final RealVector y) { 462 final int n = x.getDimension(); 463 for (int i = 0; i < n; i++) { 464 y.setEntry(i, a * x.getEntry(i) + y.getEntry(i)); 465 } 466 } 467 468 /** 469 * A BLAS-like function, for the operation z ← a · x + b 470 * · y + z. This is for internal use only: no dimension checks are 471 * provided. 472 * 473 * @param a the scalar by which {@code x} is to be multiplied 474 * @param x the first vector to be added to {@code z} 475 * @param b the scalar by which {@code y} is to be multiplied 476 * @param y the second vector to be added to {@code z} 477 * @param z the vector to be incremented 478 */ 479 private static void daxpbypz(final double a, final RealVector x, 480 final double b, final RealVector y, final RealVector z) { 481 final int n = z.getDimension(); 482 for (int i = 0; i < n; i++) { 483 final double zi; 484 zi = a * x.getEntry(i) + b * y.getEntry(i) + z.getEntry(i); 485 z.setEntry(i, zi); 486 } 487 } 488 489 /** 490 * <p> 491 * Move to the CG point if it seems better. In this version of SYMMLQ, 492 * the convergence tests involve only cgnorm, so we're unlikely to stop 493 * at an LQ point, except if the iteration limit interferes. 494 * </p> 495 * <p> 496 * Additional upudates are also carried out in case {@code goodb} is set 497 * to {@code true}. 498 * </p> 499 * 500 * @param x the vector to be updated with the refined value of xL 501 */ 502 void refineSolution(final RealVector x) { 503 final int n = this.xL.getDimension(); 504 if (lqnorm < cgnorm) { 505 if (!goodb) { 506 x.setSubVector(0, this.xL); 507 } else { 508 final double step = bstep / beta1; 509 for (int i = 0; i < n; i++) { 510 final double bi = mb.getEntry(i); 511 final double xi = this.xL.getEntry(i); 512 x.setEntry(i, xi + step * bi); 513 } 514 } 515 } else { 516 final double anorm = FastMath.sqrt(tnorm); 517 final double diag = gbar == 0. ? anorm * MACH_PREC : gbar; 518 final double zbar = gammaZeta / diag; 519 final double step = (bstep + snprod * zbar) / beta1; 520 // ynorm = FastMath.sqrt(ynorm2 + zbar * zbar); 521 if (!goodb) { 522 for (int i = 0; i < n; i++) { 523 final double xi = this.xL.getEntry(i); 524 final double wi = wbar.getEntry(i); 525 x.setEntry(i, xi + zbar * wi); 526 } 527 } else { 528 for (int i = 0; i < n; i++) { 529 final double xi = this.xL.getEntry(i); 530 final double wi = wbar.getEntry(i); 531 final double bi = mb.getEntry(i); 532 x.setEntry(i, xi + zbar * wi + step * bi); 533 } 534 } 535 } 536 } 537 538 /** 539 * Performs the initial phase of the SYMMLQ algorithm. On return, the 540 * value of the state variables of {@code this} object correspond to k = 541 * 1. 542 */ 543 void init() { 544 this.xL.set(0.); 545 /* 546 * Set up y for the first Lanczos vector. y and beta1 will be zero 547 * if b = 0. 548 */ 549 this.r1 = this.b.copy(); 550 this.y = this.m == null ? this.b.copy() : this.m.operate(this.r1); 551 if ((this.m != null) && this.check) { 552 checkSymmetry(this.m, this.r1, this.y, this.m.operate(this.y)); 553 } 554 555 this.beta1 = this.r1.dotProduct(this.y); 556 if (this.beta1 < 0.) { 557 throwNPDLOException(this.m, this.y); 558 } 559 if (this.beta1 == 0.) { 560 /* If b = 0 exactly, stop with x = 0. */ 561 this.bIsNull = true; 562 return; 563 } 564 this.bIsNull = false; 565 this.beta1 = FastMath.sqrt(this.beta1); 566 /* At this point 567 * r1 = b, 568 * y = M * b, 569 * beta1 = beta[1]. 570 */ 571 final RealVector v = this.y.mapMultiply(1. / this.beta1); 572 this.y = this.a.operate(v); 573 if (this.check) { 574 checkSymmetry(this.a, v, this.y, this.a.operate(this.y)); 575 } 576 /* 577 * Set up y for the second Lanczos vector. y and beta will be zero 578 * or very small if b is an eigenvector. 579 */ 580 daxpy(-this.shift, v, this.y); 581 final double alpha = v.dotProduct(this.y); 582 daxpy(-alpha / this.beta1, this.r1, this.y); 583 /* 584 * At this point 585 * alpha = alpha[1] 586 * y = beta[2] * M^(-1) * P' * v[2] 587 */ 588 /* Make sure r2 will be orthogonal to the first v. */ 589 final double vty = v.dotProduct(this.y); 590 final double vtv = v.dotProduct(v); 591 daxpy(-vty / vtv, v, this.y); 592 this.r2 = this.y.copy(); 593 if (this.m != null) { 594 this.y = this.m.operate(this.r2); 595 } 596 this.oldb = this.beta1; 597 this.beta = this.r2.dotProduct(this.y); 598 if (this.beta < 0.) { 599 throwNPDLOException(this.m, this.y); 600 } 601 this.beta = FastMath.sqrt(this.beta); 602 /* 603 * At this point 604 * oldb = beta[1] 605 * beta = beta[2] 606 * y = beta[2] * P' * v[2] 607 * r2 = beta[2] * M^(-1) * P' * v[2] 608 */ 609 this.cgnorm = this.beta1; 610 this.gbar = alpha; 611 this.dbar = this.beta; 612 this.gammaZeta = this.beta1; 613 this.minusEpsZeta = 0.; 614 this.bstep = 0.; 615 this.snprod = 1.; 616 this.tnorm = alpha * alpha + this.beta * this.beta; 617 this.ynorm2 = 0.; 618 this.gmax = FastMath.abs(alpha) + MACH_PREC; 619 this.gmin = this.gmax; 620 621 if (this.goodb) { 622 this.wbar = new ArrayRealVector(this.a.getRowDimension()); 623 this.wbar.set(0.); 624 } else { 625 this.wbar = v; 626 } 627 updateNorms(); 628 } 629 630 /** 631 * Performs the next iteration of the algorithm. The iteration count 632 * should be incremented prior to calling this method. On return, the 633 * value of the state variables of {@code this} object correspond to the 634 * current iteration count {@code k}. 635 */ 636 void update() { 637 final RealVector v = y.mapMultiply(1. / beta); 638 y = a.operate(v); 639 daxpbypz(-shift, v, -beta / oldb, r1, y); 640 final double alpha = v.dotProduct(y); 641 /* 642 * At this point 643 * v = P' * v[k], 644 * y = (A - shift * I) * P' * v[k] - beta[k] * M^(-1) * P' * v[k-1], 645 * alpha = v'[k] * P * (A - shift * I) * P' * v[k] 646 * - beta[k] * v[k]' * P * M^(-1) * P' * v[k-1] 647 * = v'[k] * P * (A - shift * I) * P' * v[k] 648 * - beta[k] * v[k]' * v[k-1] 649 * = alpha[k]. 650 */ 651 daxpy(-alpha / beta, r2, y); 652 /* 653 * At this point 654 * y = (A - shift * I) * P' * v[k] - alpha[k] * M^(-1) * P' * v[k] 655 * - beta[k] * M^(-1) * P' * v[k-1] 656 * = M^(-1) * P' * (P * (A - shift * I) * P' * v[k] -alpha[k] * v[k] 657 * - beta[k] * v[k-1]) 658 * = beta[k+1] * M^(-1) * P' * v[k+1], 659 * from Paige and Saunders (1975), equation (3.2). 660 * 661 * WATCH-IT: the two following lines work only because y is no longer 662 * updated up to the end of the present iteration, and is 663 * reinitialized at the beginning of the next iteration. 664 */ 665 r1 = r2; 666 r2 = y; 667 if (m != null) { 668 y = m.operate(r2); 669 } 670 oldb = beta; 671 beta = r2.dotProduct(y); 672 if (beta < 0.) { 673 throwNPDLOException(m, y); 674 } 675 beta = FastMath.sqrt(beta); 676 /* 677 * At this point 678 * r1 = beta[k] * M^(-1) * P' * v[k], 679 * r2 = beta[k+1] * M^(-1) * P' * v[k+1], 680 * y = beta[k+1] * P' * v[k+1], 681 * oldb = beta[k], 682 * beta = beta[k+1]. 683 */ 684 tnorm += alpha * alpha + oldb * oldb + beta * beta; 685 /* 686 * Compute the next plane rotation for Q. See Paige and Saunders 687 * (1975), equation (5.6), with 688 * gamma = gamma[k-1], 689 * c = c[k-1], 690 * s = s[k-1]. 691 */ 692 final double gamma = FastMath.sqrt(gbar * gbar + oldb * oldb); 693 final double c = gbar / gamma; 694 final double s = oldb / gamma; 695 /* 696 * The relations 697 * gbar[k] = s[k-1] * (-c[k-2] * beta[k]) - c[k-1] * alpha[k] 698 * = s[k-1] * dbar[k] - c[k-1] * alpha[k], 699 * delta[k] = c[k-1] * dbar[k] + s[k-1] * alpha[k], 700 * are not stated in Paige and Saunders (1975), but can be retrieved 701 * by expanding the (k, k-1) and (k, k) coefficients of the matrix in 702 * equation (5.5). 703 */ 704 final double deltak = c * dbar + s * alpha; 705 gbar = s * dbar - c * alpha; 706 final double eps = s * beta; 707 dbar = -c * beta; 708 final double zeta = gammaZeta / gamma; 709 /* 710 * At this point 711 * gbar = gbar[k] 712 * deltak = delta[k] 713 * eps = eps[k+1] 714 * dbar = dbar[k+1] 715 * zeta = zeta[k-1] 716 */ 717 final double zetaC = zeta * c; 718 final double zetaS = zeta * s; 719 final int n = xL.getDimension(); 720 for (int i = 0; i < n; i++) { 721 final double xi = xL.getEntry(i); 722 final double vi = v.getEntry(i); 723 final double wi = wbar.getEntry(i); 724 xL.setEntry(i, xi + wi * zetaC + vi * zetaS); 725 wbar.setEntry(i, wi * s - vi * c); 726 } 727 /* 728 * At this point 729 * x = xL[k-1], 730 * ptwbar = P' wbar[k], 731 * see Paige and Saunders (1975), equations (5.9) and (5.10). 732 */ 733 bstep += snprod * c * zeta; 734 snprod *= s; 735 gmax = FastMath.max(gmax, gamma); 736 gmin = FastMath.min(gmin, gamma); 737 ynorm2 += zeta * zeta; 738 gammaZeta = minusEpsZeta - deltak * zeta; 739 minusEpsZeta = -eps * zeta; 740 /* 741 * At this point 742 * snprod = s[1] * ... * s[k-1], 743 * gmax = max(|alpha[1]|, gamma[1], ..., gamma[k-1]), 744 * gmin = min(|alpha[1]|, gamma[1], ..., gamma[k-1]), 745 * ynorm2 = zeta[1]^2 + ... + zeta[k-1]^2, 746 * gammaZeta = gamma[k] * zeta[k], 747 * minusEpsZeta = -eps[k+1] * zeta[k-1]. 748 * The relation for gammaZeta can be retrieved from Paige and 749 * Saunders (1975), equation (5.4a), last line of the vector 750 * gbar[k] * zbar[k] = -eps[k] * zeta[k-2] - delta[k] * zeta[k-1]. 751 */ 752 updateNorms(); 753 } 754 755 /** 756 * Computes the norms of the residuals, and checks for convergence. 757 * Updates {@link #lqnorm} and {@link #cgnorm}. 758 */ 759 private void updateNorms() { 760 final double anorm = FastMath.sqrt(tnorm); 761 final double ynorm = FastMath.sqrt(ynorm2); 762 final double epsa = anorm * MACH_PREC; 763 final double epsx = anorm * ynorm * MACH_PREC; 764 final double epsr = anorm * ynorm * delta; 765 final double diag = gbar == 0. ? epsa : gbar; 766 lqnorm = FastMath.sqrt(gammaZeta * gammaZeta + 767 minusEpsZeta * minusEpsZeta); 768 final double qrnorm = snprod * beta1; 769 cgnorm = qrnorm * beta / FastMath.abs(diag); 770 771 /* 772 * Estimate cond(A). In this version we look at the diagonals of L 773 * in the factorization of the tridiagonal matrix, T = L * Q. 774 * Sometimes, T[k] can be misleadingly ill-conditioned when T[k+1] 775 * is not, so we must be careful not to overestimate acond. 776 */ 777 final double acond; 778 if (lqnorm <= cgnorm) { 779 acond = gmax / gmin; 780 } else { 781 acond = gmax / FastMath.min(gmin, FastMath.abs(diag)); 782 } 783 if (acond * MACH_PREC >= 0.1) { 784 throw new IllConditionedOperatorException(acond); 785 } 786 if (beta1 <= epsx) { 787 /* 788 * x has converged to an eigenvector of A corresponding to the 789 * eigenvalue shift. 790 */ 791 throw new SingularOperatorException(); 792 } 793 rnorm = FastMath.min(cgnorm, lqnorm); 794 hasConverged = (cgnorm <= epsx) || (cgnorm <= epsr); 795 } 796 797 /** 798 * Returns {@code true} if the default stopping criterion is fulfilled. 799 * 800 * @return {@code true} if convergence of the iterations has occurred 801 */ 802 boolean hasConverged() { 803 return hasConverged; 804 } 805 806 /** 807 * Returns {@code true} if the right-hand side vector is zero exactly. 808 * 809 * @return the boolean value of {@code b == 0} 810 */ 811 boolean bEqualsNullVector() { 812 return bIsNull; 813 } 814 815 /** 816 * Returns {@code true} if {@code beta} is essentially zero. This method 817 * is used to check for early stop of the iterations. 818 * 819 * @return {@code true} if {@code beta < }{@link #MACH_PREC} 820 */ 821 boolean betaEqualsZero() { 822 return beta < MACH_PREC; 823 } 824 825 /** 826 * Returns the norm of the updated, preconditioned residual. 827 * 828 * @return the norm of the residual, ||P * r|| 829 */ 830 double getNormOfResidual() { 831 return rnorm; 832 } 833 } 834 835 /** Key for the exception context. */ 836 private static final String OPERATOR = "operator"; 837 838 /** Key for the exception context. */ 839 private static final String THRESHOLD = "threshold"; 840 841 /** Key for the exception context. */ 842 private static final String VECTOR = "vector"; 843 844 /** Key for the exception context. */ 845 private static final String VECTOR1 = "vector1"; 846 847 /** Key for the exception context. */ 848 private static final String VECTOR2 = "vector2"; 849 850 /** {@code true} if symmetry of matrix and conditioner must be checked. */ 851 private final boolean check; 852 853 /** 854 * The value of the custom tolerance δ for the default stopping 855 * criterion. 856 */ 857 private final double delta; 858 859 /** 860 * Creates a new instance of this class, with <a href="#stopcrit">default 861 * stopping criterion</a>. Note that setting {@code check} to {@code true} 862 * entails an extra matrix-vector product in the initial phase. 863 * 864 * @param maxIterations the maximum number of iterations 865 * @param delta the δ parameter for the default stopping criterion 866 * @param check {@code true} if self-adjointedness of both matrix and 867 * preconditioner should be checked 868 */ 869 public SymmLQ(final int maxIterations, final double delta, 870 final boolean check) { 871 super(maxIterations); 872 this.delta = delta; 873 this.check = check; 874 } 875 876 /** 877 * Creates a new instance of this class, with <a href="#stopcrit">default 878 * stopping criterion</a> and custom iteration manager. Note that setting 879 * {@code check} to {@code true} entails an extra matrix-vector product in 880 * the initial phase. 881 * 882 * @param manager the custom iteration manager 883 * @param delta the δ parameter for the default stopping criterion 884 * @param check {@code true} if self-adjointedness of both matrix and 885 * preconditioner should be checked 886 */ 887 public SymmLQ(final IterationManager manager, final double delta, 888 final boolean check) { 889 super(manager); 890 this.delta = delta; 891 this.check = check; 892 } 893 894 /** 895 * Returns {@code true} if symmetry of the matrix, and symmetry as well as 896 * positive definiteness of the preconditioner should be checked. 897 * 898 * @return {@code true} if the tests are to be performed 899 */ 900 public final boolean getCheck() { 901 return check; 902 } 903 904 /** 905 * {@inheritDoc} 906 * 907 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 908 * {@code true}, and {@code a} or {@code m} is not self-adjoint 909 * @throws NonPositiveDefiniteOperatorException if {@code m} is not 910 * positive definite 911 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 912 */ 913 @Override 914 public RealVector solve(final RealLinearOperator a, 915 final RealLinearOperator m, final RealVector b) throws 916 NullArgumentException, NonSquareOperatorException, 917 DimensionMismatchException, MaxCountExceededException, 918 NonSelfAdjointOperatorException, NonPositiveDefiniteOperatorException, 919 IllConditionedOperatorException { 920 MathUtils.checkNotNull(a); 921 final RealVector x = new ArrayRealVector(a.getColumnDimension()); 922 return solveInPlace(a, m, b, x, false, 0.); 923 } 924 925 /** 926 * Returns an estimate of the solution to the linear system (A - shift 927 * · I) · x = b. 928 * <p> 929 * If the solution x is expected to contain a large multiple of {@code b} 930 * (as in Rayleigh-quotient iteration), then better precision may be 931 * achieved with {@code goodb} set to {@code true}; this however requires an 932 * extra call to the preconditioner. 933 * </p> 934 * <p> 935 * {@code shift} should be zero if the system A · x = b is to be 936 * solved. Otherwise, it could be an approximation to an eigenvalue of A, 937 * such as the Rayleigh quotient b<sup>T</sup> · A · b / 938 * (b<sup>T</sup> · b) corresponding to the vector b. If b is 939 * sufficiently like an eigenvector corresponding to an eigenvalue near 940 * shift, then the computed x may have very large components. When 941 * normalized, x may be closer to an eigenvector than b. 942 * </p> 943 * 944 * @param a the linear operator A of the system 945 * @param m the preconditioner, M (can be {@code null}) 946 * @param b the right-hand side vector 947 * @param goodb usually {@code false}, except if {@code x} is expected to 948 * contain a large multiple of {@code b} 949 * @param shift the amount to be subtracted to all diagonal elements of A 950 * @return a reference to {@code x} (shallow copy) 951 * @throws NullArgumentException if one of the parameters is {@code null} 952 * @throws NonSquareOperatorException if {@code a} or {@code m} is not square 953 * @throws DimensionMismatchException if {@code m} or {@code b} have dimensions 954 * inconsistent with {@code a} 955 * @throws MaxCountExceededException at exhaustion of the iteration count, 956 * unless a custom 957 * {@link org.apache.commons.math3.util.Incrementor.MaxCountExceededCallback callback} 958 * has been set at construction of the {@link IterationManager} 959 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 960 * {@code true}, and {@code a} or {@code m} is not self-adjoint 961 * @throws NonPositiveDefiniteOperatorException if {@code m} is not 962 * positive definite 963 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 964 */ 965 public RealVector solve(final RealLinearOperator a, 966 final RealLinearOperator m, final RealVector b, final boolean goodb, 967 final double shift) throws NullArgumentException, 968 NonSquareOperatorException, DimensionMismatchException, 969 MaxCountExceededException, NonSelfAdjointOperatorException, 970 NonPositiveDefiniteOperatorException, IllConditionedOperatorException { 971 MathUtils.checkNotNull(a); 972 final RealVector x = new ArrayRealVector(a.getColumnDimension()); 973 return solveInPlace(a, m, b, x, goodb, shift); 974 } 975 976 /** 977 * {@inheritDoc} 978 * 979 * @param x not meaningful in this implementation; should not be considered 980 * as an initial guess (<a href="#initguess">more</a>) 981 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 982 * {@code true}, and {@code a} or {@code m} is not self-adjoint 983 * @throws NonPositiveDefiniteOperatorException if {@code m} is not positive 984 * definite 985 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 986 */ 987 @Override 988 public RealVector solve(final RealLinearOperator a, 989 final RealLinearOperator m, final RealVector b, final RealVector x) 990 throws NullArgumentException, NonSquareOperatorException, 991 DimensionMismatchException, NonSelfAdjointOperatorException, 992 NonPositiveDefiniteOperatorException, IllConditionedOperatorException, 993 MaxCountExceededException { 994 MathUtils.checkNotNull(x); 995 return solveInPlace(a, m, b, x.copy(), false, 0.); 996 } 997 998 /** 999 * {@inheritDoc} 1000 * 1001 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 1002 * {@code true}, and {@code a} is not self-adjoint 1003 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 1004 */ 1005 @Override 1006 public RealVector solve(final RealLinearOperator a, final RealVector b) 1007 throws NullArgumentException, NonSquareOperatorException, 1008 DimensionMismatchException, NonSelfAdjointOperatorException, 1009 IllConditionedOperatorException, MaxCountExceededException { 1010 MathUtils.checkNotNull(a); 1011 final RealVector x = new ArrayRealVector(a.getColumnDimension()); 1012 x.set(0.); 1013 return solveInPlace(a, null, b, x, false, 0.); 1014 } 1015 1016 /** 1017 * Returns the solution to the system (A - shift · I) · x = b. 1018 * <p> 1019 * If the solution x is expected to contain a large multiple of {@code b} 1020 * (as in Rayleigh-quotient iteration), then better precision may be 1021 * achieved with {@code goodb} set to {@code true}. 1022 * </p> 1023 * <p> 1024 * {@code shift} should be zero if the system A · x = b is to be 1025 * solved. Otherwise, it could be an approximation to an eigenvalue of A, 1026 * such as the Rayleigh quotient b<sup>T</sup> · A · b / 1027 * (b<sup>T</sup> · b) corresponding to the vector b. If b is 1028 * sufficiently like an eigenvector corresponding to an eigenvalue near 1029 * shift, then the computed x may have very large components. When 1030 * normalized, x may be closer to an eigenvector than b. 1031 * </p> 1032 * 1033 * @param a the linear operator A of the system 1034 * @param b the right-hand side vector 1035 * @param goodb usually {@code false}, except if {@code x} is expected to 1036 * contain a large multiple of {@code b} 1037 * @param shift the amount to be subtracted to all diagonal elements of A 1038 * @return a reference to {@code x} 1039 * @throws NullArgumentException if one of the parameters is {@code null} 1040 * @throws NonSquareOperatorException if {@code a} is not square 1041 * @throws DimensionMismatchException if {@code b} has dimensions 1042 * inconsistent with {@code a} 1043 * @throws MaxCountExceededException at exhaustion of the iteration count, 1044 * unless a custom 1045 * {@link org.apache.commons.math3.util.Incrementor.MaxCountExceededCallback callback} 1046 * has been set at construction of the {@link IterationManager} 1047 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 1048 * {@code true}, and {@code a} is not self-adjoint 1049 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 1050 */ 1051 public RealVector solve(final RealLinearOperator a, final RealVector b, 1052 final boolean goodb, final double shift) throws NullArgumentException, 1053 NonSquareOperatorException, DimensionMismatchException, 1054 NonSelfAdjointOperatorException, IllConditionedOperatorException, 1055 MaxCountExceededException { 1056 MathUtils.checkNotNull(a); 1057 final RealVector x = new ArrayRealVector(a.getColumnDimension()); 1058 return solveInPlace(a, null, b, x, goodb, shift); 1059 } 1060 1061 /** 1062 * {@inheritDoc} 1063 * 1064 * @param x not meaningful in this implementation; should not be considered 1065 * as an initial guess (<a href="#initguess">more</a>) 1066 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 1067 * {@code true}, and {@code a} is not self-adjoint 1068 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 1069 */ 1070 @Override 1071 public RealVector solve(final RealLinearOperator a, final RealVector b, 1072 final RealVector x) throws NullArgumentException, 1073 NonSquareOperatorException, DimensionMismatchException, 1074 NonSelfAdjointOperatorException, IllConditionedOperatorException, 1075 MaxCountExceededException { 1076 MathUtils.checkNotNull(x); 1077 return solveInPlace(a, null, b, x.copy(), false, 0.); 1078 } 1079 1080 /** 1081 * {@inheritDoc} 1082 * 1083 * @param x the vector to be updated with the solution; {@code x} should 1084 * not be considered as an initial guess (<a href="#initguess">more</a>) 1085 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 1086 * {@code true}, and {@code a} or {@code m} is not self-adjoint 1087 * @throws NonPositiveDefiniteOperatorException if {@code m} is not 1088 * positive definite 1089 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 1090 */ 1091 @Override 1092 public RealVector solveInPlace(final RealLinearOperator a, 1093 final RealLinearOperator m, final RealVector b, final RealVector x) 1094 throws NullArgumentException, NonSquareOperatorException, 1095 DimensionMismatchException, NonSelfAdjointOperatorException, 1096 NonPositiveDefiniteOperatorException, IllConditionedOperatorException, 1097 MaxCountExceededException { 1098 return solveInPlace(a, m, b, x, false, 0.); 1099 } 1100 1101 /** 1102 * Returns an estimate of the solution to the linear system (A - shift 1103 * · I) · x = b. The solution is computed in-place. 1104 * <p> 1105 * If the solution x is expected to contain a large multiple of {@code b} 1106 * (as in Rayleigh-quotient iteration), then better precision may be 1107 * achieved with {@code goodb} set to {@code true}; this however requires an 1108 * extra call to the preconditioner. 1109 * </p> 1110 * <p> 1111 * {@code shift} should be zero if the system A · x = b is to be 1112 * solved. Otherwise, it could be an approximation to an eigenvalue of A, 1113 * such as the Rayleigh quotient b<sup>T</sup> · A · b / 1114 * (b<sup>T</sup> · b) corresponding to the vector b. If b is 1115 * sufficiently like an eigenvector corresponding to an eigenvalue near 1116 * shift, then the computed x may have very large components. When 1117 * normalized, x may be closer to an eigenvector than b. 1118 * </p> 1119 * 1120 * @param a the linear operator A of the system 1121 * @param m the preconditioner, M (can be {@code null}) 1122 * @param b the right-hand side vector 1123 * @param x the vector to be updated with the solution; {@code x} should 1124 * not be considered as an initial guess (<a href="#initguess">more</a>) 1125 * @param goodb usually {@code false}, except if {@code x} is expected to 1126 * contain a large multiple of {@code b} 1127 * @param shift the amount to be subtracted to all diagonal elements of A 1128 * @return a reference to {@code x} (shallow copy). 1129 * @throws NullArgumentException if one of the parameters is {@code null} 1130 * @throws NonSquareOperatorException if {@code a} or {@code m} is not square 1131 * @throws DimensionMismatchException if {@code m}, {@code b} or {@code x} 1132 * have dimensions inconsistent with {@code a}. 1133 * @throws MaxCountExceededException at exhaustion of the iteration count, 1134 * unless a custom 1135 * {@link org.apache.commons.math3.util.Incrementor.MaxCountExceededCallback callback} 1136 * has been set at construction of the {@link IterationManager} 1137 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 1138 * {@code true}, and {@code a} or {@code m} is not self-adjoint 1139 * @throws NonPositiveDefiniteOperatorException if {@code m} is not positive 1140 * definite 1141 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 1142 */ 1143 public RealVector solveInPlace(final RealLinearOperator a, 1144 final RealLinearOperator m, final RealVector b, 1145 final RealVector x, final boolean goodb, final double shift) 1146 throws NullArgumentException, NonSquareOperatorException, 1147 DimensionMismatchException, NonSelfAdjointOperatorException, 1148 NonPositiveDefiniteOperatorException, IllConditionedOperatorException, 1149 MaxCountExceededException { 1150 checkParameters(a, m, b, x); 1151 1152 final IterationManager manager = getIterationManager(); 1153 /* Initialization counts as an iteration. */ 1154 manager.resetIterationCount(); 1155 manager.incrementIterationCount(); 1156 1157 final State state; 1158 state = new State(a, m, b, goodb, shift, delta, check); 1159 state.init(); 1160 state.refineSolution(x); 1161 IterativeLinearSolverEvent event; 1162 event = new DefaultIterativeLinearSolverEvent(this, 1163 manager.getIterations(), 1164 x, 1165 b, 1166 state.getNormOfResidual()); 1167 if (state.bEqualsNullVector()) { 1168 /* If b = 0 exactly, stop with x = 0. */ 1169 manager.fireTerminationEvent(event); 1170 return x; 1171 } 1172 /* Cause termination if beta is essentially zero. */ 1173 final boolean earlyStop; 1174 earlyStop = state.betaEqualsZero() || state.hasConverged(); 1175 manager.fireInitializationEvent(event); 1176 if (!earlyStop) { 1177 do { 1178 manager.incrementIterationCount(); 1179 event = new DefaultIterativeLinearSolverEvent(this, 1180 manager.getIterations(), 1181 x, 1182 b, 1183 state.getNormOfResidual()); 1184 manager.fireIterationStartedEvent(event); 1185 state.update(); 1186 state.refineSolution(x); 1187 event = new DefaultIterativeLinearSolverEvent(this, 1188 manager.getIterations(), 1189 x, 1190 b, 1191 state.getNormOfResidual()); 1192 manager.fireIterationPerformedEvent(event); 1193 } while (!state.hasConverged()); 1194 } 1195 event = new DefaultIterativeLinearSolverEvent(this, 1196 manager.getIterations(), 1197 x, 1198 b, 1199 state.getNormOfResidual()); 1200 manager.fireTerminationEvent(event); 1201 return x; 1202 } 1203 1204 /** 1205 * {@inheritDoc} 1206 * 1207 * @param x the vector to be updated with the solution; {@code x} should 1208 * not be considered as an initial guess (<a href="#initguess">more</a>) 1209 * @throws NonSelfAdjointOperatorException if {@link #getCheck()} is 1210 * {@code true}, and {@code a} is not self-adjoint 1211 * @throws IllConditionedOperatorException if {@code a} is ill-conditioned 1212 */ 1213 @Override 1214 public RealVector solveInPlace(final RealLinearOperator a, 1215 final RealVector b, final RealVector x) throws NullArgumentException, 1216 NonSquareOperatorException, DimensionMismatchException, 1217 NonSelfAdjointOperatorException, IllConditionedOperatorException, 1218 MaxCountExceededException { 1219 return solveInPlace(a, null, b, x, false, 0.); 1220 } 1221}