eqslv#
- Mapdl.eqslv(lab='', toler='', mult='', keepfile='', **kwargs)#
Specifies the type of equation solver.
APDL Command: EQSLV
- Parameters:
- lab
Equation solver type:
- SPARSE - Sparse direct equation solver. Applicable to
real-value or complex-value symmetric and unsymmetric matrices. Available only for STATIC, HARMIC (full method only), TRANS (full method only), SUBSTR, and PSD spectrum analysis types [ANTYPE]. Can be used for nonlinear and linear analyses, especially nonlinear analysis where indefinite matrices are frequently encountered. Well suited for contact analysis where contact status alters the mesh topology. Other typical well-suited applications are: (a) models consisting of shell/beam or shell/beam and solid elements, and (b) models with a multi-branch structure, such as an automobile exhaust or a turbine fan. This solver is an alternative to the iterative solvers since it combines both speed and robustness. Generally, it requires considerably more memory (~10x) than the PCG solver to obtain optimal performance (running totally in-core). When memory is limited, the solver works partly in-core and out-of-core, which can noticeably slow down its performance. See the BCSOPTION command for more details on the various modes of operation for this solver.
This solver can be run in shared memory parallel or distributed memory parallel (Distributed ANSYS) mode. When used in Distributed ANSYS, this solver preserves all of the merits of the classic or shared memory sparse solver. The total sum of memory (summed for all processes) is usually higher than for the shared memory sparse solver. System configuration also affects the performance of the distributed memory parallel solver. If enough physical memory is available, running this solver in the in-core memory mode achieves optimal performance. The ideal configuration when using the out-of-core memory mode is to use one processor per machine on multiple machines (a cluster), spreading the I/O across the hard drives of each machine, assuming that you are using a high-speed network such as InfiniBand to efficiently support all communication across the multiple machines.
This solver supports use of the GPU accelerator capability.
- JCG - Jacobi Conjugate Gradient iterative equation
solver. Available only for STATIC, HARMIC (full method only), and TRANS (full method only) analysis types [ANTYPE]. Can be used for structural, thermal, and multiphysics applications. Applicable for symmetric, unsymmetric, complex, definite, and indefinite matrices. Recommended for 3-D harmonic analyses in structural and multiphysics applications. Efficient for heat transfer, electromagnetics, piezoelectrics, and acoustic field problems.
This solver can be run in shared memory parallel or distributed memory parallel (Distributed ANSYS) mode. When used in Distributed ANSYS, in addition to the limitations listed above, this solver only runs in a distributed parallel fashion for STATIC and TRANS (full method) analyses in which the stiffness is symmetric and only when not using the fast thermal option (THOPT). Otherwise, this solver runs in shared memory parallel mode inside Distributed ANSYS.
This solver supports use of the GPU accelerator capability. When using the GPU accelerator capability, in addition to the limitations listed above, this solver is available only for STATIC and TRANS (full method) analyses where the stiffness is symmetric and does not support the fast thermal option (THOPT).
- ICCG - Incomplete Cholesky Conjugate Gradient iterative
equation solver. Available for STATIC, HARMIC (full method only), and TRANS (full method only) analysis types [ANTYPE]. Can be used for structural, thermal, and multiphysics applications, and for symmetric, unsymmetric, complex, definite, and indefinite matrices. The ICCG solver requires more memory than the JCG solver, but is more robust than the JCG solver for ill-conditioned matrices.
This solver can only be run in shared memory parallel mode. This is also true when the solver is used inside Distributed ANSYS.
This solver does not support use of the GPU accelerator capability.
- QMR - Quasi-Minimal Residual iterative equation
solver. Available for the HARMIC (full method only) analysis type [ANTYPE]. Can be used for high-frequency electromagnetic applications, and for symmetric, complex, definite, and indefinite matrices. The QMR solver is more stable than the ICCG solver.
This solver can only be run in shared memory parallel mode. This is also true when the solver is used inside Distributed ANSYS.
This solver does not support use of the GPU accelerator capability.
- PCG - Preconditioned Conjugate Gradient iterative equation
solver (licensed from Computational Applications and Systems Integration, Inc.). Requires less disk file space than SPARSE and is faster for large models. Useful for plates, shells, 3-D models, large 2-D models, and other problems having symmetric, sparse, definite or indefinite matrices for nonlinear analysis. Requires twice as much memory as JCG. Available only for analysis types [ANTYPE] STATIC, TRANS (full method only), or MODAL (with PCG Lanczos option only). Also available for the use pass of substructure analyses (MATRIX50). The PCG solver can robustly solve equations with constraint equations (CE, CEINTF, CPINTF, and CERIG). With this solver, you can use the MSAVE command to obtain a considerable memory savings.
The PCG solver can handle ill-conditioned problems by using a higher level of difficulty (see PCGOPT). Ill-conditioning arises from elements with high aspect ratios, contact, and plasticity.
This solver can be run in shared memory parallel or distributed memory parallel (Distributed ANSYS) mode. When used in Distributed ANSYS, this solver preserves all of the merits of the classic or shared memory PCG solver. The total sum of memory (summed for all processes) is about 30% more than for the shared memory PCG solver.
- toler
Iterative solver tolerance value. Used only with the Jacobi Conjugate Gradient, Incomplete Cholesky Conjugate Gradient, Preconditioned Conjugate Gradient, and Quasi-Minimal Residual equation solvers. For the PCG solver, the default is 1.0E-8. The value 1.0E-5 may be acceptable in many situations. When using the PCG Lanczos mode extraction method, the default solver tolerance value is 1.0E-4. For the JCG and ICCG solvers with symmetric matrices, the default is 1.0E-8. For the JCG and ICCG solvers with unsymmetric matrices, and for the QMR solver, the default is 1.0E-6. Iterations continue until the SRSS norm of the residual is less than TOLER times the norm of the applied load vector. For the PCG solver in the linear static analysis case, three error norms are used. If one of the error norms is smaller than TOLER, and the SRSS norm of the residual is smaller than 1.0E-2, convergence is assumed to have been reached. See Iterative Solver in the Mechanical APDL Theory Reference for details.
- mult
Multiplier (defaults to 2.5 for nonlinear analyses and 1.0 for linear analyses) used to control the maximum number of iterations performed during convergence calculations. Used only with the Preconditioned Conjugate Gradient equation solver (PCG). The maximum number of iterations is equal to the multiplier (MULT) times the number of degrees of freedom (DOF). If MULT is input as a negative value, the maximum number of iterations is equal to abs(MULT). Iterations continue until either the maximum number of iterations is reached or the solution converges. In general, the default value of MULT is adequate for reaching convergence. However, for ill-conditioned matrices (that is, models containing elements with high aspect ratios or material type discontinuities), the multiplier can be used to increase the maximum number of iterations used to achieve convergence. The recommended range is 1.0 ≤ MULT ≤ 3.0. Normally, a value greater than 3.0 adds no further benefit toward convergence and merely increases the time required. If the solution does not converge with 1.0 ≤ MULT ≤ 3.0, or in fewer than 10,000 iterations, convergence is highly unlikely and further examination of the model is recommended. Rather than increasing the default value of MULT, consider increasing the level of difficulty (Lev_Diff) on the PCGOPT command.
- keepfile
Determines whether files from a SPARSE solver run should be deleted or retained. Applies only to Lab = SPARSE for static and full transient analyses.
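As a point of reference, below is a minimal PyMAPDL sketch of how a solver selection like those described above might be issued before solving. The model setup is omitted, the toler and mult values are illustrative only, and keepfile=1 is assumed (not confirmed here) to mean that the SPARSE solver files are retained.

>>> from ansys.mapdl.core import launch_mapdl
>>> mapdl = launch_mapdl()
>>> # ... build the model, mesh, and apply loads (omitted) ...
>>> mapdl.slashsolu()  # enter the solution processor, where EQSLV applies
>>> # PCG iterative solver with an explicit tolerance and iteration multiplier;
>>> # the values are illustrative, not recommendations
>>> mapdl.eqslv("PCG", toler=1e-8, mult=2.0)
>>> # or: direct sparse solver, retaining its files after the run
>>> # (keepfile=1 is assumed here to mean "retain"; see keepfile above)
>>> # mapdl.eqslv("SPARSE", keepfile=1)
>>> mapdl.solve()
>>> mapdl.finish()

The call is made from the solution processor before solving, since EQSLV is a solution analysis option; if it is never issued, the program-chosen default solver is used.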