Dear FEAP team,
I am solving a nonlinear problem with a nonsymmetric Jacobian matrix.
First, I used the direct solver superlu_dist with 16 or 32 processors, and it works.
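(For reference, the direct-solver runs are launched with the usual PETSc options for an external LU factorization, roughly as below; the exact name of the option that selects the factorization package may differ with the PETSc version, so this is approximate:
mpirun -np 16 $FEAPHOME8_3/parfeap/feap -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist )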
Then I tried the iterative solver gmres with the pilut preconditioner:
mpirun -np 8 $FEAPHOME8_3/parfeap/feap -ksp_type gmres -pc_type hypre -pc_hypre_type pilut
This works for 1 or 2 processors. However, when I use more processors, it fails with the following errors:
With 8 processors (the first time step completes):
Saving Parallel data to PUEX_000000.pvtu
Saving Parallel data to PUEX_000001.pvtu
*** An error occurred in MPI_Irecv
*** reported by process [2564947969,6]
*** on communicator MPI COMMUNICATOR 5 DUP FROM 3
*** MPI_ERR_REQUEST: invalid request
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
With 16 processors (the first time step does not complete):
Saving Parallel data to PUEX_000000.pvtu
** Error in `/rigel/free/users/rs3741/SourceCode/ShearBands/parfeap/feap': free(): invalid next size (fast): 0x0000000002391ae0 ***
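If it helps with the diagnosis, I can rerun the failing cases with additional PETSc diagnostic options and post the output, for example (option names taken from the PETSc documentation, so they may vary slightly with the version bundled in my build):
mpirun -np 8 $FEAPHOME8_3/parfeap/feap -ksp_type gmres -pc_type hypre -pc_hypre_type pilut -ksp_monitor -ksp_converged_reason -malloc_debug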
Do you have any ideas on this issue?
Best regards.