Author Topic: staggered in parallel  (Read 9581 times)

blackbird

  • Full Member
  • ***
  • Posts: 100
staggered in parallel
« on: March 25, 2014, 03:25:36 AM »
Hello!

I am trying to do a staggered calculation in parallel. As far as I know the partition command does not work in 8.3 for parallel calculations, so I tried to do it manually. I have 3 dof per node, and I created two different types of tangent matrices (at each node):
First step:
[ X X 0 ]
[ X X 0 ]
[ 0 0 0 ]
Second step:
[ 0 0 0 ]
[ 0 0 0 ]
[ 0 0 X ]
I solve with each of them within the same "TIME" step. While this seems to work fine, there is a warning line in the output file that I am a little concerned about:
" *D4TRI WARNING* Reduced diagonal is zero in       717 equations."

Is it serious and does it affect the calculation, or may I ignore it?
And is there perhaps a better way to do staggered calculations in parallel?

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1164
Re: staggered in parallel
« Reply #1 on: March 25, 2014, 09:00:41 AM »
parFEAP does not use DATRI; it uses PETSc.  So something is already wrong there.
That said, PETSc has some built-in infrastructure that you may be able to use (see the PETSc field split preconditioner, PCFIELDSPLIT).
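Roughly speaking, field splitting is driven through PETSc runtime options rather than through the FEAP input file. As an untested sketch only, and assuming your dofs are blocked node by node in the order (mechanical x, mechanical y, third field), the basic split definition would look something like

  -pc_type fieldsplit
  -pc_fieldsplit_block_size 3
  -pc_fieldsplit_0_fields 0,1
  -pc_fieldsplit_1_fields 2

i.e. split 0 holds the two mechanical dofs of each node and split 1 the remaining dof. You would need to check how parFEAP numbers and passes its dofs to PETSc before relying on this.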

blackbird

  • Full Member
  • ***
  • Posts: 100
Re: staggered in parallel
« Reply #2 on: April 02, 2014, 12:46:02 AM »
Sorry, you are right. Of course that warning is from the serial test run. Once I noticed that the FEAP partition command does not work in parFEAP, I tried to implement it manually in the way I described in the first post. I tested it in serial and afterwards in a parallel run (parFEAP 8.3 + PETSc 3.2-p7). As long as my model is fully constrained (displacement boundary conditions), serial and parallel give approximately equal results (apart from the speedup). But when I apply force boundary conditions, the parallel run does not converge while the serial run does.

My first thought was that it had something to do with the warning I posted before. I have now looked at the parallel log, which tells me the solve did not converge because of an indefinite matrix. As I said, the same problem converges fine in serial. To set up the parallel run I used petscmpiexec with the conjugate gradient solver (-ksp_type cg) and a Jacobi preconditioner (-pc_type jacobi).
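For reference, the launch line was essentially of the following form (the executable name and the process count are just placeholders for my setup); the last two options are standard PETSc flags that only make the solver print its residual history and the reason it stops:

  petscmpiexec -n 4 ./feap -ksp_type cg -pc_type jacobi \
      -ksp_monitor_true_residual -ksp_converged_reason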

Would you suggest different options for such a problem, or is this due to my manual staggered tangent calculation? Is there anything else I should be aware of?

blackbird

  • Full Member
  • ***
  • Posts: 100
Re: staggered in parallel
« Reply #3 on: April 30, 2014, 12:47:09 AM »
I really wonder why you can't either tell me what is wrong with this way of doing a staggered calculation in parallel, or tell me how it should be done.

Anyway, I decided to try the way you proposed and use the PETSc built-in infrastructure. Unfortunately, I don't know how to use this PETSc field split. Is it an option to add to the mpiexec call, or do I have to change something in parFEAP's source code?


FEAP_Admin

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 993
Re: staggered in parallel
« Reply #4 on: April 30, 2014, 11:20:06 PM »
The information you provide is rather limited, so one cannot really help you much. I will point out, though, that the DATRI warning means that your global tangent matrix is singular in a large number of equations.

blackbird

  • Full Member
  • ***
  • Posts: 100
Re: staggered in parallel
« Reply #5 on: May 02, 2014, 01:35:15 AM »
OK, I will try to make myself clearer:

I have 3 dof per node. Two of them are mechanical, i.e. the displacements in the x and y directions; the third is a fracture-mechanics degree of freedom, just as temperature would be in thermomechanics. While setting up the element matrices inside the elmtxx.f file I have a flag that tells me which part I want to solve, the mechanical part or the fracture part.

Imagine a 4-node element. Its element matrix s would be 12x12. s(1:2,1:2) contains the mechanical coefficients for node one, s(3,3) contains the fracture coefficient for node one, s(1:2,3) and s(3,1:2) would be the coefficients coupling fracture and mechanics, s(1:2,4:5) would be the coupling coefficients between nodes one and two, and so on. In the case of the mechanical solve I only set the coefficients for the mechanical part, i.e. s(1:2,1:2) for node one; the other entries (for that node) are 0. Of course s(3,3) is then zero too, which means I have zeros on the main diagonal of the global matrix and therefore a singular global tangent.

What I am trying to say is: the serial solver seems to cope with this. My problem is solved and the solution looks reasonable. So I just wanted to know why this is not possible in parallel. I hope it is now clear what I was trying to say.

Anyway, it would be nice to use this PETSc built-in infrastructure. Could you please advise me on how to use PCFIELDSPLIT? I found some documentation on it, but I still have no idea how to use it.

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1164
Re: staggered in parallel
« Reply #6 on: May 03, 2014, 08:31:24 AM »
I am surprised this worked in serial FEAP, but you may have gotten lucky with your global matrix. In parallel FEAP this will definitely not work unless you modify parFEAP. The problem is that FEAP allocates a matrix based on all of your dofs. When you only set one field's dofs (say, only the mechanical ones), it will assemble many zeros and your final matrix will be singular -- hence the error you see.

If you want to do this, then you will have to instruct FEAP what the matrix structure should look like for each partition, etc. This is certainly doable, but it is somewhat involved, depending on how general you would like your implementation to be.

blackbird

  • Full Member
  • ***
  • Posts: 100
Re: staggered in parallel
« Reply #7 on: May 14, 2014, 03:17:59 AM »
I don't want to do this in serial FEAP; I would really like to concentrate on parFEAP now.

I understand there is a PETSc preconditioner, PCFIELDSPLIT, which may be able to do something like what the PART command does in serial FEAP. But I have no idea how to use it or where to start with this at all. I found

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFIELDSPLIT.html

which seems to be a good description of what it does. Still, I don't have the slightest clue where or even how to use it. Could you please give me some help in that direction?

FEAP_Admin

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 993
Re: staggered in parallel
« Reply #8 on: May 14, 2014, 04:07:50 PM »
We have not used this ourselves, so we cannot help too much. If you look at the examples/tutorials in the PETSc source tree you should find at least one example. Dr. Waisman's group at Columbia has used this with FEAP, so you could also try contacting them.
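As a rough, untested starting point only (and assuming the node-by-node dof ordering from the earlier posts, with the two mechanical dofs first), the runtime options for a field-split preconditioner would look something like

  -ksp_type gmres
  -pc_type fieldsplit
  -pc_fieldsplit_block_size 3
  -pc_fieldsplit_0_fields 0,1
  -pc_fieldsplit_1_fields 2
  -pc_fieldsplit_type multiplicative
  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type bjacobi
  -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type jacobi
  -ksp_view

where split 0 would be the mechanical block and split 1 the fracture block, each with its own inner solver options, and -ksp_view prints the resulting solver layout so you can check that the splits are what you intended. Whether the dof numbering that parFEAP hands to PETSc actually matches this assumption is exactly the kind of thing the examples should help you verify.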