Author Topic: executing umacr before fork in parallel run  (Read 5851 times)

luc

  • Full Member
  • Posts: 53
executing umacr before fork in parallel run
« on: May 31, 2014, 04:00:24 PM »
Hi,
I want to run some parallel simulations using PETSc's fieldsplit capabilities.
To do so I need to define a PetscSection where I associate each degree of freedom to a given field.
This operation needs to be done in the initialization phase of the problem, i.e. before feap duplicates itself, so that my PetscSection is created only once and holds all the information it needs.

How would you suggest doing this, or rather how can I do this before feap is duplicated by mpi?

Also, I noticed that this is taken care of in usolve.F (in the parfeap folder) by choosing where to place one's code depending on whether flags(1) is .true. (see the call to MatCreate/MatSetSizes/MatSetType vs. the calls to MatAssembly/VecAssembly/KSPSolve).

Thanks for the help!
« Last Edit: June 02, 2014, 08:08:38 AM by luc »

luc

  • Full Member
  • Posts: 53
Re: executing umacr before fork in parallel run
« Reply #1 on: July 02, 2014, 12:00:42 PM »
All right, my previous question was badly formulated due to a lack of knowledge on my side.
So here it goes again, after a more in-depth study of the problem:

I want to create a tagging scheme for the dofs in my problem. This is very easy to do in serial and using parfeap with one process.
But when I run problems using more than one process:
     mpirun -n X
with X>1, things get less clear.
Say I want to use two processors: X=2, then I first run parfeap on my serial input, which creates:
     Ifile_0001  and  Ifile_0002
Now I run parfeap with mpirun which starts two processes, one of rank 0 and one of rank 1.
The process of rank 0 will read input file 0001, the process of rank 1 will read the other input.
I somehow need to flag all the dofs of my problem based on the information of each input.
So two problems arise:
  • is it enough to create my PetscSection with the PETSC_COMM_WORLD communicator to have all processes store their local data in that single 'distributed' object?
  • how does parfeap store the dofs assigned to each processor? I know that my local numnp is different from my global numnp, for instance. Is there a place in the programmer manual that deals with programming parfeap?

FEAP_Admin

  • Administrator
  • FEAP Guru
  • Posts: 993
Re: executing umacr before fork in parallel run
« Reply #2 on: July 02, 2014, 01:46:20 PM »
I still cannot tell what you want to do.  But in response to your second question:

parfeap stores the dofs of a given partition only on that partition's processor.  The numbering is local to the processor and is what is shown in Ifile_XXXX.   To communicate dofs to other processors, each Ifile_XXXX has communication data written into it during the partitioning process, as well as local-to-global mapping data.  The parallel manual describes some of this.
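To make the local-vs-global numbering concrete, here is a small sketch (plain Python, with made-up names; this is not parfeap code) of how a per-partition local-to-global node map, of the kind written into Ifile_XXXX, translates local dof indices into global ones:

```python
# Each partition knows only its local nodes plus a local->global node map.
# All names here are illustrative assumptions, not actual parfeap variables.

def local_to_global_dofs(local_to_global_node, ndf):
    """Map every local dof index (node-major, ndf dofs per node)
    to its global dof number."""
    mapping = {}
    for local_node, global_node in enumerate(local_to_global_node):
        for dof in range(ndf):
            mapping[local_node * ndf + dof] = global_node * ndf + dof
    return mapping

# Partition 0 owns global nodes 0,1,2; partition 1 owns 2,3,4 (node 2 shared).
part0 = local_to_global_dofs([0, 1, 2], ndf=2)
part1 = local_to_global_dofs([2, 3, 4], ndf=2)
```

The same local dof index on two partitions (e.g. index 0) can refer to different global dofs, while a shared node's dofs map to the same global numbers from both sides.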

luc

  • Full Member
  • Posts: 53
Re: executing umacr before fork in parallel run
« Reply #3 on: July 02, 2014, 02:11:37 PM »
So let me try to clarify again:
say I have n nodes in my mesh, and each node has 8 dofs that I label 1 1 2 4 4 4 4 3.
What I want is to build a list of all the degrees of freedom with their labels.
To get this list I run the following code:

      do i = 0, numnp-1
          do dof = 1,ndf
              ii   = i*ndf+dof-1
              k    = mr(np(245) + ii)        ! equation number of this dof
              jj   = fsind(dof)              ! field label for this dof
              if( k .gt. 0 ) then            ! skip restrained (Dirichlet) dofs
                  fsmax(jj)     = fsmax(jj)+1
                  j             = fsmax(jj)
                  indices(jj,j) = k-1        ! 0-based index for PETSc
                  call PetscSectionSetDof(FSSection, k, 1, ierr)
                  call PetscSectionSetFieldDof(FSSection, k, jj-1, 1,
     &                 ierr)
              endif
          enddo
      enddo

This works fine in serial (or with parfeap on 1 processor), but when multiple processes are started it crashes.
You can see that I check the program array np(245) to detect Dirichlet BCs and leave those dofs out of my list.

This list is great because it allows me to use PETSc's FIELDSPLIT preconditioners, but so far I haven't found a stable implementation for multi-partition simulations. I will read the parallel manual again to find the mapping from local to global and try to work with it.
But the major point here is to create the whole list and pass it to a PETSc DM, and to do that I need to collect the partial lists from all my partitions and gather them into one big list.
Any idea on how to do that?
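For what it's worth, the gather step can be pictured as follows. This is a toy sketch in plain Python with made-up names; in an actual parallel run the communication would be an MPI collective (e.g. MPI_Allgatherv), which is only simulated here with ordinary lists:

```python
# Merge per-partition (global_dof, field_label) pairs into one global list.
# Shared dofs may be reported by several partitions; since they must carry
# the same label, later reports simply overwrite with an identical value.

def gather_field_lists(partition_lists, n_global_dofs):
    labels = [None] * n_global_dofs
    for pairs in partition_lists:          # one entry per MPI rank
        for global_dof, field in pairs:
            labels[global_dof] = field
    return labels

# Two partitions, 6 global dofs; dofs 2 and 3 live on the shared node.
rank0 = [(0, 1), (1, 1), (2, 2), (3, 2)]
rank1 = [(2, 2), (3, 2), (4, 3), (5, 3)]
labels = gather_field_lists([rank0, rank1], 6)
```

The key assumption is that each partition reports its dofs in *global* numbering, which is exactly where the local-to-global map from the Ifile_XXXX data comes in.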

FEAP_Admin

  • Administrator
  • FEAP Guru
  • Posts: 993
Re: executing umacr before fork in parallel run
« Reply #4 on: July 02, 2014, 02:56:41 PM »
What if you added this code at the point where the individual Ifile_XXXX are being written?  In other words, generate all the data during partitioning and write it to the individual input files.  Then, when you do your mpirun, each processor will read its Ifile_XXXX and find all the information that it needs.
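A minimal sketch of this idea, writing the per-dof field tags out per partition file at partitioning time. The file naming and one-pair-per-line format here are invented for illustration only; they are not the actual Ifile_XXXX format:

```python
# Emit each rank's (global dof, field label) list into its own file while
# partitioning, so each process later finds its data in its own input.

def write_partition_tags(prefix, partitions, field_of_dof):
    """partitions: one list of owned global dof numbers per rank."""
    written = []
    for rank, dofs in enumerate(partitions, start=1):
        fname = f"{prefix}_{rank:04d}.tags"   # e.g. Ifile_demo_0001.tags
        with open(fname, "w") as f:
            for gdof in dofs:
                f.write(f"{gdof} {field_of_dof[gdof]}\n")
        written.append(fname)
    return written

# Two ranks: rank 1 owns global dofs 0,1 (field 1); rank 2 owns 2,3 (field 2).
files = write_partition_tags("Ifile_demo", [[0, 1], [2, 3]],
                             {0: 1, 1: 1, 2: 2, 3: 2})
```

At solve time each rank would then read only its own file and feed the pairs straight into PetscSectionSetFieldDof, with no inter-process gathering required.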