Author Topic: Exporting projected values to vtu files in parallel  (Read 11134 times)

luc

  • Full Member
  • ***
  • Posts: 53
Exporting projected values to vtu files in parallel
« on: November 13, 2014, 04:18:31 PM »
Hi all,
I am working on some parallel runs and I want to compute the stresses in my solid and export them in vtu files to visualize them with paraview.
I already have routines that export serial data into a single vtu file but things are slightly different in parallel.
So all I need to do is get correct values for the projected quantities. But when I use the projection routine I get wrong values at the ghost nodes.
This makes sense, since the ghost nodes lack some of their neighbors. So I can't bluntly print out these local vectors...

What would you recommend here: should I do an MPI communication to update the ghost nodes after the projection step, or should I export the result of the projection without the ghost nodes and somehow reconnect the subdomains created by METIS?

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1165
Re: Exporting projected values to vtu files in parallel
« Reply #1 on: November 13, 2014, 05:43:06 PM »
My vague recollection is that in parallel, FEAP's ParaView output files are automatically assembled by ParaView in the correct manner.  Is that not the case for your test runs?

luc

  • Full Member
  • ***
  • Posts: 53
Re: Exporting projected values to vtu files in parallel
« Reply #2 on: November 14, 2014, 07:32:21 AM »
No. I talked with Colin here at CU, and he thinks it is due to the fact that I use local node numbers in my connectivity.
I don't see how things would work without using local node numbers, since ParaView cannot guess on its own which global node number is associated with the local list of coordinates.
I will write a small example, maybe a 3-by-3 mesh with four partitions, and post it here to show how this works and share the solution!

Colin McAuliffe

  • Jr. Member
  • **
  • Posts: 21
Re: Exporting projected values to vtu files in parallel
« Reply #3 on: November 14, 2014, 09:26:22 AM »
I think when I did this in parallel (unfortunately the code is lost), I used the GLOBALID array as well as the ID array in the individual vtu files. ParaView can then easily stitch the data together, because nodes with the same global id are recognized as the same node. Otherwise, for ParaView to do this automatically without explicit global ids, the ghost level probably has to be set properly. The default is 0, but that would not be correct, since FEAP has some overlapping elements.

Also, Prof. Govindjee, we have been looking at the lumped projection routine called by STRE,NODE and as far as we can tell there is no interprocessor communication in the assembly of the integrated projection quantities or weights. Is this the case?

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1165
Re: Exporting projected values to vtu files in parallel
« Reply #4 on: November 14, 2014, 01:19:30 PM »
My memory is starting to come back.

Two methods were developed for examining the parallel models graphically

Option 1 (original method):  One first issues GPLOt commands at the macro prompt during parallel execution.  This dumps per-processor files to disk.  Then one opens a serial run of FEAP, and at the plot prompt one issues NDATa commands to plot the reassembled computation.  See the parallel manual for some details.  With this scheme the node numbering is supposed to be remapped to the original serial node numbering.

Option 2 (ParaView method):  This is described at http://www.ce.berkeley.edu/~sanjay/FEAP/feap.html (look for the awk script under the ParaView information).  This script is supposed to reassemble the vtu files in the proper manner.  I believe it is important to issue a STREss,NODE command before outputting the individual ParaView files so that you have correct nodal values.

I haven't used either method in the past few years, but they should both still function properly.

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1165
Re: Exporting projected values to vtu files in parallel
« Reply #5 on: November 14, 2014, 01:22:57 PM »
Colin,
  If I recall correctly, we did set up the projections.  They should be OK, since the solution at the nodes of the ghost elements is available in each partition.  If you find they are incorrect, please let us know.  One can also cross-compare the GPLOt/NDATa method with the ParaView method.

luc

  • Full Member
  • ***
  • Posts: 53
Re: Exporting projected values to vtu files in parallel
« Reply #6 on: November 17, 2014, 06:25:02 AM »
Hi Prof. Govindjee,
I am actually working on a large supercomputer (tens of thousands of CPUs), and that leads to a very large number of vtu files being created.
I don't think that post-processing them in serial is realistic in my case, so I'll keep digging into the vtk specification to get ParaView to do it for me automatically.

Regarding the projection routines, here is what I think is happening:
let's assume I have a 1-D mesh with three elements and four nodes
     1---2---3---4
METIS partitions this mesh as follows:
     1---2---iii             ii---3---4
with the ghost nodes written as roman numerals.

We can see that the middle element belongs to both partitions, and that every node belongs to exactly one partition but might be a "ghost" in another.
Before the projection is performed, all the values at the nodes and ghost nodes have been updated and are correct.
Now when the projection is performed, the values projected to the ghost nodes are wrong, since the ghost nodes do not have all of their neighboring elements.
When I write the projected values into files I can omit the ghost nodes, and I will get correct values at every point.
My issue is that if I do not write the ghost nodes, the middle element will be "lost" and I'll have a gap in my output.
I think that this can be sorted using various strategies but it is something that cannot be ignored because it modifies the way the export to vtk files has to be done.
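The effect can be sketched in a few lines of Python (a toy lumped projection with unit weights and 0-based node numbers; this is an illustration, not FEAP's actual routine):

```python
# Toy lumped projection on the 1-D example (unit weights, 0-based node
# numbers -- NOT FEAP's actual routine).  The ghost node only sees the
# elements present in its partition, so its projected value comes out wrong.

def lumped_projection(elements, sigma, n_nodes):
    """Average element-constant values sigma[e] to the nodes of each element."""
    num = [0.0] * n_nodes   # accumulated element contributions
    wt = [0.0] * n_nodes    # lumped weights (element counts here)
    for e, (a, b) in enumerate(elements):
        for n in (a, b):
            num[n] += sigma[e]
            wt[n] += 1.0
    return [num[n] / wt[n] if wt[n] else 0.0 for n in range(n_nodes)]

# Serial mesh 1---2---3---4 (nodes 0..3), element stresses 10, 20, 30
serial = lumped_projection([(0, 1), (1, 2), (2, 3)], [10.0, 20.0, 30.0], 4)

# Partition 1 holds nodes 0, 1 plus ghost node 2 and the overlap element (1, 2)
part1 = lumped_projection([(0, 1), (1, 2)], [10.0, 20.0], 3)

print(serial[1], part1[1])  # 15.0 15.0 -> owned node agrees
print(serial[2], part1[2])  # 25.0 20.0 -> ghost node is missing element (2, 3)
```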

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1165
Re: Exporting projected values to vtu files in parallel
« Reply #7 on: November 17, 2014, 07:32:51 PM »
luc,
  What you write is correct except for one small point.  At the end of the projection, nodes 1, 2, 3, and 4 will have correct projected values.  ii and iii will not, but these are not owned by the partition and should not be written to the output files (only up to numpn should be written out, i.e., do not go up to numnp).
   The thing that I believe will be missing from the vtu files is the cell information for the element joining nodes 2 and 3.   Depending on the parallel plotting behavior of ParaView, there are a couple of possible solutions.   Just to illustrate, assume that ParaView does not care if a given element appears in two partitions (as long as the data is the same).  In that case, after the projection takes place, one can scatter the data to the ghost nodes and then write the vtu files out to numnp.
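That scatter-then-write idea can be sketched as follows ("l2g", "values", and "numpn" are an assumed data layout for illustration, not FEAP's actual arrays; owned nodes come first, ghosts after numpn):

```python
# Sketch of the scatter: each partition publishes the values of the nodes it
# owns (local indices 0 .. numpn-1), then overwrites its ghost values (local
# indices numpn ..) with the owners' values.

def scatter_to_ghosts(partitions):
    owned = {}                                       # global id -> owner's value
    for p in partitions:
        for ii in range(p["numpn"]):                 # owners publish
            owned[p["l2g"][ii]] = p["values"][ii]
    for p in partitions:
        for ii in range(p["numpn"], len(p["l2g"])):  # ghosts pull
            p["values"][ii] = owned[p["l2g"][ii]]

# The 1-D example: each partition owns two nodes and holds one ghost whose
# projected value (the third entry) is wrong before the scatter.
parts = [
    {"l2g": [1, 2, 3], "values": [10.0, 15.0, 20.0], "numpn": 2},
    {"l2g": [3, 4, 2], "values": [25.0, 30.0, 17.5], "numpn": 2},
]
scatter_to_ghosts(parts)
print(parts[0]["values"])  # [10.0, 15.0, 25.0] -> ghost now matches its owner
```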
-sg

luc

  • Full Member
  • ***
  • Posts: 53
Re: Exporting projected values to vtu files in parallel
« Reply #8 on: November 18, 2014, 04:48:38 AM »
Prof. Govindjee,
Yes, we agree on this. I understood that all the projected values were set correctly on their individual processes, which makes complete sense, and it eventually works fine if you gather all the data on, say, process 0 to write out the vtu file (or you can use a script to do it after the simulation). But this leads to serial I/O behavior, and that's fine, it's a design choice.
Since I want to do this step in parallel, I need to do the scatter as you mention. I think I will use a routine inspired by parfeap/psetb.F.
The main reason for this is that ParaView requires a list of coordinates describing the nodes in each local vtu file. Since no id is attached to these coordinates, the connectivity has to be written with local node numbering, so it is in fact impossible to "stitch" two subdomains together.
What can be done is to specify the global ids of the nodes:
    <PointData GlobalIds="GlobalNodeId" ...>
    <DataArray Name="GlobalNodeId" ...>   write global node number here  </DataArray>
    </PointData>


I might request that last scatter for the projection as a feature on the wish list; it seems really unavoidable for getting the I/O to work in parallel.
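For what it's worth, a minimal per-partition piece carrying such a GlobalNodeId array can be sketched like this (illustrative only; FEAP's writer and the exact attributes ParaView expects may differ):

```python
# Minimal ASCII .vtu piece for one partition, carrying a GlobalNodeId
# point-data array (2-node line cells, VTK cell type 3 = VTK_LINE).

def vtu_piece(coords, cells, gids):
    pts  = " ".join("%g %g %g" % p for p in coords)
    conn = " ".join(str(n) for c in cells for n in c)
    offs = " ".join(str(2 * (i + 1)) for i in range(len(cells)))
    typs = " ".join("3" for _ in cells)              # 3 = VTK_LINE
    gid  = " ".join(str(g) for g in gids)
    return f"""<?xml version="1.0"?>
<VTKFile type="UnstructuredGrid" version="0.1">
 <UnstructuredGrid>
  <Piece NumberOfPoints="{len(coords)}" NumberOfCells="{len(cells)}">
   <PointData GlobalIds="GlobalNodeId">
    <DataArray type="Int32" Name="GlobalNodeId" format="ascii"> {gid} </DataArray>
   </PointData>
   <Points>
    <DataArray type="Float64" NumberOfComponents="3" format="ascii"> {pts} </DataArray>
   </Points>
   <Cells>
    <DataArray type="Int32" Name="connectivity" format="ascii"> {conn} </DataArray>
    <DataArray type="Int32" Name="offsets" format="ascii"> {offs} </DataArray>
    <DataArray type="UInt8" Name="types" format="ascii"> {typs} </DataArray>
   </Cells>
  </Piece>
 </UnstructuredGrid>
</VTKFile>"""

# Partition 1 of the 1-D example: local nodes 0, 1, 2 are global nodes 1, 2, 3
xml = vtu_piece([(0, 0, 0), (1, 0, 0), (2, 0, 0)], [(0, 1), (1, 2)], [1, 2, 3])
```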

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1165
Re: Exporting projected values to vtu files in parallel
« Reply #9 on: November 18, 2014, 09:35:44 AM »
We can look at adding that to the scatter. 

For the global-to-local information: it should be in mr(np(244) + ii - 1), where ii runs from 1 to numpn on each processor.  ii is the local node number, and the contents of mr(np(244) + ii - 1) should be the global node number.
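As a sketch of how that map can be used when reassembling per-partition nodal values into the global numbering (the l2g lists below play the role of the 1-based mr(np(244) + ii - 1) array; the values are made up):

```python
# Reassembling per-partition nodal values into the global numbering.  Only
# the owned nodes (ii = 1 .. numpn) are copied, so the wrong ghost-node
# projections are never used.

def assemble_global(parts, n_global):
    out = [None] * (n_global + 1)          # 1-based; index 0 unused
    for l2g, values, numpn in parts:
        for ii in range(1, numpn + 1):     # owned nodes only
            out[l2g[ii - 1]] = values[ii - 1]
    return out

parts = [
    ([1, 2, 3], [10.0, 15.0, 99.0], 2),   # owns global 1, 2; global 3 is a ghost
    ([3, 4, 2], [25.0, 30.0, 99.0], 2),   # owns global 3, 4; global 2 is a ghost
]
stress = assemble_global(parts, 4)
print(stress[1:])  # [10.0, 15.0, 25.0, 30.0] -- the bogus 99.0 ghosts never used
```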