Author Topic: Compute Compliance  (Read 5658 times)

jbruss

  • Jr. Member
  • **
  • Posts: 36
Compute Compliance
« on: October 19, 2017, 05:19:57 PM »
Hello -

I would like to be able to compute the compliance in a user macro in parfeap (which is essentially just the inner product of the force vector with the displacement solution vector). I see the vectors rhs and sol in pfeapc.h. If the solution is quasi-static, is the rhs vector just the force vector, or does it include additional information? Also, if I perform a UTAN,,1 solve, is the memory freed for these vectors, or can I still perform this computation?

Thank you very much for your help.
Jonathan

Prof. S. Govindjee

  • Administrator
  • FEAP Guru
  • *****
  • Posts: 1160
Re: Compute Compliance
« Reply #1 on: October 19, 2017, 07:56:08 PM »
rhs and sol are the right-hand side and the solution of an Ax=b problem.  But that problem is the one you get from linearizing F_int(u) - F_ext = 0, namely (dF_int(u_i)/du) Delta u = [F_ext - F_int(u_i)].  Thus dotting rhs with sol gives you the incremental energy of the iteration (which we already compute and use for convergence testing).

Notwithstanding, the total solution is there and the total applied force is available, so you could compute the dot product directly.
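The same relations in LaTeX (my notation, just restating the text above):

[code]
% Newton step that FEAP solves (sol = \Delta u_i, rhs = the bracketed residual):
\left.\frac{\partial F_{\mathrm{int}}}{\partial u}\right|_{u_i} \Delta u_i
  = F_{\mathrm{ext}} - F_{\mathrm{int}}(u_i)

% Incremental energy of the iteration (what the convergence test uses):
E_i = \mathrm{rhs}\cdot\mathrm{sol}
    = \Delta u_i^{T}\bigl[F_{\mathrm{ext}} - F_{\mathrm{int}}(u_i)\bigr]

% Compliance of the converged, quasi-static solution:
C = F_{\mathrm{ext}}\cdot u
[/code]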

Once you have converged the solution, on each processor its part of the solution is contained in the first numpn*ndf entries of U (hr(np(40))), and the load is in F (hr(np(27))).  If you have changing numbers of active dofs per node then you may have to play around with this.  You could then accumulate those values together across the processors, as in the sketch below.
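A minimal sketch of that accumulation, assuming the usual FEAP include files (comblk.h for hr, pointer.h for np, sdata.h for ndf, and the parallel include that declares numpn -- pfeapb.h in the versions I have seen) and assuming the first numpn*ndf entries of U and F are exactly the dofs owned by this processor; the routine name and includes are illustrative, so check them against your source:

[code]
c     Sketch only: per-processor F.u followed by a global sum
      subroutine compl_fu(ctot)

      implicit  none

      include  'comblk.h'     ! hr(*) array
      include  'pointer.h'    ! np(*) pointers
      include  'sdata.h'      ! ndf = dofs per node
      include  'pfeapb.h'     ! numpn = owned nodes (check name)
      include  'mpif.h'

      real*8    ctot, cloc
      integer   i, ierr

c     Local contribution: F . u over the dofs owned here
      cloc = 0.0d0
      do i = 0, numpn*ndf - 1
        cloc = cloc + hr(np(27)+i)*hr(np(40)+i)
      end do ! i

c     Accumulate the local contributions over all processors
      call MPI_Allreduce(cloc, ctot, 1, MPI_DOUBLE_PRECISION,
     &                   MPI_SUM, MPI_COMM_WORLD, ierr)

      end
[/code]

You could call this from a umacrN user macro and print or store ctot; MPI_Allreduce is used rather than MPI_Reduce so that every processor ends up with the same total.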

Note that FEAP will already print this number per processor: type REACtion at the macro prompt.  The only thing I am not sure of is whether it includes the ghost nodes in this computation, so you will have to check that on a simple problem.

jbruss

  • Jr. Member
  • **
  • Posts: 36
Re: Compute Compliance
« Reply #2 on: October 20, 2017, 07:34:19 AM »
Thank you, Professor Govindjee. If I understand correctly, I should have the following three options to achieve the same result:

1) Use PETSc to compute the inner product of rhs and sol to obtain the energy if the problem is linear (since I should converge in one iteration); a sketch of this appears after this list.

2) If the problem is nonlinear, I could accumulate this incremental energy after each iteration. I'm not sure about this yet because of the nonlinearity (i.e., it may not be valid simply to sum the increments).

3) If the problem is nonlinear, compute the inner product using U (hr(np(40))) and F (hr(np(27))) on each processor and call MPI_Reduce to accumulate this quantity across processors. I think this is the way I will likely go, so thank you for your suggestion. It seems this would avoid the ghost-node complexity, since numpn does not appear to include the ghost nodes (at least according to my reading of the statement "The sum of the nodes in a partition (numpn) and its ghost nodes defines the total number of nodes in each partitioned data file (i.e., the total number of nodes, numnp, in each mesh partition)." in the parallel manual).
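For option 1, a hedged fragment using the PETSc Fortran interface on rhs and sol (this assumes it sits inside a parFEAP user macro that already has the PETSc includes and pfeapc.h in scope, and that it runs right after the UTAN,,1 solve, before those vectors are destroyed or overwritten):

[code]
c     Sketch only: incremental energy rhs.sol via PETSc (option 1).
c     For a linear problem solved in one iteration this equals F.u.
      PetscErrorCode ierr
      PetscScalar    energy

      call VecDot(rhs, sol, energy, ierr)
      write(*,*) ' rhs.sol (compliance if linear) =', energy
[/code]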

Thank you again,
Jonathan