The application of explicit dynamics to simulate quasi-static events often becomes impractical in terms of computational cost. Different solutions have been investigated in the literature to decrease the simulation time, and a family of interesting, increasingly adopted approaches are those based on the proper orthogonal decomposition (POD) as a model reduction technique. In this study, the algorithmic framework for the integration of the equations of motion through POD is proposed for discrete linear and nonlinear systems: a low dimensional approximation of the full order system is generated by the so-called proper orthogonal modes (POMs), computed with snapshots from the full order simulation. Aiming at a predictive tool, the POMs are updated in itinere, alternating the integration of the complete system, for the snapshot collection, with the integration of the reduced system. The paper discusses details of the transition between the two systems and issues related to the application of essential and natural boundary conditions (BCs). Results show that, for one-dimensional (1D) cases, just a few modes are capable of an excellent approximation of the solution, even in the case of stress–strain softening behavior, allowing the critical time-step of the simulation to be conveniently increased without significant loss of accuracy. For more general three-dimensional (3D) situations, the paper discusses the application of the developed algorithm to a discrete model called the lattice discrete particle model (LDPM), formulated to simulate quasi-brittle materials characterized by a softening response. The efficiency and accuracy of the reduced order LDPM response are discussed with reference to both tensile and compressive loading conditions.
Introduction
One of the main issues in computational mechanics is related to the capability of modeling the behavior of highly nonlinear structures subjected to dynamic and quasi-static loading when the integration of the governing equations leads to very expensive numerical simulations. If a large number of degrees-of-freedom (DOF) and nonlinearities are involved, explicit time integration techniques are generally preferred to implicit methods and are considered attractive not only for wave propagation problems, where they have found most of their applications, but also for slow dynamics and quasi-static problems [1,2]. The main benefit of explicit algorithms is that they are not affected by the convergence problems which may occur when dealing with nonsmooth phenomena, like contact, softening damage laws or cohesive zone models, and damage localization. Furthermore, they usually require a lighter computational effort for each time-step, since full mass and stiffness matrices are not assembled. Explicit algorithms, however, are not unconditionally stable and require an accurate evaluation of numerical stability. In particular, since the stable time-step decreases as the highest natural frequency of the computational system increases, a prohibitive number of time increments is required for problems governed by low frequencies. Indeed, in many cases, time steps much larger than the critical one would be sufficient for a given accuracy but cannot be used due to stability requirements (see, among others, [1,3–5]). Mass scaling [6], for instance, was developed to increase the time-step size by adjusting the mass of the most critical elements; significant errors, though, can originate if the elements where the mass scaling is applied contribute significantly to the global system response and, consequently, more elaborate techniques need to be applied [7,8]. Dynamic condensation has also been used in the literature for this purpose (see e.g., Ref. [9]).
Another interesting and promising approach is the application of the proper orthogonal decomposition (POD) as a model reduction technique, which has now found applications in different fields of engineering and physics (e.g., Refs. [10–18]) following the pioneering studies of Refs. [19–21]. Dynamic systems can be projected onto subspaces containing the solution of the problem or a good approximation of it, so that a high-dimensional process is converted into a low-dimensional one [22].
The POD application to complex nonlinear problems has also been pursued by several researchers, and different numerical techniques are now being investigated to improve both the efficiency and the accuracy of POD approaches [23–26].
The aim of the present study is to investigate the use of the proper orthogonal decomposition as a model reduction technique for the explicit integration of slow-dynamic or quasi-static problems featuring strain softening behavior.
Explicit Integration of Equations of Motion
The system of equations (1) needs to be discretized in time to be solved numerically. For this purpose, the middle point rule is quite often used, which gives u̇^(m+1/2) = u̇^(m−1/2) + Δt M⁻¹(f_ext^m − f_int^m) and u^(m+1) = u^m + Δt u̇^(m+1/2), with Δt being the time-step and m the index of the current time-step, t^m = mΔt. In the elastic regime, stability requirements limit the time-step Δt through the constraint Δt < Δt_cr = 2/ω_max, where ω_max represents the highest natural frequency of the computational system [39]. It can be shown [40] that ω_max ≤ max(ω_e), where ω_e are the maximum natural frequencies of portions (subsystems) of the overall system. Hence, a stable time-step can be computed by solving, for each possible subsystem, the eigenvalue problem det(K_e − ω² M_e) = 0, where K_e is the stiffness matrix and M_e the mass matrix of the subsystem. A subsystem corresponds to a finite element in a finite element mesh, a subset of interacting particles/nodes in a discrete model, or a lattice element in a lattice model.
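As an illustration, the update above can be sketched for a generic linear n-DOF system; the function and variable names (`stable_dt`, `integrate`) and the linear internal force are illustrative choices, not part of the original formulation.

```python
import numpy as np

# Minimal sketch of the middle point (central difference) explicit scheme for a
# linear system M*u'' + K*u = f_ext; all names here are illustrative choices.
def stable_dt(K, M):
    """Critical time-step 2/omega_max from the eigenvalues of M^-1 K."""
    omega2 = np.linalg.eigvals(np.linalg.solve(M, K)).real
    return 2.0 / np.sqrt(omega2.max())

def integrate(K, M, f_ext, u0, v0, dt, n_steps):
    u, v = u0.copy(), v0.copy()              # v holds the half-step velocity
    for _ in range(n_steps):
        f_int = K @ u
        # v^{m+1/2} = v^{m-1/2} + dt * M^-1 (f_ext^m - f_int^m)
        v = v + dt * np.linalg.solve(M, f_ext - f_int)
        # u^{m+1} = u^m + dt * v^{m+1/2}
        u = u + dt * v
    return u, v
```

For a single undamped oscillator with unit mass and stiffness, the critical step is exactly 2/ω = 2, and the amplitude stays bounded for steps well below that limit.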
Reduced Order Explicit Dynamics
With the application of the POD as a model reduction technique, a low dimensional approximation of the full high dimensional dynamical system can be used to reproduce the trend of the system response and to capture most of the phenomena of interest while reducing the computational cost.
General Introduction to Proper Orthogonal Decomposition.
As explained in detail by Liang et al. [41], among many others, the main idea of the POD is to find a projection of the vector space ℝⁿ onto a subspace of fixed dimension k < n containing the best approximation of a given set of vectors in ℝⁿ. The method seeks a set of n ordered orthonormal basis vectors for ℝⁿ such that the first k < n basis vectors generate the optimal orthogonal projection of rank k.
where the reduced order approximation is ũ = Σ_{i=1}^{k} (uᵀφᵢ)φᵢ and the basis vectors satisfy φᵢᵀφⱼ = δᵢⱼ. By using the mean square error as a measure of optimality, the reduced order solution must satisfy the condition min E[‖u − ũ‖²] over all orthonormal bases of rank k. These special, orthonormal, basis vectors are called the proper orthogonal modes (POMs) for u, and the resulting expansion can be called the POD of u.
Application of Proper Orthogonal Decomposition in a Finite Dimensional Case.
The Schmidt–Eckart–Young–Mirsky theorem [43] shows that if one defines the n-by-n real symmetric matrix C = Σⱼ uⱼuⱼᵀ and denotes with λ₁ ≥ λ₂ ≥ … ≥ λₙ ≥ 0 the eigenvalues of C and with φ₁, …, φₙ their associated eigenvectors, such that Cφᵢ = λᵢφᵢ, then the subspace optimizing the orthogonal projection of fixed rank k is the invariant subspace of C associated with the eigenvalues λ₁, …, λ_k.
where ψᵢ are the eigenvectors of UᵀU and φᵢ are the eigenvectors of C [43].
The state variables of interest are sampled from the full order model for each node at different times. The samples are called “snapshots” of the full order model.
Reduced Order Integration Algorithm.
The POD technique can be applied to the reduced-order integration of the system of equations described in Eq. (1).
Let us consider an n-by-N_snap matrix U, such that each element u_ij is the value of a DOF at the ith node taken at the jth time snapshot, and define the n-by-n real symmetric matrix C = UUᵀ.
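In practice, the POMs can be obtained from the singular value decomposition of the snapshot matrix rather than by forming C explicitly; the sketch below assumes this equivalent route (`compute_poms` is a hypothetical helper name).

```python
import numpy as np

# Sketch: first k POMs from an n-by-Nsnap snapshot matrix U via the SVD of U,
# which yields the eigenvectors of C = U @ U.T without forming C explicitly.
def compute_poms(U, k):
    Phi, s, _ = np.linalg.svd(U, full_matrices=False)
    B = Phi[:, :k]                                # n-by-k projection matrix
    energy = (s[:k] ** 2).sum() / (s ** 2).sum()  # captured eigenvalue fraction
    return B, energy
```

The `energy` ratio corresponds to the fraction of the total sum of eigenvalues of C captured by the first k modes, the quantity quoted later for the 1D benchmark.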
where Bᵀf_int and BᵀMB are the projections of the internal force vector and of the mass matrix, respectively, onto the reduced subspace.
The unknowns are the coefficients dᵢ (i = 1, …, k), which define the amplitude of each orthogonal deformation mode used to approximate the actual solution. In each time-step, u and u̇ are evaluated from the associated projected values, while the internal forces and the residual forces are computed in the full order system. The time-step of the reduced system Δt_R is subjected to a different constraint compared to Δt: Δt_R < 2/ω_k,max, where ω_k,max is computed from the eigenvalue problem det(BᵀKB − ω²BᵀMB) = 0.
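A single reduced time-step can then be sketched as follows, with the internal forces still evaluated in the full order system and only the integration performed on the modal amplitudes d; the names and the diagonal (lumped) mass assumption are illustrative.

```python
import numpy as np

# Sketch of one explicit step in the reduced subspace spanned by the columns
# of B (assumed orthonormal); M_diag is a diagonal (lumped) mass assumption.
def reduced_step(B, M_diag, f_int_full, f_ext_full, d, d_dot, dt_R):
    r = B.T @ (f_ext_full - f_int_full)      # projected residual forces
    M_R = B.T @ (M_diag[:, None] * B)        # projected mass matrix B^T M B
    d_dot = d_dot + dt_R * np.linalg.solve(M_R, r)
    d = d + dt_R * d_dot
    u = B @ d                                # full-order displacements for f_int
    return d, d_dot, u
```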
Essential Boundary Conditions.
The POMs automatically satisfy any homogeneous essential (Dirichlet) BCs by construction. In fact, the U matrix has a zero-valued row for each fixed degree-of-freedom, and this information is transferred to the modes themselves and, consequently, to any vector in the subspace they span.
Nonhomogeneous essential BCs must be re-applied in the reduced system, since this information is not automatically preserved by POMs. It is not easy, however, to apply the BCs directly to the projected degrees-of-freedom in the reduced subspace, because this leads to an overdetermined system. To overcome this problem, the nonhomogeneous essential BCs can be applied indirectly as equivalent external forces through the penalty method: fext = kp(up − u), where fext is the external force equivalent to the BC, kp is the penalty coefficient, and up is the penalty displacement prescribed by the BC. The penalty enforcement of the boundary conditions in the reduced order model has been previously explored by Kalashnikova and Barone [44].
Due to instability issues, the nodal mass M where the penalty constraint is applied must be artificially increased (mass scaling) by adding a quantity proportional to the penalty coefficient kp and to the square of the time-step Δt: Mscal = M + 1.1kpΔt². When the penalty method is applied, the computation of the stable time-step in the projected system must also take into account the increased stiffness due to the penalty constraint.
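The penalty enforcement and the associated stabilizing mass increase amount to two one-line formulas; the sketch below simply transcribes them for a single DOF (function names are illustrative).

```python
# Sketch of the penalty enforcement of a nonhomogeneous essential BC on a
# single DOF, with the stabilizing mass scaling quoted in the text.
def penalty_force(u, u_p, k_p):
    return k_p * (u_p - u)          # f_ext equivalent to the prescribed BC

def scaled_mass(M, k_p, dt):
    return M + 1.1 * k_p * dt ** 2  # M_scal = M + 1.1 * k_p * dt^2
```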
Mode Updating.
The snapshots can be computed a posteriori, after the end of the regular analysis. This method allows the definition of the reduced system for model validation purposes, and the snapshot collection can be the most efficient possible, since the real solution is already known. Aiming at a predictive tool, though, the snapshots should be collected during the analysis itself, in itinere, alternating the integration of the complete system, for the snapshot collection, with the integration of the reduced system, until the snapshots previously collected cease to be a good representation of the actual response. When dealing with quasi-static problems, the complete system can be integrated only for a small initial time interval ΔT, just to capture an adequate number of snapshots, which sufficiently describe the system behavior on average. The corresponding reduced system can, then, be computed from the first k POMs and the analysis carried out in the reduced system with a larger time-step. Obviously, the snapshots may need to be updated to take into account any variation of the external input (for instance, in case of changes in applied forces, displacements, or velocities) or nonlinearities (for instance, due to a nonlinear constitutive law).
After a time interval ΔTR, when the POMs are not representative anymore, the analysis could be carried out back through the complete system to recompute a new set of POMs. Different amplitudes of ΔT and ΔTR lead to different accuracy levels of the approximation, since the definition of the reduced order subspace and the frequency of its updates are directly affected. Once the snapshots have been collected and the corresponding reduced space defined, the integration in the reduced system can start, using as initial conditions d = Bᵀu and ḋ = Bᵀu̇, where u and u̇ are the values of the nodal unknowns computed in the last time-step of the complete system integration.
As already explained, the stable time-step in the reduced system is larger than the one in the complete system, ΔtR > Δt. To allow a proper transition from the integration scheme in the complete space to the reduced one and vice versa, an appropriate time-step definition must be used. The time-step ΔtV = (ΔtR + Δt)/2 is used for the integration of velocities in both the first and the last time-step in the reduced system. For the displacement integration, instead, ΔtD = ΔtR is used for the first time-step and ΔtD = Δt is used for the last time-step. Figures 1(b) and 1(c) clarify the time-step definitions in the transition from one integration scheme to the other, considering that the velocities are defined at half interval according to the middle point rule, whereas the displacements are defined at the end of the interval.
Equation (8) provides the initial conditions for the integration in the complete system. While the displacement is computed directly from the projection matrix B, the velocity is averaged from the displacements in the relevant interval. This approach was shown to provide a smooth transition of the system response from the reduced to the complete integration, as opposed to the option of using the direct projection u̇ = Bḋ as initial condition. In the latter case, abnormal oscillations were observed in the very initial part of the response after the transition to the complete system.
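The restart of the complete system after a reduced window can be sketched as follows; the displacement is recovered by direct projection, while the velocity is differenced from displacements over the blended interval (this finite-difference form of the averaging is an assumption, as are all names).

```python
import numpy as np

# Sketch of the reduced-to-complete transition state. Recovering the velocity
# as a displacement difference over the blended step dt_V = (dt_R + dt)/2 is
# an assumed concrete form of the "averaged from displacements" rule.
def restart_state(B, d_now, d_prev, dt_R, dt):
    u = B @ d_now                    # displacement: direct projection u = B d
    u_prev = B @ d_prev
    dt_V = 0.5 * (dt_R + dt)
    v = (u - u_prev) / dt_V          # smoother than the direct v = B @ d_dot
    return u, v
```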
One-Dimensional Implementation and Analysis
System Description.
In order to explore the potential applicability of the POD to nonlinear problems and to understand how this technique might be used to extract qualitative and quantitative information about the response of large mechanical systems, a simple one-dimensional (1D) benchmark example is described here. As shown in Fig. 2, a simple discrete system is considered in which a number n of masses mi = ρAil are connected to each other (and to a wall at one end) by nonlinear springs.
In the elastic regime, the spring force is proportional to the relative displacements of the two adjacent nodes: Pi = kiδi, δi = ui+1 − ui where ki = EAi/li, E is the elastic modulus, and Ai and li are area and length, respectively, of the ith spring. The loading condition consists of an imposed constant velocity, v0, applied to the last node on the right-hand side of the system.
During the softening regime, instead, the spring force Pi is computed incrementally from the rate relation Ṗi = kiδ̇i and is subjected to constraints limiting it by a softening strength envelope that decreases with the maximum spring elongation, governed by the tensile strength σt and the material characteristic length lt.
For the benchmark study discussed later, the following parameters are selected: E = 30,000 MPa, σt = 3 MPa, lt = 100 mm, with the total number of springs n being variable, Ai = A = 5 cm2, ρ = 2500 kg/m3, and v0 = 10 mm/s.
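Under the parameters above, the chain of Fig. 2 can be assembled as follows; the per-spring length l is a hypothetical value (the text does not give the total length), and units are taken as N, mm, s (so ρ is expressed in tonne/mm³).

```python
import numpy as np

# Sketch: mass and stiffness assembly for the fixed-free chain of n springs.
# l (spring length) is a hypothetical value; E, A, rho follow the benchmark
# (E = 30,000 MPa, A = 5 cm^2 = 500 mm^2, rho = 2500 kg/m^3 = 2.5e-9 t/mm^3).
def chain_matrices(n, l=10.0, E=30000.0, A=500.0, rho=2.5e-9):
    k = E * A / l                    # spring stiffness k_i = E A_i / l_i
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k                 # spring connecting node i to the left
        if i + 1 < n:                # spring connecting node i to node i+1
            K[i, i] += k
            K[i, i + 1] -= k
            K[i + 1, i] -= k
    M = rho * A * l * np.eye(n)      # lumped masses m_i = rho A l
    return K, M
```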
Linear Elastic Behavior.
The main goal of this section is to understand how the accuracy of the approximation and the minimum time-step required for the stability are affected by the dimension of the full order system. Systems with 26, 51, 101, and 201 DOFs (corresponding to number of springs = 25, 50, 100, 200) are considered in the analysis and they are solved for a total time ttot = 0.01 s.
The number of snapshots to be collected for the accurate definition of the POMs depends on the problem being solved. As far as the elastic behavior is concerned, a small number of snapshots homogeneously distributed in the interval of interest is enough for the purpose. The results in this section are obtained with 20 equally spaced snapshots collected in the first 2 × 10−4 s corresponding to 2% of the total simulation time.
Figure 3 shows the shape of the first 6 POMs. The shape of the modes is independent of the number of degrees-of-freedom since even the system with 26 DOFs has enough resolution for these modes. Of course, this is not necessarily the case for higher modes. The POMs can be interpreted as shape functions with global support: if the eigenvectors are properly normalized, then the coefficients of the linear combination used for the integration of the reduced system can be considered the displacement fractions associated with that mode.
The first POM is a straight line, which can be considered as the static response of the system, while the other modes account for the vibrations due to dynamical effects.
The ratio between the stable time-step in the reduced system, ΔtR, and the stable time-step in the original system, Δt, increases with the number of DOFs and, consequently, the so-called performance improvement factor (PIF) also increases with the number of DOFs [3]. The PIF is defined as the ratio between the total CPU time, T, needed to solve the complete system and the total CPU time, Tr, required to solve the reduced system, including the snapshot collection and the singular value decomposition computation: PIF = T/Tr.
It is worth pointing out that the stable time-step during the reduced order integration is a decreasing function of the penalty stiffness. Hence, larger values of the penalty stiffness, while providing more accurate results in terms of the application of the BCs, lead to smaller values of the PIF. For the simulations discussed in this section, the penalty stiffness was set to kp = 10³km, where km = EAi/li is the stiffness of a single spring. The accuracy of the reduced order model can be assessed by calculating the external energy error, e, at the end of the analysis; that is, e = 100(W* − W)/W, in which W and W* are the external work in the complete and reduced systems, respectively, at the end of the simulation.
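The two accuracy/efficiency measures used throughout the result sections are simple ratios; the sketch below transcribes them with hypothetical function names.

```python
# Sketch of the two scalar measures used in the text: the external energy
# error e (percent) and the performance improvement factor PIF = T / T_r.
def energy_error(W_star, W):
    return 100.0 * (W_star - W) / W   # W*: reduced system, W: complete system

def pif(T_full, T_reduced):
    return T_full / T_reduced         # T_reduced includes snapshot and SVD cost
```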
If only the first POM is used for the reduced system, the following results are found: ΔtR/Δt = 4.5, 12.5, 34.0, and 90.0; PIF = 6.7, 19.4, 46.1, and 112.2; e = 0.8%, 0.6%, 1.2%, and 2.0%; for 26, 51, 101, and 201 degrees-of-freedom, respectively.
The analysis of the results shows that the first mode is sufficient to accurately represent the response of the system in terms of displacements, forces, and energy. It accounts for more than 99.9% of the total sum of the eigenvalues, independently of the number of degrees-of-freedom of the original problem. The gain in terms of the PIF increases when the number of DOFs increases. For each node, during the whole time history, the displacements are well approximated by the POD algorithm, especially with reference to the average (quasi-static) response.
Figure 4 compares, for the 51 DOFs system, the full and reduced order solutions in terms of load (calculated at the fixed end) versus applied displacement curves for 1 POM, 3 POMs, and 5 POMs. As shown in Fig. 4(a), the first mode describes the system behavior in average, filtering out all the high frequency vibrations, which do not provide significant enhancement in accuracy especially if one is interested in capturing the quasi-static response. It is important to mention that this result is not general but rather problem dependent. Other problems with different material behavior and loading conditions might require more modes as it will be evident later in this paper.
Obviously, the higher the number of modes considered by the POD, the larger the subspace where the integration takes place and, thereby, the closer the projected subspace is to the full space containing the real solution. So, as more modes are included in the subspace definition, the POD algorithm is capable of representing more details of the system response.
Figures 4(b) and 4(c) illustrate this behavior. By adding two more modes, the solution is able to represent some vibrations in the response and the following modes allow the approximate solution to include more vibrations. When the number of modes is equal to the number of degrees-of-freedom of the problem, the POD solution and the real solution coincide. However, as the number of modes increases, the stable time-step of the analysis becomes smaller because the POD subspace size becomes closer to the size of the full space: ΔtR = 12.5 Δt for 1 mode, ΔtR = 6.0 Δt for 3 modes, and ΔtR = 4.0 Δt for 5 modes. By enlarging the subspace, the stable time-step ΔtR reduces progressively and it becomes exactly equal to Δt when all POMs are used.
Softening Behavior.
The same geometry, material parameters, and BCs described in Sec. 4.1 are considered in this section. The simulations refer, unless otherwise specified, to the system with 51 DOFs and to POMs computed, as for the linear elastic case, with 20 snapshots equally spaced over a full order simulation interval of 2 × 10−4 s. In order for the fracture process to be realistically activated, one spring (the central one) is assigned a reduced strength of σt = 2.5 MPa. The overall response features a softening postpeak branch and deformation localization in the weaker spring.
As long as the tensile stress in each element is smaller than the tensile strength, the behavior of the system is linear elastic and what was observed in Sec. 4.2, for the elastic behavior, identically holds. However, when the tensile strength of the weaker element is reached, the softening mechanisms are activated and the fracture process starts. The elastic subspace, originated from the snapshots collected at the beginning of the analysis, cannot represent well the real solution anymore and a new adequate subspace needs to be defined for the POD algorithm to continue. The spectral modes computed before fracture occurs do not allow for the displacement discontinuity due to fracture to be represented. As a consequence, the overall response of the reduced order system is significantly more ductile than that of the full order system as comparison of the relevant load versus displacement curves demonstrates in Fig. 5(a).
Therefore, integration of the full order system must resume just before the maximum tensile stress is reached in the weak spring and a new set of snapshots in the softening branch, just before and after the point of maximum tensile stress, must be collected for the computation of a new proper orthogonal subspace. While the modes computed in the elastic regime do not account for the displacement discontinuity due to fracture, the new modes do as shown by the first two modes plotted in Figs. 6(a) and 6(d).
The first mode alone, however, is still not sufficient because as the displacement at the right end of the system increases the displacement discontinuity (crack opening) at the fracture location and the displacement gradient away from the discontinuity both increase (see Figs. 6(b) and 6(c)). Since away from the discontinuity the material behavior is elastic, the displacement gradient is proportional to the stress which, consequently, increases in the softening regime in contrast with the stress decrease required by the softening law in the fracturing spring and the series coupling of all the springs in the system. The obtained load versus displacement response is again more ductile than the full order one (Fig. 5(b)) and the force distribution along the system becomes more and more imbalanced as the simulation progresses.
A different behavior is observed when the reduced integration uses two POMs. For this case, Figs. 6(e) and 6(f) report the displacement distribution along the system coordinate at two selected times at the beginning of the softening regime and during the softening regime. As the softening progresses, the displacement discontinuity increases while the slope of the displacement outside the discontinuity decreases. With 2 POMs, the reduced order response approximates very well the full order solution also in terms of load versus displacements curve as shown in Fig. 5(c).
Figure 5(d) shows that using the same subspace with 2 POMs, unloading–reloading behavior can be also reproduced with no need to switch to the complete system in order to update the POMs. It is worth noting that the unloading–reloading rule used in the calculations and featuring a large hysteresis is not supposed to be representative of any real material behavior but just an example to demonstrate how loading–reloading conditions can be handled by the proposed POD approach.
The aforementioned strategy cannot be applied if the time at which the mode updating is required is a priori unknown or cannot be estimated. In this case, it is possible to automatically update the snapshots by switching to the complete system after a predetermined number of time intervals. More frequent snapshot updating leads to a more accurate reduced order solution but to a smaller reduction of the computational cost. Setting the time interval for the fully explicit algorithm to run and the snapshots to be collected equal to ΔT = 0.25 ms and the time interval for the reduced integration with 2 POMs equal to ΔTR = 0.55 ms (12 updates), e < 3% can be achieved in the softening branch (Fig. 5(e)) with PIF = 3.2. If the number of updates is reduced, the accuracy also reduces, as shown in Fig. 5(f) in which the snapshots are updated 8 times during the reduced integration, using ΔT = 0.25 ms and ΔTR = 0.95 ms, with an energy error of around 18%. When the reduced order solution deviates significantly from the full order solution, at the transition between the two integration schemes, the equilibrium needs to be reestablished dynamically and this leads to an excessive oscillatory behavior.
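The automatic updating strategy can be summarized by the following schematic driver, in which `step_full`, `build_poms`, and `step_reduced` stand for the operations described above (all names and the callback structure are illustrative, not the paper's implementation).

```python
# Schematic driver for in itinere mode updating: alternate a full-order window
# of length DT (collecting snapshots) with a reduced-order window of length
# DT_R; the three callbacks are placeholders for the operations in the text.
def alternating_integration(state, DT, DT_R, dt, n_updates,
                            step_full, build_poms, step_reduced):
    for _ in range(n_updates):
        snapshots, t = [], 0.0
        while t < DT:                        # complete system: collect snapshots
            state = step_full(state, dt)
            snapshots.append(state.copy())
            t += dt
        B = build_poms(snapshots)            # fresh POMs from the new snapshots
        state = step_reduced(state, B, DT_R) # reduced system, larger time-step
    return state
```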
As described earlier, the reduced order simulations of the nonlinear behavior provide an accurate approximation of the response with just 2 POMs. It is important to point out, however, that the number of snapshots collected for the definition of the subspace of interest also plays an important role in terms of accuracy of the solution with reference to the local values of forces, especially if the POMs are not updated periodically. To investigate this aspect, Fig. 7 reports the results of simulations based on POMs computed by collecting 20, 50, and 300 snapshots out of the 600 available from the full order simulation. The snapshots are chosen to be homogeneously spaced in the interval of the update in order to capture the trend of the response in that selected interval. As one can see, if the snapshots collected are not representative enough of the overall behavior, the solution is sought in a subspace that is too limited; the restricted dimension of the subspace effectively imposes additional constraints on the system, which must then be equilibrated by increased internal forces.
This is the reason for the wide variation of the internal force along the system coordinate reported in Fig. 7(a) for the case of 20 snapshots and two simulations with 2 and 20 POMs. As one can see, such variation depends very little on the number of POMs. However, the response converges to the full-order solution by increasing the number of snapshots, as illustrated in Fig. 7(b) for 50 snapshots and Fig. 7(c) for 300 snapshots. Analysis of the results in terms of the error en provides similar conclusions. For 2 POMs, en = 0.2%, 0.8%, and 6% for 300, 50, and 20 snapshots, respectively; for 20 POMs, en < 0.1%, 0.3%, and 4.5% for 300, 50, and 20 snapshots, respectively. The number of snapshots has negligible influence on the PIF, which is mainly affected by the dimension of the subspace, given by the number of POMs.
Finally, it is interesting to investigate the improvement in computational cost when the size of the full-order system increases. For comparison, the simulations are performed by updating the POMs around the peak. For systems with 26, 51, 101, and 201 DOFs (corresponding to number of springs n = 25, 50, 100, and 200), one has ΔtR/Δt = 4, 9, 25, and 60; PIF = 4.8, 13.2, 20.2, and 45.5; e = 0.5%, 0.2%, 0.6%, and 0.4%, respectively. Similar to the elastic case, the improvement of the computational cost increases with the size of the system, making the proposed reduced order approach best suited for large computational systems.
Mass Scaling and Proper Orthogonal Decomposition.
A further improvement of the reduced system efficiency in terms of computational gain can be obtained by the use of mass scaling combined with POD. Applying the mass scaling during the reduced integration only, the snapshot computation and the POMs remain unchanged, while the critical time-step ΔtR in the corresponding subspace is allowed to increase arbitrarily large. Obviously, accuracy considerations are required in order to obtain satisfactory results.
The scaled mass matrix for the reduced integration is computed as M_R,s = αM_R, where α = (Δt_R,s ω_k,max/2)², Δt_R,s is the target time-step, and ω²_k,max is the highest eigenvalue of the matrix M_R⁻¹K_R. Combining POD and mass scaling provides a considerable computational saving and it allows an optimal compromise between accuracy and increased time-step. For an elastic material, Fig. 8 highlights the improved efficiency of the mass scaling technique when coupled with the proper orthogonal decomposition. The mass scaling technique alone is not accurate enough (see Figs. 8(a) and 8(b)) when the increased time-step ΔtR is much bigger than the actual stable time-step Δt, because the increased mass causes the development of high inertial forces. When the mass scaling is combined with POD, this effect is much smaller. Figures 8(c) and 8(d) show the comparison of the full order response in terms of global load versus displacement curve and displacement versus time curve for a node in the middle of the system for ΔtR = 50Δt. Similarly, Figs. 8(e) and 8(f) report the same comparison for ΔtR = 1000Δt. As one can see, for both cases, the POD mass-scaled response is basically overlapped with the full order response. Minor deviations appear only for very large time steps as shown in Fig. 8(f). It is worth observing that, even in the case of mass scaling, the internal energy is several orders of magnitude larger than the kinetic energy. This ensures that the explicit dynamic response is a good approximation of the quasi-static response.
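Consistently with the stability bound ΔtR < 2/ω_k,max, a uniform scaling of the reduced mass matrix by α = (Δt_target/ΔtR)² makes the target step stable; the sketch below assumes this uniform-scaling form (names are illustrative).

```python
import numpy as np

# Sketch: uniform mass scaling of the reduced system. Since frequencies scale
# as 1/sqrt(mass), multiplying M_R by (dt_target/dt_R)^2 raises the critical
# time-step from dt_R to dt_target (uniform scaling is an assumed choice).
def scale_reduced_mass(M_R, dt_R, dt_target):
    alpha = (dt_target / dt_R) ** 2
    return alpha * M_R
```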
Three-Dimensional Application to the Lattice Discrete Particle Model
In this section, the POD framework is applied to three-dimensional (3D) simulations performed with the LDPM in order to evaluate the potential of the reduced-order algorithm for more complex numerical models when fracture and other nonlinear phenomena are involved.
Brief Review of Lattice Discrete Particle Model.
LDPM was proposed for the first time by Cusatis and coworkers in 2011 [27,45] to simulate the behavior of quasi-brittle materials at the mesoscale level by modeling the interaction of material heterogeneities. The geometrical mesostructure of the material is obtained through the following steps: (1) Material heterogeneities are assumed to be spherical particles and are introduced into the specimen volume V by sampling an assumed particle size distribution function, which, for example, for cementitious composites, is derived from a set of mix-design parameters (cement content c, water-to-cement ratio w/c, aggregate-to-cement ratio a/c, maximum aggregate size da, minimum aggregate size d0, governing the model resolution, and Fuller coefficient nf). (2) Zero-radius particles are randomly distributed over the external surfaces to facilitate the application of boundary conditions. (3) Delaunay tetrahedralization of the generated particle centers and the associated three-dimensional domain tessellation are then carried out to obtain a network of triangular facets inside each tetrahedral element, as shown in Fig. 10(b). A portion of the tetrahedral element associated with one of its four nodes I and the corresponding facets are shown in Fig. 10(c). Combining such portions from all tetrahedral elements connected to the same node I, a corresponding polyhedral particle is obtained. Each pair of adjacent polyhedral particles interacts through the shared triangular facets (Fig. 10(a)). The triangular facets, where strain and stress quantities are defined in vectorial form, are assumed to be the potential material failure locations. Three sets of equations are necessary to complete the discrete model framework: definition of strain on each facet, constitutive equations relating the facet stress vector to the facet strain vector, and particle equilibrium equations.
Facet strain definition. The facet strains are defined as εα = (⟦uC⟧ · eα)/r, where ⟦uC⟧ is the displacement jump at the facet centroid and α = N, M, L (εN is the facet normal strain component; εM and εL are the facet tangential strain components); r is the length of the line that connects the nodes sharing the facet and corresponding to the associated tetrahedron edge (see Fig. 10(b)); eα are unit vectors defining a facet local Cartesian system of reference such that eN is orthogonal to the facet, and eM and eL are the facet tangential unit vectors (see Fig. 10(c)). It was recently demonstrated [32,46] that this definition of strain is consistent with the projection of the classical micropolar strain tensor into the local system of reference attached to each facet.
Facet vectorial constitutive equations. A vectorial constitutive law governing the behavior of the material is imposed at the centroid of each facet. Formally, one can write t = f(εN, εM, εL), where t is the traction vector applied on each triangular facet. Details of the constitutive equations used in this work are reported in the Appendix.
Particle equilibrium equations. The governing equations are obtained by imposing translational and rotational equilibrium of each polyhedral particle, summing the traction contributions A t over the set of facets that form the particle; A is the facet area, V is the particle volume, b0 is the body force vector, and w is the moment of t with respect to the node which is located inside the polyhedral particle.
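The translational force balance just described can be sketched as follows; the facet data structure and function name are hypothetical, and the moment (rotational) contributions are omitted for brevity.

```python
import numpy as np

def assemble_particle_forces(facets, volumes, b0):
    """Translational balance sketch: each particle collects the traction
    resultant A*t from its facets plus the body-force contribution V*b0.
    Each facet acts with opposite sign on the two particles it separates
    (action-reaction). Moment contributions are omitted in this sketch."""
    F = np.outer(volumes, b0).astype(float)      # body forces V * b0 per particle
    for f in facets:
        load = f["A"] * np.asarray(f["t"], dtype=float)
        F[f["i"]] += load                        # facet pulls on particle i ...
        F[f["j"]] -= load                        # ... and pushes on particle j
    return F

# two particles sharing one facet of area 2 carrying a unit normal traction
facets = [{"i": 0, "j": 1, "A": 2.0, "t": [1.0, 0.0, 0.0]}]
F = assemble_particle_forces(facets, volumes=np.array([1.0, 1.0]), b0=np.zeros(3))
```

With zero body forces, the facet contributions on the two particles are equal and opposite, so the assembled internal forces sum to zero, as required by global equilibrium.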
Lattice discrete particle model is implemented in a computational software named MARS [47] and has been used successfully to simulate concrete behavior in different types of laboratory experiments [45]. Furthermore, LDPM has shown superior capabilities in modeling concrete behavior under dynamic loading [28,48], alkali–silica reaction deterioration [49–51], fracture of concrete reinforced with fiber-reinforced polymer (FRP) [52], failure of fiber-reinforced concrete [53–55], and the early age behavior of ultra-high-performance concrete [30,56,57]. LDPM was also recently formulated to simulate sandstone [58], shale [31,59], and waterless concrete [29].
Direct Tension Test.
This section presents the application of the POD approach to the analysis of a direct tension test. The simulations were performed on a dogbone-shaped specimen (Fig. 11(a)) of 100 mm height, 30 mm thickness, and widths of 100 mm (wider ends) and 50 mm (narrower cross section). A constant velocity was applied to the nodes belonging to the top surface by using the penalty approach described earlier in this paper. Figure 11(b) shows the fracture pattern obtained in a typical LDPM simulation.
The simulations initially used a coarse LDPM system (304 nodes, 1824 DOFs) in order to evaluate the effect of the number of snapshots used to create the reduced order space and of the number of POMs used in the reduced order calculations during the initial elastic response and the subsequent fracture process. Different numbers of snapshots (out of the 1000 available in the chosen intervals) were collected at the beginning of the simulation (from δ = 0 mm to δ = 0.004 mm) and then updated at the end of the linear elastic phase (from δ = 0.01 mm to δ = 0.02 mm, see Fig. 12(a)). As one can see in Figs. 12(a) and 12(b), a large number of snapshots and a large number of POMs are required in order to capture the softening behavior in the LDPM simulations. While the linear branch is easily captured with a limited number of snapshots and POMs, this is no longer true when the fracturing process starts and the POMs are updated only once. The progressive development of cracks requires a larger subspace in order to be correctly represented and a large number of snapshots for its computation. The accuracy of the solution from the reduced system improves progressively by enlarging the subspace computed from the same set of snapshots (Fig. 12(a)) and, likewise, by increasing the number of snapshots used for a subspace of the same dimension (Fig. 12(b)).
It is possible to keep the subspace small while achieving a reasonable accuracy if the POMs are updated frequently enough to prevent the loss of accuracy and to capture the softening branch. The problem is that, during the fracture evolution, a set of previously calculated modes soon becomes no longer representative of the ongoing response due to the initiation and coalescence of many cracks in 3D. Therefore, the POMs need to be recomputed from a new set of snapshots in the full order system. Figure 12(c) shows the results when the POMs are automatically updated every Δδ = 0.005 mm increment of displacement. With this approach, the error with respect to the reference (full order) solution is kept acceptably small. In this case, one obtains ΔtR/Δt = 3.5, PIF = 1.7, and en = 9%. The computational efficiency is relatively modest because of the small size of the system. For a larger system with 505 nodes (3003 DOFs), the automatic update of the POMs results in ΔtR/Δt = 5.0, PIF = 2.9, and en = 11%. The application of the proposed approach to very large systems (with hundreds of thousands of DOFs) is expected to lead to much larger improvements of the computational efficiency.
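The snapshot-based construction of the POMs and the transition between full and reduced coordinates can be sketched as follows. This is a simplified illustration via a thin singular value decomposition of the snapshot matrix; the function names are not from the paper.

```python
import numpy as np

def compute_poms(snapshots, n_modes):
    """Proper orthogonal modes from a snapshot matrix (n_dof x n_snap):
    the leading left singular vectors span the reduced-order subspace."""
    Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return Phi[:, :n_modes]

def project_state(Phi, u):
    """Reduced coordinates q = Phi^T u and the reconstruction Phi q used
    when switching between the full and the reduced system."""
    q = Phi.T @ u
    return q, Phi @ q

# synthetic snapshots lying exactly in a 2D subspace of a 100-DOF system
rng = np.random.default_rng(0)
basis = rng.standard_normal((100, 2))
U = basis @ rng.standard_normal((2, 30))   # 30 snapshots, rank 2
Phi = compute_poms(U, n_modes=2)           # 2 POMs suffice here
q, u_rec = project_state(Phi, U[:, 0])
```

In a predictive run, `compute_poms` would be invoked again on freshly collected snapshots at each automatic update (e.g., every Δδ of imposed displacement), replacing the stale basis exactly as described above.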
Compression Tests.
An even more challenging problem, addressed in this section, concerns compression tests. In this case, especially in the absence of transverse confinement, the crack pattern is quite complex and evolves significantly in the nonlinear regime. Simulations were performed on unconfined and confined cylinders of 100 mm length and 50 mm diameter. The samples consisted of 90 nodes (540 DOFs) and were loaded by imposing a constant compressive velocity.
Figures 13(a) and 13(b) compare, for the unconfined case, the POD solutions computed in subspaces of increasing dimensions. The snapshots were collected at the beginning of the simulations and updated at the peak load (from δ = 0.2 mm to δ = 0.3 mm). In this case, 500 snapshots (out of the 2500 available) were collected in the interval of interest.
One can see in Fig. 13(a) that the number of POMs necessary for an accurate approximation is considerably high (450 POMs out of 540 DOF for the considered example) and the accuracy decreases as the number of spanning modes decreases. From a mechanical point of view, the reason for this reduced efficiency can be related to the different failure modes of the material in tension and compression. As a matter of fact, when a concrete dogbone specimen is loaded in pure tension up to failure, damage localization occurs. On the contrary, in the case of a concrete cylinder loaded in unconfined compression, the fracture process leads to a complex three-dimensional crack system. It is worth observing that when a limited number of POMs are used, the response is characterized by a more gradual softening during the postpeak. This is the result of a “confinement effect” due to the fact that the solution is sought in a reduced order subspace which cannot contain the full order solution so that the algorithm provides additional artificial constraints to the system.
Figure 13(b) shows the effect of the number of snapshots on the accuracy of the reduced order response for simulations performed with 100 POMs. One can see that, in this case, the accuracy of the reduced order approximation depends mostly on the number of POMs rather than the number of snapshots.
Similar to the case of tension, better results can be obtained by an automatic mode updating strategy. Figure 13(b) shows the POD response obtained with 2 POMs built from 10 snapshots homogeneously distributed in the collection interval (Δδ = 0.034 mm, with 850 snapshots available) and automatically updated every Δδ = 0.1 mm of vertical displacement. The POD response is characterized by ΔtR/Δt = 3.0, PIF = 1.7, and en = 6%.
The simulation of confined compression considered the same concrete cylinders used for unconfined compression, wrapped with FRP sheets simulated according to Ref. [52]. Figure 13(d) confirms, once again, that the number of POMs required for an accurate simulation of the inelastic behavior is considerably high, almost coinciding with the number of DOFs of the system. The accuracy also improves by increasing the number of snapshots collected in the update interval, but with a weaker effect (Fig. 13(e)). One can observe that the slope of the inelastic branch of the load versus displacement curve increases progressively with decreasing dimension of the subspace generated by the POMs, due to the numerical “confinement effect” provided by the limited number of modes spanning the subspace. Finally, Fig. 13(e) shows the response obtained with the automatic update strategy, characterized by ΔtR/Δt = 85, PIF = 6.8, and en = 3%.
It is worth mentioning that the POD approximation is more accurate for the FRP-confined compression test than for the unconfined compression test, and a higher stable time-step can be achieved in the former case. Indeed, in the inelastic phase, fracture localization events are less pronounced when the specimen is laterally confined and the crack pattern is more homogeneous. For this reason, the reduced order solution tends to be more accurate.
Conclusions.
This paper discussed the reduced order solution of discrete systems with softening response by means of POD. Of particular interest was the application of the POD to the simulation of the quasi-static behavior of such systems with an explicit dynamic algorithm. Based on the presented results, the following conclusions can be drawn:
- (1)
The proper orthogonal decomposition is a powerful tool to build reduced order approximations of the response of large systems, both linear and nonlinear, solved with explicit dynamics algorithms.
- (2)
Characteristic spectral modes are computed by collecting snapshots of the full order response in certain time intervals.
- (3)
The spectral modes serve the role of shape functions with global support that approximate the actual system deformation with a reduced number of degrees of freedom.
- (4)
The spectral modes constrain the high-frequency deformation modes of the full order system, leading to a larger stable time step for the explicit integration of the reduced order equations of motion. However, full advantage of the increased time step is obtained mostly when only a few spectral modes are used.
- (5)
Accuracy and efficiency of the reduced order model depend on the number of snapshots used to build the reduced order space and on the number of spectral modes used in the simulations. However, using a large number of snapshots increases the computational cost of building the reduced order space, and such an increase quickly offsets the computational gain of the reduced order integration.
- (6)
Homogeneous essential boundary conditions are automatically transferred from the full order system to the reduced order system. Nonhomogeneous essential boundary conditions can be imposed through a penalty algorithm.
- (7)
Significant reduction in computational cost can be obtained by combining POD with classical mass-scaling approaches.
- (8)
The spectral modes must be updated when nonlinear behavior leads to a significant change in the deformation characteristics of the system. This is particularly important for 3D applications with softening, which are characterized by complex crack patterns. The best results are obtained by a periodic update of the spectral modes during the simulation.
- (9)
Better results are expected for the 3D case if mass scaling is combined with POD. This topic is being investigated by the authors in ongoing studies.
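The time-step enlargement summarized in conclusion (4) can be illustrated on a toy system. Assuming unit nodal masses, the critical central-difference step is Δt = 2/ωmax, and projecting onto a few low-frequency modes removes the high frequencies that limit it; the spring-chain model and all names below are illustrative stand-ins, not the LDPM system of the paper.

```python
import numpy as np

def critical_dt(K):
    """Stable central-difference step dt = 2/omega_max, assuming unit
    nodal masses so that omega^2 are the eigenvalues of K."""
    return 2.0 / np.sqrt(np.max(np.linalg.eigvalsh(K)))

def stiffness_chain(n, k=1.0):
    """Stiffness matrix of a fixed-free chain of n unit masses connected
    by springs of stiffness k (a minimal stand-in for a discrete model)."""
    K = 2.0 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
    K[-1, -1] = k                       # free end
    return K

K = stiffness_chain(50)
w2, V = np.linalg.eigh(K)               # eigenvalues in ascending order
Phi = V[:, :5]                          # keep the 5 lowest-frequency modes
dt_full = critical_dt(K)
dt_red = critical_dt(Phi.T @ K @ Phi)   # reduced stiffness Phi^T K Phi
# dt_red exceeds dt_full: truncation removes the stiff high frequencies
```

Since the columns of Phi are orthonormal, the reduced mass matrix stays the identity, so the same unit-mass formula applies to the reduced stiffness.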
Funding Data
The National Science Foundation (Grant No. CMMI-1435923).
ES3 R&D resources.
Appendix: Lattice Discrete Particle Model Constitutive Equations
Full details of the LDPM constitutive equations used in this paper are reported in Refs. [27,45,52]. In the elastic regime, the normal and shear stresses are proportional to the corresponding strains: σN = ENεN, σM = ETεM, and σL = ETεL, where EN = E0 and ET = αE0, with E0 being the effective normal modulus and α the shear–normal coupling parameter. For stresses and strains beyond the elastic limit, concrete mesoscale nonlinear phenomena are characterized by three mechanisms, and the corresponding facet-level vectorial constitutive equations are briefly described here.
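The elastic facet law just stated can be sketched directly, using E0 and α from the mesoscale parameter list at the end of this Appendix; the function name is illustrative and not from the paper.

```python
import numpy as np

def elastic_facet_stress(eps_N, eps_M, eps_L, E0, alpha):
    """Elastic LDPM facet law: sigma_N = E_N * eps_N, sigma_M = E_T * eps_M,
    sigma_L = E_T * eps_L, with E_N = E0 and E_T = alpha * E0."""
    E_N, E_T = E0, alpha * E0
    return np.array([E_N * eps_N, E_T * eps_M, E_T * eps_L])

# parameters used in the paper's simulations: E0 = 40,000 MPa, alpha = 0.25
s = elastic_facet_stress(1e-4, 2e-4, 0.0, E0=40000.0, alpha=0.25)
# s = [4.0, 2.0, 0.0] MPa: shear stiffness is alpha times the normal one
```

The ratio ET/EN = α = 0.25 is what couples the shear and normal responses at the facet level before the inelastic mechanisms below are activated.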
Fracture and Friction Due to Tension and Tension–Shear.
Compaction and Pore Collapse in Compression.
where different expressions hold for εV > 0 and for εV ≤ 0, and the quantities appearing in them are material parameters.
Friction Due to Compression-Shear.
where σN0 is the transitional normal stress; μ0 and μ∞ are the initial and final internal friction coefficients, respectively.
In the simulations presented in the paper, the following LDPM mesoscale parameters were used: c = 280 kg/m3 (cement content), w/c = 0.77 (water-to-cement ratio), a/c = 7.5 (aggregate-to-cement ratio), nf = 0.5 (Fuller curve exponent); E0 = 40,000 MPa (normal elastic modulus), α = 0.25 (shear–normal coupling parameter), σt = 3.65 MPa (tensile strength), lt = 200 mm (characteristic length), σs/σt = 2.5 (shear-to-tensile strength ratio), nt = 0.2 (softening exponent), σc0 = 45 MPa (yielding compressive stress), Hc0/E0 = 0.3 (initial hardening modulus in compression), κc0 = 4 (transitional strain ratio), κc1 = 1 (first deviatoric-to-volumetric strain ratio), κc2 = 5 (second deviatoric-to-volumetric strain ratio), μ0 = 0.2 (internal friction coefficient), μ∞ = 0 (internal asymptotic friction), σN0 = 600 MPa (transitional stress), and Ed/E0 = 1 (densified normal modulus).