Farkas / Jarmai | Design and Optimization of Metal Structures | E-Book | sack.de

E-book, English, 328 pages

Series: Woodhead Publishing Series in Civil and Structural Engineering

ISBN: 978-1-78242-047-7
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark



An industrial book that analyses various theoretical problems, optimizes numerical applications and addresses industrial problems such as a belt-conveyor bridge, a pipeline, a wind turbine tower, a large-span suspended roof and an offshore jacket member. Multi-storey frames and pressure-vessel-supporting frames are discussed in detail. The book's emphasis is on economy and cost calculation, making it possible to compare costs and achieve significant savings at the design stage, for example by comparing the costs of stiffened and unstiffened versions of plates and shells. In this respect, the book is an invaluable aid for designers, students, researchers and manufacturers seeking better, optimal, competitive structural solutions.

- Emphasis is placed on economy and cost calculation, making it possible to compare costs and make significant savings in the design stages of metal structures
- Optimizes numerical applications and analyses various theoretical and industrial problems, such as a belt-conveyor bridge, a pipeline, a wind turbine tower, a large-span suspended roof and an offshore jacket member
- An invaluable aid for designers, students, researchers and manufacturers seeking better, optimal, competitive structural solutions

Jozsef Farkas, University of Miskolc, Hungary



1 Newer Mathematical Optimization Methods
Publisher Summary
In the structural optimization process, it is important for the engineer to know the behaviour of the structure well: the stresses, the deformations, the stability, the eigenfrequency, the damping, etc. It is equally important to have a reliable optimization technique to find the optimum. For any user, the best method is the one they know best: no single algorithm is superior, and each has benefits and disadvantages. A great number of methods are available for single-objective optimization. Derivative-free methods include Complex, Flexible Tolerance and Hillclimb. Methods using first derivatives include the Sequential Unconstrained Minimization Technique (SUMT) and Davidon-Fletcher-Powell. Methods using second derivatives include Newton's method, Sequential Quadratic Programming (SQP) and Feasible SQP. There are also other classes of techniques, such as the Optimality Criteria (OC) methods, and discrete methods such as Backtrack and the entropy-based method. Multi-criteria optimization is used when several objectives are important and a compromise solution must be found.

1.1 INTRODUCTION
In the structural optimization process, it is important for the engineer to know the behaviour of the structure well: the stresses, deformations, stability, eigenfrequency, damping, etc. It is equally important to have a reliable optimization technique to find the optimum. The question is always the same: which technique is the best and the most reliable? The answer is that, for any user, the best method is the one they know best. No single algorithm is superior; each has benefits and disadvantages. In our practice on structural optimization we have used several techniques over the last decades. We have published them in our books, with several engineering applications as examples (Farkas 1984, Farkas & Jármai 1997, 2003). Most of the techniques were modified in this work to make them good engineering tools. A great number of methods are available for single-objective optimization, as described in Farkas & Jármai (1997). Derivative-free methods include Complex (Box 1965), Flexible Tolerance (Himmelblau 1971) and Hillclimb (Rosenbrock 1960). Methods using first derivatives include the Sequential Unconstrained Minimization Technique (SUMT) (Fiacco & McCormick 1968), Davidon-Fletcher-Powell (Rao 1984), etc. Methods using second derivatives include Newton's method (Mordecai 2003), Sequential Quadratic Programming (SQP) (Fan et al. 1988) and Feasible SQP (Zhou & Tits 1996). There are also other classes of techniques, such as the Optimality Criteria (OC) methods (Rozvany 1997), discrete methods such as Backtrack (Golomb & Baumert 1965, Annamalai 1970) and the entropy-based method (Simões & Negrão 2000, Farkas et al. 2005). Multi-criteria optimization is used when several objectives are important for finding a compromise solution (Osyczka 1984, 1992, Koski 1994).
The general formulation of a single-criterion non-linear programming problem is the following:

minimize f(x) with respect to the variables x = (x_1, x_2, …, x_N),   (1.1)

subject to g_j(x) ≤ 0,  j = 1, 2, …, P,   (1.2)

h_i(x) = 0,  i = P + 1, …, P + M,   (1.3)

where f(x) is a multivariable non-linear function, and g_j(x) and h_i(x) are non-linear inequality and equality constraints, respectively. In the last two decades some new techniques have appeared, e.g. the evolutionary techniques: the Genetic Algorithm (GA) of Goldberg (1989), the Differential Evolution (DE) method of Storn & Price (1995), the Ant Colony Technique (Dorigo et al. 1999), Particle Swarm Optimization (PSO) of Kennedy & Eberhart (1995) and Millonas (1994), and the Artificial Immune System (AIS) (Farmer et al. 1986, de Castro & Timmis 2001, Dasgupta 1999). Some other high-performance techniques, such as leap-frog with the analogue of the potential-energy minimum (Snyman 1983, 2005), similar to the FEM technique, have also been developed.

1.2 THE SNYMAN-FATTI METHOD
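As an illustration of the evolutionary techniques mentioned above, the following is a minimal pure-Python sketch of Differential Evolution (the DE/rand/1/bin variant, after Storn & Price 1995). The sphere test function and all parameter values are illustrative assumptions, not taken from the book.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of two
    random population members, cross over componentwise, keep the trial
    vector if it is no worse than the current member."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # force at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))   # clip to the box
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                    # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Usage: minimize the 3-D sphere function; its global minimum is at the origin.
x, fx = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

The greedy one-to-one replacement keeps the population fitness monotonically improving, which is what makes DE robust on problems such as the stiffened-plate designs treated later in the book.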
The global method described here, namely the Snyman-Fatti (SF) multi-start global minimization algorithm with dynamic search trajectories for global continuous unconstrained optimization (Snyman & Fatti 1987, Groenwold & Snyman 2002), was recently reassessed and refined (Snyman & Kok 2007) to improve its efficiency and to make it applicable to constrained problems. The resulting improved computer code has been shown to be competitive with the best evolutionary global optimization algorithms currently available when tested on standard test problems. Here we wish to apply it to the practical stiffened-plate problem. For a detailed presentation and discussion of the motivation and theorems on which the SF algorithm is based, the reader is referred to the original paper of Snyman and Fatti (1987). Here we restrict ourselves to a summary giving the essentials of the multi-start global optimization methodology using dynamic search trajectories. Consider the general inequality-constrained problem:

minimize f(x) w.r.t. x,  x = [x_1, x_2, …, x_n]^T ∈ R^n,   (1.4)

subject to the inequality constraints g_j(x) ≤ 0, j = 1, 2, …, m. The optimum solution to this problem is denoted by x*, with associated optimum function value f(x*). We address the constrained problem (1.4) by transforming it to an unconstrained problem via the formulation of the penalty function F(x), to which the unconstrained global SF optimization algorithm is applied. The penalty function F(x) is defined as

F(x) = f(x) + Σ_{j=1}^{m} ρ_j {g_j(x)}²,   (1.5)

where ρ_j = 0 if g_j(x) ≤ 0, else ρ_j = µ (a large number).
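The penalty transformation of equation (1.5) can be sketched in a few lines of Python. The objective and constraint used in the usage example are hypothetical, chosen only to show the mechanics:

```python
def make_penalty(f, gs, mu=1.0e6):
    """Build the penalty function F(x) = f(x) + sum_j rho_j * g_j(x)^2
    of eq. (1.5), with rho_j = 0 where g_j(x) <= 0 and rho_j = mu (a
    large number) where the constraint is violated."""
    def F(x):
        return f(x) + sum(mu * g(x) ** 2 for g in gs if g(x) > 0.0)
    return F

# Hypothetical example: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
F = make_penalty(lambda x: x ** 2, [lambda x: 1.0 - x])
print(F(2.0))   # feasible point: no penalty, so F = f = 4.0
print(F(0.0))   # infeasible point: F = 0 + mu * 1^2 = 1e6
```

Because the penalty term vanishes on the feasible set, a minimizer of F with large enough µ approximates the constrained optimum x* of (1.4).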
Thus we consider the unconstrained global optimization problem, which can be stated as follows: for a continuously differentiable objective function F(x), find a point x*(µ) in the set X ⊂ R^n such that

F* = F(x*(µ)) = minimum of F(x) over x ∈ X.   (1.6)

The SF algorithm applied to this problem is basically a multi-start technique in which several starting points are sampled in the domain of interest X (usually defined by a box in R^n), and a local search procedure is applied to each sample point. The method is heuristic in essence, with the lowest minimum found after a finite number of searches being taken as an estimate of F*. In the local search the SF algorithm explores the variable space X using search trajectories derived from the differential equation:

ẍ = −∇F(x(t)),   (1.7)

where ∇F is the gradient vector of F(x). Equation (1.7) describes the motion of a particle of unit mass in an n-dimensional conservative force field, where F(x(t)) represents the potential energy of the particle at position x(t). The search trajectories generated here are similar to those used in Snyman's dynamic method for local minimization (Snyman 1982, 1983). In the SF global method, however, the trajectories are modified in a manner that ensures, in the case of multiple local minima, a higher probability of convergence to a lower local minimum than would have been achieved had conventional gradient local search methods been used. The specific modifications employed result in an increase in the regions of convergence of the lower minima, including, in particular, that of the global minimum. A stopping rule, derived from a Bayesian probability argument, is used to decide when to end the global sampling and accept the current overall minimum value of F, taken over all sampling points to date, as the global minimum F*.
For the initial conditions, position x(0) = x_0 and velocity ẋ(0) = v(0) = 0, integrating (1.7) from time 0 to t implies the energy-conservation relationship:

½‖v(t)‖² + F(x(t)) = ½‖v(0)‖² + F(x(0)) = F(x(0)).   (1.8)

The first term on the left-hand side of (1.8) represents the kinetic energy, whereas the second term represents the potential energy of the particle of unit mass, at...
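The trajectory dynamics can be illustrated numerically. The sketch below integrates equation (1.7) with a velocity-Verlet scheme, starting from rest as in (1.8), and then evaluates the total energy ½‖v‖² + F(x). The quadratic potential used for F is an illustrative assumption, not the penalty function of an actual design problem.

```python
def simulate(grad_F, x0, dt=0.01, steps=2000):
    """Velocity-Verlet integration of the search trajectory
    x'' = -grad F(x)  (eq. 1.7), with x(0) = x0 and v(0) = 0."""
    x, v = list(x0), [0.0] * len(x0)
    a = [-g for g in grad_F(x)]                 # acceleration = -grad F
    for _ in range(steps):
        x = [xi + dt * vi + 0.5 * dt * dt * ai for xi, vi, ai in zip(x, v, a)]
        a_new = [-g for g in grad_F(x)]
        v = [vi + 0.5 * dt * (ai + ani) for vi, ai, ani in zip(v, a, a_new)]
        a = a_new
    return x, v

# Illustrative potential F(x) = 0.5 * ||x||^2, so grad F(x) = x and F(x0) = 0.5.
x, v = simulate(lambda x: x, [1.0])
energy = 0.5 * sum(vi * vi for vi in v) + 0.5 * sum(xi * xi for xi in x)
```

Because the Verlet scheme is symplectic, the computed total energy stays close to F(x(0)), which is a convenient numerical check of the conservation relationship (1.8).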

