Essentially, I've picked up for You 4 facts which support the idea that the Lagrangian Equation of Motion is nothing so bad or complicated; even if it is an equation full of partial derivatives (usually stated to be a bad thing, but not in this case), either a single one, or a system of them, according to the situation.
These 4 facts give You a very solid base when dealing with this equation (or a system of them). So that, if any doubts arise, You can grasp these 4 facts and hold them as strongly as needed, to dissolve any substantial doubt.
We wrote down the Lagrangian Equation of Motion as
d/dt( ∂T/∂qd_{k} ) - ∂T/∂q_{k} = F_{k} ;
the most common way it is written in books and tutorials on Lagrangian Dynamics;
where a generic x_{k} and F_{k} are to be interpreted as illustrated by the graphical reference given below.
Very fine. When we have more than 1 degree of freedom, we have a system of as many of these equations as our mechanism has degrees of freedom.
Where, how many degrees of freedom (or DOFs for short) a mechanism has, is simply the number of parameters (called generalized coordinates) needed to define its configuration unambiguously at a given time.
Once familiar with this compact way of writing the equation, we set out to analyze its structure a bit.
It looks like a pretty bad differential equation, but our task now is to realize that it's not so bad.
As a first step, we rewrite it with the d/dt operation fully developed, to get:
sum_{j} ∂²T/(∂qd_{k} ∂qd_{j}) * qdd_{j} + sum_{j} ∂²T/(∂qd_{k} ∂q_{j}) * qd_{j} - ∂T/∂q_{k} = F_{k} ;
often written also more compactly as
[FORMULA: the compact form]
where in either case, all that sort of total derivative, that summation with the j index, comes out because both q and qd are actually functions of t, as
q(t) and qd(t),
which forces a depth-2 composite-function derivative with respect to the t variable, giving the formula reported above.
The fully developed form is certainly the most practical for us, and leads directly to a nice matrix notation like this,
A*qdd + B*qd = k ,
as soon as we collect the qdd terms and express the sum as a typical matrix-vector product as
A*qdd
and then we pick the terms with the qd and express those too as a matrix-vector product as
B*qd
and leave at right what was at right, and call it
k (as in the above equation), or b (as it is called throughout the rest of this book), or whatever You like; a vector made of all known terms.
So, our equation is nicely rewritten as:
A*qdd + B*qd = b ;
here, we could already stop: as soon as we bring B*qd to the right and sum it with b into one vector, we have the qdd explicitly, since it's a simple linear algebraic system in which qdd is the vector of unknowns; we can pick it out simply by running the Gaussian Elimination procedure, for example.
So, fact number one is that the Lagrangian Equation of Motion is not so ugly, because it's very, very friendly for implementing a numerical simulation with it.
Technically speaking, the Lagrangian Equation of Motion is linear in the qdd term: more on this later, if You are interested, in the section called How does the Simulator work?.
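As an illustration of this friendliness, here is a minimal Python sketch (the matrices and their values are hypothetical examples, not taken from any specific mechanism) of extracting qdd from A*qdd + B*qd = b:

```python
import numpy as np

# Hypothetical 2-DOF example: in the real case A, B and b would be
# evaluated from the partial derivatives of T() at the current (q, qd).
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # coefficient matrix of the qdd terms
B = np.array([[0.1, 0.0],
              [0.0, 0.2]])   # coefficient matrix of the qd terms
b = np.array([0.5, -0.2])    # vector of known terms
qd = np.array([1.0, -1.0])   # current rates of change

# Bring B*qd to the right and solve the linear algebraic system for qdd;
# np.linalg.solve performs exactly a Gaussian-Elimination-style solution.
qdd = np.linalg.solve(A, b - B @ qd)
print(qdd)
```

Once qdd is in hand, a stepping scheme can integrate it forward in time.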
Meanwhile, before closing this paragraph, let's clarify a bit why, in the context of a numerical simulation, treating the Lagrangian Equation of Motion rewritten as
A*qdd + B*qd = b
is legal.
This is discussed in detail in part 2/3, but we can summarize the main facts, in short, here. The matrix-vector products of the form we started from suggest that all the partial derivatives in the Lagrangian Equation of Motion concur to make up sorts of total derivatives; clearly they make up some averages at each simulation-step. So, for a numerical simulation, it's sufficient to just evaluate each of those partial derivatives and consider them constant for that small timestep, or do a multi-step evaluation as the Runge-Kutta numerical method suggests. This is all legal, said in one sentence, because the relation of the partial derivatives among themselves, and their relation with the qdd(t), qd(t) and q(t) terms, is much weaker than the relation among these latter; and don't forget that the partial derivatives, as they concur to sorts of total derivatives, are in fact averaged. Moreover, in simulating a linear dynamical system (discussed later), all the matrices holding the partial derivatives reduce to constants, instead of being results of the evaluation of functions of q and qd; so there this is even more evident. To conclude, we managed to give also a physical justification, and not only a mathematical one. Hope that You appreciate!
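The idea of evaluating the coefficients once per small timestep and holding them constant can be sketched as follows (a hypothetical 1-DOF system invented for illustration; the real matrices would come from T()'s partial derivatives):

```python
import numpy as np

def step(q, qd, dt):
    """One explicit simulation step: evaluate the matrices at the current
    state, treat them as constant over the small dt, and integrate.
    A, B, b below are hypothetical placeholder expressions of (q, qd)."""
    A = np.array([[1.0 + 0.5 * q[0] ** 2]])   # would come from T()'s partials
    B = np.array([[0.1]])
    b = np.array([-np.sin(q[0])])
    qdd = np.linalg.solve(A, b - B @ qd)
    # Coefficients are held constant for this step (the "averaging" above);
    # a Runge-Kutta scheme would instead re-evaluate them at sub-steps.
    qd_new = qd + qdd * dt
    q_new = q + qd_new * dt
    return q_new, qd_new

q, qd = np.array([0.5]), np.array([0.0])
for _ in range(1000):
    q, qd = step(q, qd, dt=0.001)
print(q, qd)
```

A Runge-Kutta variant would call the same coefficient evaluation several times per step instead of once.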
Very fine. Let's pick up some other points in favour of the friendliness of the Lagrangian Equation of Motion.
Well, it's all constructed around the T() or L() function, which is simple because:
a) it's a scalar function.
b) the structure of this function is very characteristic and simple, that is,
[FORMULA: the characteristic structure of the T() function]
as proven by my first theorem on Analytical Mechanics. Let this fact be point number zero in our list of facts.
In fact we also see that instead of a generic
f_{abc}( q ) function
we can safely restrict that to be a nice polynomial,
a Pol_{abc}( q ) function.
To arrive at the heart of our Simulator's reliable functioning, the polynomial formulation of the Lagrangian Equation of Motion:
[INSERT FORMULA]
which, as we will quickly outline in the introduction to part 3/3 of this guide, has all the nice properties that a numerical simulator such as ours needs to function 100% well in all allowed circumstances.
Now, let's go back to the structure of the Kinetic Energy Function, to illustrate a bit more clearly what its structure means. In one sentence: the structure is always the same, formally a sort of generic square like
( a_{x} + b_{x} + c_{x} + d_{x} + ...)^{2} + ( a_{y} + b_{y} + c_{y} + d_{y} + ...)^{2} + ( a_{z} + b_{z} + c_{z} + d_{z} + ... )^{2}
but with all the coefficients of the final result being a custom value. A custom value depending on the q vector of generalized coordinates, according to
T_{k} = 0.5*m_{k}*( xd( q, qd )^{2} + yd( q, qd )^{2} + zd( q, qd )^{2} ) .
It's vaguely similar in structure to the kinetic energy of a single free material point in the context of Newtonian dynamics,
T = 0.5*m*( xd^{2} + yd^{2} + zd^{2} ) ,
but it's more generic, because there are not only the squares of each of the variables, but also the products of all the combinations of variables present in the expression. This is because each of the squares in the previous expression is the square of a polynomial made of expressions of the q values, each multiplied by one (!!) of the qd values. A consequence of the definition of the derivative, with respect to time, of each of the expressions for each of the 3 coordinates of the position
x_{k} ( q )
of each material point constituting the dynamical system to simulate. It's a fact deriving from the simple application of the Chain Rule for multivariate functions, of the form
f( q_{0}(t), q_{1}(t), q_{2}(t), ..., q_{N_DOF-1}(t) )
when performing its derivative with respect to t:
d f( q_{0}(t), q_{1}(t), q_{2}(t), ..., q_{N_DOF-1}(t) ) / dt ;
or written more compactly,
d f( q(t) ) / dt .
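To make the chain-rule fact concrete, here is a small Python sketch (the single-pendulum parametrization, mass and length are hypothetical examples of mine, not from the text): each velocity component comes out as an expression of q multiplied by a qd value, so T ends up quadratic in qd:

```python
import math

# Hypothetical single-pendulum parametrization: the position of the
# material point as a function of the generalized coordinate q.
m, L = 2.0, 0.7

def T(q, qd):
    # Chain rule: xd = (dx/dq)*qd, yd = (dy/dq)*qd  ->  each velocity
    # component is an expression of q multiplied by one (!) qd value.
    xd = L * math.cos(q) * qd
    yd = L * math.sin(q) * qd
    return 0.5 * m * (xd ** 2 + yd ** 2)

q, qd = 0.3, 1.5
# Because T is a quadratic form in qd, doubling qd quadruples T:
print(T(q, 2 * qd) / T(q, qd))   # -> 4.0
```

The same scaling check works for any number of DOFs, since every velocity component stays linear in the qd vector.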
I've prepared for You a more concrete and procedural definition of the Kinetic Energy Function, which states formally the same thing as what I've tried to outline in the last few lines.
Before showing it, let's introduce it... .
So. By looking at
( a + b + c + d + ...)^{2} ,
it's clear that whatever coefficients we put before each term of the result,
( a^{2} + b^{2} + c^{2} + d^{2} + 2ab + 2ac + 2ad + 2bc + 2bd + 2cd + ...),
it always has the same structure. So that each of the 3 components x, y, z
T = T_{x} + T_{y} + T_{z} ;
of the kinetic energy function of each material point of a dynamical system, is always the sum of 2 functions:
1. an (N_DOF+1)-dimensional paraboloid (or ellipsoid? it may have different names, but that's the one) (2-DOF example in the figure below);
[FIGURE: 2-DOF example of the paraboloid]
2. an (N_DOF+1)-dimensional hyperboloid (2-DOF example in the figure below): for this one, I've prepared 2 graphics, because its shape is a bit tricky.
[FIGURES: the hyperboloid, shown from 2 viewpoints]
Where, since the kinetic energy function is clearly always positive for physical reasons, that is, because the contribution of each material point of the system is the result of squares and so cannot have a kinetic energy value less than 0, the sum of the 2 function-prototypes listed is always a paraboloid directed upwards (figure below):
[FIGURE: the upward-directed paraboloid given by the sum]
where in this last graphic, note that it's still a paraboloid, but its axes of symmetry are not parallel anymore to the axes of the cartesian space.
A basic knowledge of modern multidimensional geometry allows You to imagine this in the generic case of the Kinetic Energy Function of a dynamical system with N_DOF degrees of freedom.
Enjoy!
The formal definition is reported below.
[FIGURE]
Its form should not come as a surprise in the context of what has been outlined before. It is intentionally, but also factually, reminiscent of the simple
0.5*m*(xd^{2} + yd^{2} + zd^{2})
expression, but it's quite a bit more generic, as one could expect.
And this is valid both in the classical as well as in the New Flavour of Lagrangian Dynamics. As an example, look at the kinetic energy function of the classical double-pendulum...
as it is in the classical flavour of Lagrangian Dynamics... this structure is evident in the qd_{1} and qd_{2} terms (which here are called theta_d_{1} and theta_d_{2}). The structure is this:
a^{2} + b^{2} + 2ab ;
which is the result of
( a + b )^{2} ;
whose graph is below, from 2 different shots.
[FIGURE: graph of ( a + b )^{2}, from 2 different shots]
Where it is crucial to note that although the graph is always greater than or equal to 0.0, it's the limit case! For any a*b term with a coefficient greater than 2.0, the graph is like a hyperboloid; while if it's less than 2.0, it's a paraboloid.
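This 2.0 threshold can be checked numerically; a tiny sketch (the function name and sample points are mine, chosen along the a = -b direction where the sign first flips):

```python
# The claimed threshold on the mixed-term coefficient: with
# f(a, b) = a^2 + b^2 + c*a*b, the graph stays >= 0 only while |c| <= 2.
def f(a, b, c):
    return a ** 2 + b ** 2 + c * a * b

# c below 2.0: paraboloid, minimum 0 at the origin, positive elsewhere.
assert f(1.0, -1.0, 1.9) > 0.0
# c exactly 2.0: the limit case (a + b)^2, which touches 0 along a = -b.
assert f(1.0, -1.0, 2.0) == 0.0
# c above 2.0: hyperboloid-like, negative values appear along a = -b.
assert f(1.0, -1.0, 2.1) < 0.0
print("threshold demo ok")
```

Along a = -b the expression reduces to 2 - c, which makes the sign change at c = 2.0 easy to see.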
The kinetic energy function is a scalar function, R^{2*N_DOF} -> R^{1}... that means... a scalar; that should be completely clear by now. But before going on, it's in place to clarify that, clearly, in the case of linear dynamical systems the story ends here, as the coefficients are constant. So, for those interested, the question of how the paraboloid graph of the kinetic energy looks in the N-dimensional case is just a matter of applying some basic notions of multidimensional geometry.

For the N-dimensional case of nonlinear dynamical systems, the structure of the function, as it depends on the qd vector of the rates-of-change (in time) of the elements of the q vector of generalized coordinates, is the same. But to fully visualize, or just study a bit, the structure of the kinetic energy function, we must take into account also its dependence on the q vector, and not only on the qd vector of the rates of change of its elements. Well, one can choose mixed pairs of axes, like a q_{1} and qd_{1} pair and so on through the various combinations, and that's fine... while to do it in general, one has to fiddle around a bit more heavily with some basic concepts of multidimensional geometry. This is because in geometry, surfaces don't generalize as simply as parametrized curves do: parametrized curves are always parametrized curves, in 2D as well as in N dimensions, and one could plot them into one of those fancy hypercubes which one can visualize from various angles with many programs on the internet; with surfaces, one has to cheat to get something meaningful. Think of the 2D graph of an R -> R function (not a 2D parametrized curve!): it's simple... but its generalization to 3D is a surface... and look at how much trickier their graphs are! This is because the 3D-to-2D projection in this case is a bit chaotic. So, to graph and imagine a 4D surface, or then an N-dimensional surface, it tends to get even worse.
Sections can be considered, but this is not a book on multidimensional geometry, nor on scientific visualization, so let it end here. The material given on the topic so far is by large sufficient to get along with anything related to the structure of the kinetic energy function.
So, essentially one just has to keep in mind that in the case of a kinetic energy function such as in the example above, it's certainly true that the coefficients are custom, and true also that in nonlinear systems they are all functions of the q vector of generalized coordinates. But also that the coefficients of the mixed terms (the non-square terms, such as the a*b term of our previous example) are ALWAYS LESS than 2.0!!
Think also that a dynamical system must always know where its configuration should evolve in the next instant... an evolution whose direction is given by a sort of minimum of that paraboloid, which is the Kinetic Energy Function in our case, outlined above.
And this is clearly true even if we don't explicitly prohibit the use of composite functions in the parametrization, which, however, is advisable.
In the polynomial formulation, clearly, the use of composite functions is prohibited, as it's simply impossible; the corresponding expression for the Kinetic Energy function (around various configurations) is left as an exercise... and its solution is directly provided among the sample simulators made with the LDNFS Simulator.
Maybe this point zero of our list of facts seemed a bit theoretical to You, but... look, let me explain: I had used the Simulator for years without yet being able to justify formally why it works well and never fails. But now that I have become able to justify it, I did it for You, so that You can be really comfortable.
In fact I created its first prototype back then to avoid the very ugly failures of the previous naive version, which did not restrict the work to the polynomial thing. I found no material whatsoever on why exactly those failures happened, and so devised the heart of the New Simulator, that is, my first theorem on Numerical Methods, and the one on numerical stability, which, however, serves only to keep the numerical simulation as close as possible to an ideal case of very small timesteps and lots of significant digits in all those variables used in the simulation which should behave as ideal real or rational numbers.
But theorem 1 I had worked out by pure intuition and rough exclusion of possibilities... I knew practically nothing of mathematics then.
Now, at least, I can bring You a more thorough justification of why it always works well, and of what this means mathematically: so that You can be optimistic, and not as desperate as I was when the old simulator used to come out with very ugly failures in apparently unpredictable circumstances. The New Simulator doesn't fail: and that's why I bring it to You, together with this guide to its usage and functioning.
You will read some more on these things in the introduction of part 3/3... meanwhile just take it as a guarantee that You are on safe ground.
The next point, point 2 of our list of facts, is pretty simple.
Well, it's the very form of the solution of the Lagrangian Equation of Motion. It's a plain, very simple parametrized form, and there are practically no exotic initial conditions and such: they are exactly what one intuitively expects for a mechanism whose motion is to be simulated. All that is needed is an initial configuration, and an initial rate-of-change (in time) of each of the generalized coordinates.
That's pretty much all of it. So, it's certainly very simple.
But if we want to dig into it a bit, let's state the thing a bit more rigorously.
We consider the 2 typical scenarios: the analytical solution, a hypothetically ideal one, and a typical numerical solution.
Well, in the analytical scenario, once given the vector of initial conditions,
[ q_{0} ; qd_{0} ] ,
that automatically selects, so to speak, the right solution-set of our differential equation. And that selection picks out a completely unambiguously-defined solution.
Of course we must not forget that the analytically tractable cases are a minor subset of the universe of practical mechanisms, most of which can be simulated only numerically... even if the concept remains the same. We do a phase plot (it has an ugly name, but don't worry), just look at the graph...
and can see that the initial conditions select unambiguously a solution-set. Some may display sorts of closed circuits, some may not... whatever; it does not matter. For a 1-DOF system, we can plot the q on one axis of a 2D cartesian graph, and the qd on another. For a 2-DOF system, we can plot q_{0} on one and q_{1} on the other, and then do another plot where we put qd_{0} and qd_{1} on the 2 different axes. Now, for the needs of applied things, the preference in the case of a 2-DOF system is for the q_{0} and q_{1} plot... because studying the evolution in time of the configuration of the system is the most concrete task. It's a parametrized curve, where the parameter is t (time, in our case), in every case.
In the case of generic N_DOF systems, in fact, we have a parametrized curve living in an N_DOF-dimensional cartesian space: that is, simply an N_DOF-dimensional parametrized curve... as simple as that.
Instead, in the case of numerical simulations, which is here at the center of our attention, the situation is that, after the simulation has proved to be acceptably reliable, after comparison with real-world measures or just the simpler pretension of a believable motion of an articulated mechanism, we just have to plot the q and qd variables on a 2D cartesian space... but that does not say much in practice, as outlined before. So, let's take a 2-DOF system, and see what a typical situation, similar to the one outlined in the previous scenario, would look like:
[FIGURE: phase plot from a numerical simulation of a 2-DOF system]
not much different, but things here are not so ideal. But as soon as we get a bunch of curves made of reasonably short segments, after running a simulation with a bunch of different initial conditions, we can say that it's all fine. A fact which is, in most cases, also the sign that we did, if not a super-realistic, then at least an acceptable, or at least a believable, simulation. That's normal; but now the point is that once we see the similarities in systems tractable both analytically and numerically, we can drop the consideration of the analytical case, as most practical articulated systems are not tractable analytically at all.
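A phase plot like the one described can be produced with a few lines of code; a minimal sketch, assuming (my choice, for illustration) a 1-DOF harmonic oscillator stepped with a simple semi-implicit scheme:

```python
import numpy as np

# Hypothetical 1-DOF linear system (harmonic oscillator): the (q, qd)
# pairs collected at each step are exactly the points of the phase plot.
dt, omega2 = 0.001, 4.0
q, qd = 1.0, 0.0
trajectory = []
for _ in range(5000):
    qdd = -omega2 * q        # qdd from the equation of motion
    qd += qdd * dt           # update rate of change first...
    q += qd * dt             # ...then the configuration (semi-implicit)
    trajectory.append((q, qd))

# For this conservative system the numerical phase curve should stay close
# to the closed (elliptic) curve the analytical solution would trace.
energies = [0.5 * v ** 2 + 0.5 * omega2 * x ** 2 for x, v in trajectory]
print(max(energies) - min(energies))   # small drift -> near-closed curve
```

Plotting `trajectory` as a polyline of short segments gives the numerical phase plot; a small energy spread is the sign that the curve nearly closes.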
So... let's just generalize the numerical situation, and close the discussion.
So. For systems with more DOFs, clearly we always end up with a multidimensional parametrized polyline. There's nothing more to say... what in a simple, analytically studyable system proves to be a closed curve, in a numerical simulation is just almost closed... it's similar, and should be so, but that's it. The reliability of the simulation, in the case of generic polynomial systems, depends on 2 parameters: the value of the stepping-time dt, and the value of the small h used to numerically get the finite-difference derivatives of all kinds needed in the simulation procedure.
As simple as that.
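To make the role of the small h concrete, here is a minimal sketch (the test function and the sample values are mine, purely illustrative) of how a finite-difference derivative's accuracy depends on h:

```python
import math

# The simulator needs finite-difference derivatives; the small h controls
# their accuracy. Central difference of sin at x = 0.3 (true value cos(0.3)).
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

true = math.cos(0.3)
for h in (1e-1, 1e-3, 1e-5):
    err = abs(central_diff(math.sin, 0.3, h) - true)
    print(f"h={h:g}  error={err:.2e}")   # error shrinks as h shrinks
# Too small an h eventually loses accuracy again to floating-point
# round-off, which is why h (like dt) must be chosen with care.
```

The same trade-off applies to dt: both must be small, but not so small that round-off dominates.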
Now instead let's go to the last point of our list of facts, fact number 3: the form of the equation itself. This is the deepest part of it, so if You are tired, take a break here before going on.
So: point 3... let's go.
First, we analyze a bit the dependencies present in the Lagrangian Equation of Motion. According to my first theorem on Analytical Mechanics, because of the structure of the T() or L() function, we have these dependencies:
A( q )*qdd + B( q, qd )*qd = b( q, qd ) ; where the most striking fact is that A depends only on q: there is NO dependence of A on qd.
From here we write out, to be thorough, also the dependences on the time variable t,
A( q(t) )*qdd(t) + B( q(t), qd(t) )*qd(t) = b( q(t), qd(t) ) ; and step to the form which immediately leads to a stepping numerical method:
qdd = A^{-1}_{N_DOFxN_DOF}( q ) * b( q, qd ) ;
(where b now also collects the -B*qd term brought to the right side) finally, not even so bad. It is immediate to implement a numerical simulator with it. Yes, that we knew already. But let's come to the last point, to total more or less 4 points supporting that the Lagrangian Equation of Motion is, after all, not so ugly.
We analyzed the dependencies, but we remained a bit at the surface. So, let's go deeper.
If we have a linear dynamical system, the B term is all 0... it vanishes. And A reduces to a matrix of constants, while b reduces to a vector of constants if we use T(), and to a vector of linear expressions of the q vector if we use L(). Familiar?
Yes! It's a linear system of differential equations, whose prototypic form is this:
[FORMULA: prototypic linear system of differential equations]
which can always be rewritten in a form relying on matrix-vector products, like this:
[FORMULA: matrix-vector form of the linear system]
as it is commonly written in math books.
But since we are not being mathematicians here, we directly adopt the standard notation with q_{0}, q_{1}, q_{2}, ..., q_{N_DOF-1}, and rewrite it as:
[FORMULA: the system in the q_{0}, ..., q_{N_DOF-1} notation]
Which, with a practical compressed notation in the case of a generic N_DOF, becomes
[FORMULA: compressed notation for generic N_DOF]
Where, to be complete, if we want to make fully explicit the dependence on t of all the variables in the q vector, we would write it as follows:
[FORMULA: the same, with the dependence on t fully explicit]
where the generalization to the previous notation for generic N_DOF requires only to replace, in that equation, qdd with qdd(t), qd with qd(t) and q with q(t).
Hope this clarification was useful rather than confusing. Don't worry if anything is unclear... it's not essential to understand this for the usage of the Simulator.
Instead, for those who aim at becoming advanced users of it, or aim at becoming experts of Analytical Mechanics' new flavour, or also of the classical flavour of it, well, we are here to support that purpose. It still may seem an annoyance, but in that case it's very important to become familiar with this notation, because it shows any linear system of differential equations in a prototypic, fixed-scheme, general form.
So, now that You know for sure what we were driving at when we wrote
A( q(t) )*qdd(t) + B( q(t), qd(t) )*qd(t) = b( q(t), qd(t) ) ;
we can now say that A is constant, B vanishes, and b is a vector containing one constant plus, optionally, some linear expressions of the q_{i} values... let's rewrite b as a matrix-vector product of the form D*q. That's it.
Now we have
A*qdd(t) = D*q(t) + Fq ; a second-order linear system of differential equations, or call it anyway... it's the one, impossible to mistake it for anything else. We can add a term with function-constants of t; for that it's sufficient to let Fq depend also on time. Fine, let's do it!
A*qdd(t) = D*q(t) + Fq(t) ;
But we are still there. Something is missing... .
It's linear, yes... but there is no term multiplying a qd, and Fq is a vector of constants which falls in the category of function-constants, so that's fine. In fact the B term is nonzero only in nonlinear systems, which in our case are all polynomial systems.
So? If we produce expressions summarizing the effect of viscous frictional forces as functions of the qd vector (because of linearity, they can only be made of constants) and so add a term for this in the equation:
A*qdd(t) + C*qd(t) - D*q(t) = Fq(t) ; to write it in conventional form with the function-constants at right.
This shows You that linear systems are so special also because, in order to, so to speak, feed one into a numerical solution program, all that would be needed is to give the A, C, and D matrices, and let Fq remain interactively given, or a vector of constants, or whatever. So these matrices provide a true data structure to contain, to define, a linear system.
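To make the data-structure idea concrete, here is a minimal Python sketch (the class name, field layout and the mass-spring-damper example values are mine, not the Simulator's actual code):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class LinearSystem:
    """Hypothetical data structure: the constant matrices A, C, D (plus a
    forcing vector Fq) fully define a linear dynamical system, written in
    the conventional form A*qdd + C*qd - D*q = Fq discussed above."""
    A: np.ndarray
    C: np.ndarray
    D: np.ndarray

    def qdd(self, q, qd, Fq):
        # Solve A*qdd = Fq - C*qd + D*q for the accelerations.
        return np.linalg.solve(self.A, Fq - self.C @ qd + self.D @ q)

# A 1-DOF mass-spring-damper as a minimal example (m=2, damping=0.5, k=8;
# the spring term enters as D = -k in this sign convention):
lin = LinearSystem(A=np.array([[2.0]]),
                   C=np.array([[0.5]]),
                   D=np.array([[-8.0]]))
print(lin.qdd(q=np.array([0.1]), qd=np.array([0.0]), Fq=np.array([0.0])))
```

Handing such an object to a stepping loop is all a linear-system solver would need.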
This makes a lot of contrast with generic polynomial systems, since there we have a much more subtle situation, which we now leave in the background, as discussing it would bring little-to-no practical advance.
Let it be sufficient to say that in the polynomial case, if we write everything out explicitly, we get to the form
A( q )*qdd(t) + B( q, qd )*qd(t) + C( q )*qd(t) - D( q, qd )*q(t) = Fq(t) ; where for greater clarity I did not write explicitly the dependence of q and qd on time when indicating on which of them each matrix depends... this to stress the fact that those matrices' elements are simply formal functions of q, or qd, or both. Anyway, for the sake of completeness, I report also the full form where every dependence is explicitly written:
A( q(t) )*qdd(t) + B( q(t), qd(t) )*qd(t) + C( q(t) )*qd(t) - D( q(t), qd(t) )*q(t) = Fq(t) .
Where, in either case, this all derives from the same formal rewrites of the Lagrangian Equation of Motion according to (figure with formula):
[FIGURE WITH FORMULA: the Lagrangian equation with d/dt developed explicitly.]
where all the elements of all the matrices A, B, C and D in the previous form are, obviously, in general, multivariate polynomials. At least, we are on a very solid ground. Those interested can easily figure out what kind of data structure that polynomial form would need in order to be contained... interesting theoretically, but of little or no use in practice. All that regards this, and those things we call in this book polynomial dynamical systems, is summarized by my first theorem on Applied Mathematics (branch: differential equations): something not discussed here, but in another book which I shall write to enable advanced users to upgrade the simulator to simulate the special dynamical systems mentioned at (*^{1}) in the chapter on parametrization rules, detailed discussion.
At least we have been as complete as we could in the little space dedicated to the topic... since I prefer dedicating more to the application now than to the theory. After all, I am here to explain to You how to apply the computer program based on the theoretical foundations I took care of: so that others don't have to do so, and can instead apply the results immediately for some practical good.
Substantially, we have to abandon the comfortable ideal case of linear systems, and cannot take advantage so efficiently of the concept of a data structure holding our system of differential equations: but we can still arrive at some very important and useful considerations. Let's go!
So. If we give up linearity and pass to the polynomial case, the B term appears, with full dependences on both q and qd (in an all-or-nothing style); both A and C start to show dependences on q, while D shows them on both q and qd (and has a separate contribution from T(), a diagonal matrix, rather an abuse of notation in fact, and a contribution from the V() potential energy function if we use the equation of motion with the L = T-V form)... to get to our generic form discussed previously:
A( q )*qdd(t) + B( q, qd )*qd(t) + C( q )*qd(t) - D( q, qd )*q(t) = Fq(t) ; now written in conventional form with the function-constants at right.
So, we can assert that all dynamical systems, in the context of the Lagrangian Method, classical or new flavour whichever, are literally constructed, built, around a linear core, so to speak.
We know, linear systems never fail in simulation... they have, per definition, none of those critical configurations (discussed below); and neither do polynomial systems have them.
The core is always there, but with more intricate dependences instead of simple constants in the matrices. So, the A term is like a sort of one-man army holding up the poorest cases: linear dynamical systems.
But the core is there, and A is always an invertible matrix, a fact absolutely enforced for polynomial systems too.
But once this fact is understood, for polynomial as well as all generic nonlinear systems, the C and D matrices become total abuses of notation, and thus, once this analysis is over, we return to the form:
A( q(t) )*qdd(t) + B( q(t), qd(t) )*qd(t) = b( q(t), qd(t) ) + Fq(t) ;
From which, since we are anyway in the context of a stepping numerical solution, we can comfortably leave out the dependence on t of q, qd, as well as of qdd, and undoubtedly of Fq, which in a simulator usually summarizes the frictional forces and the interactively-applied forces from simulation-step to simulation-step, to obtain:
A( q )*qdd + B( q, qd )*qd = b( q, qd ) + Fq ;
fine.
As a final note, as You could expect, this would not be so smooth with generic nonlinear systems, where the invertibility of A could fail if the system entered a so-called critical configuration... which is a problem with the parametrization, not discussed here since it was totally eliminated from the new flavour of Lagrangian Dynamics. In fact, in polynomial systems there are no such critical configurations. The possible failures of generic nonlinear systems are cast completely away from the New Flavour of Lagrangian Dynamics, since it restricts itself to dealing with polynomial dynamical systems exclusively... even though without breaking generality, which it achieves by requiring the chaining between the pairs of simulation-steps, or, conceptually, between polynomial parametrizations of the same system; a parametrization whose form remains always the same, but which requires adjusting some constants when a mechanism moves outside the limits of displacements within mid-oscillatory motions.
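To illustrate numerically what a critical configuration looks like in a generic nonlinear parametrization, here is a hypothetical sketch (this mass matrix is invented purely for illustration; polynomial systems, as said, avoid the problem):

```python
import numpy as np

# Hypothetical mass matrix of a generic nonlinear parametrization whose
# determinant vanishes for some q: at such a critical configuration A is
# no longer invertible, and the step qdd = A^{-1} * b breaks down.
def A(q):
    return np.array([[1.0, np.cos(q)],
                     [np.cos(q), 1.0]])

print(np.linalg.det(A(1.0)))   # well away from zero: safe to invert
print(np.linalg.det(A(0.0)))   # determinant 0: a critical configuration
```

Here det A(q) = 1 - cos(q)^2 = sin(q)^2, so the matrix degenerates exactly at q = 0; a simulator hitting that state would fail its linear solve.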
Now. I have been in trouble finding a way to introduce You gradually enough, but without leaving out anything important, to the parametrization techniques and strict rules which have to be respected when feeding into the source code of the simulator the definition of the articulated mechanism which is one's intention to simulate numerically.
Strict rules have to be respected because the simulator assumes that we are using those so-called polynomial systems.
It assumes, as a hypothesis, as a definition, a basic fact, that our articulated mechanism to simulate is parametrized so as to lead to a so-called polynomial dynamical system. Which is, as stated in part 2/3 of this presentation, a fantasy name I gave to these types of systems because, while dealing with them, I progressively realized their importance, at least in the context of numerical simulations of articulated bodies and, to a minor extent, also of electromechanical systems; and their importance notwithstanding, I never found any explicit reference to them, as far as I have searched in books and on the Internet, on Analytical Mechanics, Numerical Methods, or similar things. So that we can now, at least, refer with a unique name to a type of system which has all the properties needed for a reliable numerical simulation. So that we now have a proper name for, let's say, our favourite type of dynamical system: polynomial dynamical systems.
This is the assumption: that the parametrization leads to a polynomial system and that thus the simulator works as expected, and works well.
Forgive me if I am a bit reluctant to start quickly, but very special care must be observed here.
Because in any reference material I've seen so far on Analytical Mechanics, only 2 kinds of systems are taken into account: linear and nonlinear dynamical systems... and there is no further distinction.
And as we'll see when we start introducing the parametrization techniques for the 4 most basic joint mechanisms which I intend to discuss here, and which cover practically all the needs for simulating a practical articulated mechanism (whose destination, say, is an implementation with a metal construction, plastic construction or similar), only 1 of them, the PRISMATIC joint, leads to a linear dynamical system, while the others, according to the traditional classification, would lead to generic nonlinear dynamical systems. But in the New Flavour of Lagrangian Dynamics underlying the Simulator, these 4 lead to polynomial dynamical systems.
So. That's the reason why it's primary now for me to outline that the first type of articulation (and also a subtype of it), the PRISMATIC, will be the first to be introduced; because if only prismatic joints are present in a mechanism, that mechanism leads by itself to a linear dynamical system (the classical linear dynamical system we are well familiar with). So the parametrization of this type of joint will need little or no further clarification, even because it is well discussed in any good and detailed reference on Lagrangian Dynamics (or Analytical Mechanics, as it is also often called). So far, so good.
But I decided, and I hope it will prove the lucky decision, that before starting the discussion on how to parametrize the kind of joint which is the PRISMATIC joint and some other similar types of joints which behave similarly, before even starting, in order to make clearer the transition to other types of joints which don't lead to linear dynamical systems, I decided to go on here with some notes about, with some notes which will meke this transition less abrupt and much clearer.
So. We know that linear dynamical systems are usually very simple, and this is good (for those who are interested, even finding the analytical solution of the corresponding ordinary differential equation, or system of them, is a matter of play... there's a technique called the Laplace Transform, applied to ordinary linear differential equations or systems of them, which lets you solve them algebraically... a technique and underlying theory which finally (!!) gained some fame thanks to its applications in various modern industrial fields). But we also know that those mechanisms which qualify as linear dynamical systems cannot cover, not even by far, all the practical needs of a modern industrial world. And not even the needs of a fully complete theoretical framework for articulated-body simulation. And so we also need other types of joint mechanisms, other than the prismatic or similar ones.
And we must be able to simulate as well systems in which a rigid body can be totally free: especially as regards its orientation, since for its position, more precisely the position of its conventional geometric center, it's as if it were a simple material point.
In our examples we saw similar cases.
So, a completely free body: free to go around, free to translate; and also free to have any orientation... the classical free body.
Or a similar case in which one of its points, which one can choose arbitrarily, is not free to translate around, but is tied to a fixed point in a global reference frame, while its orientation can be anything; it can change freely. Then one can also attach a spring to it, and so on.
Of course we want to reach total generality.
We want these bodies, which in most cases are components of more complicated articulated mechanisms made of many rigid bodies, to be able to have any orientation.
And clearly, in classifying the linearity of a system, as I've already tried to advise, in the context of this simulator we consider the Kinetic Energy function
T( q, qd ) ;
so that the dynamics coming from it has to be linear for the system as a whole to qualify as linear; while the Potential Energy function
V( q )
is not so important for us, since it reduces to a contribution to the rest of the external forces acting on the system, eventually leading to a set of generalized forces, as many as the system has degrees of freedom. So the Potential Energy function is just a contribution, according to
Fq_{i} = -dV( q )/dq_{i} + contribution of frictional and interactively-applied forces
If this troubles You, consider it to be polynomial too, or even linear: for springs this is always a good approximation, even when one end of the spring is not moved along the direction of the line crossing its 2 ends.
However, there's a reason to classify a system according to the Kinetic Energy function and to leave the Potential Energy function in the background. Mostly because we start designing a system by defining its geometry, how it should move intuitively, and not by putting some spring into the void of an empty reference frame.
So, we'll first discuss the simplest kind of joint mechanism, the one which leads to a linear system, so that the parametrization rule for it is the same as in the classical approach (and You can find lots of good references in books or on the Internet). Then we'll transition to the other 3 basic types of joints and to the case of the free rigid body, so that we'll have the FREE BODY, the UNIVERSAL JOINT, the SPHERICAL JOINT and the HINGE JOINT. Don't be surprised if initially we will consider only situations in which the bodies are limited to a small change in orientation around an initial orientation considered as the default.
And after having discussed these cases, in which we will consider, in other words, systems which exhibit only middle-oscillations (similar to small-oscillations but allowing considerably bigger motions, and having a clearly much more complicated, colorful dynamics than simple linear systems), all of them being in general polynomial systems (discussed in some detail in part 2/3 of this guide, which You could have skipped if You were not interested in it at that moment), we will at the end learn how to chain these small-to-middle changes in orientation, to allow any change in orientation for the bodies which need it in the system they contribute to. So, at the end we will reach full generality, and be able to define and simulate any practical mechanism with full functionality of all the bodies in it, whether free or tied to some other body or to a fixed point with a UNIVERSAL joint, a SPHERICAL joint or a HINGE joint.
So, we initially restrict our work to mid-oscillatory systems: because these lead to polynomial systems.
Now. As we know, polynomial functions in general are famous for approximations. But approximations which can be as faithful as desired to the original functions. And this is true for simple polynomial constructions like Taylor series and polynomial interpolations; where clearly our case is that of multivariate functions... a simple, straightforward generalization.
Let that be said as a start for the next paragraph.
Linear approximations can sometimes be an option for starting to parametrize typical nonlinear systems, and I'll show that too; but the degree of falsification would, in most cases, be too large to capture the colorfulness of the real motion of such systems. That's why we use polynomial systems: because polynomials too are famous for being able to approximate almost any function, while allowing the degree of falsification to be reduced to an arbitrarily small amount... that's why, in one sentence.
But with that said, let's elaborate on it a bit more: by saying what the particularity of this story about polynomial dynamical systems is.
It's this. As I intend to provide an external video, which then I or someone else will transcribe in written form, about the strange and interesting properties of generic nonlinear systems, it's clear that linear systems behave very well: in a numerical simulation they have excellent numerical stability and excellent preservation of significant digits for the variables used in a simulator; they are very easily predictable, they are simple and everything. While nonlinear systems, even if intuitively they are not so bad, do present some difficulties.
And they can, in some circumstances, turn out to be very bad in the context of a numerical simulation. While the particularity of polynomial systems is that we chain them in order to obtain, in the context of a numerical simulation, a practically perfect equivalent of any generic dynamical system.
The fact is that the subtle properties, complications and sometimes bad behaviour of generic nonlinear systems, in the context of their numerical simulation, fall literally in between the pairs of simulation steps of our chain. The complications fall in between 2 blocks of this chain: and so they disappear, and don't cause any problem for the numerical simulation.
If it helps, try looking at how a sequence of parametrized SPLINE or Bezier curve segments (the variant with the lowest polynomial degree is fine) approximates a generic parametrized curve. If we plot the curve and its approximate version on 2 papers, as well as some kind of plot also reporting the tangent lines, and the resolution of the medium on which we draw the curve (or the resolution of the technique with which we draw it) is coarser than the difference between the 2 curves, then they are de facto indistinguishable. And to achieve this coincidence of the 2 plots, it's sufficient to make the maximum length of the segments small enough: we do not have to raise the degree of the polynomials to match the curve as well as its tangent lines.
With a polyline, instead, we could not achieve the match of the tangent lines, because it would be too rough; and reducing the maximum length of each segment would not even make the difference between the 2 plots decrease gradually. While with polynomial SPLINE or Bezier curve segments, yes: because polynomials manage to capture the colorfulness of a curve's nature. And similarly, we approximate a generic system, as its configuration evolves from instant to instant, with a polynomial system which we update from step to step according to some rules; and if the stepping-time difference is sufficiently small, the 2 evolutions of configurations become indistinguishable. And the degree of falsification of the approximation with respect to the original also decreases gradually, until it reaches the desired amount. While with linearly approximated parametrizations this would not happen... it would be far too rough.
I hope I've managed to be sufficiently clear within the small space that a preamble allows.
So. Let's get started.
As a start, we leave in the background, for now, the precise parametrization rules, which will be discussed in detail in another section. Here, just think of the simple examples shown in part 1/3...: we assume that You have the parametrization of the mechanism whose motion You want to simulate... for example, that You have it on a piece of paper.
And with this said, let me first introduce You to how the parametrization, the definition, the geometry of the mechanism to simulate is fed into the source code of the Simulator. Once You grasp the concept, the details about the parametrization rules to follow in the new flavour of Lagrangian Dynamics will fit immediately into the picture.
So. Let's get started.
Consider the quantities we have been writing and discussing on paper in part 1/3. The same quantities are present in the source code of the Simulator, at specific points, and they must be assigned the right values according to the mechanism to simulate. In those examples I tried to use the same names as those used in the source code of the Simulator for the corresponding quantities. Some of them are listed, with the corresponding name they have in the source code, in the table below.
table 3.1: a simple conversion table allows to transfer, so to speak, a mechanism's definition, its parametrization, from a handwritten paper sketch into the C source code of the L.D.N.F.S. Simulator.


//============ REORIENTATION PART ================
//
// all that needs to be done between 2 steps of simulation goes here
//
//================================================
Between 2 successive steps of simulation, the q and qd values must be updated so as to satisfy the Lagrangian Equations of Motion, which state a conservation law, as we called it.
Clearly, we know that we don't have to care much about q, because in a stepping numerical simulation it can only do this:
q = q + qd*dt ;
to which we also add this:
qd = qd + qdd*dt ;
That was to be expected, because the Lagrangian Equation of Motion roughly has the form of a 2nd-order differential equation; so it's like stepping to do an animation of an articulated body, but without knowing anything of the dynamics of articulated bodies.
For the dynamics to be realistic, we can't give the qdd term any value we like between 2 steps... it has to receive the values adequate for making the motion realistic.
We have arrived at it: the simulator has to get the qdd term right, and that means: so that between 2 steps, the qd term, and through it also the q term, satisfy the equivalence imposed by the Lagrangian equation of motion.
The correct value of the qdd vector, from step to step, comes from the Lagrangian. That said, we are ready for the details.
So. Let's get into the details of how to find qdd correctly.
The Simulator works simply by solving numerically the system of Lagrangian Equations of Motion: which means that each partial derivative figuring in the expression is calculated as a simple finite difference. For those who are not familiar with the concept of partial derivative, take it like this:
given a function of the form f( q1, q2, q3, ... qN ), and having chosen one of its variables, call it qi, the partial derivative of the function with respect to qi is the value given by this (scaled) difference:
( f( ..., qi + h, ... ) - f( ..., qi, ... ) )/h ; which is valid as long as h is small enough; and scaled means that the difference is then divided by h.
And You are fine! So, let's go on!
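In code, that (scaled) difference can be sketched like this; the helper name `partial` and the example function are my own, for illustration only, not names from the Simulator's source:

```c
#include <assert.h>
#include <math.h>

/* Partial derivative of f with respect to q[i], as the scaled difference
   ( f( ..., qi + h, ... ) - f( ..., qi, ... ) )/h, for small enough h. */
static double partial(double (*f)(const double *q), double *q, int i, double h)
{
    double f0 = f(q);
    q[i] += h;                /* perturb only the i-th coordinate */
    double f1 = f(q);
    q[i] -= h;                /* restore q to its original value */
    return (f1 - f0) / h;
}

/* A hypothetical example function for the demo: f(q) = q0^2 + 3*q1 */
static double example_f(const double *q) { return q[0] * q[0] + 3.0 * q[1]; }
```

For example, at q = ( 2, 5 ), `partial(example_f, q, 0, 1e-6)` gives a value close to 2*q0 = 4, and `partial(example_f, q, 1, 1e-6)` a value close to 3.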
We rewrite the system of Lagrangian Equations of Motion as
qdd = A^{-1}_{N_DOFxN_DOF}( q, qd ) * b( q, qd )
where some of the partial derivatives of the T( q, qd ) or L( q, qd ) function are the various elements of the A matrix, and others are summed to make up the elements of the b vector (the vector of known terms). So it, de facto, becomes a linear algebraic system:
qdd = A^{-1}_{N_DOFxN_DOF} b ; where, instead of matrix inversion, the simpler Gaussian Elimination algorithm can be used to find the qdd solution vector.
And then, all the Simulator has to do is to execute, over and over again, the procedure based on this, that is:
qdd = A^{-1}_{N_DOFxN_DOF} b
q = q + qd*dt
qd = qd + qdd*dt
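This repeated procedure can be sketched in C as follows. This is a minimal illustration of mine, not the Simulator's actual source: the matrix A and vector b are assumed to be already filled in by the caller (in the real Simulator they come from the finite-difference evaluation of the partial derivatives), and N_DOF is fixed to 2 just for the demo.

```c
#include <assert.h>
#include <math.h>

#define N_DOF 2   /* hypothetical: 2 degrees of freedom for the demo */

/* Solve A*x = b by Gaussian elimination with partial pivoting.
   A is N_DOF x N_DOF, row-major; A and b are destroyed in the process.
   Returns 0 on success, -1 if the matrix is (numerically) singular. */
static int gauss_solve(int n, double A[][N_DOF], double *b, double *x)
{
    for (int col = 0; col < n; col++) {
        /* pick the pivot row with the largest magnitude in this column */
        int piv = col;
        for (int r = col + 1; r < n; r++)
            if (fabs(A[r][col]) > fabs(A[piv][col])) piv = r;
        if (fabs(A[piv][col]) < 1e-12) return -1;
        for (int c = 0; c < n; c++) {
            double t = A[col][c]; A[col][c] = A[piv][c]; A[piv][c] = t;
        }
        { double t = b[col]; b[col] = b[piv]; b[piv] = t; }
        /* eliminate below the pivot */
        for (int r = col + 1; r < n; r++) {
            double m = A[r][col] / A[col][col];
            for (int c = col; c < n; c++) A[r][c] -= m * A[col][c];
            b[r] -= m * b[col];
        }
    }
    /* back-substitution */
    for (int r = n - 1; r >= 0; r--) {
        double s = b[r];
        for (int c = r + 1; c < n; c++) s -= A[r][c] * x[c];
        x[r] = s / A[r][r];
    }
    return 0;
}

/* One simulation step, in the order used above:
   solve A*qdd = b, then q = q + qd*dt, then qd = qd + qdd*dt. */
static void step(int n, double A[][N_DOF], double *b,
                 double *q, double *qd, double dt)
{
    double qdd[N_DOF];
    if (gauss_solve(n, A, b, qdd) != 0) return;   /* singular A: skip step */
    for (int i = 0; i < n; i++) q[i]  += qd[i]  * dt;
    for (int i = 0; i < n; i++) qd[i] += qdd[i] * dt;
}
```

Note that q is advanced with the qd value from before the update, exactly as in the 3-line procedure above.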
We start out with the notation telling what we are trying to do.


Then we write the definition of the derivative with respect to the innermost variable t, for the above type of function.


We rewrite it with the common dotted q_{i} notation.


It's a total derivative, and so there is a nice big summation. But since the summation is not too clear for us in this case, we rewrite the same thing with the notation which uses the nabla operator.


Then we rewrite each element of the nabla vector with the rough definition of derivative as the limit of an incremental ratio, as h tends to zero.


And now we use the concept that with a derivative we can linearly approximate a function around a specific value. And since it's linear, we can bring the qd_i multiplier inside, as in linear functions of the form considered until now: k*[ f(x0) + fd(x0)*x ] = k*f(x0) + fd(x0)*(k*x).

extra reference item 3.1: outline of the proof of the derivative, with respect to a single variable, of a generic depth-N single-valued composite function (for vector-valued functions it's the same anyway, except that the outermost function, in this case "a", is vector-valued).
As a safe start, we take the plain definition of derivative with h tending to 0:
d a( q(t) ) / dt = ( a( q(t+h) ) - a( q(t) ) )/h ;
where we use the equivalence q( t_{0}+h ) = q( t_{0} ) + qd( t_{0} )*h, in which we can turn the t_{0} constant into a generic variable t, to obtain: q( t+h ) = q( t ) + qd( t )*h. We then substitute q( t+h ) with that expression, and thus obtain the equivalence
( a( q(t+h) ) - a( q(t) ) )/h = ( a( q(t) + qd(t)*h ) - a( q(t) ) )/h ;
end. Clearly, in a numerical simulation which uses this fact, we only have values of q and qd, so the expression reduces to one true for a single step of simulation, or more in general a single step of a numerical solution, according to:
( a_{i+1} - a_{i} )/h = circa... ( a_{i} - a_{i-1} )/h = ( ( a_{i} + ad_{i}*h ) - a_{i} )/h ;
where clearly the a_{i} terms in the last expression cancel each other, leaving the rough definition valid in the context of a numerical solution; which is, as I have tried to outline so far, not at all far from the analytical context. Here i indicates the number of the step at which the repeated procedure has arrived so far. This is valid (for those who are interested in it) because of the same principle of linearity of the function f(t) near a specific value t_{0} of the generic variable t which f(t) depends on, according to:
f( t_{0}+t ) = f( t_{0} ) + [d f( t )/dt]_{t0}*t .
Now, at this point the idea is simply that of reducing the task of deriving a heavily composite function of the form a(b(c(d(t)))) to the repeated execution of the simple, non-composite derivative of a function with respect to its simple variable, as in a(b), which we are able to derive. By using the f( t_{0}+t ) = f( t_{0} ) + [d f( t )/dt]_{t0}*t rule and doing some substitutions, we are able to arrive at the definition of the chain rule for the derivation of composite functions.
In the generic definition of derivative,
( a( b(c(d( t_{0} + h ))) ) - a( b(c(d( t_{0} ))) ) )/h ,
we substitute, in a( b(c(d( t_{0} + h ))) ), the d function with its linear approximation near t_{0}, to obtain
d( t_{0}+h ) = d( t_{0} ) + [d d( t )/dt]_{t0}*h ,
and then we do the same for the c function, considered formally as a function of d (don't get confused!!):
c( d_{0}+h ) = c( d_{0} ) + [d c( d )/dd]_{d0}*h ,
and then for the b function, considered formally as a function of c:
b( c_{0}+h ) = b( c_{0} ) + [d b( c )/dc]_{c0}*h ,
and finally we arrive at the top-level function:
a( b_{0}+h ) = a( b_{0} ) + [d a( b )/db]_{b0}*h ,
and so at a modified definition of derivative:
( a( b(c(d( t_{0} ))) + db_{c}( t_{0} )*dc_{d}( t_{0} )*dd_{t}( t_{0} )*h ) - a( b(c(d( t_{0} ))) ) )/h ;
which is anyway sufficient to interpret, trying to guess what that rescaling of h produces in the final value of the derivative. We are almost there! The linear approximation of a function near a value does the job. By applying the definition of linearity, we understand that rescaling h rescales the derivative's value! Now very, very special care must be taken with the equivalence we arrive at:
( a( b(c(d( t_{0} ))) + db_{c}( t_{0} )*dc_{d}( t_{0} )*dd_{t}( t_{0} )*h ) - a( b(c(d( t_{0} ))) ) )/h = db_{c}( t_{0} )*dc_{d}( t_{0} )*dd_{t}( t_{0} )*[ ( a( b_{0} + h ) - a( b(c(d( t_{0} ))) ) )/h ] ;
and at the usage of the linearity of the generic f(t) function near a given t_{0} value... this may seem weird, but it is actually a hypothesis which simply underlies the whole calculus! And this equivalence's left or right flavour may be used with greater success in some fields than in others. In numerical derivation, the left may actually prove to be the better one too.
This all leads to the traditional Chain Rule, because it's thus true that
( a(b(c( d( t_{0} ) + dd_{t}( t_{0} )*h ))) - a( b(c(d( t_{0} ))) ) )/h = dd_{t}( t_{0} )*[ ( a(b(c( d_{0} + h ))) - a( b(c(d( t_{0} ))) ) )/h ]
with h small enough; the smaller, the better. Then, to arrive at the classical Chain Rule, it's sufficient to go on with the same substitution for the c(d) function, then for b(c), and finally for a(b); completely procedural, to finally obtain what the Chain Rule states:
( a(b( c( d_{0} ) + dc_{d}( t_{0} )*h )) - a( b(c(d( t_{0} ))) ) )/h = dc_{d}( t_{0} )*dd_{t}( t_{0} )*[ ( a( b( c_{0} + h ) ) - a( b(c(d( t_{0} ))) ) )/h ] ;
and so on, until arriving at:
( a( b(c(d( t_{0} + h ))) ) - a( b(c(d( t_{0} ))) ) )/h = db_{c}( t_{0} )*dc_{d}( t_{0} )*dd_{t}( t_{0} )*[ ( a( b_{0} + h ) - a( b(c(d( t_{0} ))) ) )/h ] ;
where the derivative definition on the right is finally replaced with da_{b}, that is, a's derivative with respect to b, and finally with its formal equivalent d a( b )/db ; and we have the Chain Rule. end.
[ x_{1} x_{2} ... x_{N_MPOINTS} ] 
[ x_{1} y_{1} z_{1} x_{2} y_{2} z_{2} ... x_{N_MPOINTS} y_{N_MPOINTS} z_{N_MPOINTS} ]
a = pos_val() 
R^{N_DOF} -> R^{3*N_MPOINTS}
For those who have already had some exposure to Analytical Mechanics, I mean, in this case, to Lagrangian Dynamics, and so to the various parametrization techniques and concepts underlying it, I will underline a bit, I'll hint a bit at, the differences and similarities between the rules of the Lagrangian Dynamics' new flavour and its traditional flavour.
If You haven't had any exposure to these things, then better! Just concentrate on the parametrization rules of the Lagrangian Dynamics' new flavour, which are simple and straightforward, and are thus even easier to understand than the traditional techniques, so to say, of parametrization.
So, from time to time I'll hint at the differences, as regards parametrization, between the Lagrangian Dynamics' new flavour and its traditional flavour. So, don't worry about getting confused: it will all be fine.
I say this because it's important not to get confused between the 2 flavours, since in using the Simulator, the rules of the Lagrangian Dynamics' new flavour must be used; even if this clearly does not limit the possibilities. For now take it so: advanced users can eventually progress to using a mixture of the 2 flavours, so don't worry.
In the classical approach, yes... parametrization is more a sort of art, a question of taste, than a science. In fact we have plenty of freedom in the way in which we parametrize a system, as long as the final degrees of freedom come out right: the final motion which a point, or a set of points, or a single rigid body or a set of rigid bodies, can have.
As long as the motions it can do are those intended, it's fine.
While in the new flavour of LD used in the Simulator, parametrization is a fixed scheme with rather strict rules. This, to some extent, however, also simplifies the job.
In the traditional approach to Lagrangian Dynamics, we are taught to subtract degrees of freedom: from the total number of degrees of freedom which a set of points has, from the total freedoms of motion, so to speak, which in 2D is the total number of points multiplied by 2, while in 3D it's the total number of points multiplied by 3.
And in the case of rigid bodies, well, the total number of DOFs of a set of N rigid bodies, in 2D, is 3*N;
and for a set of N free rigid bodies in 3D, we have 6 DOFs for each, so the total number of DOFs of such a system is N*6.
And as regards the joint mechanisms, in the classical approach we are taught that these so-called joint mechanisms subtract DOFs from the total numbers of DOFs I've listed before, according to the various 2D and 3D scenarios.
And of course we say that for each constraint, as it is called in the classical approach, 1 DOF is subtracted from the total; and if we consider all the constraints, we have the original free system's number of DOFs MINUS the number of constraints... so arriving at the total number of DOFs left.
Listen to how awkward it is to say it.
And moreover, different joint mechanisms, like those I've hinted at before and those listed in the table below,
[TABLE for 3D and 2D cases]
[PRISMATIC joint, the Universal JOINT in 3D ]
each subtract a different number of degrees of freedom, since some of them concretize more than one (so-called) constraint at a time. For completeness, I report for each joint mechanism, in the classical approach, how many DOFs it subtracts: look at the following table...
and then forget it as soon as You can.
The heart of the question is that in the classical approach, by definition, each constraint subtracts one single degree of freedom. Stop.
Instead, in the new flavour of Lagrangian Dynamics, we ADD degrees of freedom, starting from zero: very simple!
We start from a rigid body (since for convenience we restrict ourselves to the use of rigid bodies only), as introduced a while before (part 1/3 of this presentation and guide),
and start, almost literally, to "build" our mechanical system by adding the various types and numbers of freedoms of motion to the system, joining bodies to that initial one through various joint mechanisms: which, this way, do ADD degrees of freedom.
A jointmechanism ADDS degrees of freedom in our case.
We construct a mechanism by combining rigid bodies through various joints; a body can be either free, or held by a joint mechanism.
And when a joint mechanism is attached to a body, that body by definition has 0 (zero) DOFs: that's why in our context we say that a joint actually ADDS DOFs.
NOTE: this is not only to ensure polynomial systems, as said before (in the preamble and in more detail in part 2/3), but also to make it harder to make mistakes, and easier to diagnose them.
However, keep in mind that these conventions about DOFs and parametrization apply fully ONLY in the case of systems with NO CLOSED "CHAINS" OF BODIES; that is hinted at here, but discussed in resources other than this one.
Also because such chains are not common in practical situations, except for the piston mechanism, which anyway classifies only partly as a closed chain of bodies, and thus can be simulated with ease after consulting the resources I will make available on advanced parametrization techniques. Generic closed-chain bodies can be simulated with Lagrangian Mechanics as well as with its new flavour, but with some restrictions, discussed in the proper places; anyway these are very rare and particular applications, so they deserve only the space left to them.
So. Let's discuss how to construct a system, and arrive, in the meanwhile, at the parametrization.
In this book, only the 4 basic joint mechanisms will be discussed, plus obviously the free body. Look at the tables below!
table 3.2: 3D joints and the number of DOFs they add, in articulated mechanisms restricted to those made of open chains of bodies. Exceptions are discussed later. WARNING: this table is true in the context of the Lagrangian Dynamics' New Flavour... for the classical flavour, please look at the table dedicated to that.


table 3.3: 2D joints and the number of DOFs they add, in articulated mechanisms restricted to those made of open chains of bodies. Exceptions are discussed later. WARNING: this table is true in the context of the Lagrangian Dynamics' New Flavour... for the classical flavour, please look at the table dedicated to that.


extra reference item 3.2 (*^{1}): details on exceptions not discussed here, also because they are much less typical and much less frequent in practical applications.
Other ones, present but anyway much less frequent in practical mechanisms, will be discussed in an appendix or directly in another book, either in the form of a video later transcribed to text, or directly as written material. Examples of such particular mechanisms are (look at the figure): [FIGURE: a) point tied to a 3D curve; b) point tied to a 3D surface; c) a hollow cylinder allowed to roll on a surface, with inside it another similar cylinder of smaller radius, obviously, than the containing one's; d) 2D carriage with 2 wheels rolling on a plain surface, attached to the main body through 2 prismatic joints and a properly positioned spring for each, connecting proper pairs of points of the system; e) typical 2D or 3D piston].
Some of these systems are special inasmuch as their simulation requires at least creative usage, and most often also a little customization, a modification, of the simulator; and it sometimes requires understanding the limits within which the accomplished simulation is valid: a detailed discussion and understanding of my first theorem on differential equations is vital there. In fact some of these systems are heavily chaotic, but as proved by that theorem, there are 2 types of so-called chaotic systems: those which, even under no restriction to polynomial parametrization (old flavour of Lagrangian Dynamics), don't ever cause the simulation to fail at a certain point (in this case via division by zero, yielding as result a so-called nan in most programming languages), but where the reliability of the result has to be double-checked, because slightly different timesteps can lead to a heavy divergence of the result at some point of the simulation; and those which cause failure anyway in some situations (even with the restriction to simulation via chaining of polynomial systems), yielding in that specific case either the attempt to evaluate the square root of a negative number (yielding nan in most programming languages), or the failure of a procedure performing bisection, returning an error (that is, the case when no solution is found). All these are heavily nonlinear systems, and analytical solutions are not even feasible in the second case; while in the first, I really don't know, but at least they are well-defined. As to these, if You know whether there is an analytical solution, better... if not, don't worry anyway, as it would have little practical use and would certainly be rather complicated.
[YET TO WRITE, or record in video and then transcribe...]
And now let's come to the parametrization rule for this, and some further deepening.
Essentially, it's
x_{k} = S_{1} + q*d ;
where d may be unit-length or not. I suggest to keep it always unit-length anyway.
Combining more prismatic joints goes as follows:
A prismatic joint needs a body to which to attach it, except when it is attached to a fixed point. So, once we have a body attached with a prismatic joint to a fixed point, we can attach another body to that body through a second prismatic joint, and to that new body yet another body through a further prismatic joint.
Clearly, we then have S2 not fixed, but equal to body 1's center plus an arbitrary decentering vector, PROJECTED ONTO THE BODY'S REFERENCE FRAME (!!!).
So, assuming that x_{2} belongs to body 1, and x_{5} to body 2 attached at the point x_{2}, the parametrization would be:
x_{2} = S_{1} + q_{i}*d_{1} ;
x_{5} = x_{2} + S_{2} + q_{i+1}*d_{2} ;
where the indexes on d are clearly arbitrary, as long as one is coherent. Clearly, more bodies can be attached to body 1 through more prismatic joints. It's only a question of creativity, as this rule for combining prismatic joints is pretty much straightforward.
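As a small illustration (mine, not code from the Simulator; the numeric values of S_1, S_2, d_1, d_2 and the q's are arbitrary), the 2 formulas above can be evaluated like this:

```c
#include <assert.h>

/* Minimal 3D vector helpers for the demo */
typedef struct { double x, y, z; } Vec3;

static Vec3 add(Vec3 a, Vec3 b)     { return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(double s, Vec3 v) { return (Vec3){ s * v.x, s * v.y, s * v.z }; }

/* One prismatic-joint point:  result = anchor + S + q*d
   x2 = S1 + q_i*d1          -> anchor is the origin (joint on a fixed point)
   x5 = x2 + S2 + q_{i+1}*d2 -> anchor is x2 (joint attached to body 1)    */
static Vec3 prismatic_point(Vec3 anchor, Vec3 S, double q, Vec3 d)
{
    return add(add(anchor, S), scale(q, d));
}
```

So the chain is evaluated in the same order as it was built: first x2 from the fixed point, then x5 from x2; more bodies attached to body 1 would just reuse x2 as their anchor.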
Here it's very important to observe that the parametrization rules outlined so far for prismatic joints don't take into account the orientation of body 2, and assume it has the identity orientation... that is to say, an orientation which does not change in time. In the next subsection we will discuss how to attach a prismatic joint to a body whose orientation can change over time, and thus depends on, say,
q0, q1 and q2, while body 2 is attached to that body with a prismatic joint displaced along its original vertical axis, which we call R in this book. It's straightforward to get familiar with the facts to take into account, as soon as the next section becomes clear. As a final note, when one just wants to attach body 2 in such a scenario to body 1, but without caring about the orientation of body 2, one can assume the orientation of body 2 to be the identity... that's like saying it's just a dumb material point. The other, more realistic and concrete cases, are discussed in the next subsection.
comments:
Mechanisms which are made only of prismatic joints connecting bodies are among the simplest: they all lead to linear systems, that is, the simplest possible dynamical systems, a subcase of the more generic polynomial systems on which our simulator and its underlying theory rely. However, and this is important to note, mechanisms made only of bodies connected by prismatic joints are not the only linear, that is very friendly and simple, systems. Systems containing perfectly circular motion-transmission disks with ideal ropes wrapped around them (figure) are also linear dynamical systems, in fact. These are important too, but we do not discuss them in detail here, except for the simple idea that in general, the wire's total length must be enforced. Once this is understood in 2 dimensions, it's immediate to generalize it to the 3D case, in which the different segments of a wire can wrap also around disks which have different axes of rotation. The idea is the same also for closed-circuit wires like transmission chains or transmission belts in engines. In the three-dimensional case, the principle is the same: to enforce the fact that the total length of the wire or belt or rope is constant. The dynamics and principle of gear ratios and such things comes out of this by itself. Let examples be discussed in some appendix, since these types of mechanisms are not the primary focus here, as they are usually part of more generic mechanisms whose parametrization I have yet to introduce to You.
And so are systems which are a combination of these 2 cases.
For both situations, as well as for the combined scenario, it's worth restating our custom of classifying systems as linear or polynomial according to the Kinetic Energy function, and not the Lagrangian function.
For the sake of completeness, it's clearly easy to note that if one also wants the Lagrangian function
L(q, qd) = T(q, qd)  V( q )
to be linear or polynomial, also the potential energy function V(q) has to be compatible with a linear, or polynomial, dynamical system. Cleary, as discussed in part 2/3 of this presentation, in a system which is strictly linear, we have Kinetic Energy function T which in which no q contributes (or we could also say that the contribution of each of them is null), so if we want to be strict, for linar systems we would write:
L(q, qd) = T(qd)  V( q )
and not
L(q, qd) = T(q, qd)  V( q )
.
But since linear dynamical systems are a subcase of polynomial dynamical systems, it's best to consider the most generic form, also because linear dynamical systems alone can't do justice to the colorful variety of practical (as well as theoretical) needs. So, let's go on to discuss a more advanced case: the free body, to which we can readily attach others through prismatic joints, to begin dealing with the wide spectrum of advanced practical mechanisms.
In 3D, the position of the conventional geometric center of the body will be the vector
[ q3 q4 q5 ] ;
that was easy.
While for its orientation, well, there is my first theorem on numerical methods:
R( q0, q1, q2 ) = Id + E + E^{2} =

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

+

[ 0 -q2 q1 ]
[ q2 0 -q0 ]
[ -q1 q0 0 ]

+

[ -(q1^{2}+q2^{2}) q0*q1 q0*q2 ]
[ q0*q1 -(q0^{2}+q2^{2}) q1*q2 ]
[ q0*q2 q1*q2 -(q0^{2}+q1^{2}) ]
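As an illustration, the degree-2 orientation formula can be evaluated numerically. Below is a minimal sketch in plain Python, assuming the convention that q0, q1, q2 form the skew-symmetric matrix E; the function names are hypothetical, not the simulator's.

```python
# Sketch (assumed convention): the orientation parameters q0, q1, q2 form
# the skew-symmetric matrix E, and the orientation is R = Id + E + E^2.

def skew(q0, q1, q2):
    """Skew-symmetric matrix E built from the three orientation parameters."""
    return [[0.0, -q2,  q1],
            [ q2, 0.0, -q0],
            [-q1,  q0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def orientation(q0, q1, q2):
    """R(q0,q1,q2) = Id + E + E^2 : a degree-2 polynomial orientation."""
    E = skew(q0, q1, q2)
    E2 = matmul(E, E)
    return [[(1.0 if i == j else 0.0) + E[i][j] + E2[i][j]
             for j in range(3)] for i in range(3)]

def apply(R, v):
    """Rotate the point v by the orientation matrix R."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
```

For a small parameter like q2 = 0.01 the result is close to a true rotation of 0.01 radians around the vertical axis; the parametrization stays polynomial, which is what the whole approach relies on.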
Well, the universal joint is essentially a variation of the free body. It's all about its orientation being free, while the position of the point at which it is supposed to be hung up to a fixed point through the universal joint, is fixed.
The following sequence of figures will roughly illustrate the concept.
[FIGUREs: universal joint]
;
comments: ...
;
comments: ... .
Once this concept is there, it's just sufficient to elaborate on this until the meaning of what that point is, becomes clear: so that then we can define it comfortably according to any need.
[...]
examples:
A very simple classical room-experiment mechanism which can be built with the universal joint is the 3D double-pendulum made of 2 rigid bodies connected by universal joints. Look at the figure below!
[FIGURE: 3D double-pendulum made of 2 rigid bodies constructed with universal joints... it's a sort of physical double-pendulum, in its own full right ]
;
In this case the axes are the default ones.
For this example, as long as one does not push its oscillations beyond the limits of mid-oscillations, no notion is required of the chaining operation between the simulation steps (each of which is supposed to stay within the limits of mid-oscillations), so this is good. If one inserts the chaining part into the simulation, the mechanism can explore all the freedoms of motion which are possible with the spherical joint.
Among the sampleprograms implemented with the LDNFS Simulator, this second, generic case is implemented.
This may seem the trickiest to some. So let's take adequate space to discuss it, because there is a particular case and a general case too; even if the general case may seem easy for some to grasp, while the particular case may instead prove a bit confusing.
Let's start from the simplest case: a body hung up to a fixed point with a hinge-joint whose axis is defined by a
d_{0}
versor. Actually, as You could expect, since all over the simulator we have generalized coordinates, it need not be unit-length, that is, it need not be a versor. But using a versor is better anyway.
Well, we must let the positions of the points of the body be rotated around that versor, starting from unitary orientation. Orientation, yes it is.
As soon as it's clear that we just have to elaborate on the orientation, we can be sure that it's something similar to the universal joint; but we add only 1 DOF, and require the change of orientation to be like a rotation around our
d_{0}
versor.
Well, a wild guess could be to apply the definition of what the orientation matrix is, and say: well, I fix the direction
d_{0}
which ideally would perform a unit-angle rotation of the points around that very axis, and then try to see what happens if I modulate its length but not its direction:
0.01*d_{0} ;
does it make sense? Yes! It will produce an R matrix which rotates the points around the d_{0} axis, by an angle of 0.01 radians.
Well, we just let that wild 0.01 amount become a new degree of freedom of the body's points, like
q_{2}*d_{0}
and it's there!
We construct the R matrix for the universal joint from that
q_{2}*d_{0}
triplet of values, then choose the decentering radius as for the universal joint, and we have the parametrization of our body with its freedom to rotate around the d_{0} axis, by an angle of q_{2}.
It's all here. We don't even have to rewrite the universal joint's parametrization rules, since they are the same.
The hinge-joint, thus, is parametrized as a universal joint with limited freedom of rotation.
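A small sketch of this rule, under the same assumed conventions as before: the hinge-joint feeds the single-DOF triplet q*d_{0} into the R = Id + E + E^{2} form. The function names are hypothetical.

```python
# Sketch (assumed convention): a hinge-joint reuses the universal-joint
# orientation R = Id + E + E^2, but feeds it the single-DOF triplet q*d0,
# where d0 is the (preferably unit-length) hinge axis.

def skew(w):
    x, y, z = w
    return [[0.0, -z,  y],
            [ z, 0.0, -x],
            [-y,  x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def hinge_orientation(q, d0):
    """Orientation of a body rotated by the single generalized coordinate q
    around the axis d0: R(q*d0) = Id + E + E^2."""
    w = [q * c for c in d0]            # the q*d0 triplet
    E = skew(w)
    E2 = matmul(E, E)
    return [[(1.0 if i == j else 0.0) + E[i][j] + E2[i][j]
             for j in range(3)] for i in range(3)]
```

Nothing new had to be written for the joint itself: the only change with respect to the universal joint is that all three orientation parameters are driven by one coordinate.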
But let's analyze a bit how this is done with more bodies, since the choice of how we define this limitation must be clear. With more bodies, the choice of the axis of rotation may not be arbitrary.
Now, the general case: a body_1 attached with a hinge-joint, yes, but now not to a point of the global reference-frame but to another body, body_0.
We must understand and define well what axis of rotation to choose... an axis in the global frame, or an axis in the reference-frame of body_0?
To resolve this doubt, well, just look at how a real hinge-joint is constructed and how 2 bodies can then be joined with it:
[figure]
It's the reference-frame of body_0, clearly.
Now, the utmost general case: a chain of bodies, possibly with bi- or tri- or n-furcations at some bodies... THOUGH NO CLOSED CHAINS (remember the restrictions mentioned earlier!).
So:
body_{n+1}
is connected through a hinge-joint to body
body_{n} ;
and thus the axis of rotation is a generic
d0
axis DEFINED IN THE REF-FRAME OF
body_{n} ;
the illustration below and some examples should suffice to make You comfortable with this rule.
.
An important note to finish with, however: the typical wheel-mechanism is also formalized as a body attached to another through a hinge-joint. The only particularity here is that the body which acts like the typical "wheel" is not only attached to the rotating axis at its own center-of-mass, but is also geometrically symmetric (a fact which is relevant only when we want to simulate also the functionality of a wheel as such, when it makes contact with a surface acting as a typical "ground").
The figure below tries to illustrate this concept; for the rest, there are the examples implemented with the LDNFS Simulator.
.
Now let some examples follow, in the next paragraph.
examples:
As the main resource for this, check the pos_val() function of some of the simplest sample-simulators. Among those which are ready at the time I'm writing this are:
1.) the Tricycle with Pendulum, and
2.) the Tricycle and Car (quadricycle), modeled with the rotation of the wheels taken into account.
An important example, to be published soon, instead, is a sort of non-ammortized tricycle; and a subcase which is passive, so not a real vehicle, but very important also for the Epilogue of this book, is the non-ammortized, 2-wheeled trailer, whose figure is reported below.
[FIGURE: 2-wheel non-ammortized trailer]
Actually, this is a sort of false case, because a spherical joint can be, and in practice really is, the combination of 2 hinge-joints. So, like the hinge-joint, this one also requires some care. But once we understand its basic logic, we can parametrize a spherical joint in one hit, shortening the parametrization and making it less likely that we commit errors while doing it. So, let's get started!
I won't be too detailed here, since this is not so crucial as a joint-mechanism, but I'll try to do well anyway. Just as for the hinge-joint we needed a full reference-frame of a body, so will we need one for the spherical joint, except when it's attached to the global reference frame, to a fixed point.
We try to look at the way we used the R matrix in the case of the hinge-joint. We see that we could vary this a bit... some examples follow:
[ u_{0} u_{1} u_{2} ] = q*[ P_{x} P_{y} P_{z} ]
[ u_{0} u_{1} u_{2} ] = q*[ 1 0 0 ]
[ u_{0} u_{1} u_{2} ] = [ q_{0} q_{1} 0 ] = q_{0}*[ 1 0 0 ] + q_{1}*[ 0 1 0 ] ; promising, but it's referred to the global ref-frame... well, if that's the purpose, it's fine so.
[ u_{0} u_{1} u_{2} ] = [ q_{0} q_{1} q_{2} ] ; halt! it's the free body's orientation!! Not a spherical joint: it does not meet the required number of DOFs.
[ u_{0} u_{1} u_{2} ] = q_{0}*[ P_{x} P_{y} P_{z} ] + q_{1}*[ Q_{x} Q_{y} Q_{z} ] ; this is promising... but is the combination legal? Yes! It's this one.
Well, the last formula is it! Meditate on it! You just have to choose between 3 cases:
[ u_{0} u_{1} u_{2} ] = q_{0}*[ P_{x} P_{y} P_{z} ] + q_{1}*[ Q_{x} Q_{y} Q_{z} ] ;
[ u_{0} u_{1} u_{2} ] = q_{0}*[ P_{x} P_{y} P_{z} ] + q_{1}*[ R_{x} R_{y} R_{z} ] ;
[ u_{0} u_{1} u_{2} ] = q_{0}*[ Q_{x} Q_{y} Q_{z} ] + q_{1}*[ R_{x} R_{y} R_{z} ] ;
So easy. It is the mathematical combination of 2 hinge-joint parametrizations. Just be careful about which body You tie another one to with a spherical joint, and careful with the choice of the axis-pair.
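A sketch of the chosen parametrization, under the same assumed conventions as before: the triplet q_{0}*P + q_{1}*Q is fed into R = Id + E + E^{2}. P and Q are assumed to be unit-length axes of the parent body's reference frame; the function names are hypothetical.

```python
# Sketch (assumed convention): the spherical joint combines two hinge-like
# contributions around two axes P and Q of the parent body's reference
# frame: [u0 u1 u2] = q0*P + q1*Q, fed into R = Id + E + E^2.

def skew(w):
    x, y, z = w
    return [[0.0, -z,  y],
            [ z, 0.0, -x],
            [-y,  x, 0.0]]

def spherical_orientation(q0, q1, P, Q):
    """[u0 u1 u2] = q0*[Px Py Pz] + q1*[Qx Qy Qz], then R(u) = Id + E + E^2."""
    u = [q0 * p + q1 * r for p, r in zip(P, Q)]
    E = skew(u)
    E2 = [[sum(E[i][k] * E[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[(1.0 if i == j else 0.0) + E[i][j] + E2[i][j]
             for j in range(3)] for i in range(3)]
```

With q1 = 0 this degenerates into a plain hinge-joint around P, which is exactly the "combination of 2 hinge-joint parametrizations" idea.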
Once this warning is given: a very simple classical room-experiment mechanism which can be built with the spherical joint is the spherical double-pendulum. Look at the figure below!
;
In this case the axes are the default ones.
For this example, as long as one does not push its oscillations beyond the limits of mid-oscillations, no notion is required of the chaining operation between the simulation steps (each of which is supposed to stay within the limits of mid-oscillations), so this is good.
If one inserts the chaining part into the simulation, the mechanism can explore all the freedoms of motion which are possible with the spherical joint.
Among the sample-programs implemented with the LDNFS Simulator, this second, generic case is implemented.
The spherical joint, however tricky it may seem at first glance, is used for example in the mechanism which gives extra suspension to the driver's cabin in modern trucks: the cabin-body can move forward with respect to the rest of the chassis, and move left and right... but it cannot turn around its axis, defined as the axis which gives the "up" direction with respect to the cabin. The implementation of that is left as an exercise: You can start implementing it by attaching a box to the 4-wheel car example (the 10 DOF one is better for this than the 14 DOF one) through a spherical joint, keeping in mind that the spherical joint must allow rotation with respect to the main car-body's OWN P and Q axes.
Now. We have already mentioned the possibility of resetting the overall 'center' of our articulated mechanism to
[ 0 0 0 ]
after each step, and then accumulating the variation of that position into a
[ delta_xcm . . ]
vector, which is added ONLY when expressing numerically, in a table, the positions of each point of the system at the end of each simulation-step, as well as, coherently, in the graphical representation and in the check of contact with the ground and such extra things which are not part of the simulation itself.
This can be done because translation of that center does not influence the T() value, and moreover is a linear operation in general; so letting x, y, z vary, or accumulating their variation in an auxiliary value and then resetting x, y, z to 0, is the same thing.
Well, this is certainly a big deal for the numerical simulation, as each variable has a limited number of significant digits, so that a value of x, y or z of thousands of units can destroy precision; which, with the resetting, does not happen.
But let's concentrate on the mathematical validity of this operation. We said why this operation was legal. Fine. This is a very simple kind of chaining, but it's not the real one: as x, y, z do not influence T()... and we prohibited letting them appear in V() as well (!!). In fact, alone, it would not suffice. But this is not a book on the theory of polynomial dynamical systems and the class of differential equations it leads to, so let's just take this as an application of the stability theorem.
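The resetting scheme can be sketched as follows; the variable layout (center coordinates in q[0:3]) and names like delta_xcm are illustrative assumptions, not the simulator's actual data-structure.

```python
# Sketch (hypothetical layout): after each simulation step, reset the overall
# center back to [0,0,0] and accumulate the displacement into delta_xcm.
# delta_xcm is added back only for output (tables, graphics, ground contact),
# never inside the simulation step itself.

def reset_center(q, delta_xcm):
    """q[0:3] hold the center position x, y, z; the rest are other DOFs."""
    for i in range(3):
        delta_xcm[i] += q[i]   # accumulate this step's displacement
        q[i] = 0.0             # restart the next step from the origin
    return q, delta_xcm

def output_position(point_simulated, delta_xcm):
    """Position reported to tables/graphics: simulated value plus the
    accumulated offset."""
    return [p + d for p, d in zip(point_simulated, delta_xcm)]
```

This keeps the working values of x, y, z small, so their limited number of significant digits is spent on the motion within a step, not on a large absolute position.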
But as a vague concept, it's good for a start. In fact the real thing comes now.
The idea is that the orientation, too, must be chained in a similar way. Let's outline quickly how this is done and why it is legal.
The conventional geometric center of each body clearly remains the same. But at the end of a step, a total displacement, a total variation in the orientation of each body, has been achieved.
And since the parametrization of the orientation of each of these is polynomial, and of a degree as low as 2, we can do something.
For the system, AT THE BEGINNING OF A STEP, whether a body has an orientation of
[ -lim < q0 < lim ; -lim < q1 < lim ; -lim < q2 < lim ]
or has an orientation of
[ 0, 0, 0 ] (identity-orientation) ,
is the same. And the very shape, the very definition, of a body is the same as well.
But we have been talking about rotations: and as isometric transformations (no need to know the details here), they can be composed... they are nonlinear in shape, but they have the particularity of being composable.
Composing rotations, that is, doing successive rotations around various axes, leads to one and the same net change in the orientation of a body!
So... maybe we can foresee some allowed operations which can be useful, if not straight resolutory for us.
Let's not be too rigorous; for the details some intuition is sufficient, and I'll make available some extra clarifications in some appendix.
So... if at the beginning of a simulation step the orientation of a body was
[ 0 0 0 ] ,
and if at the end of each simulation step we reoriented the definition of that body (in its reference-frame) by the exact
[ -lim < q0 < lim ; -lim < q1 < lim ; -lim < q2 < lim ]
amount achieved in that step, and then reset the concretized orientation to
[ 0 0 0 ] ,
it would be fine. Let's see the figure of such a hypothetical sequence, below!
;
comment: ...
;
comment: ...
;
comment: ...
Note that the rule that each body must have its own reference frame
P_{n}, Q_{n}, R_{n} (not present in the 2D cases) ,
is met also here, as stated in part 1/3 . So, without too much verbal explanation, just look again at the sequence and note that the body's orientation changes in exactly the same way as the orientation of the
P_{n}, Q_{n}, R_{n} triplet of versors used as a reference frame.
So, the next thing is to see that we can do this collectively for all the bodies present in the mechanism to simulate. Look at the figure below!
;
comment: ...
;
comment: ...
;
comment: ...
It's a 2D example, to let You see more easily what happens, in principle.
Just don't get confused by the fact that the bodies are attached, so the overall change of the system may seem exaggerated... even if it is not... just see how little each body changed with respect to its own reference-frame.
If You are comfortable with the concept, just think of this generalized to a 3D articulated body, such as the one illustrated in the figure below.
[FIGURE: GENERIC 3D articulated body]
;
Now, a last clarification.
Although in the parametrization we used the polynomial form, for small changes in orientation, reorienting and then restarting from the identity orientation, or not reorienting but restarting from the same values of the orientation parameters as in the previous step, is the same thing.
As a last note: in the reorientation operation we can use the exact rotation, since it's not part of the simulation procedure, and so cannot cause failure; neither does it violate any of our hypotheses outlined in the introduction.
That was it. We have chained 2 inter-compatible polynomial systems. The data-structure, that is, the polynomial terms in the parametrization, remains the same, but some of the parameters are changed and some of the variables are reset to 0 between two numerical solution steps of the differential-equation system.
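The orientation-chaining step can be sketched like this. It is a minimal illustration, assuming the body's definition is stored as a list of local points, and using the exact (Rodrigues) rotation for the between-steps reorientation, as permitted above; the names are hypothetical.

```python
import math

# Sketch (assumed scheme): at the end of each step, fold the achieved
# orientation change into the body's stored definition (here, its local
# points), then reset the orientation coordinates q0,q1,q2 to [0,0,0].
# The reorientation may use the exact rotation, since it runs between
# simulation steps, not inside them.

def exact_rotation(w):
    """Exact rotation matrix for the rotation-vector w (Rodrigues formula)."""
    theta = math.sqrt(sum(c * c for c in w))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    x, y, z = (c / theta for c in w)
    K = [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]
    s, c1 = math.sin(theta), 1.0 - math.cos(theta)
    return [[(1.0 if i == j else 0.0) + s * K[i][j]
             + c1 * sum(K[i][k] * K[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def chain_orientation(body_points, q):
    """Reorient the body definition by the step's achieved [q0,q1,q2],
    then reset those coordinates to zero."""
    R = exact_rotation(q)
    new_points = [[sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]
                  for p in body_points]
    return new_points, [0.0, 0.0, 0.0]
```

The polynomial terms of the parametrization are untouched; only the stored body definition and the reset coordinates change between two solution steps, exactly as stated above.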
After all, it's NOT all that difficult!
discussed in video (a provisional one!); the video is at this link:
https://youtu.be/oRgBemzavo
A linear dynamical system is a subcase of the more generic polynomial dynamical system. But since the concept of linearity is mentioned in so many references, it is important to clarify and demystify this concept, since we have been dealing until now with polynomial systems.
With linear dynamical systems, the good thing is this: even if only in few cases systems are linear, and in very few cases a polynomial system can be roughly approximated by a linear one, once we have a linear dynamical system, its system of differential equations is linear... a linear system of ordinary differential equations. And for these, not only do analytical solutions exist, but when we build an interactive application which relies on a numerical method, we have theorems for each numerical method which tell the conditions under which the numerical solution is reliable, tell how reliable it is, and tell in which cases it will fail (a case usually called divergence of the numerical solution).
Moreover, in the light of Lagrangian Dynamics' New Flavour, which relies entirely on polynomial systems, a thorough knowledge of linear systems acquires new uses besides those I've tried to outline so far. It becomes useful because polynomial systems, and all the systems they approximate to an arbitrary degree of faithfulness, all have a linear core, as outlined in part 2/3: they are all built around, so to speak, a linear core. So, when designing a polynomial system (a common nonlinear system), we can use the tactic of drafting a linear prototype, checking its stability, oscillations, own-oscillations and whatnot, choosing the parameters for the numerical simulation to be stable and precise, then proceeding to add all the various 'corrections' to it, in the case of free bodies, universal joints, hinge-joints, spherical joints and special advanced joints not discussed in this book, and finally checking whether the ultimate design still behaves like its linearized 'draft' design. Moreover, understanding this can also make us feel on more solid ground. We will, in fact, come up with a nice set of examples in which the 'linearization' falsifies little, even when it's about quite complex systems. We will so be able to build mentally a sort of ABC of the dynamics of articulated bodies: an ABC into which go the strongest and most primitive relations, while the subtle ones are left outside.
And as a final topic, maybe the most fascinating one, is this: what happens if we linearize a polynomial system even when this is not legal, and then chain the simulation-steps so constructed? Now, I don't pretend to give a full answer to this, but a comparatively lengthy discussion of it is absolutely necessary, and can well open up new ways in application... as long as one keeps in mind what is being done. In one sentence: if we take any of the sample-simulators, choosing one which has an evidently formidable dynamics (that is, a rather complex and colorful one), what happens if we replace the
Id + E + E^{2}
expression with the raw linear
Id + E
expression?
When does it lead to a truly linear dynamical system, and when not? And if it leads to one, how much does it falsify the dynamics of the original mechanism? And if it comes out nonlinear anyway, is it because it is impossible to formally approximate it linearly, or is it possible anyway with some trick? And if it's possible, how would it be?
Many questions, which also require answering whether it is possible to linearize all articulated mechanisms, at whatever degree of falsification, as long as the final simulation is at least believable.
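The experiment can be tried in miniature: compare the degree-2 form Id + E + E^{2} with the raw linear truncation Id + E on a single small rotation. A hedged sketch, with hypothetical function names:

```python
# Exploratory sketch: compare the degree-2 orientation Id + E + E^2 with its
# raw linear truncation Id + E, for a small rotation parameter.

def skew(w):
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def mat_add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def R_poly(q):
    """Id + E + E^2 : the degree-2 polynomial orientation."""
    E = skew(q)
    return mat_add(IDENTITY, E, matmul(E, E))

def R_linear(q):
    """Id + E : the raw linear truncation."""
    return mat_add(IDENTITY, skew(q))

def apply(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
```

For q = [0, 0, 0.01] the two results on the point [1 0 0] differ only in the first component, by q^2 = 0.0001; the discrepancy grows quadratically with the step amplitude, which is exactly what the chaining of small steps has to keep under control.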
This epilogue is dedicated to these things.
So. Let's get started.
[...]
This is the end of the book. After this, You can comfortably read any of the appendix sections, as well as the future books I plan to write on the advanced topics mentioned in this book; as well as consult any material on the topic with a fair amount of comfort.
Simon Hasur,
2015 November,
Italy
For those who may want to use the matrix-equation authoring technique used in this book with HTML tables, I report a prototypical one below. It's sufficient to copy the HTML source-code of this part, modify the table size, and format it with CSS syntax (where You can adopt, as a start, the one I used for the matrix expressions in the section on the free 3D rigid-body's orientation).
TEST of FORMULA, and mostly matrixequation AUTHORING IN HTML PAGES:
a_{11}  a_{12}  a_{13}  a_{14}  a_{15}  a_{16}  a_{17}  a_{18} 
a_{21}  a_{22}  a_{23}  a_{24}  a_{25}  a_{26}  a_{27}  a_{28} 
a_{31}  a_{32}  a_{33}  a_{34}  a_{35}  a_{36}  a_{37}  a_{38} 