M.Tech CSE
MASTER OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted
By
(22A91D5818)
CERTIFICATE
This is to certify that the project work entitled “A HYBRID MACHINE LEARNING APPROACH TO FIND INVERSIONS OF PLANAR KINEMATIC CHAINS” is submitted in partial fulfillment of the requirements for the award of the M.Tech degree in Computer Science and Engineering.
External Examiner
DECLARATION
I hereby declare that the project entitled “A HYBRID MACHINE LEARNING APPROACH TO FIND INVERSIONS OF PLANAR KINEMATIC CHAINS” is a genuine project. This work has been carried out in partial fulfillment of the M.Tech degree. I further declare that this project work has not been submitted, in full or in part, for the award of any degree of this or any other educational institution.
By
(22A91D5818)
ACKNOWLEDGEMENT
I wish to thank Dr. S. Rama Sree, Professor in CSE and Dean (Academics), for her support and suggestions during my project work.
My deepest thanks to our HOD, Dr. K. Swaroopa, Professor, for inspiring us all the way and for arranging all the facilities.
I owe my sincere gratitude to Dr. M. Sreenivasa Reddy, Principal, for the great support and for giving me the opportunity of doing the project.
I am grateful to our College Management for providing all the facilities in time for the completion of my project.
Last but not least, I thank the faculty, lab technicians, non-teaching staff, and my friends who have directly or indirectly helped and supported me in completing my project in time.
ABSTRACT
In computer networks, Dijkstra’s algorithm is applied to find the shortest route from one node to all other nodes. k-NN is a supervised machine learning algorithm which can be used to classify the elements of a given population into groups of similar characteristics. In the present work, both of the above algorithms are applied sequentially to a kinematic chain. Initially, a kinematic chain is converted into a graph: any kinematic chain can be represented as a network graph by converting links into nodes and connections into paths. Dijkstra’s algorithm is used to find the shortest paths from one node to every other node. After that, the k-NN algorithm is applied to form clusters of nodes (links), classifying the kinematic links into groups of similar characteristics (same class), which are eventually called ‘Inversions’. Results for 8-link 1-dof and 9-link 2-dof k-chains are analyzed and presented. The same concept can be extended to chains with more links and higher degrees of freedom.
P a g e 5 | 55
LIST OF FIGURES
Fig 2.2   COMPARISON OF ML-DL-AI
Fig 4.1   CHAIN NO. 6
LIST OF ABBREVIATIONS
LIST OF TABLES
Table 4.1 Shortest distances from Node 1 to all other nodes in Figure 4.2
Table 4.2 Consolidated report of shortest distances from each node to all nodes
Table 4.3 Shortest distances from each node to all nodes -Chain No. 6
Table 4.5 k-NN and k-NN(Rev) values for links of Chain No. 6
INDEX
ABSTRACT 5
1 LIST OF FIGURES 6
2 LIST OF ABBREVIATIONS 7
3 LIST OF TABLES 8
10 REFERENCES 43-45
CHAPTER 1
LITERATURE SURVEY
Patel & Rao [1] used the concept of velocity diagrams to identify the distinct inversions of a k-chain. This method also identifies the best inversion. In 1991, Rao [2] applied the Hamming number technique to find the inversions and isomorphism in k-chains. In his method, each link’s characteristics are compared with those of the other links. Chu & Cao [3] developed a new idea of using a link’s adjacent-chain table (ACT) to identify the distinct mechanisms of a k-chain. Sanyal [4] presented a new method in which a k-chain is represented using a pseudo-probability scheme. This method is limited to rotary-type joints only. Rao [5] applied the principles of information theory to finding inversions and rating k-chains. Rao [6] proposed fuzzy logic to study inversions of k-chains in 2000. In this approach, fuzzy uncertainty values are compared with a crisp vector. Kong et al. [7] presented a Hopfield–Tank artificial neural network (ANN) technique to identify isomorphism of kinematic chains. Mohammad et al. [8] proposed a new parameter, the ‘EA Matrix’, to find the inversions. Bedi and Sanyal [9] also addressed isomorphism. Marin et al. [10] proposed a novel multivalued neural network that enables a simplified formulation of the graph isomorphism problem. Sanyal [11] proposed a new method based on link-joint connectivity, which was tested successfully on single- and multiple-DOF planar k-chains. Dargar et al. [12] used weighted structural indices, i.e., extended adjacency link value (EALV), total loop size (TLS) and extended adjacency string (EAS), to identify inversions. Rizvi et al. [13] proposed new parameters, the Link Identity Matrix (LIM), Link Signature (LS) and Chain Signature (CS), to find the inversions. Rizvi et al. [14] proposed an algorithm using the absolute sum of the eigenvalues of the inversion adjacency matrix as the parameter deciding the distinct mechanisms, named the ‘Link Identification Number’. Shukla and Sanyal [15]
proposed ‘gradient matrices’ based on a gradient analogy to distinctly denote the structure of each kinematic chain. Rai and Punjabi [16] proposed a new method based on a shortest distance matrix and its string to find the inversions. Kamesh et al. [17] used an additive approach based on the connectivities of the links, through a new concept called the ‘Remote Adjacency Influence Table’ (RAIT), to find the inversions. Ding et al. [18] used the 5th power of the adjacency matrix to find the inversions. Kamesh et al. [19] proposed a novel algorithm based on the rigidity concept to find inversions, in which 3 parameters are defined, namely the Primary Connectivity Index (PCI), Secondary Connectivity Index (SCI) and Net Connectivity Index (NCI). Tagliani et al. [20] proposed a sequential ML methodology for robot inverse kinematics modelling, iterating the model prediction at each joint. The method implements an automatic Denavit–Hartenberg (D-H) parameters formulation code to obtain the forward kinematic (FK) equations required to produce the robot dataset. Amini et al. [21] proposed a new evaluation framework to compare Active Learning (AL) and Design of Experiments (DoE) across different aspects, i.e., data generation, sample efficiency, stability and predictive accuracy of the resulting ML models. Higgins et al. [22] presented a method to select suitable clustering techniques for time-series lumbar and pelvis kinematic data. Wang et al. [23] surveyed different ML methods for the analysis of complex data in the fault diagnosis of rotating machinery. Lin et al. [24] developed a new algorithm.
However, no author has applied machine learning methods to mechanical engineering problems such as the conceptual design of mechanisms or the identification of inversions. In the present work, computer science and machine learning algorithms are applied in combination, and the results are verified.
CHAPTER 2
INTRODUCTION
Graph theory concepts have been used for many years for the topological analysis of both planar and spatial kinematic chains. In the graph representation of a chain, each link is taken as a node and the connections (joints) are taken as paths between two nodes. The k-NN algorithm is basically utilized to classify different sets of data points into groups of similar characteristics. In this work, the similarity between the characteristic features of nodes is compared. The nodes having similar features or parameters, with reference to the k-NN algorithm, are declared to belong to the same inversion of a k-chain. In the present work, Dijkstra’s algorithm and the k-NN algorithm of machine learning are used to find the shortest paths, as in computer networks, and to cluster the nodes to find the inversions of planar kinematic chains. The applied hybrid algorithm produced the same results as those available in the existing research papers [1-24]. Many researchers discussed the application of machine learning and neural network techniques to related problems [7,10,20-24]. Taking those articles as inspiration, in the present work the k-NN algorithm of supervised machine learning is applied to finding the distinct inversions of k-chains.
2.1 DIJKSTRA'S ALGORITHM: A STEP-BY-STEP GUIDE
Dijkstra's algorithm is a popular algorithm used to find the shortest path between a given node (or
vertex) and all other nodes in a graph. It works on graphs where the edges have non-negative
weights.
ALGORITHM STEPS:
1. Initialization:
o Set: Create a set S to keep track of the visited nodes. Initially, S is empty.
o Distance: Create a distance array. The distance to the source node is set to 0. The distance to all other nodes is set to infinity, indicating they haven't been reached yet.
o Previous: Create an array prev to store the previous node on the shortest path from the source to each node.
2. Selection:
o Find Minimum: Find the node u not yet in S that has the smallest distance value, and add it to S.
3. Update Neighbors:
o Iterate: For each neighbor v of u that is not in S, calculate the tentative distance from the source node to v through u. This is done by adding the distance from the source to u and the weight of the edge (u, v).
o Update Distance: If the calculated tentative distance is less than the current distance to v, update the distance to v and set prev[v] to u. This means that u is now the predecessor of v on the shortest path found so far.
4. Repeat:
o Continue: Repeat steps 2 and 3 until all nodes have been added to S.
VISUAL REPRESENTATION:
Example:
If we want to find the shortest path from node A to all other nodes, we will follow these steps:
1. Initialization:
o Set S = {}. Distance(A) = 0; all other distances = infinity.
2. Selection:
o u = A (the smallest distance value); add A to S.
3. Update Neighbors:
o Distance(B) = 0 + 1 = 1, prev[B] = A
o Distance(C) = 0 + 4 = 4, prev[C] = A
4. Repeat:
o Continue with the node of smallest tentative distance (here B) until all nodes are in S.
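The steps above can be sketched in Python with a priority queue. The A-B and A-C weights come from the walk-through; the B-C weight of 2 is an assumption added so the relaxation step has something to do.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source to every node (non-negative edge weights)."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]            # (distance, node) priority queue
    visited = set()                 # the set S of finalized nodes
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():   # relax each neighbor of u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Graph for the A/B/C example; the B-C weight of 2 is assumed for illustration.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2},
    "C": {"A": 4, "B": 2},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Note how C is first reached at distance 4 and then relaxed to 3 via B, exactly the "Update Distance" step.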
APPLICATIONS:
• GPS systems: To find the shortest route between two points on a map.
Dijkstra's algorithm is a fundamental algorithm in graph theory and has numerous practical applications.
2.2 MACHINE LEARNING
Machine Learning, often abbreviated as ML, is a subset of artificial intelligence (AI) that focuses on systems which learn and improve through experience and by the use of data. In simpler terms, machine learning enables computers to learn
from data and make decisions or predictions without being explicitly programmed to do so.
At its core, machine learning is all about creating and implementing algorithms that
facilitate these decisions and predictions. These algorithms are designed to improve their
performance over time, becoming more accurate and effective as they process more data.
In traditional programming, the computer is given explicit step-by-step instructions to perform a task. However, in machine learning, the computer is given a set of examples (data) and a task to perform, but it is up to the computer to figure out how to accomplish the task based on the examples provided.
For instance, if we want a computer to recognize images of cats, we do not provide it with
specific instructions on what a cat looks like. Instead, we give it thousands of images of cats and
let the machine learning algorithm figure out the common patterns and features that define a cat.
Over time, as the algorithm processes more images, it gets better at recognizing cats, even when
presented with images it has never seen before.
This ability to learn from data and improve over time makes machine learning incredibly
powerful and versatile. It is the driving force behind many of the technological advancements we
see today, from voice assistants and recommendation systems to self-driving cars and predictive
analytics.
Machine learning is often confused with artificial intelligence or deep learning. Let us look at how these terms differ from one another.
AI refers to the development of programs that behave intelligently and mimic human intelligence through a set of algorithms. The field focuses on three skills: learning, reasoning, and self-correction.
Machine learning is a subset of AI, which uses algorithms that learn from data to make
predictions. These predictions can be generated through supervised learning, where algorithms
learn patterns from existing data, or unsupervised learning, where they discover general patterns in
data. ML models can predict numerical values based on historical data, categorize events as true or false, and cluster data points based on their similarities.
Deep learning, on the other hand, is a subfield of machine learning dealing with algorithms based essentially on multi-layered artificial neural networks (ANN) that are inspired by the structure and function of the human brain.
Unlike conventional machine learning algorithms, deep learning algorithms are less linear,
more complex, and hierarchical, capable of learning from enormous amounts of data, and able to
produce highly accurate results. Language translation, image recognition, and personalized recommendations are typical applications of deep learning.
Ref : https://www.datacamp.com/blog/what-is-machine-learning
In the 21st century, data is the new oil, and machine learning is the engine that powers this
data-driven world. It is a critical technology in today's digital age, and its importance cannot be
overstated. This is reflected in the industry's projected growth, with the US Bureau of Labor Statistics projecting strong growth for machine-learning-related occupations.
Machine learning can be broadly classified into three types based on the nature of the
learning system and the data available: supervised learning, unsupervised learning, and
reinforcement learning.
FIG 2.3: TYPES OF MACHINE LEARNING
2.3 K-NEAREST NEIGHBORS (K-NN)
k-NN is a simple, yet effective supervised machine learning algorithm used for both classification and regression tasks. It operates based on the principle that points that are close to each other in feature space are likely to belong to the same class.
ALGORITHM STEPS:
1. Data Preprocessing:
o Normalization: If the features have different scales, normalize them so that they contribute equally to the distance calculation.
o Handling Missing Values: Impute missing values using techniques like mean, median, or mode.
2. Choose K:
o Select the number of neighbors k. A smaller k can make the model more sensitive to noise, while a larger k might make it less discriminative.
3. Calculate Distances:
o For a new data point, calculate its distance to all training points. Common distance metrics include Euclidean, Manhattan, and Minkowski distance.
4. Find Neighbors:
o Identify the k nearest neighbors to the new data point based on the calculated distances.
5. Make Prediction:
o Classification: If the task is classification, determine the majority class among the k nearest neighbors. This class is the prediction.
o Regression: If the task is regression, average the target values of the k nearest neighbors. This average value is the predicted output.
VISUAL REPRESENTATION:
Example:
Consider a dataset of two classes, represented by red and blue points. To classify a new (green) point:
1. Choose k, for example k = 5.
2. Calculate the distance between the green point and all training points.
3. Identify the 5 nearest neighbors.
4. Since the majority of these neighbors are blue, the green point is classified as blue.
ADVANTAGES OF K-NN:
• Simple to understand and implement.
• No explicit training phase; new data can be incorporated at any time.
DISADVANTAGES OF K-NN:
• Computationally expensive at prediction time, since distances to all training points must be calculated.
• Can be susceptible to the curse of dimensionality, where the distance between points becomes less meaningful in high-dimensional feature spaces.
APPLICATIONS OF K-NN:
• Medical diagnosis: Predicting diseases based on patient symptoms and test results.
K-NN is a versatile algorithm that can be applied to a wide range of problems. By carefully
considering the choice of k and distance metric, it can provide accurate and reliable predictions.
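As a minimal illustration of the classification case above; the coordinates and labels are invented for this sketch, not taken from any dataset in this work.

```python
import math
from collections import Counter

def knn_classify(train, query, k):
    """Predict the majority class among the k nearest training points (Euclidean)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two-class toy data, mirroring the red/blue example (coordinates are assumed).
train = [((1, 1), "red"), ((1, 2), "red"), ((2, 1), "red"),
         ((6, 6), "blue"), ((6, 7), "blue"), ((7, 6), "blue")]
print(knn_classify(train, (5, 5), k=3))  # blue
```

The query point (5, 5) lies closer to the blue cluster, so all three of its nearest neighbors vote blue.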
CHAPTER 3
METHODOLOGY
In the present work, a hybrid algorithm is proposed in which two algorithms, i.e., Dijkstra's algorithm and k-NN, are sequentially applied on a k-chain graph to find the distinct inversions of a planar k-chain.
3.1 APPLYING DIJKSTRA'S ALGORITHM TO K-CHAINS
Dijkstra's algorithm is primarily used to find the shortest path or route from one node to all other nodes in a computer network. Initially, the network is to be represented as a graph. In the graph network, the distance from one node to another node can then be found by applying the algorithm.
From the literature [1-24], many approaches have evolved for the identification of inversions of planar kinematic chains using graph theory. Hence, it is proposed to apply Dijkstra's algorithm to k-chains using a graph. In a kinematic chain, we have links and joints. In graph theory terms, links are transformed into nodes, and joints, as connective objects, are transformed into paths between the nodes.
All 8-link 1-dof k-chains [11] and 9-link 2-dof k-chains [16] were processed with Dijkstra's algorithm and the k-NN algorithm. The various steps of the algorithm are explained below.
Step 1: The kinematic chain is converted into a graph, with links as nodes and joints as paths.
Step 2: Nodes and individual paths are to be marked separately. The distance between two adjacent nodes is taken as the product of their incidence values.
Step 3: Dijkstra's algorithm is applied to find the shortest paths from the first node to all other nodes.
Step 4: All the shortest paths from the first node to all other nodes are to be identified and recorded.
Step 5: Steps 3 and 4 are repeated for every node of the graph.
Step 6: A consolidated report is to be presented for each node and its distances, represented as a matrix.
Step 7: The matrix values in each column (distances) are sorted from low to high.
Step 8: Now, the k-NN algorithm is applied by taking k = 5 (generally, an odd value is preferred for better decision making). Clustering is to be done for each node (link) by summing up the nearest 5 distances. In the present work, k-NN(Rev) is also proposed, which sums the 5 farthest distances.
Step 9: After the clustering process is completed, all the nodes (links) with equal values (k-NN and k-NN(Rev)) are grouped; each group represents a distinct mechanism or 'Inversion'.
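The nine steps can be sketched end to end in Python. The edge list below is a reconstruction of Chain No. 6 inferred from the shortest-distance table in the next chapter (the original figure is not reproduced here), so treat it as an assumption; the Dijkstra and k-NN parts follow the steps directly.

```python
import heapq
from collections import defaultdict

# Step 1: edge list of the graph for Chain No. 6 (assumed, reconstructed
# from the distance table in Chapter 4).
edges = [(1, 3), (1, 5), (1, 7), (2, 4), (2, 6), (2, 8),
         (3, 6), (4, 5), (5, 9), (6, 9), (7, 8)]
n = 9

# Step 2: weight of each path = product of the two end nodes' incidence values.
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
graph = defaultdict(dict)
for u, v in edges:
    graph[u][v] = graph[v][u] = degree[u] * degree[v]

def dijkstra(src):
    """Steps 3-4: shortest distances from src to every node."""
    dist = {node: float("inf") for node in range(1, n + 1)}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Steps 5-8: all-pairs shortest distances, sorted, then k-NN / k-NN(Rev), k = 5.
k = 5
signature = {}
for node in range(1, n + 1):
    row = sorted(dijkstra(node).values())      # includes the 0 to itself
    signature[node] = (sum(row[1:k + 1]),      # k-NN: five nearest links
                       sum(row[-k:]))          # k-NN(Rev): five farthest links

# Step 9: links sharing the same (k-NN, k-NN(Rev)) pair form one inversion.
clusters = defaultdict(list)
for node, sig in sorted(signature.items()):
    clusters[sig].append(node)
print(sorted(clusters.values()))  # [[1, 2], [3, 4], [5, 6], [7, 8], [9]]
print(len(clusters))              # 5 inversions
```

Under the assumed edge list, the output reproduces the values reported for Chain No. 6 in Chapter 4 (k-NN 43/51/45/47/51, k-NN(Rev) 68/79/72/82/84) and the count of 5 inversions.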
CHAPTER 4
In this section, the newly proposed parameter and the sequential application of the two algorithms are explained. For identifying inversions, the complete link-joint data needs to be analyzed. After considering only 'k' values in the string of shortest distances, 'n-k' distances would be left out of the analysis. Hence, in the present work, a new parameter with the name 'k-NN(Rev)' is proposed, in which 'k' values are considered from the other end. In the string of shortest distances, k-NN considers 'k' values from one end, and k-NN(Rev) considers 'k' values from the other end. In other words, k-NN considers the 'k' nearest links while k-NN(Rev) considers the 'k' farthest links. Together, these two values make the analysis much easier for comparing links and forming clusters.
The application of Dijkstra's algorithm and the k-NN algorithm is explained in detail by taking one example from Appendix II. The various steps of the inversion identification procedure are illustrated below. All the k-chains are presented in Appendix I & II. Chain No. 6 from Appendix II is taken as a test specimen to check the proposed procedure.
Step 1: The kinematic chain in Figure 4.1 is converted to a graph. All the links are shown
as Nodes and all the joints are shown as Paths between the nodes. The graph is shown in
Figure 4.2.
FIG. 4.1. CHAIN NO. 6          FIG. 4.2. GRAPH FOR THE K-CHAIN NO. 6
Step 2: In the graph, the distance from one node to an adjacent node is taken as the product of both vertices' incidence values. For example, for the path from node 1 to node 3, the incidence values of node 1 and node 3 are 3 and 2, respectively. Hence, the distance between node 1 and node 3 will be 3*2 = 6. In an analogous way, all the distances can be calculated.
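The weight rule of Step 2 is a single multiplication per path; a minimal sketch using the incidence values quoted above:

```python
# Step 2 in code: the weight of the path between two adjacent nodes is the
# product of their incidence values (number of joints on each link).
incidence = {1: 3, 3: 2}          # node 1 has incidence 3, node 3 has incidence 2
weight = incidence[1] * incidence[3]
print(weight)  # 6
```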
Step 3: Now, Dijkstra's algorithm is to be applied to find the shortest paths from Node 1 to all other nodes.
Step 4: After applying the algorithm, the various shortest distances from Node 1 to all other nodes are found. The results are shown in Table 4.1.
Table 4.1. Shortest distances from Node 1 to all other nodes in Figure 4.2
Node 1 2 3 4 5 6 7 8 9
1 0 16 6 15 9 12 6 10 15
Step 5 & 6: After applying Step 4 for all the nodes, i.e., finding the shortest paths from each node to all other nodes in the network, a consolidated report is prepared. The consolidated report is shown in Table 4.2.
Table 4.2. Consolidated report of shortest distances from each node to all nodes
Link 1 2 3 4 5 6 7 8 9
1 0 16 6 15 9 12 6 10 15
2 16 0 15 6 12 9 10 6 15
3 6 15 0 21 15 6 12 16 12
4 15 6 21 0 6 15 16 12 12
5 9 12 15 6 0 12 15 18 6
6 12 9 6 15 12 0 18 15 6
7 6 10 12 16 15 18 0 4 21
8 10 6 16 12 18 15 4 0 21
9 15 15 12 12 6 6 21 21 0
Step 7: For applying the k-NN algorithm, the matrix values are taken, and the shortest distances in each column are sorted from low to high.
Table 4.3. Shortest distances from each node to all nodes – Chain No. 6
L1 L2 L3 L4 L5 L6 L7 L8 L9
0 16 6 15 9 12 6 10 15
16 0 15 6 12 9 10 6 15
6 15 0 21 15 6 12 16 12
15 6 21 0 6 15 16 12 12
9 12 15 6 0 12 15 18 6
12 9 6 15 12 0 18 15 6
6 10 12 16 15 18 0 4 21
10 6 16 12 18 15 4 0 21
15 15 12 12 6 6 21 21 0
L1 L2 L3 L4 L5 L6 L7 L8 L9
0 0 0 0 0 0 0 0 0
6 6 6 6 6 6 4 4 6
6 6 6 6 6 6 6 6 6
9 9 12 12 9 9 10 10 12
10 10 12 12 12 12 12 12 12
12 12 15 15 12 12 15 15 15
15 15 15 15 15 15 16 16 15
15 15 16 16 15 15 18 18 21
16 16 21 21 18 18 21 21 21
Taking k = 5, the sum of the 5 nearest distances in each column gives the k-NN value for that link:
43 43 51 51 45 45 47 47 51
Similarly, for all the nodes, the k-NN values are found.
L1 L2 L3 L4 L5 L6 L7 L8 L9
68 68 79 79 72 72 82 82 84
In the present case, k-NN(Rev) is obtained as the sum of the distances to the 5 farthest links. Similarly, for all the nodes, the k-NN(Rev) values are found.
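For one link, both parameters reduce to two slice-sums over the sorted column of Table 4.3; a minimal sketch for link L1:

```python
# One sorted column of Table 4.3 (link L1) and the two proposed parameters, k = 5.
col_L1 = [0, 6, 6, 9, 10, 12, 15, 15, 16]   # shortest distances, low to high
k = 5
knn = sum(col_L1[1:k + 1])    # five nearest links (skip the 0 distance to itself)
knn_rev = sum(col_L1[-k:])    # five farthest links
print(knn, knn_rev)  # 43 68
```

These are the L1 values (43 and 68) reported in Table 4.5.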
Step 9: After the clustering of k-NN and k-NN(Rev) values, the nodes are analyzed for similarity. All the links with the same k-NN and k-NN(Rev) values are said to have similar characteristics, i.e., they belong to the same inversion. When comparing any two links for distinctness, the strings of values forming the k-NN value and the k-NN(Rev) value are observed. In the present case, the results are shown in Table 4.5.
Table 4.5. k-NN and k-NN(Rev) values for links of Chain No. 6
Link        L1  L2  L3  L4  L5  L6  L7  L8  L9
             0   0   0   0   0   0   0   0   0
             6   6   6   6   6   6   4   4   6
             6   6   6   6   6   6   6   6   6
             9   9  12  12   9   9  10  10  12
            10  10  12  12  12  12  12  12  12
            12  12  15  15  12  12  15  15  15
            15  15  15  15  15  15  16  16  15
            15  15  16  16  15  15  18  18  21
            16  16  21  21  18  18  21  21  21
k-NN        43  43  51  51  45  45  47  47  51
k-NN(Rev)   68  68  79  79  72  72  82  82  84
In the above example, the distinct mechanisms (inversions) are: (1,2), (3,4), (5,6), (7,8), (9). Hence, the number of inversions of this kinematic chain is 5. In an analogous way, the distinct inversions of all the remaining chains are found. Here, the 8-link 1-dof distinct k-chains taken from [11] and the 9-link 2-dof k-chains taken from [16] are redrawn. The results are shown in Table 5.3 & Table 5.4 of the next section.
CHAPTER 5
RESULTS
Dijkstra's algorithm is applied to find the shortest paths from one node to all other nodes, and clustering by the k-NN algorithm is then applied, for all the kinematic chains of 8-link 1-dof and 9-link 2-dof. The associated graphs are shown in Table 5.1 & Table 5.2. The results are shown in Table 5.3 & Table 5.4.
Table 5.1. Graphs of 8-link 1-dof k-chains
Table 5.2. Graphs of 9-link 2-dof k-chains
Table 5.3. Results of Hybrid algorithm for k-chains of 8-link 1-dof
1 K-NN 37 44 44 37 37 44 44 37 (L1,L4,L5,L8),
2
K-NN REV 53 71 71 53 53 71 71 53 (L2,L3,L6,L7)
2 K-NN 37 40 40 37 37 40 40 37 (L1,L4,L5,L8),
2
K-NN REV 49 63 63 49 49 63 63 49 (L2,L3,L6,L7)
3 K-NN 37 40 39 35 41 36 42 38 (L1),(L2),(L3),
(L4),(L5),(L6), 8
K-NN REV 49 61 61 48 56 52 64 49
(L7),(L8)
4 K-NN 42 35 39 39 35 38 38 42 (L1,L8),(L2,L5),
4
K-NN REV 62 50 59 59 50 53 53 62 (L3,L4),(L6,L7)
5 K-NN 42 37 42 44 44 42 37 42 (L1,L8),(L2,L7)
4
K-NN REV 64 58 61 67 67 61 58 64 ,(L3,L6),(L4,L5)
6 K-NN 41 35 39 39 35 41 36 40 (L1,L6),(L2,L5),
5
K-NN REV 60 52 59 59 52 60 56 50 (L3,L4),(L7),(L8)
7 K-NN 41 36 40 35 40 36 38 41 (L1),(L2,L6),
(L3,L5),(L4), 6
K-NN REV 62 52 56 50 56 52 49 58
(L7),(L8)
8 K-NN 42 37 37 42 37 37 42 42 (L1,L4,L7,L8),
2
K-NN REV 64 54 54 64 54 54 64 64 (L2,L3,L5,L6)
9 K-NN 36 36 41 36 41 41 36 41 (L1,L2,L4,L7),
2
K-NN REV 48 48 58 48 58 58 48 58 (L3,L5,L6,L8)
11 K-NN 36 40 43 34 44 38 44 44 (L1),(L2),(L3),(L4),
7
K-NN REV 48 63 60 44 61 54 61 57 (L5,L7),(L6),(L8)
12 K-NN 30 44 43 35 43 44 36 43 (L1),(L2),(L3,L8),
7
K-NN REV 56 67 61 46 64 67 52 61 (L4),(L5),(L6),(L7)
13 K-NN 38 46 43 35 43 40 36 44 (L1),(L2),(L3),(L4),
8
K-NN REV 51 69 64 44 62 63 52 61 (L5),(L6),(L7),(L8)
14 K-NN 36 44 44 36 44 44 36 42 (L1,L7),(L2,L6),
5
K-NN REV 54 71 64 46 64 71 54 60 (L3,L5),(L4),(L8)
15 K-NN 44 36 44 44 36 44 44 44 (L1,L3,L4,L6,L7,L8),
2
K-NN REV 66 44 66 66 44 66 66 66 (L2,L5)
16 K-NN 34 44 44 34 44 44 48 48 (L1,L4),(L2,L3,L5,L6),
3
K-NN REV 44 62 62 44 62 62 60 60 (L7,L8)
TOTAL 71
Table 5.4. Results of Hybrid algorithm for k-chains of 9-link 2-dof
K-NN 45 45 48 48 51 49 40 44 50 (L1,L2),(L3,L4),
1 (L5),(L6),(L7), 7
K-NN REV
75 75 73 73 90 71 68 84 83 (L8),(L9)
K-NN 45 57 54 57 52 48 45 46 50 (L1),(L2),(L3),
2 (L4),(L5),(L6), 9
K-NN REV
72 83 95 100 81 66 88 91 86 (L7),(L8),(L9)
K-NN 57 57 54 54 49 49 41 44 50 (L1,L2),(L3,L4),
3 (L5,L6), (L7), 6
K-NN REV
87 87 83 83 83 83 68 64 82 (L8),(L9)
K-NN 49 49 48 48 50 50 47 47 66 (L1,L2),(L3,L4),
4 (L5,L6),(L7,L8), 5
K-NN REV
75 75 72 72 97 97 86 86 88 (L9)
K-NN 43 49 43 49 50 47 50 47 51 (L1,L3),(L2,L4),
5 (L5,L7),(L6,L8), 5
K-NN REV
71 71 71 71 88 90 88 90 74 (L9)
K-NN 43 43 51 51 45 45 47 47 51 (L1,L2),(L3,L4),
6 (L5,L6),(L7,L8), 5
K-NN REV
68 68 79 79 72 72 82 82 84 (L9)
K-NN 56 56 56 56 56 56 44 44 64 (L1,L2,L3,L4,L5,L6),
7 3
K-NN REV 88 88 88 88 88 88 60 60 80 (L7,L8),(L9)
K-NN 48 48 48 48 45 45 42 42 48 (L1,L2,L3,L4),
8 (L5,L6),(L7,L8), 4
K-NN REV
78 78 78 78 72 72 72 72 72 (L9)
K-NN 49 43 49 43 50 51 50 51 51 (L1,L3),(L2,L4),
9 5
K-NN REV 77 80 77 80 98 99 98 99 74 (L5,L7),(L6,L8),
(L9)
K-NN 43 43 54 54 51 51 44 44 44 (L1,L2),(L3,L4),
10 (L5,L6),(L7,L8), 5
K-NN 51 51 51 51 48 48 42 42 48 (L1,L2,L3,L4),
11 (L5,L6),(L7,L8), 4
K-NN REV
87 87 87 87 81 81 81 81 72 (L9)
K-NN 54 54 54 54 42 42 44 44 46 (L1,L2,L3,L4),
12 (L5,L6),(L7,L8), 4
K-NN REV
82 82 82 82 84 84 76 76 76 (L9)
K-NN 51 45 48 54 43 51 47 44 44 (L1),(L2),(L3),
13 (L4),(L5),(L6), 9
K-NN REV
76 76 72 97 69 78 85 89 80 (L7),(L8),(L9)
K-NN 49 49 54 54 43 43 47 47 51 (L1,L2),(L3,L4),
14 (L5,L6),(L7,L8), 5
K-NN REV
77 77 101 101 74 74 92 92 84 (L9)
K-NN 51 49 43 54 41 50 47 47 47 (L1),(L2),(L3),
15 (L4),(L5),(L6), 9
K-NN REV
75 75 72 88 67 87 83 78 81 (L7),(L8),(L9)
K-NN 48 48 54 54 43 43 47 47 51 (L1,L2),(L3,L4),
16 (L5,L6),(L7,L8), 5
K-NN REV
82 82 93 93 73 73 88 88 84 (L9)
K-NN 45 51 43 51 48 51 40 44 47 (L1),(L2),(L3),
17 (L4),(L5),(L6), 9
K-NN REV
75 75 71 82 73 79 68 76 77 (L7),(L8),(L9)
K-NN 46 52 52 54 52 46 46 46 50 (L1),(L2),(L3),
18 (L4),(L5),(L6), 9
K-NN REV
76 82 98 96 70 84 94 94 90 (L7),(L8),(L9)
19 K-NN 54 54 49 49 54 57 41 52 48 (L1,L2),(L3,L4), 7
(L5),(L6),(L7),
K-NN REV
87 87 84 84 91 97 72 80 66 (L8),(L9)
K-NN 48 46 52 54 54 48 40 46 54 (L1),(L2),(L3),
20 (L4),(L5),(L6), 9
K-NN REV
68 80 94 90 82 74 76 86 88 (L7),(L8),(L9)
K-NN 57 57 50 44 57 45 50 46 45 (L1,L2),(L3),
21 (L4),(L5),(L6), 8
K-NN REV
88 88 85 64 79 68 84 87 85 (L7),(L8),(L9)
K-NN 54 54 44 44 54 48 40 46 54 (L1,L2),(L3),
22 (L4),(L5),(L6), 8
K-NN REV
82 82 84 68 78 72 72 82 86 (L7),(L8),(L9)
K-NN 43 43 48 48 47 47 42 51 51 (L1,L2),(L3,L4),
23 (L5,L6),(L7),(L8), 6
K-NN REV
71 71 73 73 82 82 72 84 72 (L9)
K-NN 56 56 56 56 48 48 52 52 52 (L1,L2,L3,L4),
24 (L5,L6),(L7,L8), 4
K-NN REV
92 92 92 92 68 68 88 88 92 (L9)
K-NN 44 44 64 64 56 56 52 52 52 (L1,L2),(L3,L4),
25 (L5,L6),(L7,L8), 5
K-NN REV
64 64 84 84 88 88 84 84 92 (L9)
K-NN 45 45 57 57 54 54 49 49 44 (L1,L2),(L3,L4),
26 (L5,L6),(L7,L8), 5
K-NN REV
73 73 83 83 87 87 88 88 60 (L9)
K-NN 43 43 51 51 44 44 45 57 44 (L1,L2),(L3,L4),
27 (L5,L6),(L7),(L8), 6
K-NN REV
72 72 81 81 79 79 82 79 83 (L9)
K-NN 51 51 42 48 48 49 43 50 51 (L1,L2),(L3),
28 (L4),(L5),(L6), 8
K-NN REV
90 90 85 82 73 76 76 97 95 (L7),(L8),(L9)
K-NN 60 60 52 52 62 62 47 54 70 (L1,L2)(L3,L4),
29 (L5,L6),(L7),(L8), 6
K-NN REV
100 100 106 106 113 113 104 67 133 (L9)
K-NN 56 56 57 52 54 52 45 48 64 (L1,L2),(L3),
30 (L4),(L5),(L6), 8
K-NN REV
112 112 83 120 116 91 94 76 132 (L7),(L8),(L9)
31 K-NN 54 54 42 48 44 44 48 50 54 (L1,L2),(L3),
(L4),(L5),(L6), 8
K-NN REV
84 84 84 76 78 80 78 90 90 (L7),(L8),(L9)
K-NN 48 48 54 54 48 48 47 47 47 (L1,L2),(L3,L4),
32 (L5,L6),(L7,L8), 5
K-NN REV
87 87 97 97 72 72 94 94 98 (L9)
K-NN 43 47 47 41 47 47 43 57 51 (L1,L7),
(L2,L3,L5,L6),
33 6
K-NN REV (L4),(L7),(L8),
78 81 81 74 81 81 78 75 81 (L9)
K-NN 46 46 50 50 60 60 52 54 68 (L1,L2),(L3,L4),
34 (L5,L6),(L7),(L8), 6
K-NN REV
88 88 112 112 118 118 78 112 138 (L9)
K-NN 48 45 51 57 54 49 50 49 49 (L1),(L2),(L3),
(L4),(L5),(L6),
35 9
K-NN REV (L7),(L8),(L9)
66 80 75 88 95 92 88 95 97
K-NN 54 54 44 48 46 48 50 46 46 (L1,L2),(L3),
(L4),(L5),(L6),
36 8
K-NN REV (L7),(L8),(L9)
90 90 88 72 74 78 88 90 94
K-NN 48 48 54 54 49 49 47 47 47 (L1,L2),(L3,L4),
37 5
K-NN REV 76 76 105 105 85 85 98 98 98 (L5,L6),(L7,L9),
(L8)
K-NN 44 44 50 50 54 54 48 48 52 (L1,L2),(L3,L4),
(L5,L6),(L7,L8),
38 5
K-NN REV (L9)
80 80 94 94 92 92 82 82 72
K-NN 56 56 62 62 50 56 48 44 70 (L1,L2),(L3,L4),
(L5),(L6),(L7),
39 7
K-NN REV (L8),(L9)
K-NN 54 54 56 56 54 54 48 44 64 (L1,L2),(L3,L4),
(L5),(L6),(L7),
40 7
K-NN REV (L8),(L9)
TOTAL 254
CHAPTER 6
CONCLUSION
The proposed algorithm, based on a machine learning approach (supervised learning), is quite easy to implement. It efficiently differentiated the characteristics of each kinematic link so that the distinct inversions could be identified.
The proposed method can be applied to chains with more links and higher degrees of freedom, i.e., 10-link 3-dof and 10-link 1-dof. This hybrid algorithm can be easily programmed in any programming language.
REFERENCES
isomorphism among kinematic chains and inversions. Mechanism and machine theory.
58 (1993).
5. A.C.Rao. Topology based rating of kinematic chains and inversions using information
DOI: 10.1016/S0094-114X(97)00061-X
Mechanical Engineering Science. 221, 81-88 (2007).
13. Rizvi, S. S. H., Hasan, A., & Khan, R. A. (2016). A new method for distinct inversions
14. S.S.H.Rizvi, A.Hasan, R.A.Khan: An efficient algorithm for distinct inversions and
(2016).DOI: 10.1016/j.pisc.2016.03.022
DOI: 10.1080/14484846.2017.1374815
Kinematic Chains and Mechanisms. Indian Journal of Science and Technology. 10(18),
17. V V Kamesh, DVSSSV Prasad, PS Ranjit, Bh Varaprasad,V Srinivasa Rao. An
19. V.V.Kamesh, DVSSSV Prasad, PS Ranjit, BhV Prasad,VS Rao. A rigidity approach to
DOI: 10.3390/app12199417
(2023)
22. S.Higgins, S.Dutta, R.S.Kakar. Machine learning for lumbar and pelvis kinematics
. DOI: 10.1080/10255842.2023.2241593
24. Rongpei Lin, Yi Yang, F.Shen, G.Pi, Y.Li. An Algorithm for the Determination of
DOI: 10.3847/1538-4365/ad2dea
APPENDIX I : 8-LINK 1-DOF K-CHAINS
APPENDIX II : 9-LINK 2-DOF K-CHAINS