
A Project Report on

A HYBRID MACHINE LEARNING APPROACH
TO FIND INVERSIONS OF PLANAR KINEMATIC CHAINS

A Main Project Report submitted in partial fulfillment of the requirements for the award of the degree of

MASTER OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING

Submitted by

VINJAMURI VENKATA KAMESH
(22A91D5818)

Under the esteemed guidance of

Dr. M. V. Rajesh, M.Tech., Ph.D.
Professor of Information Technology

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


ADITYA ENGINEERING COLLEGE (A)
Approved by AICTE, Permanently Affiliated to JNTUK & Accredited by NBA, NAAC with 'A++' Grade
Recognized by UGC under sections 2(f) and 12(B) of the UGC Act 1956
Aditya Nagar, ADB Road, Surampalem – 533437, Kakinada Dist., A.P.
2022 – 2024

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the project work entitled "A HYBRID MACHINE LEARNING APPROACH TO FIND INVERSIONS OF PLANAR KINEMATIC CHAINS" is being submitted by VINJAMURI VENKATA KAMESH (22A91D5818) in partial fulfillment of the requirements for the award of the M.Tech degree in Computer Science and Engineering.

Project Guide                                Head of the Department
Dr. M. V. Rajesh, M.Tech., Ph.D.             Dr. K. Swaroopa, M.Tech., Ph.D.
Professor                                    Professor

External Examiner
DECLARATION

I hereby declare that the project entitled "A HYBRID MACHINE LEARNING APPROACH TO FIND INVERSIONS OF PLANAR KINEMATIC CHAINS" is a genuine project. This work has been submitted to ADITYA ENGINEERING COLLEGE (A), Surampalem, permanently affiliated to JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY KAKINADA, in partial fulfillment of the M.Tech degree. I further declare that this project work has not been submitted, in full or in part, for the award of any degree of this or any other educational institution.

By

VINJAMURI VENKATA KAMESH

(22A91D5818)
ACKNOWLEDGEMENT

It is with immense pleasure that I express my deep gratitude to my Project Guide & Coordinator, Dr. M. V. Rajesh, Professor, who guided and encouraged me at every step of the project work; his valuable moral support and guidance throughout the project helped me to a great extent.

I wish to thank Dr. S. Rama Sree, Professor in CSE and Dean (Academics), for her support and suggestions during my project work.

My deepest thanks to our HOD, Dr. K. Swaroopa, Professor, for inspiring us all the way and for arranging all the facilities.

I owe my sincere gratitude to Dr. M. Sreenivasa Reddy, Principal, for providing great support and for giving me the opportunity of doing the project.

I am grateful to our College Management for providing all the facilities in time for the completion of the project.

Last but not least, I thank the faculty, lab technicians, non-teaching staff and my friends who have directly or indirectly helped and supported me in completing the project on time.
ABSTRACT

In computer networks, Dijkstra's algorithm is applied to find the shortest route from one node to all other nodes. k-NN is a supervised machine learning algorithm that can be used to classify elements of a given population into groups of similar characteristics. In the present work, both algorithms are applied sequentially to a kinematic chain. Initially, the kinematic chain is converted into a graph: any kinematic chain can be shown as a network graph by converting links into nodes and connections into paths. Dijkstra's algorithm is used to find the shortest paths from one node to every other node. The k-NN algorithm is then applied to form clusters of nodes (links), classifying the kinematic links into groups of similar characteristics (the same class), which are eventually called 'inversions'. Results for 8-link 1-dof and 9-link 2-dof k-chains are analyzed and presented. The same concept can be extended to higher numbers of links and degrees of freedom.

LIST OF FIGURES

FIGURE NO   NAME OF THE FIGURE
Fig 2.1     SAMPLE GRAPH FOR DIJKSTRA'S ALGORITHM
Fig 2.2     COMPARISON OF ML-DL-AI
Fig 2.3     TYPES OF MACHINE LEARNING
Fig 2.4     KNN ALGORITHM VISUALIZATION
Fig 4.1     CHAIN NO. 6
Fig 4.2     GRAPH FOR THE K-CHAIN NO. 6
LIST OF ABBREVIATIONS

ABBREVIATION FULL FORM

k-NN k-Nearest Neighbor

k-chain Kinematic chain

DoF/ dof Degree of Freedom

LIST OF TABLES

Table 4.1 Shortest distances from Node 1 to all other nodes in Figure 4.2

Table 4.2 Consolidated report of shortest distances from each node to all nodes

Table 4.3 Shortest distances from each node to all nodes -Chain No. 6

Table 4.4 Sorted shortest distance of Chain no. 6

Table 4.5 k-NN and k-NN(Rev) values for links of Chain No. 6

Table 5.1 Graphs of 8-link 1-dof k-chains

Table 5.2 Graphs of 9-link 2-dof k-chains

Table 5.3 Results of Hybrid algorithm for k-chains of 8-link 1-dof

Table 5.4 Results of Hybrid algorithm for k-chains of 9-link 2-dof

INDEX

S.No Description Page No.

ABSTRACT 5

1 LIST OF FIGURES 6

2 LIST OF ABBREVIATIONS 7

3 LIST OF TABLES 8

4 CHAPTER 1 : LITERATURE SURVEY 10-11

5 CHAPTER 2: INTRODUCTION 11-20

6 CHAPTER 3 : METHODOLOGY 21-22

7 CHAPTER 4: IMPLEMENTATION OF THE METHODOLOGY 23-28

8 CHAPTER 5: RESULTS 29-41

9 CHAPTER 6: CONCLUSIONS & FUTURE SCOPE 42

10 REFERENCES 43-45

APPENDIX I: 8-LINK 1-DOF K-CHAINS 46-47

APPENDIX II: 9-LINK 2-DOF K-CHAINS 48-52

CHAPTER 1

LITERATURE SURVEY

Patel & Rao [1] used the concept of velocity diagrams to identify the distinct inversions of a k-chain; this method also identifies the best inversion. In 1991, Rao A.C. [2] applied the Hamming number technique to find inversions and isomorphism in k-chains; in his method, each link's characteristics are compared with those of the other links. Chu & Cao [3] developed the idea of using a link's adjacent-chain table (ACT) to identify the distinct mechanisms of a k-chain. Sanyal [4] presented a method in which a k-chain is represented using a pseudo-probability scheme; this method is limited to rotary joints only. Rao [5] applied the principles of information theory to finding inversions and rating k-chains. Rao A.C. [6] proposed fuzzy logic to study inversions of k-chains in 2000; fuzzy uncertainty values are compared with a crisp vector in this approach. Kong et al. [7] presented a Hopfield–Tank artificial neural network (ANN) technique to identify isomorphism of mechanism kinematic chains. Mohammad et al. [8] proposed a new parameter, the 'EA Matrix', to find inversions. Bedi and Sanyal [9] used connectivity between the joints of a k-chain to find inversions as well as to detect isomorphism. Marin et al. [10] proposed a novel multivalued neural network that enables a simplified formulation of the graph isomorphism problem. Sanyal [11] proposed a method based on link-joint connectivity, tested successfully on single- and multiple-DoF planar k-chains. Dargar et al. [12] used weighted structural indices, i.e., the extended adjacency link value (EALV), total loop size (TLS) and extended adjacency string (EAS), to identify inversions. Rizvi et al. [13] proposed new parameters, the Link Identity Matrix (LIM), Link Signature (LS) and Chain Signature (CS), to find inversions. Rizvi et al. [14] proposed an algorithm using the absolute sum of the eigenvalues of the inversion adjacency matrix as the parameter deciding the distinct mechanisms, named the 'Link Identification Number'. Shukla and Sanyal [15] proposed 'gradient matrices', based on a gradient analogy, to distinctly denote the structure of each kinematic chain. Rai and Punjabi [16] proposed a method based on the shortest distance matrix and its string to find inversions. Kamesh et al. [17] used an additive approach based on the connectivities of the links through a new concept called the 'Remote Adjacency Influence Table' (RAIT) to find inversions. Ding et al. [18] used the 5th power of the adjacency matrix to find inversions. Kamesh et al. [19] proposed a novel algorithm based on the rigidity concept to find inversions, in which three parameters are defined, namely the Primary Connectivity Index (PCI), Secondary Connectivity Index (SCI) and Net Connectivity Index (NCI). Tagliani et al. [20] proposed a sequential ML methodology for robot inverse kinematics modelling, iterating the model prediction at each joint; the method implements an automatic Denavit–Hartenberg (D-H) parameter formulation code to obtain the forward kinematic (FK) equations required to produce the robot dataset. Amini et al. [21] proposed a new evaluation framework to compare Active Learning (AL) and Design of Experiments (DoE) across different criteria, i.e., data generation, sample efficiency, stability and predictive accuracy of the resulting ML models. Higgins et al. [22] presented a method to select suitable clusters from those available for use on time-series lumbar and pelvis kinematic data. Wang et al. [23] surveyed different ML methods for the analysis of complex data in the fault diagnosis of rotating machinery. Lin et al. [24] developed a new algorithm to automatically derive a CME's kinematic parameters based on machine learning.

However, no author has applied machine learning methods to the solution of mechanical engineering problems such as the conceptual design of mechanisms or the identification of inversions. In the present work, computer science and machine learning algorithms are applied in combination, and the results are verified against the literature.

CHAPTER 2

INTRODUCTION

In the identification of distinct inversions of planar kinematic chains, graph theory concepts have been used for many years for the topological analysis of both planar and geared kinematic chains. In this process, a link (binary/ternary/quaternary/any other) is taken as a node and the connections (joints) are taken as paths between two nodes. The k-NN algorithm is basically utilized to classify different sets of data points into groups of similar characteristics. In this work, the similarity between the characteristic features of nodes is compared; nodes having similar features or parameters with reference to the k-NN algorithm are declared to belong to the same inversion of a k-chain. In the present work, Dijkstra's algorithm and the k-NN algorithm of machine learning are used, respectively, to find the shortest paths (as in computer networks) and to cluster the nodes so as to find the inversions of planar kinematic chains. The applied hybrid algorithm produced the same results as those available in the existing research papers [1-24]. Many researchers have discussed the application of machine learning methods in various mechanical engineering applications [7,10,20-24]. Taking those articles as inspiration, in the present work the k-NN algorithm of supervised machine learning is applied to finding the distinct inversions of k-chains. The fundamental concepts of the two algorithms are explained below.

2.1 DIJKSTRA'S ALGORITHM: A STEP-BY-STEP GUIDE

Dijkstra's algorithm is a popular algorithm used to find the shortest path between a given node (or vertex) and all other nodes in a graph. It works on graphs whose edges have non-negative weights.

ALGORITHM STEPS:

1. Initialization:
   o Set: Create a set S to keep track of the visited nodes. Initially, S is empty.
   o Distance: Assign a distance to each node from the source node.
     ▪ The distance to the source node is 0.
     ▪ The distance to all other nodes is set to infinity, indicating they haven't been reached yet.
   o Previous: Create an array prev to store the previous node on the shortest path from the source node to each other node.

2. Selection:
   o Find Minimum: Find the node in the graph that has the smallest distance value and is not yet in S. Let us call this node u.
   o Add to Set: Add u to the set S.

3. Update Neighbors:
   o Iterate: For each neighbor v of u that is not in S, calculate the tentative distance from the source node to v through u. This is done by adding the distance from the source to u and the weight of the edge from u to v.
   o Update Distance: If the calculated tentative distance is less than the current distance to v, update the distance to v and set prev[v] to u. This means that u is now the previous node on the shortest path to v.

4. Repeat:
   o Continue: Repeat steps 2 and 3 until all nodes have been added to S.

VISUAL REPRESENTATION:

Example:

Consider the following graph:

Fig 2.1: SAMPLE GRAPH FOR DIJKSTRA'S ALGORITHM

If we want to find the shortest path from node A to all other nodes, we follow these steps:

1. Initialization:
   o Set S = {}.
   o Distance(A) = 0, Distance(B) = ∞, Distance(C) = ∞, Distance(D) = ∞.
   o prev array: [A, null, null, null]

2. Selection:
   o u = A

3. Update Neighbors:
   o Distance(B) = 0 + 1 = 1, prev[B] = A
   o Distance(C) = 0 + 4 = 4, prev[C] = A

4. Repeat:
   o ... (continue the process until all nodes are visited)
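The procedure above can be sketched in a few lines of Python. The edge weights A-B = 1 and A-C = 4 match the worked example; the remaining weights (B-C, B-D, C-D) are assumed values, since the sample figure itself is not reproduced here.

```python
import heapq

def dijkstra(graph, source):
    # dist: best-known distance from the source; prev: predecessor on the shortest path
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    visited = set()                      # the set S of finalized nodes
    pq = [(0, source)]                   # min-heap keyed by tentative distance
    while pq:
        d, u = heapq.heappop(pq)         # smallest-distance node not yet in S
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():    # relax every neighbor of u
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

# A-B = 1 and A-C = 4 are from the example; the other weights are assumed.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
dist, prev = dijkstra(graph, "A")
print(dist)   # shortest distance of every node from A
```

Running this returns the distance of every node from A together with the prev map, which encodes the shortest-path tree.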

APPLICATIONS:

• Routing protocols: In computer networks, Dijkstra's algorithm is used to determine the shortest path for data packets to travel.
• GPS systems: To find the shortest route between two points on a map.
• Transportation networks: To optimize routes for delivery vehicles or public transport.

Dijkstra's algorithm is a fundamental algorithm in graph theory and has numerous practical applications in various fields.

2.2 MACHINE LEARNING FUNDAMENTALS

2.2.1 What is Machine Learning?

Machine Learning, often abbreviated as ML, is a subset of artificial intelligence (AI) that focuses on the development of computer algorithms that improve automatically through experience and by the use of data. In simpler terms, machine learning enables computers to learn from data and make decisions or predictions without being explicitly programmed to do so.

At its core, machine learning is all about creating and implementing algorithms that facilitate these decisions and predictions. These algorithms are designed to improve their performance over time, becoming more accurate and effective as they process more data.

In traditional programming, a computer follows a set of predefined instructions to perform a task. In machine learning, however, the computer is given a set of examples (data) and a task to perform, and it is up to the computer to figure out how to accomplish the task based on the examples it has been given.

For instance, if we want a computer to recognize images of cats, we do not provide it with specific instructions on what a cat looks like. Instead, we give it thousands of images of cats and let the machine learning algorithm figure out the common patterns and features that define a cat. Over time, as the algorithm processes more images, it gets better at recognizing cats, even when presented with images it has never seen before.

This ability to learn from data and improve over time makes machine learning incredibly powerful and versatile. It is the driving force behind many of the technological advancements we see today, from voice assistants and recommendation systems to self-driving cars and predictive analytics.

2.2.2 Machine learning vs AI vs deep learning

Machine learning is often confused with artificial intelligence or deep learning. Let us look at how these terms differ from one another.

AI refers to the development of programs that behave intelligently and mimic human intelligence through a set of algorithms. The field focuses on three skills: learning, reasoning, and self-correction to obtain maximum efficiency. AI can refer to either machine learning-based programs or explicitly programmed computer programs.

Machine learning is a subset of AI which uses algorithms that learn from data to make predictions. These predictions can be generated through supervised learning, where algorithms learn patterns from existing data, or unsupervised learning, where they discover general patterns in data. ML models can predict numerical values based on historical data, categorize events as true or false, and cluster data points based on commonalities.

Deep learning, on the other hand, is a subfield of machine learning dealing with algorithms based essentially on multi-layered artificial neural networks (ANNs) inspired by the structure of the human brain.

Unlike conventional machine learning algorithms, deep learning algorithms are less linear, more complex, and hierarchical, capable of learning from enormous amounts of data, and able to produce highly accurate results. Language translation, image recognition, and personalized medicine are some examples of deep learning applications.


FIG 2.2: COMPARISON OF ML-DL-AI

Ref : https://www.datacamp.com/blog/what-is-machine-learning

2.2.3 The Importance of Machine Learning

In the 21st century, data is the new oil, and machine learning is the engine that powers this data-driven world. It is a critical technology in today's digital age, and its importance cannot be overstated. This is reflected in the field's projected growth, with the US Bureau of Labor Statistics predicting 21% growth in related jobs between 2021 and 2031.

2.2.4 Types of Machine Learning

Machine learning can be broadly classified into three types based on the nature of the learning system and the data available: supervised learning, unsupervised learning, and reinforcement learning.

FIG 2.3: TYPES OF MACHINE LEARNING

2.3 K-NEAREST NEIGHBORS (K-NN)

k-Nearest Neighbors (k-NN) is a simple yet effective supervised machine learning algorithm used for both classification and regression tasks. It operates on the principle that points that are close to each other in feature space are likely to belong to the same class.

HOW K-NN WORKS:

1. Data Preprocessing:
   o Normalization: If the features have different scales, normalize them so that they contribute equally to the distance calculation.
   o Handling Missing Values: Impute missing values using techniques like the mean, median, or mode.

2. Choose k:
   o Select an appropriate value for k, the number of neighbors to consider. A smaller k can make the model more sensitive to noise, while a larger k might make it less sensitive to local fluctuations.

3. Calculate Distances:
   o For a new data point, calculate its distance to all training points. Common distance metrics include the Euclidean, Manhattan, and Minkowski distances.

4. Find the k Nearest Neighbors:
   o Identify the k nearest neighbors to the new data point based on the calculated distances.

5. Make a Prediction:
   o Classification: Determine the majority class among the k nearest neighbors; the new data point is assigned to this class.
   o Regression: Calculate the average or weighted average of the target values of the k nearest neighbors; this average is the predicted value for the new data point.

VISUAL REPRESENTATION:

Fig 2.4: KNN ALGORITHM VISUALIZATION

Example:

Consider a dataset of two classes, represented by red and blue points. To classify a new point (represented by a green star), we:

1. Choose a value for k (e.g., k = 3).
2. Calculate the distance between the green point and all training points.
3. Find the 3 nearest neighbors to the green point.
4. Since the majority of these neighbors are blue, the green point is classified as blue.
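This majority-vote classification can be sketched minimally in Python; the coordinates below are hypothetical red and blue points standing in for the figure.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of ((x, y), label) pairs; query: an (x, y) point
    # Sort the training points by Euclidean distance to the query point
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]    # majority class among the k nearest

# Hypothetical dataset: two red points and three blue points
train = [((1.0, 1.0), "red"), ((1.5, 2.0), "red"),
         ((5.0, 5.0), "blue"), ((5.5, 4.5), "blue"), ((6.0, 5.5), "blue")]
print(knn_classify(train, (5.0, 5.2), k=3))   # → blue
```

A query near the blue cluster collects three blue neighbors and is classified blue; a query near the red pair would be classified red.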

ADVANTAGES OF K-NN:

• Simple to understand and implement.
• No training phase is required, making it suitable for real-time applications.
• Can be effective for complex datasets with non-linear relationships.

DISADVANTAGES OF K-NN:

• Can be computationally expensive for large datasets.
• Sensitive to the choice of k.
• Susceptible to the curse of dimensionality, where distances between points become less meaningful in high-dimensional spaces.

APPLICATIONS OF K-NN:

• Image recognition: classifying images based on their pixel values.
• Recommendation systems: suggesting items to users based on their preferences.
• Pattern recognition: identifying patterns in data.
• Medical diagnosis: predicting diseases based on patient symptoms and test results.

k-NN is a versatile algorithm that can be applied to a wide range of problems. By carefully considering the choice of k and the distance metric, it can provide accurate and reliable predictions.

CHAPTER 3

METHODOLOGY

In the present work, a hybrid algorithm is proposed in which two algorithms, i.e., Dijkstra's algorithm and k-NN, are applied sequentially on a k-chain graph to find the distinct inversions of a planar k-chain.

3.1 DIJKSTRA'S ALGORITHM & K-NN ALGORITHM APPLIED TO KINEMATIC CHAINS

Dijkstra's algorithm is primarily used to find the shortest route from one node to all other nodes in a computer network. The network is first represented as a graph; in this graph, the distance from one node to another can then be found by applying Dijkstra's algorithm.

From the literature [1-24], many approaches have evolved for the identification of inversions of planar kinematic chains using graph theory. Hence, it is proposed to apply Dijkstra's algorithm to k-chains using a graph. A kinematic chain consists of links and joints; in graph-theory terms, the links are transformed into nodes and the joints, as connective objects, are transformed into paths (edges).
All the 8-link 1-dof k-chains [11] and 9-link 2-dof k-chains [16] were processed with Dijkstra's algorithm and the k-NN algorithm. The various steps of the algorithm are explained below.

Step 1: The kinematic chain is converted into a graph.

Step 2: Nodes and individual paths are marked separately. The distance between two nodes is taken as a linear expression (addition / multiplication / average) of the degrees of the two nodes connected by the path.

Step 3: Dijkstra's algorithm is applied to a single node.

Step 4: All the shortest paths from the first node to all other nodes are identified using the algorithm.

Step 5: The procedure in Step 4 is repeated for all the nodes.

Step 6: A consolidated report is prepared, with each node and its distances represented in the form of a matrix.

Step 7: The matrix values in each column (distances) are sorted from low to high.

Step 8: The k-NN algorithm is applied by taking k = 5 (an odd value is generally preferred for better decision making). Clustering is done for each node (link) by summing the nearest 5 distances. In the present work, k-NN (Rev) is also proposed for better decision making in the clustering process.

Step 9: After the clustering process is completed, the k-NN and k-NN (Rev) values of all nodes (links) are compared to find the similar groups, which are identified as 'distinct mechanisms' or 'inversions'.
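The nine steps can be condensed into a short Python sketch. The product form of the Step 2 edge weight (degree × degree, as used in the worked example of Chapter 4) is assumed here, and the Watt six-bar chain serves as a small hypothetical check, with k reduced to 3 because that chain has only six links.

```python
import heapq
from collections import defaultdict

def shortest_distances(adj):
    # adj: node -> set of adjacent nodes (the k-chain graph, Steps 1-2)
    # Edge weight = product of the degrees of its two end nodes (product form of Step 2)
    weight = {(u, v): len(adj[u]) * len(adj[v]) for u in adj for v in adj[u]}
    dist = {}
    for src in adj:                          # Steps 3-5: run Dijkstra from every node
        d = {n: float("inf") for n in adj}
        d[src] = 0
        pq, seen = [(0, src)], set()
        while pq:
            du, u = heapq.heappop(pq)
            if u in seen:
                continue
            seen.add(u)
            for v in adj[u]:
                if du + weight[(u, v)] < d[v]:
                    d[v] = du + weight[(u, v)]
                    heapq.heappush(pq, (d[v], v))
        dist[src] = d                        # Step 6: one row of the consolidated matrix
    return dist

def inversions(adj, k=5):
    dist = shortest_distances(adj)
    groups = defaultdict(list)
    for node in adj:
        column = sorted(dist[node].values())          # Step 7: sort each column
        knn = sum(column[1:k + 1])                    # Step 8: k nearest (skip the 0 to self)
        knn_rev = sum(column[-k:])                    #         k farthest, i.e. k-NN(Rev)
        groups[(knn, knn_rev)].append(node)
    return list(groups.values())                      # Step 9: similar groups = inversions

# Watt six-bar chain: links 1 and 4 are ternary, the rest binary
watt = {1: {2, 4, 6}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3, 5}, 5: {4, 6}, 6: {1, 5}}
print(inversions(watt, k=3))   # → [[1, 4], [2, 3, 5, 6]]
```

The sketch groups the two ternary links together and the four binary links together, consistent with the two distinct inversions of the Watt chain.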

CHAPTER 4

IMPLEMENTATION OF THE METHODOLOGY

This section explains the new parameter proposed and the sequential application of the two algorithms implementing the methodology.

4.1 NEW PARAMETER PROPOSED – K-NN(REV)

In comparing the link-joint characteristics of a k-chain, it should be noted that every link-joint datum needs to be analyzed. After calculating only 'k' values in the string of shortest distances, 'n-k' distances would be left out of the analysis. Hence, in the present work, a new parameter named 'k-NN(Rev)' is proposed, in which 'k' values are considered from the other end of the string. In other words, k-NN considers the 'k' nearest values while k-NN(Rev) considers the 'k' farthest links. Together, these two values make the analysis much easier for comparing and forming clusters.

4.2 SEQUENTIAL APPLICATION OF TWO ALGORITHMS

The application of Dijkstra's algorithm and the k-NN algorithm is explained in detail by taking one example from Appendix II. The steps of the inversion-identification procedure are illustrated in the example below.

All the k-chains are presented in Appendices I & II. Let us consider one kinematic chain from Appendix II to study and analyze: Chain No. 6 is taken as a test specimen to check the application of Dijkstra's algorithm. Chain No. 6 is shown in Figure 4.1 below.

Step 1: The kinematic chain in Figure 4.1 is converted into a graph. All the links are shown as nodes and all the joints are shown as paths between the nodes. The graph is shown in Figure 4.2.

FIG. 4.1. CHAIN NO. 6
FIG. 4.2. GRAPH FOR THE K-CHAIN NO. 6

Step 2: In the graph, the distance from one node to an adjacent node is taken as the product of the incidence values of both vertices. For example, for the path from node 1 to node 3, the incidence values of node 1 and node 3 are 3 and 2, respectively; hence, the distance between node 1 and node 3 is 2*3 = 6. In an analogous way, all the distances are calculated.

Step 3: Dijkstra's algorithm is now applied to find the shortest paths from Node 1 to all other nodes.

Step 4: After applying the algorithm, the shortest distances from Node 1 to all other nodes are found. The results are shown in Table 4.1.

Table 4.1. Shortest distances from Node 1 to all other nodes in Figure 4.2

Node 1 2 3 4 5 6 7 8 9

1 0 16 6 15 9 12 6 10 15

Steps 5 & 6: After applying Step 4 to every node, finding the shortest paths from that node to all other nodes in the network, a consolidated report is prepared. The consolidated report is shown in Table 4.2.

Table 4.2. Consolidated report of shortest distances from each node to all nodes

Link 1 2 3 4 5 6 7 8 9

1 0 16 6 15 9 12 6 10 15

2 16 0 15 6 12 9 10 6 15

3 6 15 0 21 15 6 12 16 12

4 15 6 21 0 6 15 16 12 12

5 9 12 15 6 0 12 15 18 6

6 12 9 6 15 12 0 18 15 6

7 6 10 12 16 15 18 0 4 21

8 10 6 16 12 18 15 4 0 21

9 15 15 12 12 6 6 21 21 0

Step 7: For applying the k-NN algorithm, the matrix values are taken. The shortest distances from Step 6 are presented in Table 4.3.

Table 4.3. Shortest distances from each node to all nodes - Chain No. 6

L1 L2 L3 L4 L5 L6 L7 L8 L9

0 16 6 15 9 12 6 10 15

16 0 15 6 12 9 10 6 15

6 15 0 21 15 6 12 16 12

15 6 21 0 6 15 16 12 12

9 12 15 6 0 12 15 18 6

12 9 6 15 12 0 18 15 6

6 10 12 16 15 18 0 4 21

10 6 16 12 18 15 4 0 21

15 15 12 12 6 6 21 21 0

The sorted values are shown in Table 4.4.

Table 4.4. Sorted shortest distances of Chain No. 6

L1 L2 L3 L4 L5 L6 L7 L8 L9

0 0 0 0 0 0 0 0 0

6 6 6 6 6 6 4 4 6

6 6 6 6 6 6 6 6 6

9 9 12 12 9 9 10 10 12

10 10 12 12 12 12 12 12 12

12 12 15 15 12 12 15 15 15

15 15 15 15 15 15 16 16 15

15 15 16 16 15 15 18 18 21

16 16 21 21 18 18 21 21 21

Step 8: Clustering of nodes is done by calculating the sum of the 5 nearest node distances obtained in the earlier step.

For Node 1: k-NN value = 6+6+9+10+12 = 43
For Node 3: k-NN value = 6+6+12+12+15 = 51

Similarly, the k-NN values are found for all the nodes:

Link:      L1 L2 L3 L4 L5 L6 L7 L8 L9
k-NN:      43 43 51 51 45 45 47 47 51
k-NN(Rev): 68 68 79 79 72 72 82 82 84

For k-NN (Rev), the sum of the distances of the 5 farthest links is calculated.

For Node 1: k-NN (Rev) value = 16+15+15+12+10 = 68
For Node 3: k-NN (Rev) value = 21+16+15+15+12 = 79

Similarly, the k-NN (Rev) values are found for all the nodes.

Step 9: After clustering of the k-NN and k-NN (Rev) values, the node characteristics are analyzed by comparing the values with each other.
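The two sums for Node 1 can be checked directly from its sorted column of Table 4.2 distances:

```python
col = sorted([0, 16, 6, 15, 9, 12, 6, 10, 15])   # row for Node 1 in Table 4.2
k = 5
knn = sum(col[1:k + 1])       # five nearest distances, skipping the 0 to itself
knn_rev = sum(col[-k:])       # five farthest distances
print(knn, knn_rev)           # → 43 68
```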

4.3 DECISION PARAMETER TO FIND SAME INVERSION

All the links with the same k-NN and k-NN (Rev) values are said to have similar characteristics, i.e., they belong to the same inversion. In comparing any two links for distinctness, the strings of values that form the k-NN and the k-NN (Rev) values are observed. In the present case, the results are shown in Table 4.5.

Table 4.5. k-NN and k-NN(Rev) values for links of Chain No. 6

L1 L2 L3 L4 L5 L6 L7 L8 L9

0 0 0 0 0 0 0 0 0

6 6 6 6 6 6 4 4 6

6 6 6 6 6 6 6 6 6

9 9 12 12 9 9 10 10 12

10 10 12 12 12 12 12 12 12

12 12 15 15 12 12 15 15 15

15 15 15 15 15 15 16 16 15

15 15 16 16 15 15 18 18 21

16 16 21 21 18 18 21 21 21

Link:      L1 L2 L3 L4 L5 L6 L7 L8 L9
k-NN:      43 43 51 51 45 45 47 47 51
k-NN(Rev): 68 68 79 79 72 72 82 82 84

In the above example, the distinct mechanisms are: (1,2), (3,4), (5,6), (7,8), (9). Hence, the number of inversions of this kinematic chain is 5. In an analogous way, all the distinct mechanisms and the number of inversions of each kinematic chain are found.
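Grouping the rows of Table 4.5 by the (k-NN, k-NN(Rev)) pair reproduces this count directly:

```python
from collections import defaultdict

knn     = [43, 43, 51, 51, 45, 45, 47, 47, 51]   # k-NN row of Table 4.5
knn_rev = [68, 68, 79, 79, 72, 72, 82, 82, 84]   # k-NN(Rev) row of Table 4.5

groups = defaultdict(list)
for link, pair in enumerate(zip(knn, knn_rev), start=1):
    groups[pair].append(f"L{link}")
print(list(groups.values()))   # → [['L1', 'L2'], ['L3', 'L4'], ['L5', 'L6'], ['L7', 'L8'], ['L9']]
print(len(groups))             # → 5
```

Note that L9 shares the k-NN value 51 with L3 and L4 but differs in k-NN(Rev), which is exactly why both parameters are needed.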

Here, the 8-link 1-dof distinct k-chains taken from [11] and the 9-link 2-dof k-chains taken from [16] are redrawn. The results are shown in Table 5.3 & Table 5.4 of the next section.

CHAPTER 5

RESULTS

Dijkstra's algorithm is applied to find the shortest paths from one node to all other nodes, and clustering by the k-NN algorithm is then applied, on all the kinematic chains of 8-link 1-dof and 9-link 2-dof. The associated graphs are shown in Table 5.1 & Table 5.2. The results, i.e., the inversions, are shown in Table 5.3 & Table 5.4.

Table 5.1. Graphs of 8-link 1-dof k-chains

Table 5.2. Graphs of 9-link 2-dof k-chains

Table 5.3. Results of Hybrid algorithm for k-chains of 8-link 1-dof

Chain No. | Parameter | L1 L2 L3 L4 L5 L6 L7 L8 | Distinct Groups | No. of Inversions

1 K-NN 37 44 44 37 37 44 44 37 (L1,L4,L5,L8),
2
K-NN REV 53 71 71 53 53 71 71 53 (L2,L3,L6,L7)

2 K-NN 37 40 40 37 37 40 40 37 (L1,L4,L5,L8),
2
K-NN REV 49 63 63 49 49 63 63 49 (L2,L3,L6,L7)

3 K-NN 37 40 39 35 41 36 42 38 (L1),(L2),(L3),

(L4),(L5),(L6), 8
K-NN REV 49 61 61 48 56 52 64 49
(L7),(L8)

4 K-NN 42 35 39 39 35 38 38 42 (L1,L8),(L2,L5),
4
K-NN REV 62 50 59 59 50 53 53 62 (L3,L4),(L6,L7)

5 K-NN 42 37 42 44 44 42 37 42 (L1,L8),(L2,L7)
4
K-NN REV 64 58 61 67 67 61 58 64 ,(L3,L6),(L4,L5)

6 K-NN 41 35 39 39 35 41 36 40 (L1,L6),(L2,L5),
5
K-NN REV 60 52 59 59 52 60 56 50 (L3,L4),(L7),(L8)

7 K-NN 41 36 40 35 40 36 38 41 (L1),(L2,L6),

(L3,L5),(L4), 6
K-NN REV 62 52 56 50 56 52 49 58
(L7),(L8)

8 K-NN 42 37 37 42 37 37 42 42 (L1,L4,L7,L8),
2
K-NN REV 64 54 54 64 54 54 64 64 (L2,L3,L5,L6)

9 K-NN 36 36 41 36 41 41 36 41 (L1,L2,L4,L7),
2
K-NN REV 48 48 58 48 58 58 48 58 (L3,L5,L6,L8)

10 K-NN 35 43 36 40 36 43 43 43 (L1), (L3,L5),(L4)


4
K-NN REV 50 59 56 56 56 59 59 59 (L2,L6,L7,L8),

11 K-NN 36 40 43 34 44 38 44 44 (L1),(L2),(L3),(L4),
7
K-NN REV 48 63 60 44 61 54 61 57 (L5,L7),(L6),(L8)

12 K-NN 30 44 43 35 43 44 36 43 (L1),(L2),(L3,L8),
7
K-NN REV 56 67 61 46 64 67 52 61 (L4),(L5),(L6),(L7)

13 K-NN 38 46 43 35 43 40 36 44 (L1),(L2),(L3),(L4),
8
K-NN REV 51 69 64 44 62 63 52 61 (L5),(L6),(L7),(L8)

14 K-NN 36 44 44 36 44 44 36 42 (L1,L7),(L2,L6),
5
K-NN REV 54 71 64 46 64 71 54 60 (L3,L5),(L4),(L8)

15 K-NN 44 36 44 44 36 44 44 44 (L1,L3,L4,L6,L7,L8),
2
K-NN REV 66 44 66 66 44 66 66 66 (L2,L5)

16 K-NN 34 44 44 34 44 44 48 48 (L1,L4),(L2,L3,L5,L6),
3
K-NN REV 44 62 62 44 62 62 60 60 (L7,L8)

TOTAL 71

P a g e 36 | 55
Table 5.4. Results of the hybrid algorithm for 9-link 2-dof k-chains

Chain  Parameter  L1   L2   L3   L4   L5   L6   L7   L8   L9    Distinct groups                                  No. of inversions
  1    K-NN       45   45   48   48   51   49   40   44   50    (L1,L2),(L3,L4),(L5),(L6),(L7),(L8),(L9)         7
       K-NN REV   75   75   73   73   90   71   68   84   83
  2    K-NN       45   57   54   57   52   48   45   46   50    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   72   83   95  100   81   66   88   91   86
  3    K-NN       57   57   54   54   49   49   41   44   50    (L1,L2),(L3,L4),(L5,L6),(L7),(L8),(L9)           6
       K-NN REV   87   87   83   83   83   83   68   64   82
  4    K-NN       49   49   48   48   50   50   47   47   66    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   75   75   72   72   97   97   86   86   88
  5    K-NN       43   49   43   49   50   47   50   47   51    (L1,L3),(L2,L4),(L5,L7),(L6,L8),(L9)             5
       K-NN REV   71   71   71   71   88   90   88   90   74
  6    K-NN       43   43   51   51   45   45   47   47   51    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   68   68   79   79   72   72   82   82   84
  7    K-NN       56   56   56   56   56   56   44   44   64    (L1,L2,L3,L4,L5,L6),(L7,L8),(L9)                 3
       K-NN REV   88   88   88   88   88   88   60   60   80
  8    K-NN       48   48   48   48   45   45   42   42   48    (L1,L2,L3,L4),(L5,L6),(L7,L8),(L9)               4
       K-NN REV   78   78   78   78   72   72   72   72   72
  9    K-NN       49   43   49   43   50   51   50   51   51    (L1,L3),(L2,L4),(L5,L7),(L6,L8),(L9)             5
       K-NN REV   77   80   77   80   98   99   98   99   74
 10    K-NN       43   43   54   54   51   51   44   44   44    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   71   71   87   87   79   79   79   79   80
 11    K-NN       51   51   51   51   48   48   42   42   48    (L1,L2,L3,L4),(L5,L6),(L7,L8),(L9)               4
       K-NN REV   87   87   87   87   81   81   81   81   72
 12    K-NN       54   54   54   54   42   42   44   44   46    (L1,L2,L3,L4),(L5,L6),(L7,L8),(L9)               4
       K-NN REV   82   82   82   82   84   84   76   76   76
 13    K-NN       51   45   48   54   43   51   47   44   44    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   76   76   72   97   69   78   85   89   80
 14    K-NN       49   49   54   54   43   43   47   47   51    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   77   77  101  101   74   74   92   92   84
 15    K-NN       51   49   43   54   41   50   47   47   47    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   75   75   72   88   67   87   83   78   81
 16    K-NN       48   48   54   54   43   43   47   47   51    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   82   82   93   93   73   73   88   88   84
 17    K-NN       45   51   43   51   48   51   40   44   47    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   75   75   71   82   73   79   68   76   77
 18    K-NN       46   52   52   54   52   46   46   46   50    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   76   82   98   96   70   84   94   94   90
 19    K-NN       54   54   49   49   54   57   41   52   48    (L1,L2),(L3,L4),(L5),(L6),(L7),(L8),(L9)         7
       K-NN REV   87   87   84   84   91   97   72   80   66
 20    K-NN       48   46   52   54   54   48   40   46   54    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   68   80   94   90   82   74   76   86   88
 21    K-NN       57   57   50   44   57   45   50   46   45    (L1,L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)       8
       K-NN REV   88   88   85   64   79   68   84   87   85
 22    K-NN       54   54   44   44   54   48   40   46   54    (L1,L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)       8
       K-NN REV   82   82   84   68   78   72   72   82   86
 23    K-NN       43   43   48   48   47   47   42   51   51    (L1,L2),(L3,L4),(L5,L6),(L7),(L8),(L9)           6
       K-NN REV   71   71   73   73   82   82   72   84   72
 24    K-NN       56   56   56   56   48   48   52   52   52    (L1,L2,L3,L4),(L5,L6),(L7,L8),(L9)               4
       K-NN REV   92   92   92   92   68   68   88   88   92
 25    K-NN       44   44   64   64   56   56   52   52   52    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   64   64   84   84   88   88   84   84   92
 26    K-NN       45   45   57   57   54   54   49   49   44    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   73   73   83   83   87   87   88   88   60
 27    K-NN       43   43   51   51   44   44   45   57   44    (L1,L2),(L3,L4),(L5,L6),(L7),(L8),(L9)           6
       K-NN REV   72   72   81   81   79   79   82   79   83
 28    K-NN       51   51   42   48   48   49   43   50   51    (L1,L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)       8
       K-NN REV   90   90   85   82   73   76   76   97   95
 29    K-NN       60   60   52   52   62   62   47   54   70    (L1,L2),(L3,L4),(L5,L6),(L7),(L8),(L9)           6
       K-NN REV  100  100  106  106  113  113  104   67  133
 30    K-NN       56   56   57   52   54   52   45   48   64    (L1,L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)       8
       K-NN REV  112  112   83  120  116   91   94   76  132
 31    K-NN       54   54   42   48   44   44   48   50   54    (L1,L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)       8
       K-NN REV   84   84   84   76   78   80   78   90   90
 32    K-NN       48   48   54   54   48   48   47   47   47    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   87   87   97   97   72   72   94   94   98
 33    K-NN       43   47   47   41   47   47   43   57   51    (L1,L7),(L2,L3,L5,L6),(L4),(L7),(L8),(L9)        6
       K-NN REV   78   81   81   74   81   81   78   75   81
 34    K-NN       46   46   50   50   60   60   52   54   68    (L1,L2),(L3,L4),(L5,L6),(L7),(L8),(L9)           6
       K-NN REV   88   88  112  112  118  118   78  112  138
 35    K-NN       48   45   51   57   54   49   50   49   49    (L1),(L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)     9
       K-NN REV   66   80   75   88   95   92   88   95   97
 36    K-NN       54   54   44   48   46   48   50   46   46    (L1,L2),(L3),(L4),(L5),(L6),(L7),(L8),(L9)       8
       K-NN REV   90   90   88   72   74   78   88   90   94
 37    K-NN       48   48   54   54   49   49   47   47   47    (L1,L2),(L3,L4),(L5,L6),(L7,L9),(L8)             5
       K-NN REV   76   76  105  105   85   85   98   98   98
 38    K-NN       44   44   50   50   54   54   48   48   52    (L1,L2),(L3,L4),(L5,L6),(L7,L8),(L9)             5
       K-NN REV   80   80   94   94   92   92   82   82   72
 39    K-NN       56   56   62   62   50   56   48   44   70    (L1,L2),(L3,L4),(L5),(L6),(L7),(L8),(L9)         7
       K-NN REV   92   92  110  110   64   96  102  108  130
 40    K-NN       54   54   56   56   54   54   48   44   64    (L1,L2),(L3,L4),(L5),(L6),(L7),(L8),(L9)         7
       K-NN REV  112  112  110  110   94   82   74   96  130

TOTAL                                                                                                           254
CHAPTER 6

CONCLUSION AND FUTURE SCOPE

The proposed algorithm, based on a supervised machine-learning approach, is straightforward to implement. It efficiently differentiates the characteristics of each kinematic link, so that the distinct mechanisms (inversions) of a chain are identified without difficulty: 71 distinct inversions for the sixteen 8-link 1-dof chains (Table 5.3) and 254 for the forty 9-link 2-dof chains (Table 5.4).

The method can be extended to chains with more links and higher degrees of freedom, e.g., 10-link 1-DoF and 10-link 3-DoF chains. The hybrid algorithm can be programmed easily in any programming language and evaluated with little effort.
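The grouping step behind the "Distinct groups" column of Tables 5.3 and 5.4 can be sketched in a few lines. This is a minimal illustration, not the project's code: it assumes the per-link K-NN and K-NN REV scores have already been computed, and the function name `distinct_inversions` is ours.

```python
from collections import defaultdict

def distinct_inversions(knn, knn_rev):
    """Group links that share the same (K-NN, K-NN REV) value pair.

    Links with identical score pairs are treated as equivalent, so
    each group corresponds to one distinct inversion of the chain.
    """
    groups = defaultdict(list)
    for link, pair in enumerate(zip(knn, knn_rev), start=1):
        groups[pair].append(f"L{link}")
    return list(groups.values())

# Chain 1 of Table 5.3 (8-link 1-dof):
knn     = [37, 44, 44, 37, 37, 44, 44, 37]
knn_rev = [53, 71, 71, 53, 53, 71, 71, 53]
groups = distinct_inversions(knn, knn_rev)
print(groups)       # [['L1', 'L4', 'L5', 'L8'], ['L2', 'L3', 'L6', 'L7']]
print(len(groups))  # 2 distinct inversions
```

The number of equivalence classes is the number of distinct inversions; here chain 1 yields the two groups reported in Table 5.3.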

REFERENCES

1. L. K. Patel, A. C. Rao. A method for detection of distinct mechanisms of a planar kinematic chain. Transactions of the Canadian Society for Mechanical Engineering 12(1):15-20 (1988). DOI: 10.1139/tcsme-1988-0003
2. A. C. Rao, D. Varada Raju. Application of the Hamming number technique to detect isomorphism among kinematic chains and inversions. Mechanism and Machine Theory 26(1):55-75 (1991).
3. J. K. Chu, W. Q. Cao. Identification of isomorphism among kinematic chains and inversions using links adjacent-chain-table. Mechanism and Machine Theory 29(1):53-58 (1993).
4. S. Sanyal, M. Choubey, A. C. Rao. A pseudo probabilistic approach to detect distinct inversions of kinematic chains. Transactions of the Canadian Society for Mechanical Engineering 21(2):85-96 (1997). DOI: 10.1139/tcsme-1997-0006
5. A. C. Rao. Topology based rating of kinematic chains and inversions using information theory. Mechanism and Machine Theory 33(7):1055-1062 (1998). DOI: 10.1016/S0094-114X(97)00061-X
6. A. C. Rao. Application of fuzzy logic for the study of isomorphism, inversions, symmetry, parallelism, and mobility in kinematic chains. Mechanism and Machine Theory 35:1103-1116 (2000).
7. F. G. Kong, Q. Li, W. Zhang. An artificial neural network approach to mechanism kinematic chain isomorphism identification. Mechanism and Machine Theory 34(2):271-283 (1999). DOI: 10.1016/S0094-114X(98)00035-4
8. A. Mohammad, R. A. Khan, V. P. Agrawal. Identification of kinematic chains and distinct mechanisms using extended adjacency matrix. Proc. IMechE, Part C: Journal of Mechanical Engineering Science 221:81-88 (2007).
9. G. S. Bedi, S. Sanyal. Joint connectivity: a new approach to detection of isomorphism and inversions of planar kinematic chains. Journal of the Institution of Engineers (India): Mechanical Engineering Division 90:23-26 (2010).
10. G. G. Marin, D. L. Rodriguez, E. M. Casermeiro. A new multivalued neural network for isomorphism identification of kinematic chains. Journal of Computing and Information Science in Engineering 10(1) (2010). DOI: 10.1115/1.3330427
11. S. Sanyal. Structural identification of distinct inversions of planar kinematic chains. IIUM Engineering Journal 12(3):85-92 (2011). DOI: 10.31436/iiumej.v12i3.144
12. A. Dargar, A. Hasan, R. A. Khan. Some new codes for isomorphism identification among kinematic chains and their inversions. International Journal of Mechanisms and Robotic Systems 1(1):49-67 (2012). DOI: 10.1504/IJMRS.2013.051290
13. S. S. H. Rizvi, A. Hasan, R. A. Khan. A new method for distinct inversions and isomorphism detection in kinematic chains. International Journal of Mechanisms and Robotic Systems 3(1):48-59 (2016). DOI: 10.1504/IJMRS.2016.077039
14. S. S. H. Rizvi, A. Hasan, R. A. Khan. An efficient algorithm for distinct inversions and isomorphism detection in kinematic chains. Perspectives in Science 8:251-253 (2016). DOI: 10.1016/j.pisc.2016.03.022
15. A. K. Shukla, S. Sanyal. Gradient method for identification of isomorphism of planar kinematic chains. Australian Journal of Mechanical Engineering 18(3):1-18 (2017). DOI: 10.1080/14484846.2017.1374815
16. R. K. Rai, S. Punjabi. An elusive method to identify isomorphism and inversions of kinematic chains and mechanisms. Indian Journal of Science and Technology 10(18):1-13 (2017). DOI: 10.17485/ijst/2017/v10i18/111320
17. V. V. Kamesh, D. V. S. S. S. V. Prasad, P. S. Ranjit, Bh. Varaprasad, V. Srinivasa Rao. An additive approach to find distinct mechanisms of a planar kinematic chain. Materials Today: Proceedings 46(1):11054-11060 (2021). DOI: 10.1016/j.matpr.2021.02.162
18. Chen, Ding, Hong, Cui. Structural synthesis of plane kinematic chain inversions without detecting isomorphism. Mechanical Sciences 12(2):1061-1071 (2021).
19. V. V. Kamesh, D. V. S. S. S. V. Prasad, P. S. Ranjit, Bh. V. Prasad, V. S. Rao. A rigidity approach to find distinct mechanisms of a planar kinematic chain. Materials Today: Proceedings 43(1):388-394 (2021). DOI: 10.1016/j.matpr.2020.11.684
20. F. L. Tagliani, N. Pellegrini, F. Aggogeri. Machine learning sequential methodology for robot inverse kinematic modelling. Applied Sciences 12(19):9417 (2022). DOI: 10.3390/app12199417
21. M. Amini, K. Sharifani, A. Rahmani. Machine learning model towards evaluating data gathering methods in manufacturing and mechanical engineering. IJASER 15(4) (2023).
22. S. Higgins, S. Dutta, R. S. Kakar. Machine learning for lumbar and pelvis kinematics clustering. Computer Methods in Biomechanics and Biomedical Engineering 27(3):1-14 (2023). DOI: 10.1080/10255842.2023.2241593
23. Q. Wang, R. Huang, J. Xiong, X. Dong. A survey on fault diagnosis of rotating machinery based on machine learning. Measurement Science and Technology 35(10) (2024). DOI: 10.1088/1361-6501/ad6203
24. R. Lin, Y. Yang, F. Shen, G. Pi, Y. Li. An algorithm for the determination of coronal mass ejection kinematic parameters based on machine learning. The Astrophysical Journal Supplement Series 271(2):59 (2024). DOI: 10.3847/1538-4365/ad2dea

APPENDIX I : 8-LINK 1-DOF K-CHAINS

APPENDIX II : 9-LINK 2-DOF K-CHAINS

