
Name of Student: Bhupender Singh

Class: B. Tech. (CSE) 6th Sem

Roll no.: 221261015036

Institution: Mata Raj Kaur Institute of Engineering and Technology

Subject: Compiler Design

Session: 2024-2025
Index
Experiment 1: Introduction to compiler design ........................... 1
Experiment 2: Token calculation ......................................... 2
Experiment 3: Top-down parsing to check balanced parentheses ............ 3-4
Experiment 4: LL(1) parsing ............................................. 5-7
Experiment 5: Handling basic operator precedence ........................ 8-10
Experiment 6: LALR parsing in bottom-up parsing ......................... 11-13
Experiment 7: CLR parser in bottom-up parsing ........................... 14-16
EXPERIMENT: 1 PAGE NO.: 1
AIM: Introduction to compiler design

Theory:

Introduction to Compiler Design

Compiler Design is a branch of computer science that focuses on developing compilers — programs
that translate code written in a high-level programming language (like C, Java, or Python) into
machine code that a computer can execute.

Compilers play a crucial role in software development by bridging the gap between human-readable
code and machine-executable instructions. The process involves analysing, optimizing, and
converting the source code efficiently and correctly.

Why Study Compiler Design?

• Helps understand how programming languages work internally.

• Enhances knowledge of data structures, algorithms, automata theory, and computer architecture.

• Useful in creating new programming languages, interpreters, and code analysers.

• Builds strong fundamentals for fields like language processing, AI, cybersecurity, and
operating systems.

Basic Phases of a Compiler

1. Lexical Analysis – Breaks the source code into tokens.

2. Syntax Analysis – Checks the grammatical structure using parsing.

3. Semantic Analysis – Validates the meaning of statements.

4. Intermediate Code Generation – Translates code into an intermediate form.

5. Code Optimization – Improves the efficiency of the intermediate code.

6. Code Generation – Converts the optimized code into machine code.

7. Code Linking and Assembly – Finalizes the executable code.
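The phases above can be illustrated end-to-end on a single statement. The following is a minimal sketch (an illustration only, not a real compiler; the names `token_spec`, `tree`, `t1`, `t2` are made up for this example) showing lexical analysis, a token-level syntax check, and intermediate code generation for `x = 10 + y`:

```python
import re

# Minimal sketch of the first compiler phases on the statement "x = 10 + y".
# All names here (token_spec, tree, t1, t2) are illustrative, not a real API.
source = "x = 10 + y"

# 1. Lexical Analysis: break the source into (kind, value) tokens
token_spec = r"(?P<ID>[A-Za-z_]\w*)|(?P<NUM>\d+)|(?P<OP>[+=])"
tokens = [(m.lastgroup, m.group()) for m in re.finditer(token_spec, source)]

# 2. Syntax Analysis: shape-check "ID = NUM + ID" and build a tiny parse tree
target, _, left, _, right = tokens
tree = ('assign', target[1], ('add', left[1], right[1]))

# 4. Intermediate Code Generation: emit three-address code
t1 = f"t1 = {tree[2][1]} + {tree[2][2]}"
t2 = f"{tree[1]} = t1"
print(t1)  # t1 = 10 + y
print(t2)  # x = t1
```

A real compiler would of course build the tree with a parser and run semantic checks and optimization between these steps; the sketch only shows how the phases hand data to one another.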

Real-World Examples

• GCC (GNU Compiler Collection)

• Java Compiler (javac)

• LLVM (Low-Level Virtual Machine)


EXPERIMENT: 2 PAGE NO.: 2

AIM: Write a program for token calculation

PROGRAM:

import re

def tokenize(code):
    tokens = []
    pattern = r"(?P<ID>[a-zA-Z_][a-zA-Z0-9_]*)|(?P<NUM>\d+)|(?P<OP>[+\-*/=])|(?P<PUNC>[();{}])|(?P<WS>\s+)"
    for match in re.finditer(pattern, code):
        kind = match.lastgroup
        value = match.group(kind)
        if kind != "WS":  # skip whitespace
            tokens.append((kind, value))
    return tokens

code = "x=10+y;"
tokens = tokenize(code)
for token_type, token_value in tokens:
    print(f"({token_type}, {token_value})")

OUTPUT:

(ID, x)

(OP, =)

(NUM, 10)

(OP, +)

(ID, y)

(PUNC, ;)
EXPERIMENT: 3 PAGE NO.: 3

AIM: Write a program for top-down parsing to check balanced parentheses

PROGRAM:

# Recursive Descent Parser to check balanced parentheses
# Grammar: S -> ( S ) S | ε

# Input string (global)
input_str = ""
index = 0

def S():
    global index
    if index < len(input_str) and input_str[index] == '(':
        index += 1
        S()
        if index < len(input_str) and input_str[index] == ')':
            index += 1
            S()
        else:
            print("Error: Missing closing parenthesis at index", index)
            exit()
    # else: ε (epsilon production, do nothing)

def is_balanced(string):
    global input_str, index
    input_str = string
    index = 0
    S()
    if index == len(input_str):
        print("Input is balanced")
    else:
        print("Error: Extra characters at the end")

# Test cases
test_input = input("Enter a string of parentheses: ")
is_balanced(test_input)

OUTPUT:

Enter a string of parentheses: (()))

Error: Extra characters at the end


Enter a string of parentheses: (()())

Input is balanced
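As a quick sanity check on the recursive-descent parser above, balanced-parenthesis strings can also be verified with a simple depth counter. This sketch (an addition for testing purposes; the helper name `balanced` is made up) accepts and rejects the same strings:

```python
# Counter-based check for balanced parentheses: track nesting depth and
# reject as soon as a ')' appears with no matching '(' still open.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:   # a ')' with no matching '('
                return False
        else:
            return False    # reject non-parenthesis characters
    return depth == 0       # every '(' must have been closed

print(balanced("(()())"))  # True
print(balanced("(()))"))   # False
```

Running both checkers over a set of test strings is an easy way to confirm the grammar-based parser behaves correctly.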
EXPERIMENT: 4 PAGE NO.: 5

AIM: Write a program for LL(1) parser

PROGRAM:

parsing_table = {
    'E':  {'id': ['T', 'E\''], '(': ['T', 'E\'']},
    'E\'': {'+': ['+', 'T', 'E\''], ')': ['ε'], '$': ['ε']},
    'T':  {'id': ['F', 'T\''], '(': ['F', 'T\'']},
    'T\'': {'+': ['ε'], '*': ['*', 'F', 'T\''], ')': ['ε'], '$': ['ε']},
    'F':  {'id': ['id'], '(': ['(', 'E', ')']}
}

stack = ['$', 'E']

def parse(input_tokens):
    input_tokens.append('$')
    index = 0
    print(f"\n{'Stack':<20}{'Input':<20}{'Action'}")
    while stack:
        top = stack[-1]
        current_input = input_tokens[index]
        print(f"{str(stack):<20}{str(input_tokens[index:]):<20}", end='')
        if top == current_input:
            stack.pop()
            index += 1
            print(f"Match '{top}'")
        elif top in parsing_table and current_input in parsing_table[top]:
            stack.pop()
            production = parsing_table[top][current_input]
            if production != ['ε']:
                for symbol in reversed(production):
                    stack.append(symbol)
            print(f"Output: {top} → {' '.join(production)}")
        else:
            print("Error: Invalid syntax")
            return
    if index == len(input_tokens):
        print("\nParsing successful")
    else:
        print("\nParsing failed")

# Example test
expression = input("Enter expression (e.g., id+id*id): ").replace(' ', '')
tokens = []
i = 0
while i < len(expression):
    if expression[i:i+2] == 'id':
        tokens.append('id')
        i += 2
    else:
        tokens.append(expression[i])
        i += 1

parse(tokens)

OUTPUT:

Enter expression (e.g., id+id*id): id+id*id


Stack               Input               Action

['$', 'E'] ['id', '+', 'id', '*', 'id', '$']Output: E → T E'

['$', "E'", 'T'] ['id', '+', 'id', '*', 'id', '$']Output: T → F T'

['$', "E'", "T'", 'F']['id', '+', 'id', '*', 'id', '$']Output: F → id

['$', "E'", "T'", 'id']['id', '+', 'id', '*', 'id', '$']Match 'id'

['$', "E'", "T'"] ['+', 'id', '*', 'id', '$']Output: T' → ε

['$', "E'"] ['+', 'id', '*', 'id', '$']Output: E' → + T E'

['$', "E'", 'T', '+']['+', 'id', '*', 'id', '$']Match '+'

['$', "E'", 'T'] ['id', '*', 'id', '$']Output: T → F T'

['$', "E'", "T'", 'F']['id', '*', 'id', '$']Output: F → id

['$', "E'", "T'", 'id']['id', '*', 'id', '$']Match 'id'

['$', "E'", "T'"] ['*', 'id', '$'] Output: T' → * F T'

['$', "E'", "T'", 'F', '*']['*', 'id', '$'] Match '*'

['$', "E'", "T'", 'F']['id', '$'] Output: F → id

['$', "E'", "T'", 'id']['id', '$'] Match 'id'

['$', "E'", "T'"] ['$'] Output: T' → ε

['$', "E'"] ['$'] Output: E' → ε

['$'] ['$'] Match '$'

Parsing successful
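The parsing table used above is normally derived from the grammar's FIRST and FOLLOW sets. The following sketch (an addition for illustration; the grammar matches the table above, but the function and variable names are made up) computes both sets with the standard fixed-point algorithm:

```python
# Grammar behind the LL(1) table: E → T E', E' → + T E' | ε,
# T → F T', T' → * F T' | ε, F → ( E ) | id.  [] stands for ε.
grammar = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], []],
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], []],
    'F':  [['(', 'E', ')'], ['id']],
}
nonterminals = set(grammar)
first = {nt: set() for nt in grammar}
follow = {nt: set() for nt in grammar}
follow['E'].add('$')  # '$' follows the start symbol

def first_of(seq):
    # FIRST of a symbol sequence; 'ε' means the whole sequence can vanish
    out = set()
    for sym in seq:
        if sym not in nonterminals:   # terminal: stop here
            out.add(sym)
            return out
        out |= first[sym] - {'ε'}
        if 'ε' not in first[sym]:
            return out
    out.add('ε')
    return out

changed = True
while changed:                        # iterate until nothing new is added
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            f = first_of(prod)        # FIRST rule
            if not f <= first[nt]:
                first[nt] |= f
                changed = True
            for i, sym in enumerate(prod):      # FOLLOW rules
                if sym in nonterminals:
                    trailer = first_of(prod[i + 1:])
                    new = trailer - {'ε'}
                    if 'ε' in trailer:
                        new |= follow[nt]
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True

print("FIRST(E)  =", sorted(first['E']))    # ['(', 'id']
print("FOLLOW(F) =", sorted(follow['F']))   # ['$', ')', '*', '+']
```

Each table entry M[A, a] then holds the production A → α with a ∈ FIRST(α), plus A → ε for every a ∈ FOLLOW(A) when α can derive ε, which reproduces the `parsing_table` above.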
EXPERIMENT: 5 PAGE NO.: 8

AIM: Write a program to handle basic operator precedence

PROGRAM:

def precedence(op):
    if op in ('+', '-'):
        return 1
    if op in ('*', '/'):
        return 2
    return 0

def apply_op(a, b, op):
    if op == '+': return a + b
    if op == '-': return a - b
    if op == '*': return a * b
    if op == '/': return a / b

def evaluate(expression):
    values = []  # stack for numbers
    ops = []     # stack for operators
    i = 0
    while i < len(expression):
        if expression[i] == ' ':
            i += 1
            continue
        # Current token is a number
        elif expression[i].isdigit():
            val = 0
            while i < len(expression) and expression[i].isdigit():
                val = val * 10 + int(expression[i])
                i += 1
            values.append(val)
            i -= 1  # compensate for the i += 1 at the end of the loop
        # Opening bracket
        elif expression[i] == '(':
            ops.append(expression[i])
        # Closing bracket
        elif expression[i] == ')':
            while ops and ops[-1] != '(':
                val2 = values.pop()
                val1 = values.pop()
                op = ops.pop()
                values.append(apply_op(val1, val2, op))
            ops.pop()  # remove '('
        # Operator
        else:
            while ops and precedence(ops[-1]) >= precedence(expression[i]):
                val2 = values.pop()
                val1 = values.pop()
                op = ops.pop()
                values.append(apply_op(val1, val2, op))
            ops.append(expression[i])
        i += 1
    # Final computation
    while ops:
        val2 = values.pop()
        val1 = values.pop()
        op = ops.pop()
        values.append(apply_op(val1, val2, op))
    return values[0]

# Test the evaluator
expression = input("Enter an arithmetic expression: ")
result = evaluate(expression)
print("Result:", result)

OUTPUT:
Enter an arithmetic expression: 3+5*5

Result: 28
EXPERIMENT: 6 PAGE NO.: 11

AIM: Write a program for LALR parsing in the bottom-up parsing.

PROGRAM:

class LALRParser:
    def __init__(self):
        self.stack = [0]
        self.input = []
        self.table = {}
        self.rules = {
            1: ('E', ['id', '+', 'id']),
            2: ('E', ['id'])
        }

    def build_table(self, grammar=None):
        # Simulated action/goto table for a basic grammar like E → id + id | id
        self.table = {
            (0, 'id'): 's2',
            (0, '('): 's3',
            (1, '$'): 'accept',
            (2, '+'): 's4',
            (4, 'id'): 's5',
            (5, '$'): 'r1',
            (2, '$'): 'r2',
            (4, '+'): 'r2'
        }

    def parse(self, tokens):
        self.input = tokens + ['$']
        pos = 0
        print(f"Initial stack: {self.stack}, Input: {self.input}")
        while True:
            state = self.stack[-1]
            symbol = self.input[pos]
            action = self.table.get((state, symbol), None)
            print(f"State: {state}, Symbol: {symbol}, Action: {action}")
            if action is None:
                return f"Error at position {pos}"
            if action == 'accept':
                print("Parse successful")
                return "Parse successful"
            elif action.startswith('s'):  # Shift
                next_state = int(action[1:])
                self.stack.append(symbol)      # Push symbol
                self.stack.append(next_state)  # Push state
                pos += 1
                print(f"Shift to state {next_state}, Stack: {self.stack}")
            elif action.startswith('r'):  # Reduce
                rule_num = int(action[1:])
                lhs, rhs = self.rules[rule_num]
                pop_len = len(rhs) * 2  # for each symbol & its state
                self.stack = self.stack[:-pop_len]
                current_state = self.stack[-1]
                self.stack.append(lhs)                # Push non-terminal
                self.stack.append(current_state + 1)  # Simulated goto
                print(f"Reduce using rule {rule_num}: {lhs} -> {' '.join(rhs)}, Stack: {self.stack}")

# Example usage
print("LALR Parser Output:")
parser = LALRParser()
parser.build_table()
parser.parse(['id', '+', 'id'])

OUTPUT:
LALR Parser Output:

Initial stack: [0], Input: ['id', '+', 'id', '$']

State: 0, Symbol: id, Action: s2

Shift to state 2, Stack: [0, 'id', 2]

State: 2, Symbol: +, Action: s4

Shift to state 4, Stack: [0, 'id', 2, '+', 4]

State: 4, Symbol: id, Action: s5

Shift to state 5, Stack: [0, 'id', 2, '+', 4, 'id', 5]

State: 5, Symbol: $, Action: r1

Reduce using rule 1: E -> id + id, Stack: [0, 'E', 1]

State: 1, Symbol: $, Action: accept

Parse successful

EXPERIMENT: 7 PAGE NO.: 14

AIM: Write a program for CLR parser in the bottom-up parsing

PROGRAM:

class CLRParser:
    def __init__(self):
        self.stack = [(0, None)]
        self.input = []
        self.action = {}
        self.goto = {}

    def build_tables(self, grammar):
        self.action = {
            (0, 'id'): 's2',
            (0, '('): 's3',
            (1, '$'): 'accept',
            (2, '+'): 's4',
            (4, 'id'): 's2'
        }
        self.goto = {
            (0, 'E'): 1,
            (4, 'E'): 1
        }

    def parse(self, tokens):
        self.input = tokens + ['$']
        pos = 0
        print(f"Initial stack: {self.stack}, Input: {self.input}")
        while True:
            state = self.stack[-1][0]
            lookahead = self.input[pos]
            action = self.action.get((state, lookahead), None)
            print(f"State: {state}, Lookahead: {lookahead}, Action: {action}")
            if action is None:
                return f"Error at position {pos}"
            if action == 'accept':
                print("Parse successful")
                return "Parse successful"
            if action.startswith('s'):
                next_state = int(action[1:])
                self.stack.append((next_state, lookahead))
                pos += 1
                print(f"Shift to state {next_state}, Stack: {self.stack}")
            elif action.startswith('r'):
                rule = int(action[1:])
                # Simulate a generic rule length of 2 for now (should be based on actual grammar)
                pop_count = 2
                self.stack = self.stack[:-pop_count]
                state = self.stack[-1][0]
                goto_state = self.goto.get((state, 'E'), None)
                if goto_state is not None:
                    self.stack.append((goto_state, 'E'))
                    print(f"Reduce by rule {rule}, Stack: {self.stack}")
                else:
                    return f"GOTO error after reduction at position {pos}"

# Example usage
print("CLR Parser Output:")
parser = CLRParser()
parser.build_tables(None)
parser.parse(['id', '+', 'id'])

OUTPUT:

CLR Parser Output:

Initial stack: [(0, None)], Input: ['id', '+', 'id', '$']

State: 0, Lookahead: id, Action: s2

Shift to state 2, Stack: [(0, None), (2, 'id')]

State: 2, Lookahead: +, Action: s4

Shift to state 4, Stack: [(0, None), (2, 'id'), (4, '+')]

State: 4, Lookahead: id, Action: s2

Shift to state 2, Stack: [(0, None), (2, 'id'), (4, '+'), (2, 'id')]

State: 2, Lookahead: $, Action: None

Error at position 3
