
CALIFORNIA STATE UNIVERSITY, NORTHRIDGE

FPGA Implementation of Edge Detection for Sobel Operator in Eight Directions

A graduate project submitted in partial fulfillment of the requirements

for the degree of Master of Science in Electrical Engineering

By

Narayanan Sundararajan

August 2021
The graduate project of Narayanan Sundararajan is approved:

Dr. Ramin Roosta Date

Dr. Jack Ou Date

Dr. Shanham Mirzaei, Chair Date

California State University, Northridge

ACKNOWLEDGEMENTS

I would like to extend my gratitude and thanks to my Committee Chair, Dr. Mirzaei, for
supporting me throughout this graduate project. It would not have been possible without his
support and guidance. The several classes that I took under him during my time at CSUN
laid the foundations for this project.

I would also like to thank the other two committee members, Dr. Roosta and Dr. Jack Ou, who
remained supportive and taught me several fundamental courses that helped me understand
and appreciate the digital and analog sides of VLSI design.

This project would not have been possible without the ECE Department, which was there to
support and help at the right time during this period of virtual instruction. I thank Dr. Rengarajan
for supporting and guiding me throughout my time in graduate school.

Finally, I would like to thank my family for supporting me mentally and financially through
these hard times and for letting me move back home to complete graduate school and this
project.

Table of Contents

Signature Page ........................................................................................................................... ii

Acknowledgements ...................................................................................................................iii

List of Figures .......................................................................................................................... vii

Abstract ..................................................................................................................................... ix

Chapter 1 : What is Image Processing ....................................................................................... 1

1.1 : Definition of an Image ......................................................................................... 1

1.2 : Fundamental Steps in Image Processing ............................................................. 2

Chapter 2 : Edge Detection ........................................................................................................ 4

2.1 : Sobel Edge Detection........................................................................................... 4

2.2 : Roberts Edge Detection ....................................................................................... 5

2.3 : Canny Edge Detection ......................................................................................... 5

2.4 : Prewitt Edge Detection ........................................................................................ 5

Chapter 3 : Sobel Edge Detection .............................................................................................. 6

3.1 : Mathematical Background ................................................................................... 6

3.1.1: Calculation of Size of Gradient ......................................................................... 6

3.1.2: Calculation of Gradient Direction...................................................................... 6

3.1.3: Eight Direction Sobel Operator ......................................................................... 7

Chapter 4 : Zybo Z7-10: Zynq-7000 ARM/FPGA SoC Development Board ......................... 10

4.1: Definition of FPGA ............................................................................................ 10

4.2: Features of Zybo Z7-10 ...................................................... 10

4.3: Software: Xilinx Vivado ...................................................... 11

4.4: Implementation ................................................................................................... 11

Chapter 5: Preliminary Implementation of Sobel Operator: .................................................... 13

5.1 Results and Analysis ............................................................................................ 13

5.2 Conclusion ........................................................................................................... 16

Chapter 6 : Implementation of the Sobel Edge Detection for FPGA ...................................... 17

6.1 Initial Iteration ..................................................................................................... 17

6.2 Conclusion from Initial Iteration ......................................................................... 18

6.3 Second Iteration ................................................................................................... 19

6.4: Conclusion from Second Iteration ...................................................................... 20

Chapter 7: Implementation of the Sobel Edge Detection for FPGA using HLS ..................... 21

7.1 : High Level Synthesis ......................................................................................... 21

7.2 : Sobel Edge Detection Implementation .............................................................. 22

7.3 Results from HLS ................................................................................................ 22

Chapter 8 : Real Time Implementation of Sobel Edge Detection HLS ................................... 23

8.1 HLS C-Simulation and C – Synthesis.................................................................. 23

8.2 Vivado Design Suite ............................................................................................ 24

8.3 Synthesis, Implementation and Bitstream .......................................... 27

8.4 Results from Real Time Sobel Filtering : ............................................................ 27

Chapter 9 : Sobel Filter Implementation in 8 Directions ......................................................... 32

9.1 Eight Direction Sobel ........................................................................................... 32

9.2 Tradeoffs : 8 Direction Sobel vs 2 Direction Sobel ............................................. 34

Chapter 10 : Conclusion and Future Scope.............................................................................. 35

Bibliography ............................................................................................................................ 36

Appendix A: Source Code ....................................................................................................... 37

Appendix B: RTL Code (Second Iteration) ............................................................................. 39

Appendix C: HLS Implementation : Two direction Sobel ...................................................... 49

Appendix D: HLS Implementation : Eight direction Sobel ..................................................... 51

Appendix E : Block Diagram ................................................................................................... 53

List of Figures
Fundamental Steps in Image Processing 2
Edge Detection 4
(1) Longitudinal Template 6
(2) Lateral Template 6
Eight Direction Sobel Operator 7
5*5 Kernels 8
Weight Formulae 8
Zybo Z7-10 10
Features of Zybo Z-10 11
Block Diagram 12
Input Image 13
Grayscale Equivalent of Original Image 14
Image Convolved with Gx 14
Image Convolved with Gy 15
Edge Detected Output Image 15
Input Image 17
Implementation Block Diagram 18
Line Buffers 19
Input Image 20
Output Image 20
Input Image (HLS) 21
Output Image (Sobel Filter) (HLS) 22
HLS Generated Sobel IP 22
Synthesis Report 23
Synth Report : Hardware Resources 24
Sobel Edge Detection HLS 24
IP Block Diagram 25
DVI2RGB 25
Video to AXI 26
Video Timing Controller 26
Zynq Processing IP 27
Source Image Taken on Webcam 28
Output Image 28
4K Video on Youtube 29
Edge Filtered Output on Monitor 29
Next Frame 30
Edge Filtered Output on Monitor 30
Source Image 32
2-Direction Sobel in X and Y 33
8 Direction Sobel 33

Abstract

FPGA Implementation of Edge Detection for Sobel Operator in Eight Directions

By

Narayanan Sundararajan

Master of Science in Electrical Engineering

In the fields of computer vision and image processing, Sobel edge detection is an approach
that uses specific kernels, such as the Sobel filter, to extract edge information from an image.
Edge detection can be implemented entirely in software, but implementing it on an FPGA
offers the advantage of detecting the edges in the image locally. These advantages can be
seen below in the results and observations. [1]

This graduate project demonstrates the use of an FPGA (Field Programmable Gate Array)
to perform edge detection on an image. The Sobel operator obtains the high-gradient-intensity
pixels, or edges, by means of kernel convolution. The operator makes use of two 3*3 kernels,
one for the horizontal and one for the vertical direction. This is effective at determining the
edges, but noise is also present in the output. To obtain more edge information and remove
noise from the image, the Sobel operator instead convolves the image with eight 5*5 kernels
in eight different directions. [1]

The document is organised as follows. First, the concept of image processing and various
edge detection algorithms are introduced. This is followed by an introduction to the Sobel
operator and the theory behind the multi-directional (eight-direction) algorithm.

The algorithm is then implemented on an FPGA (Zybo Z7-10) and preliminary experimental
results are obtained. These results are compared with the results obtained from writing and
running a MATLAB script that implements the same algorithm. The comparison indicates
that the Sobel algorithm was implemented successfully on the Zybo Z7-10 FPGA.

Chapter 1
What is Image Processing

In the field of signal processing, image processing is a technique that performs
computations on an image to transform it, improve it, or obtain information from it [2].
Image processing has applications in photography, surveillance, modern transport
systems (self-driving cars), medicine, and entertainment [3]. This project uses digital
images only.

1.1 Definition of an Image

An image is a two-dimensional function over the x- and y-axes, the two spatial coordinates,
whose value represents the intensity at each point. This intensity is also called the gray level.
When the x and y coordinates and the intensity values are all discrete in nature, the image is
called a digital image. [3]

A digital image consists of a combination of binary 1s and 0s. Each point in the x-y
coordinate system is a binary string; in an 8-bit image, for example, a point could be
00110011. Such a point in the x-y coordinate system, or in the image, is referred to as a pixel.

Consider the case of a 16-bit grayscale digital image: each point in the image is represented
by a 16-bit binary number. Different points, or pixels, represent different intensities, described
by their binary representation. A 16-bit number ranges from a minimum of 0 to a maximum of
2^16 - 1 (65,535). With 8 bits the range shrinks, so some information is lost or left
unhighlighted.
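As a quick sanity check on these ranges, the maximum gray level for a bit depth n is 2^n - 1. A minimal C++ sketch (illustrative only, not part of the thesis):

```cpp
#include <cstdint>

// Maximum gray level representable with `bits` bits per pixel: 2^bits - 1.
// An 8-bit image spans 0..255; a 16-bit image spans 0..65535.
inline uint32_t maxGrayLevel(unsigned bits) {
    return (1u << bits) - 1u;
}
```

An 8-bit image therefore distinguishes 256 intensity levels, while a 16-bit image distinguishes 65,536.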

1.2 Fundamental Steps in Image Processing :

Fig.1 Fundamental Steps in Image Processing [3]


a) Image acquisition: This is the first step in image processing. In fundamental terms it
involves acquiring the image information from the real world or a specific environment.
Capturing a photograph with a DSLR, for example, is an illustration of image acquisition.

b) Image enhancement involves tweaking an image to make it useful for a certain application.
It is therefore application dependent, and the process varies. E.g., image enhancement methods
used in cinema would probably not be useful at all in medical imaging applications.

c) Image restoration is a process wherein a degraded image is improved. Unlike enhancement,
it is application independent. It helps preserve important information and details that could
otherwise be lost to degradation.

d) Wavelets are the beginning point for representing images in various degrees of resolutions.[3]

e) Color Image Processing: This field involves extracting features of interest from digital
images that are in color, in contrast to the conventional methods, which involve grayscale
images.

f) Compression: This technique involves decreasing the memory required to store an image
and the bandwidth required to transmit it. There are two types of image compression: lossy
and lossless. [3]

g) Morphological Processing: This technique primarily involves the elements that help describe
shape in images.
h) Segmentation: This field is involved with dividing images into smaller sections for the
purpose of extracting information from the image such as identifying objects in the input
image.
i) Feature Extraction: The output of the segmentation stage serves as the input to this stage.
After segmentation the image is divided into its constituent sections, which can be used to
extract more details from the image or to describe and highlight attributes in it. [3] In this
project, edge detection is performed, a kind of feature extraction in which the features
detected are the edges. [3]

j) Image Pattern Classification involves labelling the objects detected during the Segmentation
process.

k) Knowledge Base: This is where the specific details regarding the domain under consideration
are coded in to help gather the information for processing by the other stages quickly and
with ease. [3] This is core to the Image processing framework and is connected to all the
other steps.

Chapter 2
Edge Detection

In any given image, the intensity or gray level varies across different parts of the image.
Edges can be identified by their stark difference in intensity with respect to other parts of
the image; the intensity variation over the image, and its first derivative, are connected to
the edges. Edges are useful for extracting details such as the features of objects in different
regions of the image. The intensity variations, or discontinuities, are characterized by two
types: a) step discontinuities and b) line discontinuities. [4]

Fig.2. Edge Detection

2.1 Sobel Edge Detection:

The edges in an image can be detected or identified using specific techniques. In this
technique, the edge information is extracted by convolving the image with a kernel to obtain
the edges in a certain direction or angle. Commonly, Sobel is performed in two directions,
x and y, which are the horizontal and vertical directions, respectively. It is often described
as taking the first-order derivatives with respect to x and y to obtain edge details, although
it can be implemented in up to eight directions, as demonstrated in this project. The kernels
Gx and Gy are used for the horizontal and vertical directions, respectively; Gy is the
transpose of the matrix Gx. The operator is usually applied to grayscale images. It can be
applied to color images, but the complexity increases and it can be inefficient.

2.2 Roberts Edge Detection:

This edge detection methodology uses the Roberts Cross operator to determine the edges in
an image. It is similar to the Sobel operator, except that the kernels convolved with the
image are 2*2 masks (Gx and Gy). The input to this detector is a grayscale image, and the
output is an image with the high-spatial-gradient regions, or edges, detected.

2.3 Canny Edge Detection:

This is a more involved method of edge detection. The image is first smoothed using a
Gaussian kernel, and then a two-dimensional convolution is performed as in the previous
methods. The output of this 2D convolution forms ridges, which are then processed further
to produce the final output.

2.4 Prewitt Edge Detection:


This edge detection is similar to the Sobel and Roberts operators; the difference is that its
3*3 convolution kernels contain only +1, 0, and -1. The operator produces results similar
to those of the Sobel operator.

Chapter 3: Sobel Edge Detection

3.1 Mathematical Background:

The Sobel filter uses two kernels, each a 3*3 matrix. The first kernel highlights edges in
the horizontal direction and the second highlights edges in the vertical direction. The image
is convolved with the kernels Gx and Gy below to obtain the gradient approximations in the
horizontal and vertical directions, respectively.

Fig.3. [1]
(1) Longitudinal Template
(2) Lateral Template

In Fig. 3, A represents the original image, and Gx and Gy represent the kernels that are
convolved with it to obtain the horizontal and vertical gradient approximations.

3.1.1 Calculation of the Size of the Gradient:

The vertical and horizontal gradient approximations at each pixel of the image are combined
using formula (3) to calculate the size (magnitude) of the gradient [1]:

G = sqrt(Gx^2 + Gy^2)    (3)

3.1.2 Gradient Orientation Calculation:

The gradient direction at each pixel is obtained from the two approximations [1]:

theta = arctan(Gy / Gx)

Since only the vertical and horizontal edge kernels are used, edges in any other direction are
missed, and key edge details are lost. This will be observed in the output results further on
in this project.
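The two formulas above can be checked numerically. The C++ sketch below applies the standard 3*3 Sobel kernels to a small patch containing a vertical step edge; the kernel sign convention and the test patch are illustrative assumptions, not taken from the thesis figures:

```cpp
#include <cmath>

// Standard 3x3 Sobel kernels: Kx responds to horizontal intensity change
// (vertical edges); Ky is its transpose. Applied as a correlation, the
// usual convention in edge-detection code.
const int Kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
const int Ky[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };

// Response of a 3x3 patch of gray levels to one kernel.
inline int convolve3x3(const int patch[3][3], const int k[3][3]) {
    int acc = 0;
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            acc += patch[r][c] * k[r][c];
    return acc;
}

// Formula (3): gradient magnitude from the two directional responses.
inline double gradientMagnitude(int gx, int gy) {
    return std::sqrt(double(gx) * gx + double(gy) * gy);
}

// Gradient direction; atan2 handles the gx == 0 case that arctan(Gy/Gx) cannot.
inline double gradientDirection(int gx, int gy) {
    return std::atan2(double(gy), double(gx));
}

// A vertical step edge: dark (0) column on the left, bright (255) on the right.
const int stepEdge[3][3] = { {0, 255, 255},
                             {0, 255, 255},
                             {0, 255, 255} };
```

For this patch, Gx = 1020 and Gy = 0, so the magnitude is 1020 and the direction is 0 radians, exactly what one expects for a purely vertical edge.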

3.1.3 Sobel Operator in Eight Directions:


The conventional Sobel operator, as seen above, only detects edges in the vertical and
horizontal directions; edge detection in other directions receives less consideration, which
leads to missing key edge information. To preserve more edge information, kernels in more
directions are used for detection; as a result, the detected image edges are relatively complete
and have good continuity. [1]

Fig.4 Eight Direction Diagram [1]

Eight kernels of size 5*5 are convolved with the image to preserve key edge details. The
Sobel operator is extended to eight directions: 0, 22.5, 45, 67.5, 90, 112.5, 135, and 157.5
degrees. Convolving the image with these eight kernels detects more edge information, and
the output will contain more image detail than the conventional two-direction Sobel
operation.

The kernels or matrices of these directions are as follows:

Fig.5. 5*5 Kernels [1]


Considering the above kernels, the weight w(m,n) at the different positions is calculated by
the formula:

Fig. 6 Weight Formulae [1]


The pixel value at each position of the final image is the maximum of the corresponding
pixels across all eight directional responses, determined by a pixel-by-pixel comparison.
The result clearly contains more edge detail and information, as observed.
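The pixel-wise maximum rule can be sketched as follows; the eight response values used in the test are hypothetical, since the 5*5 kernels themselves are those shown in Fig. 5:

```cpp
#include <algorithm>
#include <array>

// Combination rule for the eight-direction Sobel: the output gray level at a
// pixel is the maximum of its responses to the eight directional kernels
// (0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5 degrees).
inline int combineResponses(const std::array<int, 8>& responses) {
    return *std::max_element(responses.begin(), responses.end());
}
```

Because the maximum is taken per pixel, an edge that lines up with any one of the eight directions dominates the output at that location.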

3.2 Positives and Drawbacks of Sobel Edge Detection :

Positives:

• Sobel-filter-based edge detection is a fairly simple process. The complexity lies only in its
implementation, which can usually be improved with suitable hardware and a better software
architecture.

• It is not limited to detecting edges; it also detects the orientation of the edges. Unlike
Canny edge detection, Sobel is not a multistep process involving several complex
computations, and as a result it is not time consuming.

• The Sobel operator is also computationally cheaper, given its simplicity. When implemented
locally on an FPGA, Sobel uses fewer computational resources than the Canny edge operator;
a simple, standard Zybo Z7-10 can be used to implement the Sobel algorithm.

Drawbacks:

• Noise can have a serious effect on the edges detected in this method.

• The magnitude of the edge is a very important factor: as it increases, the ability to detect
the edges correctly decreases.

Note: since the Sobel operator in this project is implemented in eight directions on an FPGA,
the accuracy of the edge detection will be improved.

Chapter 4:
Zybo Z7-10: Zynq-7000 ARM/FPGA SoC Development Board

In this project, the Sobel filter introduced in the last few chapters will be implemented
locally on an FPGA.

4.1 Definition of an FPGA:

FPGA is an abbreviation for Field Programmable Gate Array. FPGAs are fundamentally
semiconductor devices made of many CLBs (Configurable Logic Blocks) wired together via
programmable interconnects. FPGAs can be reprogrammed for a specific application or
functionality after manufacturing. This is unlike ASICs (Application-Specific Integrated
Circuits), which are manufactured for one specific design application. FPGAs are therefore
widely used for prototyping; after the design gains adoption in the market, it is transitioned
to an ASIC. Developing ASICs is expensive, so FPGAs provide entry into the market at an
affordable cost.

4.2 Features of the Zybo Z7-10:

For this project, the FPGA used is the Zybo Z7-10 FPGA SoC development board.

Fig.7 Zybo Z7-10 [5]

The Zybo board has an HDMI sink (input) port, an HDMI source (output) port, and a Pcam
camera connector, which are useful for image-processing applications such as real-time
edge detection.

Fig.8. Features of Zybo Z-10 [5]

4.3 Software: Xilinx Vivado

The software used with the FPGA is Xilinx Vivado. It is used to write the design in
Verilog/VHDL, simulate it, and push the bitstream onto the FPGA.

In this project, the Block RAM is populated with the image pixel values: the image is
converted into its binary equivalent and loaded onto the Block RAM by means of the built-in
"Block Memory Generator" IP.

4.4 Implementation:

The Sobel edge detection is implemented on the FPGA as shown below:

Load Block RAM with Image

Color to Grayscale Conversion

Gaussian Blur

Sobel Edge Detection

Sobel Edge Detection in Eight Directions

Fig 9 Block Diagram

The above figure shows the step-by-step implementation of this project. The image is first
converted to binary data and loaded onto the Block RAM of the FPGA using the built-in IP.
The image pixels are converted from RGB/color to grayscale in preparation for edge
detection, and a Gaussian blur is applied to the image pixels to reduce high-frequency noise.
The image is then convolved with the two Sobel gradient kernels to determine the edges,
followed by the procedure for detecting the edges in eight directions.
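The color-to-grayscale step can be sketched as follows. The BT.601 luma weights used here are a common choice, assumed for illustration; the thesis does not specify the exact weighting its converter uses:

```cpp
#include <cstdint>

// One common RGB-to-grayscale mapping (ITU-R BT.601 luma weights,
// 0.299 R + 0.587 G + 0.114 B), computed in integer arithmetic as is
// typical for FPGA implementations. The weighting is an assumption.
inline uint8_t rgbToGray(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((299u * r + 587u * g + 114u * b) / 1000u);
}
```

Integer weights scaled by 1000 (or a power of two, which reduces the division to a shift) avoid floating-point hardware, which is why this form maps well onto FPGA logic.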

Chapter 5 :

Preliminary Implementation of the Sobel Edge Detection

The Sobel edge detection algorithm was first implemented in MATLAB to gain an
understanding of the algorithm and to produce a reference output for comparison with the
results from the FPGA. The MATLAB script for this is included in Appendix A.

5.1 Results and Analysis :

The image whose edges were to be detected:

Fig.10. Input Image

Converted to Grayscale :

Fig.11 Grayscale Equivalent of Original Image

Gradient in the X (horizontal) :

Fig.12 Image Convolved with Gx

Gradient in the Y (vertical) :

Fig.13. Image Convolved with Gy

Edge Detected Output Image :

Fig.14. Edge Detected Output Image

Analysis:
The output determined the edges in the image. The image was successfully converted from
RGB (color) to grayscale, and the final output indicates that the kernel convolutions with
both Gx and Gy were successful: the edge details were obtained clearly.

The output contains some noise, which can be reduced further when the Sobel algorithm is
implemented in eight directions. Because the image will be convolved with kernels in several
orientations, fewer edge details will be missed and the output will have more clarity. The
resulting image will give a clear representation of the discontinuities in intensity that
constitute the edges.

5.2 Conclusion

The results of the Sobel edge detection obtained using MATLAB were accurate and correct.
They will serve as the reference output for verifying the FPGA results.

Chapter 6 :

Implementation of the Sobel Edge Detection for FPGA

6.1 Initial Iteration

The image was to be stored in a Block RAM of the FPGA. A Block RAM was generated
using a Vivado IP and initialized with the image data using a .coe (coefficient) file.

A 512*512 image was converted to its pixel data by writing a MATLAB script (attached in
Appendix B). The script converted the image from .bmp format to .coe format; the
coefficient file lists all the pixel data sequentially.

The 512*512 image to be stored in the Block RAM was lena_gray:

Fig.15 Input Image [7]

The image had to be resized to 256*256 during conversion to the .coe file; without resizing,
the IP tool indicated that there would be collisions, or pixel-data overlap, during storage.
Downsizing from 512*512 to 256*256 also degrades the picture quality, i.e. the resolution.

This approach also required two or three more Block RAMs to store the processed images
after blurring and after passing through the Sobel filter. It would therefore consume a
significant share of the Block RAMs and LUTs, leaving very little headroom for the kernel
convolution.

6.2 Conclusions from Initial Iteration:

The approach was not practical: it would consume significant FPGA resources and, for all
practical purposes, would not be an efficient design.

Therefore, a second approach was taken, using line buffers and reading directly from the
DDR memory on the board.

6.3 Second Iteration:

The second iteration involved storing the image in the DDR memory using USB/UART and
accessing it through line buffers. The DDR memory on the Zybo Z7-10 has a size of 1 GB.

The implementation is as shown below:

Block Diagram :

Fig 16. Implementation Block Diagram

As per the above block diagram, the image is stored in the DDR memory over the USB/UART
interface from a PC. Using a DMA (Direct Memory Access) controller, the image is streamed
into the image-processing IP (written from scratch), which consists of three fundamental
blocks:

1) Line Buffer: This module buffers the incoming pixels up to a fixed length. Four line
buffers are used; each reads in 512 pixels and, once filled, can be read out for
processing. This is an efficient way of transferring an image into the FPGA for
processing, since it uses relatively few resources.

Fig.17 Line Buffers
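The fill-then-read behaviour of one line buffer can be modelled roughly in C++ (the project's actual implementation is Verilog RTL, in Appendix B; the class and names here are purely illustrative):

```cpp
#include <vector>

// Behavioural model of one 512-pixel line buffer: pixels stream in one at a
// time, and the buffer only becomes readable once a full row has arrived.
class LineBuffer {
    std::vector<int> data;
public:
    static const int WIDTH = 512;
    // Accepts one pixel; returns true once the row is complete.
    bool write(int pixel) {
        if (static_cast<int>(data.size()) < WIDTH) data.push_back(pixel);
        return static_cast<int>(data.size()) == WIDTH;
    }
    bool ready() const { return static_cast<int>(data.size()) == WIDTH; }
    int  read(int col) const { return data[col]; }
};

// Streams one synthetic row (pixel value = column index) into a buffer.
inline LineBuffer filledRow() {
    LineBuffer lb;
    for (int i = 0; i < LineBuffer::WIDTH; ++i) lb.write(i);
    return lb;
}
```

In hardware, the `ready` condition corresponds to the controller's rule that a buffer is not read into the convolution unit until all 512 values have been written.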

2) Convolution: This module performs the convolution between the kernel and the pixel
data from the line buffer. It is essentially a multiply-and-accumulate (MAC) module:
the kernel is initialized with values and multiplied with the incoming pixel data.

3) Controller: This module controls the complete system and is a core, integral part. It
ensures the line buffers are written to and read from correctly: a line buffer is not
read into the convolution unit until all 512 values have been filled. The transitions
between buffers work the same way; line buffer 2 is written after line buffer 1 fills
with pixel values, and so on through line buffer 4. A multiplexer selects which buffer
to read from.

A top module instantiating all the modules is written, along with a FIFO (Vivado IP) to
prevent a mismatch between the read and write clocks from causing latency or erroneous
output.

A testbench module using file I/O instructions is written to test the system in simulation
and verify that the image is blurred. The kernel is initialized with all nine values set to 1
for blurring. (Code attached in Appendix C for all modules.)
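An all-ones kernel amounts to a 3*3 box filter, which can be sketched as follows; the divide-by-9 normalisation shown is an assumption, as the RTL testbench may keep the raw sum:

```cpp
// 3x3 box blur: every kernel weight is 1, so the blurred centre pixel is
// simply the average of the nine neighbouring gray levels.
inline int boxBlur3x3(const int patch[3][3]) {
    int acc = 0;
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            acc += patch[r][c];   // kernel weight is 1 everywhere
    return acc / 9;               // normalisation (assumed)
}

// A sample 3x3 neighbourhood of gray levels for demonstration.
const int blurPatch[3][3] = { {10, 20, 30},
                              {40, 50, 60},
                              {70, 80, 90} };
```

For the sample patch the nine values sum to 450, so the blurred centre pixel is 50, the neighbourhood average.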

6.4 Results from the Second Iteration:

The results obtained were unsatisfactory: a blurred image was not produced.

Fig 18. Input Image [7]

Fig 19. Output Image

The system did not simulate correctly; there appears to be an error in the kernel convolution
module or in the controller module. The code has to be scoped and debugged further to get
an accurate output.

The system is complex, and it is difficult to identify individual pixels and verify each
convolution. The submodules were scoped and their respective data checked for errors; this
has to be done in more depth, and each module needs its own testbench to check its
functionality.

This is non-trivial and by far the hardest way to implement the filter on the FPGA, so an
HLS method was adopted to make the process relatively easier and more efficient.

Chapter 7 :

Implementation of the Sobel Edge Detection for FPGA using HLS

7.1 High Level Synthesis (HLS):

Vivado High-Level Synthesis allows C and C++ to be used to target Xilinx FPGA devices
without writing HDL code from scratch, which makes it possible to implement complex
functionality with ease. This provides advantages over traditional RTL coding, such as
simplicity and richer functionality: the tool converts the C/C++ code into RTL, so the
developer is not limited by the constructs of HDLs like Verilog and VHDL.

For image processing, HLS allows the use of OpenCV functions, which makes it possible to
program complex features like edge detection and blurring efficiently and easily, and to
modify the design quickly without worrying too much about board resources.

7.2 Sobel Edge Detection Implementation :

High-Level Synthesis was used to write the C/C++ code (attached in Appendix D). An input
image and a stimulus script were also used to test the program. The output is shown below.
[14]

Fig.20 Input Image ( HLS) [8]

Fig. 21 Output Image (Sobel Filter)
(HLS)

The program convolved a 512×512 image with the Sobel kernels to obtain the
above image, which clearly contrasts the edges.

The input image was not grayscale as in the traditional HDL method. Since the input was
a color image, it was first converted to grayscale and then convolved with the kernels
to extract edge information. The code is attached in Appendix C.
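The processing chain just described — grayscale conversion followed by the two kernel convolutions and a gradient magnitude — can be sketched in plain C++. This is an illustrative standalone version, not the hls:: library calls used in the actual design; the BT.601 grayscale weights and the zeroed one-pixel border are assumptions of this sketch:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Luma-style grayscale from one RGB pixel (ITU-R BT.601 weights).
static uint8_t to_gray(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>(0.299 * r + 0.587 * g + 0.114 * b);
}

// 2-direction Sobel: convolve with Gx and Gy, output the gradient magnitude,
// leaving a one-pixel border at zero (a common simplification).
std::vector<uint8_t> sobel2(const std::vector<uint8_t>& gray, int w, int h) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    std::vector<uint8_t> out(gray.size(), 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int sx = 0, sy = 0;
            for (int i = -1; i <= 1; ++i)
                for (int j = -1; j <= 1; ++j) {
                    int p = gray[(y + i) * w + (x + j)];
                    sx += gx[i + 1][j + 1] * p;
                    sy += gy[i + 1][j + 1] * p;
                }
            int mag = static_cast<int>(std::sqrt(double(sx) * sx + double(sy) * sy));
            out[y * w + x] = static_cast<uint8_t>(std::min(mag, 255));
        }
    }
    return out;
}
```

The HLS version in Appendix C performs the same steps on streaming hls::Mat objects rather than in-memory vectors.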

7.3 : Results of HLS Method:

The output obtained using the High-Level Synthesis tool was in accordance with
expectations. The edge detection program was then packaged as an IP, as shown below; it
will be connected with other modules so that the edge-detected image can finally be
displayed on an HDMI monitor.

Fig.22 HLS Generated Sobel IP

Chapter 8 : Real Time Implementation of Sobel Edge Detection HLS

8.1 : HLS C-Simulation & Synthesis :

The HLS code written in C/C++ is simulated, and the results shown above are in accordance
with our expectations. The code is then synthesized to convert the C/C++ into HDL, i.e.,
Verilog or VHDL. The HLS code is packaged as an IP, which forms the core of the video
streaming system designed as a block diagram. The Sobel filter detects edges present in the
frames, or images, of the video. The HLS code is attached in Appendix C.

Fig 23. Synthesis Report

The synthesis report provides information on timing, such as the clock period, latency,
uncertainty, and latency intervals. The clock period that was chosen is 13.5 ns, and the
uncertainty is 1.25 ns.
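These two numbers define the budget the scheduler actually works with: Vivado HLS subtracts the uncertainty from the clock period and schedules logic into what remains. A small sketch of that arithmetic (the subtraction rule is standard HLS behavior; the derived frequency is simply the reciprocal of the period):

```cpp
// Timing figures from the synthesis report (ns).
const double period_ns = 13.5;      // chosen clock period
const double uncertainty_ns = 1.25; // margin reserved for place-and-route

// The scheduler may only fill period - uncertainty with logic delay.
double effective_budget_ns() { return period_ns - uncertainty_ns; } // 12.25 ns

// The corresponding clock frequency, in MHz.
double clock_freq_mhz() { return 1000.0 / period_ns; } // ~74 MHz
```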

Additionally, the report shows the version of the Vivado HLS tool, the product family,
and the target device, which is the Zybo Z-10, whose equivalent chip name is
xc7z010clg400-1.

Fig.24. Synth Report : Hardware Resources

The next part of the report describes the hardware resources consumed by the Sobel filter,
or edge detector: the Block RAMs, flip-flops, and look-up tables used.

This is where compute optimization can be done. The complexity or computational effort
required by the design can be reduced or optimized in terms of area (hardware resources) and
time (speed).

For the purposes of this design, the hardware resources and timing are acceptable. We proceed
to generate the IP in the Vivado design tool and connect the other required IPs.

8.2 : Vivado Design Suite :


The generated IP is copied into a folder, and a new project is created to implement the
required block diagram, with the HLS Sobel IP forming the main part of the core.

Fig.25 Sobel Edge Detection HLS

The IP in Fig. 25 is the core of the block diagram implemented for the video streaming
pipeline.

The input to the Zybo Z-10 is an HDMI signal from a video source; in this case, a laptop
serves as the video input to the board. The video is streamed into the board pixel by pixel,
passed through the Sobel filter, and streamed out to a monitor displaying the filtered
real-time video.

A snapshot of the block diagram that was implemented:

Fig.26 IP Block Diagram[15]

(Note: A clearer picture of the diagram is attached in Appendix E to demonstrate the
block diagram.)

This is fundamentally a memory-mapped design implemented with the Zynq PS, or Zynq
Processing System.
Core blocks of the diagram are:

1) DVI2RGB and RGB2DVI:


The HDMI (DVI) input from the video source is converted into 24-bit RGB frames.
The pixel format used in this image processing system, within the scope of the project, is RGB.
RGB2DVI performs the opposite function at the output stage.

Fig.27 DVI2RGB[12]
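The 24-bit RGB frames mentioned above are simply three packed 8-bit channels per pixel. A minimal sketch of the packing — note the R-G-B bit ordering shown here is an assumption; the actual ordering depends on how the Digilent IP is configured:

```cpp
#include <cstdint>

// Pack one 24-bit RGB pixel as carried on the video data path
// (assumed ordering: R in bits 23:16, G in 15:8, B in 7:0).
uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

// Recover the three 8-bit channels from a packed pixel.
void unpack_rgb(uint32_t px, uint8_t& r, uint8_t& g, uint8_t& b) {
    r = (px >> 16) & 0xFF;
    g = (px >> 8) & 0xFF;
    b = px & 0xFF;
}
```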

2) Video to AXI & AXI to Video :

The RGB equivalent of the video is converted to an AXI4-Stream, which is
widely used within the FPGA for image processing applications. The AXI4-Stream to Video Out
IP on the output side converts the AXI stream back into its equivalent RGB value.

Fig.28 Video to AXI[13]
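Each beat of the AXI4-Stream video interface carries a pixel plus two sideband flags: tuser marks start-of-frame and tlast marks end-of-line. A plain C++ model of this framing, simplified from the ap_axiu<24,1,1,1> type used in the HLS code in Appendix C:

```cpp
#include <cstdint>
#include <vector>

// One beat of an AXI4-Stream video interface (cf. ap_axiu<24,1,1,1>):
// user marks start-of-frame, last marks end-of-line.
struct VideoBeat {
    uint32_t data; // 24-bit RGB pixel
    bool user;     // start of frame (tuser)
    bool last;     // end of line (tlast)
};

// Serialize a w x h frame of packed pixels into a beat stream.
std::vector<VideoBeat> to_stream(const std::vector<uint32_t>& px, int w, int h) {
    std::vector<VideoBeat> s;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            s.push_back({px[y * w + x], y == 0 && x == 0, x == w - 1});
    return s;
}
```

The Video In to AXI4-Stream IP performs this serialization in hardware, and the AXI4-Stream to Video Out IP reverses it at the output stage.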

3) Video Timing Controller:

Two Video Timing Controller blocks are present in the system. They help meet the
timing requirements between the input stage and the output stage.

Fig.29 Video Timing Controller


[11]

4) HLS Sobel Edge Filter :

The Sobel filter implemented in HLS earlier is the one instantiated here.

5) Zynq PS :

This refers to the Zynq Processing System; the FPGA under consideration, the Zybo Z-10,
is used in memory-mapped mode.

Fig.30 Zynq Processing IP[10]

8.3 Synthesis, Implementation and Bitstream :

After the block diagram design is validated, the design above is synthesized and implemented
in Vivado to verify that the IPs constitute effective hardware.

Synthesis and implementation errors are corrected at every stage, and any missed
connections are made. Once both synthesis and implementation are complete, the bitstream is
generated for the system, and the Zybo Z-10 is targeted for downloading, or flashing, the
bitstream.

The Zybo Z-10 HDMI library is used to facilitate real-time HDMI input to output on a
monitor. This is implemented in the Xilinx SDK; the video passes through the block
diagram implemented in the design, giving the desired edge-filtered output as shown below.

8.4 Results from Real Time Sobel Filtering :

The images below are the observed results of the real-time Sobel filtering on an HDMI
monitor.

Fig.31 Source Image taken on Webcam

Fig.32 Output Image taken on Monitor

Of the two images above, the first is the source image taken on the laptop through the
webcam. The second is the edge-filtered output captured on the HDMI monitor, which is the
output of the Zybo Z-10.

It can be clearly observed from the output that the Sobel edge detection in two directions
is successful: the edges are filtered out, and the discontinuities in intensity are
clearly highlighted in the output.

The output on the monitor is not limited to filtering still images; it filters real-time
video edges as well. Below is an example of real-time video filtering.

Fig.33 4K Video on Youtube [9]

Fig 34. Edge Filtered Output on Monitor [9]

Fig.35 Next Frame [9]

Fig 36 . Edge Filtered Output [9]

As shown in the last four figures, the Sobel filter implemented in two directions can
extract edge detail from real-time 4K video as well as from images.

In the last image, some clarity seems to be missing: there is a great deal of edge detail,
and the two-direction Sobel is not sufficient to describe such images clearly. The
eight-direction Sobel can do a better job of detecting richer edge details and highlighting
the intensity variations more clearly.

Chapter 9 : Sobel Filter Implementation in 8 Directions

9.1 Eight Direction Sobel :

The theory of the eight-direction Sobel has already been stated. The eight-direction, or
multidirectional, Sobel operator convolves a given input image with eight kernels, which
helps preserve edge information and highlight it.

The same procedure was followed to implement the Sobel filter in eight directions as for the
two-direction filter: C/C++ code was written in HLS to generate an equivalent IP and add it
to the video streaming pipeline. The code is attached in Appendix D.
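The per-pixel rule of the multidirectional operator can be sketched as follows: apply each directional kernel and keep the maximum absolute response. The 3×3 masks below are the classic 45°-step rotations of the Sobel kernel and are illustrative only — the HLS code in Appendix D uses 5×5 kernels with different coefficients:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <cstdlib>

// Eight 3x3 directional masks built by rotating the Sobel border values
// around the kernel in 45-degree steps (center stays zero).
using Kernel = std::array<std::array<int, 3>, 3>;

std::array<Kernel, 8> make_kernels() {
    const int ring_r[8] = {0, 0, 0, 1, 2, 2, 2, 1}; // clockwise border coords
    const int ring_c[8] = {0, 1, 2, 2, 2, 1, 0, 0};
    const int base[8] = {-1, -2, -1, 0, 1, 2, 1, 0}; // Sobel Gy border values
    std::array<Kernel, 8> ks{}; // value-initialized: centers are zero
    for (int d = 0; d < 8; ++d)
        for (int i = 0; i < 8; ++i)
            ks[d][ring_r[i]][ring_c[i]] = base[(i + d) % 8];
    return ks;
}

// Per-pixel rule: keep the maximum absolute response over all eight directions.
uint8_t sobel8_pixel(int patch[3][3]) {
    static const std::array<Kernel, 8> ks = make_kernels();
    int best = 0;
    for (const Kernel& k : ks) {
        int s = 0;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) s += k[i][j] * patch[i][j];
        best = std::max(best, std::abs(s));
    }
    return static_cast<uint8_t>(std::min(best, 255));
}
```

In hardware, the same rule costs eight convolutions plus seven comparisons per pixel, which is the source of the resource pressure discussed below.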

The simulation results from the eight direction Sobel were as follows :

Fig.37 Source Image [8]

Fig. 38 2-Direction Sobel in X and Y

Fig.39 8 Direction Sobel

On comparing Fig. 39 with Fig. 38, it is clear that the eight-direction Sobel is far better
at edge filtering than the two-direction Sobel. Several key details of the Mars Curiosity
rover were not highlighted by the two-direction Sobel; for example, the tires of the rover
have far more edge detail, which is well captured by the eight-direction Sobel.

The eight-direction Sobel filter was not implemented in the real-time video pipeline, since
the implementation required more hardware resources and it was not possible to implement
such a complex algorithm, involving eight image-wide kernel convolutions, on a conventional
Zybo Z-10.

9.2 Tradeoffs : 8 Direction Sobel vs 2 Direction Sobel :

a) Edge Information: The eight-direction Sobel does a better job of preserving edge
information and highlighting the intensity discontinuities in an image compared with the
two-direction Sobel.

b) Area: The eight-direction Sobel consumes more area, given the eight convolutions that are
implemented and the hardware that accompanies that kind of operational requirement.

c) Complexity: The maximum value among all eight convolved outputs has to be determined,
and this requires comparisons between each directionally oriented kernel output.

d) Code / Software Architecture: Several loops are written to execute the eight-direction
Sobel, and these will be unrolled during synthesis and implementation. Writing several
loops is a naïve solution and could be improved to reduce the number of loops and, as a
result, the hardware resources.

e) HDLs: In a scenario where the design is implemented purely in HDLs, line buffers could
be used and the image could be read from DDR memory. This could prove beneficial for the
eight-direction Sobel, which requires a high level of complexity.

f) Speed/Time: Depending on the nature of the implementation, both the eight-direction and
the two-direction Sobel can be optimized for better timing.
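Tradeoffs (b) and (c) can be made concrete with a rough per-pixel operation count; the kernel sizes assumed here are 3×3 for the two-direction filter and the 5×5 kernels used in the Appendix D HLS code:

```cpp
// Multiply-accumulates per output pixel: one per kernel tap per direction.
int macs_per_pixel(int directions, int ksize) { return directions * ksize * ksize; }

// Picking the maximum of N directional responses needs N-1 comparisons.
int compares_per_pixel(int directions) { return directions - 1; }
```

For this project that is 2×9 = 18 MACs per pixel for the two-direction filter versus 8×25 = 200 MACs plus 7 comparisons for the eight-direction version — roughly an order of magnitude more arithmetic, consistent with the eight-direction design not fitting in the real-time pipeline on the Zybo Z-10.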

Chapter 10
Conclusion and Future Scope

Conclusion:

1) Real-time Sobel edge detection was implemented on a Zybo Z-10 in two directions, and the
results were in accordance with expectations.

2) Eight-direction Sobel edge detection was simulated, and the results were as expected.

Future Scope :

The real-time Sobel edge detection can be implemented on the FPGA in eight directions, and a
comparison can be made with the real-time detection in two directions. Different kernels can
be convolved with the image to obtain better edge detection. Machine learning and artificial
intelligence techniques can be used to help facilitate higher-quality edge detection
filtering.

Bibliography

[1] Zou Xiangxi , Zhang Yonghui , Zhang Shuaiyan , Zhang Jian,


“FPGA Implementation of Edge Detection for Sobel Operator in Eight Directions”,
IEEE Asia Pacific Conference on Circuits and Systems 2018

[2] University of Tartu Image Processing Textbook 1, 1. “Introduction to Image Processing”

[3] Rafael C. Gonzalez, Richard E. Woods, “Digital Image Processing”, 3rd edition, 2008

[4] Ramesh Jain, Rangachar Kasturi, Brian G. Schunck, “MACHINE VISION”, 1995

[5] “Xilinx Products” https://www.xilinx.com/products/silicon-devices/fpga/what-is-an-


fpga.html

[6] “Digilent Zybo Z-10 Product Information” https://www.xilinx.com/products/boards-and-


kits/1-pukimv.html

[7] “Source Image of Lena Gray” :


https://www.cosy.sbg.ac.at/~pmeerw/Watermarking/lena.html

[8] NASA/JPL-Caltech “Curiosity Rover : Source Image” , 2011


https://www.nasa.gov/mission_pages/msl/multimedia/pia14760.html

[9] YouTube, “4K Video (Ultra HD) Unbelievable Beauty”, 2019


https://www.youtube.com/watch?v=K1QICrgxTjA&t=593s

[10] Zynq-7000 Processing System IP, PG082, May 10, 2017,


https://www.xilinx.com/products/intellectual-property/processing_system7.html

[11] Video Timing Controller IP, PG016, February 12,2019


https://www.xilinx.com/products/intellectual-property/ef-di-vid-timing.html

[12] Elod Gyorgy, “DVI-to-RGB (Sink) 1.7 IP Core User Guide”, revised February 2, 2017.

[13] VideoIn to AXI4-Stream IP, PG043, October 4, 2017


https://www.xilinx.com/products/intellectual-property/video_in_to_axi4_stream.html

[14] Vivado Design Suite User Guide: High-Level Synthesis, UG902 (v2020.1), May 4, 2021
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2020_1/ug902-
vivado-high-level-synthesis.pdf

[15] Adam Taylor , “FPGA-Based Edge Detection Using HLS” , 2018


https://www.hackster.io/adam-taylor/fpga-based-edge-detection-using-hls-192ad2

Appendix A : MATLAB Script

%Author : Narayanan.S
%Date : 04/02/2021
ima1 = imread('sobel.jpeg');
% The color image under consideration

ima1 = double(rgb2gray(ima1));
%Conversion from RGB to Grayscale

Gx = double([-1 0 1;-2 0 2;-1 0 1]);


Gx=rot90(Gx,2);
%Gradient in X domain

Gy = double([-1 -2 -1; 0 0 0; 1 2 1]);


Gy=rot90(Gy,2);
%Gradient in Y domain

I= ima1;

[r,c]=size(I);

Fx = zeros(r,c);
Fy = zeros(r,c);

I = padarray(I,[1 1]);

% After padding, I is (r+2)-by-(c+2); original pixel (i,j) sits at I(i+1,j+1)
for i=2:r+1
for j=2:c+1
Fx(i-1,j-1)=sum(sum(Gx.*I(i-1:i+1,j-1:j+1)));
Fy(i-1,j-1)=sum(sum(Gy.*I(i-1:i+1,j-1:j+1)));
end
end

ima1=uint8(ima1);
FMag=sqrt(Fx.^2+Fy.^2);

figure(1)
imshow(ima1);
title('Original Image');

figure(2)
imshow((abs(Fx))./max(max(Fx)));
title('Gradient X ');

figure(3)
imshow(abs(Fy)./max(max(Fy)));
title('Gradient Y');

figure(4)
imshow(FMag./max(max(FMag)));
title('Gradient Mag');

Appendix B: RTL Code (Second Iteration)

`timescale 1ns / 1ps


//////////////////////////////////////////////////////////////////////////////////
// Company:
// Engineer:
//
// Create Date: 06/13/2021 07:05:01 PM
// Design Name:
// Module Name: lbuff
// Project Name:
// Target Devices:
// Tool Versions:
// Description:
//
// Dependencies:
//
// Revision:
// Revision 0.01 - File Created
// Additional Comments:
//
//////////////////////////////////////////////////////////////////////////////////
//512*512 image - grayscale - assumption.

//line buffers

module lbuff(
input clock,
input reset,

//1 pixel of data coming into the line buffer


input [7:0] dta,

//Valid signal - to check the data coming into the line buffer
input dta_valid,

//8 bit 3*3 matrix - 3 pixels - Each pixel - 1 byte.


output [23:0]dout,

input read
);

reg [7:0] line [511:0] ; //line buffer


reg [8:0]wp;// Write pointer
reg [8:0]rd;//Read pointer

// writing and reading are done with respect to the clock signal
always @(posedge clock)

begin

if(dta_valid)
line[wp] <= dta;

end

always@(posedge clock)

begin
if(reset)
//Active High Reset
wp <= 'd0;
else if(dta_valid)
wp <= wp + 'd1;
end

assign dout = {line[rd],line[rd+1],line[rd+2]};

always@(posedge clock)
begin
if(reset)
//Active High Reset
rd <= 'd0;
else if(read)
rd <= rd + 'd1;
end
endmodule

`timescale 1ns / 1ps

module conv(
input clock,
//Image consists of pixels which are 1 byte or 8 bits in size.
//Can't define a two-dimensional port - it has to be flattened to a single dimension
input [71:0] pixdat,

//Valid signal similar to AXI Stream
input pixdat_valid,

//multiply all the neighbourhood pixels and take the average - blur
output reg [7:0] ocdat,

//Valid signal
output reg ocdat_valid
);

integer i;

//kernel - 9 elements of 8 bits (72 bits total) - can be modelled as a 2-dimensional array

//Intended to Blur the image , can also be used to do Gaussian Blur
//It is all going to be 1 so it can be 1 bit , but can be modelled as 8 bit for Gaussian Blur and
other operations
reg [7:0] kernel [8:0];

//16 bit wide - 8 bit multiplication between kernel and data , storing the kernel result
reg [15:0] multData[8:0];

//16 bit wide - sufficient for the blur kernel of ones (sum <= 9*255 = 2295)
reg [15:0] sdat;
reg [15:0] sdata;
reg mulval;
reg sval;

//Initialize the values of the kernel


//Blur all values of the matrix is 1

initial

begin
//Unroll the for loop and fundamentally execute each line one by one , equivalent to writing
//kernel[0]=1,kernel[1]=1,.....kernel[8]=1
for(i=0;i<9;i=i+1)
begin
kernel[i] = 1;
end
end

//Splitting into several always blocks create a pipeline to improve timing


//Multiplication Operation
always @(posedge clock)
begin
for(i=0;i<9;i=i+1)
begin

//2-D convolution between kernel and pixel data

multData[i] <= kernel[i]*pixdat[i*8+:8];


end
mulval <= pixdat_valid;
end

//Sum Operation
always @(*)
//always block indicates a purely combinational block
begin
sdat = 0;
for(i=0;i<9;i=i+1)

begin
sdat = sdat + multData[i];
end
end

always @(posedge clock)


begin
sdata <= sdat;
sval <= mulval;
end

always @(posedge clock)


begin
//integer division by 9 , integer part is considered
ocdat <= sdata/9;
ocdat_valid <= sval;
end

endmodule

`timescale 1ns / 1ps


module imageControl(
input clock,
input reset,

//input interface
input [7:0] pixdat,
input pixdat_valid,

//output interface
output reg [71:0] opd,
output opd_valid,
output reg o_intr
);

//Counts the number of pixels


reg [8:0] counter;

//0-3 line buffers (4 total)


reg [1:0] wr_buff;
reg [3:0] lbuffd;
reg [3:0] lbuffrd;
reg [1:0] currentRdlbuff;

//24 bits wide - output data from each line buffer


wire [23:0] lb0;
wire [23:0] lb1;
wire [23:0] lb2;
wire [23:0] lb3;

reg [8:0] RDC;
//This is the signal that controls which line buffer are we reading from
reg readbuff;

//512*4 = 2048 pixels in the worst case , which can be represented by 12 bits


reg [11:0] count;

//State Machine Variable for transitioning between IDLE and READ State
reg rdState;

localparam IDLE = 'b0,


RD_BUFFER = 'b1;

assign opd_valid = readbuff;

always @(posedge clock)


begin
if(reset)
//12 bits
count <= 0;
else
begin

//pixdat_valid --- data from the external world , readbuff - reading from the line buffer
signal

//data coming from the external world , but not reading any pixel data from the line buffer
if(pixdat_valid & !readbuff)
count <= count + 1;

//data not coming from the external world , but reading a pixel data from the line buffer
else if(!pixdat_valid & readbuff)
count <= count - 1;

//when data comes in and a pixel is read out simultaneously , or neither happens ,
//count remains the same value.
end
end

//State Machine which will become active after all line buffers are filled up by writing and
reading can begin.
//512*3 = 1536 is the number of minimum pixels that are required to be filled in before one
can start reading from the line buffers
always @(posedge clock)
begin
if(reset)
begin
rdState <= IDLE;
readbuff <= 1'b0;

o_intr <= 1'b0;
end
else
begin
case(rdState)
IDLE:begin
o_intr <= 1'b0;
//512 * 3 = 1536 , line buffer data is full and reading can begin
if(count >= 1536)
begin
//Reading begins by making the readbuff active high
readbuff <= 1'b1;
rdState <= RD_BUFFER;
end
end
RD_BUFFER:begin
//Reading moves to idle state when 512 pixels are read
if(RDC == 511)
begin
rdState <= IDLE;
readbuff <= 1'b0;
o_intr <= 1'b1;
end
end
endcase
end
end

//Pixel Counter - 512 pixels are added to line buffer


always @(posedge clock)
begin
if(reset)
counter <= 0;
else
begin
if(pixdat_valid)
counter <= counter + 1;
end
end

//When 511th pixel is read into the line buffer,the next pixel will enter the next line buffer
always @(posedge clock)
begin
if(reset)
//
wr_buff <= 0;
else
begin
if(counter == 511 & pixdat_valid)
wr_buff <= wr_buff+1;

end
end

//pointing to the current line buffer and all other line buffers are closed away for any storage
always @(*)
begin
lbuffd = 4'h0;
lbuffd[wr_buff] = pixdat_valid;
end

//similiar to write , read counter


always @(posedge clock)
begin
if(reset)
RDC <= 0;
else
begin
if(readbuff)
RDC <= RDC + 1;
end
end

//Similiar to write, reading out the line buffer


always @(posedge clock)
begin
if(reset)
begin
currentRdlbuff <= 0;
end
else
begin
if(RDC == 511 & readbuff)
currentRdlbuff <= currentRdlbuff + 1;
end
end

//Reading from the line buffer


always @(*)
begin
case(currentRdlbuff)
0:begin
//Concatenating 3 - since it is 24 bits
opd = {lb2,lb1,lb0};
end
1:begin
opd = {lb3,lb2,lb1};
end
2:begin
opd = {lb0,lb3,lb2};

end
3:begin
opd = {lb1,lb0,lb3};
end
endcase
end

//implements a multiplexer with currentRdlbuff as the control signal


//written as a combinational circuit since it has to switch between the line buffers
immediately and not suffer from clock latency
always @(*)
begin
case(currentRdlbuff)
0:begin
//doesnt read form the fourth line buffer , corresponds to the previous case statement
block
lbuffrd[0] = readbuff;
lbuffrd[1] = readbuff;
lbuffrd[2] = readbuff;
lbuffrd[3] = 1'b0;
end
1:begin
lbuffrd[0] = 1'b0;
lbuffrd[1] = readbuff;
lbuffrd[2] = readbuff;
lbuffrd[3] = readbuff;
end
2:begin
lbuffrd[0] = readbuff;
lbuffrd[1] = 1'b0;
lbuffrd[2] = readbuff;
lbuffrd[3] = readbuff;
end
3:begin
lbuffrd[0] = readbuff;
lbuffrd[1] = readbuff;
lbuffrd[2] = 1'b0;
lbuffrd[3] = readbuff;
end
endcase
end
//4 line buffers
//data valid - pointing to lbuffd indicating that the line buffer will be enabled and used as per
required and when the previous line buffer has stored pixel data
lbuff lB0(
.clock(clock),
.reset(reset),
.dta(pixdat),
.dta_valid(lbuffd[0]),

.dout(lb0),
.read(lbuffrd[0])
);

lbuff lB1(
.clock(clock),
.reset(reset),
.dta(pixdat),
.dta_valid(lbuffd[1]),
.dout(lb1),
.read(lbuffrd[1])
);

lbuff lB2(
.clock(clock),
.reset(reset),
.dta(pixdat),
.dta_valid(lbuffd[2]),
.dout(lb2),
.read(lbuffrd[2])
);

lbuff lB3(
.clock(clock),
.reset(reset),
.dta(pixdat),
.dta_valid(lbuffd[3]),
.dout(lb3),
.read(lbuffrd[3])
);

endmodule

`timescale 1ns / 1ps

module imageProcessTop(
input axclock,

//Active Low Reset


input axi_reset_n,
//slave interface
input dta_valid,
//pixel data
input [7:0] dta,
output dout_ready,
//master interface - data back to the DMA Controller
output dout_valid,
//data after convolution
output [7:0] dout,
input dta_ready,

//interrupt - indicate to the FPGA processor that a line buffer is free and the new pixels can be
sent
output o_intr

);

wire [71:0] pixel_data;


wire pixel_data_valid;
wire axis_prog_full;
wire [7:0] convolved_data;
wire convolved_data_valid;

assign dout_ready = !axis_prog_full;


imageControl IC(
.clock(axclock),
//since reset was assumed as an active high reset
.reset(!axi_reset_n),
.pixdat(dta),
.pixdat_valid(dta_valid),
.opd(pixel_data),
.opd_valid(pixel_data_valid),
.o_intr(o_intr)
//no ready signal because memory is always ready
);

conv conv(
.clock(axclock),
.pixdat(pixel_data),
.pixdat_valid(pixel_data_valid),
.ocdat(convolved_data),
.ocdat_valid(convolved_data_valid)
);
//Convolution Outputs are buffered through the FIFO
outputBuffer OB (
.wr_rst_busy(), // output wire wr_rst_busy
.rd_rst_busy(), // output wire rd_rst_busy
.s_aclk(axclock), // input wire s_aclk
.s_aresetn(axi_reset_n), // input wire s_aresetn
.s_axis_tvalid(convolved_data_valid), // input wire s_axis_tvalid
.s_axis_tready(), // output wire s_axis_tready
.s_axis_tdata(convolved_data), // input wire [7 : 0] s_axis_tdata
.m_axis_tvalid(dout_valid), // output wire m_axis_tvalid
.m_axis_tready(dta_ready), // input wire m_axis_tready
.m_axis_tdata(dout), // output wire [7 : 0] m_axis_tdata
.axis_prog_full(axis_prog_full) // output wire axis_prog_full
);

endmodule

Appendix C: HLS Implementation : Two direction Sobel

Edge_detect.cpp :

#include "edge_detect.h"
#include "math.h"
void edge_detect(stream_t& stream_in, stream_t& stream_out)
{
int const rows = MAX_HEIGHT;
int const cols = MAX_WIDTH;
rgb_img_t img0(rows, cols);
rgb_img_t img1(rows, cols);
rgb_img_t img1a(rows,cols);
rgb_img_t img1b(rows,cols);
rgb_img_t img2(rows, cols);
rgb_img_t img3(rows, cols);
rgb_img_t img4(rows,cols);
rgb_img_t img5(rows,cols);
rgb_img_t img6(rows,cols);
rgb_img_t img7(rows,cols);
rgb_img_t img8(rows,cols);
rgb_img_t img9(rows,cols);
hls::AXIvideo2Mat(stream_in, img0);
//converted to matrix form
//converted to gray-scale
hls::CvtColor<HLS_RGB2GRAY>(img0, img1);
hls::Duplicate(img1,img1a,img1b);
//Gx
hls::Sobel<1,0,3>(img1a,img2);

//Gy
hls::Sobel<0,1,3>(img1b,img3);

//Combine the two gradient images with equal weights
hls::AddWeighted(img3,(double)0.5,img2,(double)0.5,(double)0.0,img4);

hls::CvtColor<HLS_GRAY2RGB>(img4, img5);
hls::Mat2AXIvideo(img5, stream_out);
}

Edge Detect.h :

#include "hls_video.h"

typedef ap_axiu<24,1,1,1> interface_t;


typedef hls::stream<interface_t> stream_t;

void edge_detect(stream_t& stream_in, stream_t& stream_out);

#define MAX_HEIGHT 720


#define MAX_WIDTH 1280

typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3> rgb_img_t;

#define INPUT_IMAGE "rover.bmp"


#define OUTPUT_IMAGE "rover_output.bmp"

Edge_Detect_test :

#include "edge_detect.h"
#include "hls_opencv.h"

int main()
{
int const rows = MAX_HEIGHT;
int const cols = MAX_WIDTH;

cv::Mat src = cv::imread(INPUT_IMAGE);


cv::Mat dst = src;

stream_t stream_in, stream_out;


cvMat2AXIvideo(src, stream_in);
edge_detect(stream_in, stream_out);
AXIvideo2cvMat(stream_out, dst);

cv::imwrite(OUTPUT_IMAGE, dst);

return 0;
}

Appendix D: HLS Implementation : Eight direction Sobel

Code:

#include "edge_detect.h"
#include "math.h"
void edge_detect(stream_t& stream_in, stream_t& stream_out)
{
int const rows = MAX_HEIGHT;
int const cols = MAX_WIDTH;
rgb_img_t img0(rows, cols);
rgb_img_t img1(rows, cols);
rgb_img_t img1a(rows,cols);
rgb_img_t img1b(rows,cols);
rgb_img_t img2(rows, cols);
rgb_img_t img3(rows, cols);
rgb_img_t img4(rows,cols);
rgb_img_t img5(rows,cols);
rgb_img_t img6(rows,cols);
rgb_img_t img7(rows,cols);
rgb_img_t img8(rows,cols);
rgb_img_t img9(rows,cols);
hls::AXIvideo2Mat(stream_in, img0);
//converted to matrix form
//converted to gray-scale
hls::CvtColor<HLS_RGB2GRAY>(img0, img1);
hls::Duplicate(img1,img1a,img1b);
//Gx
//hls::Sobel<1,0,3>(img1a,img2);

//Gy
//hls::Sobel<0,1,3>(img1b,img3);
//hls::AddWeighted(img3,(double)0.5,img2,(double)0.5,(double)0.0,img4);

//const char coefficients[5][5] = { {0,0,0,0,0},


// { -1, -2, -4,-2,-1},
// {0,0,0,0,0},
// { 1, 2, 4,2,1},
// {0,0,0,0,0}};

// hls::Window<5,5,char> kernel;

// for (int i=0;i<5;i++){


// for (int j=0;j<5;j++){
// kernel.val[i][j]=coefficients[i][j];
// }
// }
// hls::Point_<int> anchor = hls::Point_<int>(-1,-1);
// hls::Filter2D(img1a,img2,kernel,anchor);

//G2

const char coefficients2[5][5] = { {0,1,0,0,0},


{ -1, 0, 4,2,0},
{0,-4,0,4,0},
{0,-2, -4,0,1},
{0,0,0,-1,0}};

hls::Window<5,5,char> kernel1;

for (int i=0;i<5;i++){


for (int j=0;j<5;j++){
kernel1.val[i][j]=coefficients2[i][j];
}
}
hls::Point_<int> anchor = hls::Point_<int>(-1,-1);
hls::Filter2D(img1b,img3,kernel1,anchor);

//G3

// const char coefficients3[5][5] = { {0,0,0,0,0},


// { 0, -2, -4,-2,0},
// {0,-4,0,4,0},
// {-1, 0, 4,2,0},
// {0,1,0,0,0}};

//hls::Window<5,5,char> kernel2;

//for (int i=0;i<5;i++){


//for (int j=0;j<5;j++){
// kernel2.val[i][j]=coefficients3[i][j];
//}
///}
//hls::Point_<int> anchor = hls::Point_<int>(-1,-1);
//hls::Filter2D(img6,img7,kernel1,anchor);

hls::CvtColor<HLS_GRAY2RGB>(img3, img4);
hls::Mat2AXIvideo(img4, stream_out);
}

Appendix E : Block Diagram

