
Introduction to Data Science

(Khoa học dữ liệu)


Image representation

Nguyen Thi Oanh


Hanoi University of Science and Technology
oanhnt@soict.hust.edu.vn

SOICT, HUST, 2024


1
About me
• Dr. Nguyen Thi Oanh
• Department of Computer science, SoICT, HUST
• Email:
‒ oanhnt@soict.hust.edu.vn
‒ oanh.nguyenthi@hust.edu.vn
• Office:
‒ 706 - B1 (working office) / 1001-B1
• Teaching:
‒ Computer vision, image processing
‒ Databases, database labs
‒ Intro to DS, Intro to ICT
• Research:
‒ Semantic segmentation (on medical images)
‒ Domain adaptation for semantic segmentation
‒ Action recognition (with multi-view, multi-modality)
‒ Image representation and retrieval

2
Plan

• Introduction
• Digital images and basic operations
‒ histogram, brightness, contrast, color, texture, …

• Convolution and Filters


‒ noise removal,
‒ edge detectors

• Feature extraction: local and global descriptor

3
Computer Vision ?

• Image Processing
‒ Works with the image as a matrix
‒ Input: image ➔ output: image
‒ Helps humans examine / modify images
• Computer Vision
‒ Make computers understand images and videos
‒ Images and videos are a source of information about reality
What kind of scene?
Where are the cars?
How far is the building?

4
Computer Vision and Applications

• Images, video are everywhere


• Video, images:
‒ Rich information

➔ Hot topic, especially now that we talk every day about AI with smart cities, smart homes, smart …

5
How vision is used now

• Understand the image: Facebook's tag suggestions
• Smile detection: smart cameras
‒ The camera can automatically trip the shutter at the right instant to catch the perfect expression

Source: Derek Hoiem, Computer vision, CS 543 / ECE 549, University of Illinois
6
How vision is used now

• Login without a password, but with biometrics (fingerprint, iris, face, …)
‒ Fingerprint scanners on many new laptops and other devices
‒ Face recognition systems now beginning to appear more widely
http://www.sensiblevision.com/

Source: Derek Hoiem, Computer vision, CS 543 / ECE 549, University of Illinois

7
How vision is used now

• Object recognition (on mobile phones)

Point & Find, Nokia


Google Goggles

Source: Derek Hoiem, Computer vision, CS 543 / ECE 549, University of Illinois
8
How vision is used now
• Content-based image retrieval

9
How vision is used now

• Earth View, Google Earth (3D modeling from lots of 2D images): automatic building generation + hand-modeled buildings (Golden Gate Bridge or Sydney Opera House)
• Microsoft's Virtual Earth


10
How vision is used now

• Panorama stitching:

Source:http://miseaupoint.org/blog/en/wp-content/uploads/2014/01/photo_stitching.jpg

11
How vision is used now

• Smart cars → autonomous vehicles

Mobileye: vision systems currently in many cars


“In mid 2010 Mobileye will launch a world's first application of full emergency braking for collision mitigation for pedestrians, where vision is the key technology for detecting pedestrians.”
Source: Derek Hoiem, Computer vision, CS 543 / ECE 549, University of Illinois

12
How vision is used now

• Games / robots:
http://www.robocup.org/

Vision-based interaction games (Microsoft's Kinect)

Robot vacuum cleaners


13
What will we talk about?

• 2 types of information we would like to extract from images:
‒ 3D information: 3D building? Where is it?
‒ Semantic information: text in the picture, what does it mean? Are there any people in the picture?
• How to represent the image content?
➔ Pre-processing, feature extraction, learning (model / deep learning)

17
Digital images ?

• What can we see on the picture?


‒ A car?
• What does the machine see?
‒ Image is a matrix of pixels
‒ Image N x M : N x M matrix
‒ 1 pixel (gray levels):
• An intensity value: 0-255
• Black: 0
• White: 255
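
A minimal Python/OpenCV sketch (the file name is hypothetical) showing that what the machine sees is just a matrix of values:

import cv2

img = cv2.imread('car.png', cv2.IMREAD_GRAYSCALE)
print(img.shape)  # (N, M): the image is an N x M matrix
print(img[0, 0])  # intensity of the top-left pixel, in [0, 255]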

18
Digital images ?

• For an image I:
‒ Index (0,0): top-left corner
‒ I(x,y): intensity of the pixel at position (x,y)

19
Digital images ?

• Principal type of images


‒ Binary image:
- I(x,y) ∈ {0, 1}
- 1 pixel: 1 bit
‒ Gray image:
- I(x,y) ∈ [0..255]
- 1 pixel: 8 bits (1 byte)
‒ Color image:
- I_R(x,y), I_G(x,y), I_B(x,y) ∈ [0..255]
- 1 pixel: 24 bits (3 bytes)
‒ Other: multi-spectral, depth image, …

20
Color image in RGB space

Other color spaces exist: Lab, HSV, …

21
Image histogram

 Histogram is a graphical representation of the distribution of colours among the pixels of a digital image.

22
Image histogram

 Histogram
 Should be normalized by dividing each element by the total number of pixels in the image:

p(i) ∈ [0, 1]

∑_{i=0}^{255} p(i) = 1
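
A minimal Python/OpenCV sketch of this normalization (the file name is hypothetical):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()  # 256-bin histogram
p = hist / img.size              # divide by the number of pixels: p(i) in [0, 1]
assert np.isclose(p.sum(), 1.0)  # the normalized bins sum to 1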

23
Image histogram

• Histogram
‒ Only statistical information
‒ No indication of pixel location (no spatial information) ➔ different images can have the same histogram

24
Image Brightness
• Brightness of a grayscale image is the average intensity
of all pixels in an image
‒ refers to the overall lightness or darkness of the image

25
Contrast
• The contrast of a grayscale image indicates how easily objects in the image can be distinguished
• Many different definitions of contrast exist:
‒ standard deviation of the intensity values of the pixels in the image
‒ difference between the maximum and minimum intensity values
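
A minimal sketch of brightness and both contrast measures (file name hypothetical):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
brightness = img.mean()                           # average intensity (brightness)
contrast_std = img.std()                          # standard-deviation contrast
contrast_range = int(img.max()) - int(img.min())  # max - min contrast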

26
Brightness/Contrast vs histogram

Narrow histogram ➔ low contrast

Broad histogram ➔ high contrast

27
Contrast Enhancement
• Modify pixel intensities to obtain higher contrast
• There are several methods:
‒ Linear stretching of the intensity range:
• linear transform
• linear transform with saturation
• piecewise linear transform
‒ Non-linear transform (gamma correction)
‒ Histogram equalization

28
Linear stretching

s = s_min + (r − r_min) · (s_max − s_min) / (r_max − r_min)

i.e., s = a·r + (s_min − a·r_min), where a = (s_max − s_min) / (r_max − r_min)

If s_min = 0 and s_max = 255:

s = (r − r_min) · 255 / (r_max − r_min)
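
A minimal NumPy sketch of the full-range case (s_min = 0, s_max = 255; file name hypothetical):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
r_min, r_max = img.min(), img.max()
s = (img - r_min) * 255.0 / (r_max - r_min)  # stretch intensities to [0, 255]
s = s.astype(np.uint8)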

29
Linear stretching

Intensity range = [0, 255]

➔ Not effective when the intensities already span the full range

30
Histogram equalization
• Transform the image so that the histogram of the modified image approaches a uniform distribution

• No parameters. OpenCV: cv2.equalizeHist(img)
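
A minimal usage sketch (file name hypothetical):

import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
eq = cv2.equalizeHist(img)  # histogram of eq is approximately uniform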

31
Histogram equalization

32
Gamma correction
Non-linear transformation
• The general form of the power-law transformation is: s = c · r^γ
‒ γ > 1: compresses values in dark areas while expanding values in light areas
‒ γ < 1: expands values in dark areas while compressing values in light areas

s: new value
r: old value normalized to [0, 1] (r = old intensity / (L − 1))
c: scaling constant corresponding to the bit depth used (c = L − 1 = 255)

33
Gamma correction

For an 8-bit grayscale image:

s = 255 · (r / 255)^γ
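
A hedged sketch of this formula using a 256-entry lookup table (the gamma value and file name are arbitrary):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
gamma = 0.5  # gamma < 1 expands dark values (brightens the image)
lut = np.array([255 * (i / 255) ** gamma for i in range(256)], dtype=np.uint8)
out = cv2.LUT(img, lut)  # applies s = 255 * (r / 255) ** gamma to every pixel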

34
Color Image histogram

• Intensity histogram:
‒ convert the color image to grayscale
➔ compute the histogram of the grayscale image
• Individual color channel histograms:
‒ 3 histograms, one per channel (R, G, B)
• 3D histogram:
‒ a color is identified by 3 values; not usually used because the number of bins is very large

Source: https://web.cs.wpi.edu/~emmanuel

35
RGB (Red – Green - Blue)

• Used in storage and display


• R = G = B: gray level
• Any color
= r*R + g*G + b*B
‒ Strongly correlated channels
‒ Non-perceptual
‒ No separation between
intensity and color

36
Human Vision

• Two types of light-sensitive photoreceptors (on the retina):
‒ Cones: cone-shaped, less sensitive, operate in high light, color vision
‒ Rods: rod-shaped, highly sensitive, operate at night, gray-scale vision

37
Human Vision ➔ Camera
• Three kinds of cones
‒ each cone is able to detect a range of colors
‒ labeled by the color at which they are most sensitive

[Figure: relative absorbance (%) of the S, M, and L cones vs. wavelength (nm), with peaks near 440, 530, and 560 nm]

38
HSV (Hue – Saturation- Value)

• The Hue-Saturation-Value (HSV) color space is useful for segmentation and recognition
‒ non-linear conversion
‒ visual representation of colors

• We identify for a pixel:
‒ the pixel intensity (value)
‒ the pixel color (hue + saturation)
• RGB does not have this separation

39
HSV (Hue – Saturation- Value)

• Hue (H) is coded as an angle between 0 and 360°
• Saturation (S) is coded as a radius between 0 and 1
‒ S = 0: gray
‒ S = 1: pure color
• Value (V) = max(Red, Green, Blue)

40
HSV (Hue – Saturation- Value)

• If we know the color of the object we are looking for, we can model it using a hue interval
• Take care, because hue is an angle (periodic value)
‒ Hue < 60° means nothing on its own
• Is 350° smaller or bigger than 60°?
‒ Define an interval instead: 350° < Hue < 60° (for example)
• This interval is valid only if Saturation > threshold (otherwise the pixel is a gray level)
• This is independent of Value, which is more sensitive to lighting conditions
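
A hedged sketch of such a hue-interval test with cv2.inRange (OpenCV stores 8-bit hue as degrees/2, i.e., 0-179; the thresholds and file name are illustrative):

import cv2

img = cv2.imread('image.png')               # OpenCV loads BGR by default
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# The interval 350° < Hue < 60° wraps around 0°, so test two ranges and OR them.
# 350° -> 175 and 60° -> 30 on OpenCV's 0-179 hue scale; S > 50 rejects gray pixels.
m1 = cv2.inRange(hsv, (175, 50, 0), (179, 255, 255))
m2 = cv2.inRange(hsv, (0, 50, 0), (30, 255, 255))
mask = cv2.bitwise_or(m1, m2)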

41
Lab color space

• The Lab system (sometimes L*a*b*) is based on a study of human vision
‒ independent of any particular technology
‒ presenting colors as seen by the human eye
• Colors are defined using 3 values
‒ L is the luminance, going from 0% (black) to 100% (white)
‒ a* represents an axis going from green (negative value, −127) to red (positive value, +127)
‒ b* represents an axis going from blue (negative value, −127) to yellow (positive value, +127)

42
Lab color space

43
Color conversions

• Convert between color spaces

• OpenCV:
‒ https://docs.opencv.org/4.0.0/de/d25/imgproc_color_conversions
.html
‒ Function: cv::cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0)
• converts an image from one color space to another
• code: conversion code (COLOR_RGB2HSV, COLOR_BGR2HSV, COLOR_BGR2Lab, …)

44
Color space vs. illumination conditions

• Collected 10 images of the cube under varying illumination conditions

• Separately cropped every color to get 6 datasets for the 6 different colors
‒ changes in color are due to varying illumination conditions

• Compute the density plot: check the distribution of a particular color, say blue or yellow, in different color spaces. The density plot (2D histogram) gives an idea of the variation in values for a given color.
Source: Vikas Gupta, Learn OpenCV
45
Similar illumination: very compact

Fig.: Density Plot showing the variation of values in color


channels for 2 similar bright images of blue color
Source: Vikas Gupta, Learn OpenCV
46
Similar illumination: very compact

Fig.: Density Plot showing the variation of values in color channels for
2 similar bright images of yellow color
Source: Vikas Gupta, Learn OpenCV
47
Different illumination

Fig.: Density Plot showing the variation of values in color channels


under varying illumination for the blue color Source: Vikas Gupta, Learn OpenCV
48
Different illumination

• Different illumination:

Fig.: Density Plot showing the variation of values in color channels


under varying illumination for the yellow color
Source: Vikas Gupta, Learn OpenCV
49
Color space vs illumination conditions

• Different illumination:
‒ RGB space: the variation in channel values is very high
‒ HSV: compact in H; only H contains information about the absolute color ➔ a good choice
‒ YCrCb, Lab: compact in CrCb and in ab
• The highest level of compactness is in Lab
‒ Convert to other color spaces (OpenCV):
• cvtColor(bgr, ycb, COLOR_BGR2YCrCb);
• cvtColor(bgr, hsv, COLOR_BGR2HSV);
• cvtColor(bgr, lab, COLOR_BGR2Lab);

50
Spatial convolution

• Image filtering: for each pixel, compute a function of the local neighborhood and output a new value
‒ the same function is applied at each position
‒ output and input image are typically the same size
• Convolution: linear filtering; the function is a weighted sum/difference of pixel values
I' = I * K
• Really important!
‒ Enhance images: denoise, smooth, increase contrast, etc.
‒ Extract information from images: texture, edges, distinctive points, etc.
‒ Detect patterns: template matching

51
Spatial convolution

Mask (kernel)
Original image Filtered image

52
Spatial convolution

• The new value of a pixel (i,j) is a weighted sum of its neighbors

K: convolution kernel (mask, filter, …)

- Flip the kernel both horizontally and vertically (the filter rotated by 180 degrees).
- Put the center of the kernel at each pixel (i,j) of the image.
- Multiply each element of the kernel by its corresponding element of the image matrix.
- Sum up all the products.
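
A minimal sketch of these steps (note that cv2.filter2D computes correlation, so the kernel is flipped first to obtain a true convolution; the kernel and file name are illustrative):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

K = np.array([[ 0, -1,  0],
              [-1,  5, -1],
              [ 0, -1,  0]], dtype=np.float32)  # a 3x3 sharpening kernel

K_flipped = cv2.flip(K, -1)             # flip horizontally and vertically (180°)
out = cv2.filter2D(img, -1, K_flipped)  # -1 keeps the input depth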

53
Spatial convolution

• The new value of a pixel (i,j) is a weighted sum of its neighbors

Source: http://machinelearninguru.com

54
Spatial convolution

I' = I * K

55
Spatial convolution

I' = I * K

56
Spatial convolution

I' = I * K

57
Spatial convolution

• Border problem?
‒ zero padding of the input matrix
‒ reflect across the border:
• f(−x, y) = f(x, y)
• f(−x, −y) = f(x, y)
‒ …

58
Some kernels

• 2D spatial convolution
‒ is mostly used in image processing for feature extraction
‒ and is also the core block of Convolutional Neural Networks (CNNs)
• Each kernel has its own effect and is useful for a specific task, such as
‒ blurring (noise removal): mean filter, Gaussian filter, …
‒ sharpening
‒ edge detection: Sobel, Prewitt, Laplace
‒ …

60
Some kernels

0 0 0
* 0 1 0
0 0 0

Original image ➔ Filtered image (no change)

0 0 0
* 1 0 0
0 0 0

Original image ➔ Filtered image (shifted left by 1 pixel)
Source: David Lowe
61
Some kernels

• Box filter (mean filter): low-pass filter
‒ replace each pixel with an average of its neighborhood
‒ achieves a smoothing effect

Original image | Filtered image with box size 5x5 | Filtered image with box size 11x11

62
Some kernels

 Gaussian filter: low-pass filter

0.003 0.013 0.022 0.013 0.003
0.013 0.059 0.097 0.059 0.013
0.022 0.097 0.159 0.097 0.022
0.013 0.059 0.097 0.059 0.013
0.003 0.013 0.022 0.013 0.003
Gaussian filter with size 5x5, sigma = 1
[Figure: Gaussian function in 3D]

Rule for the Gaussian filter: set the filter half-width to about 3σ
e.g., sigma = 0.5 ➔ mask size 3x3
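
A minimal sketch of both low-pass filters (file name hypothetical):

import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
box = cv2.blur(img, (5, 5))               # mean (box) filter, 5x5
gauss = cv2.GaussianBlur(img, (5, 5), 1)  # Gaussian filter, 5x5, sigma = 1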
63
Some kernels

• Gaussian filter

Original image | Filtered image with kernel size 5x5 | Filtered image with kernel size 11x11

65
Some kernels

• Sobel

Vertical Edge
(absolute value)
66
Some kernels

• Sobel

Horizontal Edge
(absolute value)
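
A minimal sketch producing both Sobel responses (file name hypothetical):

import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(img, cv2.CV_16S, 1, 0, ksize=3)  # d/dx: responds to vertical edges
gy = cv2.Sobel(img, cv2.CV_16S, 0, 1, ksize=3)  # d/dy: responds to horizontal edges
abs_gx = cv2.convertScaleAbs(gx)                # absolute value, back to 8 bits
abs_gy = cv2.convertScaleAbs(gy)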
67
Edge detection

• Edges correspond to:
‒ maxima of the first derivative
‒ zero-crossings of the second derivative

68
Edge detection with first derivatives

• Compute the convolution between the image and first-derivative kernels
‒ kernels: Sobel, Prewitt, Roberts
‒ implemented in the OpenCV library

• Find local extrema
‒ edges are composed of pixels having maximum/minimum values of the first derivatives of the image
‒ a threshold can be used to detect edges quickly
‒ several steps can be combined to obtain optimal edges: the Canny detector (implemented in OpenCV)

69
Edge detection with first derivatives

• Filters used to compute the first derivatives of the image:
‒ Roberts
‒ Prewitt
• less sensitive to noise
• smoothing with a mean filter, then computing the 1st derivative
‒ Sobel
• less sensitive to noise
• smoothing with a Gaussian, then computing the 1st derivative
70
Edge detection with first derivatives

71
Image derivatives

• 1st derivatives:
‒ I * Kx: first derivative of the image with respect to x
‒ I * Ky: first derivative of the image with respect to y
➔ image gradient

72
Image gradient

• An image gradient is a directional change in the intensity or color in an image
• For each pixel in the image: Gx, Gy
➔ form a gradient vector (Gx, Gy):
- important information to describe the image content
- gradient magnitude = sqrt(Gx² + Gy²) ≈ |Gx| + |Gy|
- gradient direction = arctan(Gy / Gx)

Blue lines represent the gradient direction: from brightest to darkest
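
A minimal sketch computing both quantities (np.arctan2 handles Gx = 0; file name hypothetical):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)  # sqrt(Gx^2 + Gy^2)
direction = np.arctan2(gy, gx)          # gradient direction in radians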

73
Edge detection with second derivatives

• Compute the second derivative
‒ apply the Laplacian filter to the image I:

0  1  0        1  1  1
1 −4  1   or   1 −8  1
0  1  0        1  1  1

• Find zero-crossings

74
Laplacian filter - Second derivative

• Discrete approximations of the Laplacian function
‒ one convolution matrix:

0  1  0        1  1  1
1 −4  1   or   1 −8  1
0  1  0        1  1  1

75
OpenCV

• Blurring: GaussianBlur, boxFilter,...


• First derivatives: cv.Sobel(), Scharr,...
• Second derivative: cv.Laplacian(),...
• Canny edge detector: optimal detector
‒ https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html
‒ cv.Canny()
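
A minimal sketch combining these calls (the thresholds and file name are illustrative):

import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
smooth = cv2.GaussianBlur(img, (5, 5), 1)         # reduce noise first
lap = cv2.Laplacian(smooth, cv2.CV_16S, ksize=3)  # second derivative
edges = cv2.Canny(smooth, 100, 200)               # low/high hysteresis thresholds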

76
Image representation

77
Feature extraction

• Two types of features are extracted from the image:
‒ local and global features (descriptors)
• Global features
‒ describe the image as a whole to generalize the entire object
‒ include contour representations, shape descriptors, and texture features
‒ Examples: invariant moments (Hu, Zernike), Histogram of Oriented Gradients (HOG), PHOG, and Co-HOG, …
• Local features
‒ describe the image patches (key points in the image) of an object
‒ represent the texture/color in an image patch
‒ Examples: SIFT, SURF, LBP, BRISK, MSER and FREAK, …
78
Feature extraction

• Global features

Examples: 256-bin intensity histogram, 16-bin intensity histogram, Pyramid Histogram of Oriented Gradients (PHOG)
Source: http://www.robots.ox.ac.uk/~vgg/research/caltech/phog.html

79
Feature extraction

• Local features: how to determine image patches / local regions?
‒ dividing into patches with a regular grid (without knowledge about the image content)
‒ keypoint detection (based on the content of the image)
‒ image segmentation (based on the content of the image)

81
Feature extraction

• Image segmentation
‒ Thresholding
‒ Split and merge
‒ Region growing
‒ Watershed
‒…

82
Feature extraction

• Keypoint detectors:
‒ DoG /SIFT detector
‒ Harris corner detector
‒ Moravec
‒ …
• Local features: computed in local regions associated with each keypoint:
- SIFT
- SURF (Speeded Up Robust Features)
- PCA-SIFT
- LBP, BRISK, MSER and FREAK, …

83
Keypoint detector

• Harris corner detector
‒ https://docs.opencv.org/3.4/dc/d0d/tutorial_py_features_harris.html
‒ invariant under translation and rotation, but not scaling
• Harris-Laplace detector:
‒ https://docs.opencv.org/4.0.1/d1/dad/classcv_1_1xfeatures2d_1_1HarrisLaplaceFeatureDetector.html
‒ invariant under translation, rotation, and scaling
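
A hedged sketch of the Harris detector (the parameter values and file name are illustrative):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
gray = np.float32(img)
# blockSize = 2, Sobel aperture = 3, Harris free parameter k = 0.04
response = cv2.cornerHarris(gray, 2, 3, 0.04)
corners = response > 0.01 * response.max()  # boolean corner mask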

84
Keypoint detector: DoG/SIFT detector

• Find local extrema in the scale-space DoG:
‒ DoG ≈ Laplacian of Gaussian
‒ extrema in the second derivatives

A SIFT keypoint: {x, y, scale, dominant orientation}

Source: Distinctive Image Features from Scale-Invariant Keypoints – IJCV 2004

85
Feature extraction : Good feature?

• Compact
• Invariant to
‒ geometric transformations
‒ camera viewpoint
‒ lighting conditions

• Best-performing local feature: SIFT (David Lowe)

86
Feature extraction : SIFT feature

1. Blur the image using the scale of the keypoint (scale invariance)
2. Compute gradients with respect to the keypoint orientation (rotation invariance)
3. Compute orientation histograms in 8 directions over 4x4 sample regions ➔ 128-D descriptor

Source: Distinctive Image Features from Scale-Invariant Keypoints – IJCV 2004


http://campar.in.tum.de/twiki/pub/Chair/TeachingWs13TDCV/feature_descriptors.pdf

87
Other detectors and descriptors

Popular features: SURF, HOG, SIFT
http://campar.in.tum.de/twiki/pub/Chair/TeachingWs13TDCV/feature_descriptors.pdf

Summary of some local features:
http://www.cse.iitm.ac.in/~vplab/courses/CV_DIP/PDF/Feature_Detectors_and_Descriptors.pdf

88
Feature extraction : OpenCV

• SIFT & SURF:
‒ patented algorithms
‒ they are free to use for academic / research purposes
‒ you should technically get permission to use them in commercial applications
• From OpenCV 3.0, patented algorithms were
‒ removed from the standard package,
‒ put into the non-free module (opencv-contrib, not installed by default). From version 4.4, SIFT is free (in the standard package)
• Free alternatives to SIFT, SURF:
‒ ORB (Oriented FAST and Rotated BRIEF)
‒ BRIEF, BRISK, FREAK, KAZE and AKAZE

89
Feature extraction : OpenCV

• SIFT
sift = cv.xfeatures2d.SIFT_create()  # for versions before 4.4
sift = cv.SIFT_create()              # for version 4.4 and later
‒ sift.detect() finds the keypoints in the image
‒ sift.compute() computes the descriptors from the keypoints
kp = sift.detect(gray, None)
kp, des = sift.compute(gray, kp)
‒ Find keypoints and descriptors in a single step with sift.detectAndCompute():
kp, des = sift.detectAndCompute(gray, None)
‒ https://docs.opencv.org/3.4/da/df5/tutorial_py_sift_intro.html
• SURF: similar

90
Feature extraction : OpenCV

• SURF: similar
>>> img = cv.imread('fly.png',0)
# Create SURF object. You can specify params here or later.
# Here I set Hessian Threshold to 400
>>> surf = cv.xfeatures2d.SURF_create(400)
# Find keypoints and descriptors directly
>>> kp, des = surf.detectAndCompute(img,None)
>>> len(kp)
699

‒ https://docs.opencv.org/3.4/df/dd2/tutorial_py_surf_intro.html

91
Origin: Bag-of-words models
• Orderless document representation: frequencies of words from a dictionary (Salton & McGill, 1983)

US Presidential Speeches Tag Cloud


http://chir.ag/phernalia/preztags/
92
Bags of features for object recognition
• Works pretty well for image-level classification and for
recognizing object instances

face, flowers, building

Csurka et al. (2004), Willamowski et al. (2005), Grauman & Darrell (2005), Sivic et al. (2003, 2005) 93
Bag of features: outline
1. Extract features
2. Learn a “visual vocabulary”
3. Quantize features using the visual vocabulary
4. Represent images by frequencies of “visual words”
(OpenCV: BOWImgDescriptorExtractor class)
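
A hedged end-to-end sketch of these four steps using SIFT and cv2.kmeans rather than the BOWImgDescriptorExtractor class mentioned above (image paths, K, and the k-means settings are illustrative):

import cv2
import numpy as np

# 1. Extract SIFT descriptors from a few training images (paths hypothetical)
sift = cv2.SIFT_create()
all_des = []
for path in ['img1.png', 'img2.png']:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(gray, None)
    all_des.append(des)
all_des = np.vstack(all_des).astype(np.float32)

# 2. Learn a vocabulary of K visual words with k-means
K = 100
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, _, vocab = cv2.kmeans(all_des, K, None, criteria, 3, cv2.KMEANS_PP_CENTERS)

# 3. Quantize a query image's descriptors to their nearest visual word
_, des = sift.detectAndCompute(cv2.imread('query.png', 0), None)
dists = np.linalg.norm(des[:, None, :] - vocab[None, :, :], axis=2)
words = dists.argmin(axis=1)

# 4. Represent the image as a normalized word-frequency histogram
bow = np.bincount(words, minlength=K) / len(words)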

94
Higher-level semantic vision problems
1. Image representation
- Pixel level
- Region level
- Image level

2. Classification: ML techniques
- Pixel level ➔ segmentation
- Region level ➔ detection
- Image level ➔ classification/recognition

95
References
• CVIP tool to explore the power of computer processing of digital images: Many methods in image
processing and computer vision have been implemented
‒ https://cviptools.ece.siue.edu/
• Library: OpenCV, with C/C++, Python and Java interfaces. OpenCV was designed for computational
efficiency and with a strong focus on real-time application: https://opencv.org/
• Books:
‒ Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, 2nd edition, Prentice-Hall, 2002: Chap. 3 (spatial operators), Chap. 6 (color spaces)
‒ Richard Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010.
http://szeliski.org/Book/
• Articles:
‒ SIFT (DoG detector and SIFT descriptor): https://www.cs.ubc.ca/~lowe/keypoints/
‒ SURF: Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool, "Speeded Up Robust
Features", ETH Zurich, Katholieke Universiteit Leuven
‒ GLOH: Krystian Mikolajczyk and Cordelia Schmid "A performance evaluation of local descriptors",
IEEE Transactions on Pattern Analysis and Machine Intelligence, 10, 27, pp 1615--1630, 2005.
‒ PHOG: http://www.robots.ox.ac.uk/~vgg/research/caltech/phog.html
• https://www.learnopencv.com/ : many examples with code in C++/ Python and clear explanation

96
Thank you for
your attention!

98
