
Proc. of the International Conference on Electrical, Computer and Energy Technologies (ICECET)
9-10 December 2021, Cape Town, South Africa

A Real-Time Collision Detection System for Vehicles

Sam Amiri
Faculty of Engineering, Environment and Computing
Coventry University, Coventry, UK
ad0246@coventry.ac.uk

Shailendra Singh
Faculty of Engineering, Environment and Computing
Coventry University, Coventry, UK
singhs93@coventry.ac.uk

978-1-6654-4231-2/21/$31.00 ©2021 IEEE | DOI: 10.1109/ICECET52533.2021.9698622

Abstract—A real-time collision detection system has become a crucial safety feature in vehicles today, mainly after the evolution of autonomous and self-driving vehicles. It has proved to be very effective in minimizing the number of road accidents. This paper presents an algorithm for a real-time detection system using deep learning technology based on Mask-RCNN (Mask Region-based Convolutional Neural Network). We prepared a custom dataset from scratch to experiment with our algorithm, and a detailed analysis of the results is provided. Experiments indicate that the developed algorithm gives highly accurate results. We achieved more than 95% accuracy with an overall prediction score greater than 0.90.

Index Terms—Collision Detection System, Object Detection, Pedestrian Detection, Cyclist Detection, Vehicle Detection, Machine Learning, Deep Learning, Convolutional Neural Network, Mask-RCNN

I. INTRODUCTION

Roads are part of the basic infrastructure in any country and they are widely shared between pedestrians, cyclists, bikers and other vehicles. Figure 1 represents the fatalities in road accidents, which have alarmingly increased over the years. In a study conducted by the "Centers for Disease Control and Prevention (CDC)", there were around 6000 fatalities and 137,000 injuries in road accidents in 2017 in the USA [1].

A real-time detection system has thus become an important safety feature in automobiles equipped with "Advanced Driver Assist Systems (ADAS)". This system is found to be an effective feature in preventing and mitigating road accidents. It helps to alert the driver and even stop the vehicle if the driver fails to apply the brakes on time. Figure 2 illustrates the working of a real-time detection system in a vehicle. Figure 3 illustrates the use of a real-time detection system in ADAS.

Fig. 1. Fatality Composition in 2009 and 2018 [2]
Fig. 2. Working of a Real-Time Detection System
Fig. 3. A Real-Time Detection System in ADAS



Our work includes a comprehensive and detailed study of the Mask-RCNN algorithm to detect different types of objects on the road, for example, pedestrians, cyclists, bikers, vehicles, animals, trees, statues, light poles etc. Not much work has been done in this area, where a single algorithm can detect this many types of objects on the road. We created our own custom dataset containing 650 images. We labeled the images and created the annotations. We trained the algorithm with this dataset and achieved higher accuracy and prediction scores by tuning the values of "Validation Steps" and "Steps per Epoch".

II. LITERATURE REVIEW

The benefits of a real-time collision detection system have drawn attention from the automotive industry, regulatory bodies and the research community. Researchers have introduced several algorithms in the past 10 years, such as Haar Cascade, Histogram of Oriented Gradients (HOG), Single Shot Detector (SSD), You Only Look Once (YOLO), Region based Convolutional Neural Networks (R-CNN), Fast-RCNN and Faster-RCNN [3].

Haar [3] was the first algorithm introduced to detect objects in various cluttered backgrounds. It could detect the change of gray image scale along with edge, line, center-surround and diagonal line features. Haar is the foundation of object detection and it was designed to effectively detect objects in static images. Researchers continued further study and developed more algorithms that are more accurate and have faster object detection capabilities.

Abari [4] proposed a unique method in which Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP) and Haar-like features are combined and a Linear Support Vector Machine (SVM) is used as the classifier, with the aim of achieving higher speed and accuracy.

Lan et al. [5] proposed the "YOLO-R" convolutional network, which is an improved version of "YOLO". The proposed network was designed to improve the accuracy of the YOLO network. The new YOLO-R network structure is formed by adding three passthrough layers to the original YOLO network. This provides an enhanced ability to extract information on shallow pedestrian features.

Dong and Liang [6] presented their research work on an image recognition and classification algorithm based on deep learning technology. The research indicated that algorithms for simple image classification, which are based on deep learning and on convolution of the neural network, are very effective.

Ahmed et al. [7] proposed to use the "MobileNets" and "Single Shot Detector (SSD)" framework to improve accuracy and real-time speed. MobileNets provides a robust network as compared to other network architectures, such as "ResNet" and "VGG", and uses separable depth-wise convolution network layers. On the other hand, SSD provides more accurate and faster object detection capabilities. The results indicated that the proposed framework performed exceptionally well even in heavy and dynamically changing scenes and backgrounds.

Murthy and Hashmi [8] proposed the design of an enhanced "Tiny-YOLOv3" network, which improves feature extraction and bounding box loss error. Tiny-YOLOv3 is derived from YOLOv3, which is a modified version of YOLO. The Tiny-YOLOv3 network has 7 convolution layers and 6 max pooling layers. It uses 3 scale prediction networks of different sizes (13 x 13, 26 x 26 and 52 x 52) for detecting objects. The proposed network applies "K-means clustering" to find the optimum number of bounding boxes, while an anti-residual module is developed to improve the feature extraction.

Wang et al. [9] proposed a pruning approach that identifies structural redundancies in a convolutional network and prunes filters in the selected layer(s) with the most redundancy. It focuses more on identifying structural redundancies than on finding unimportant filters. This approach improved the results of image classification and it can be effectively used for object detection and image synthesis.

Most of these research papers have attempted detection of only a single type of object. In this paper, we have considered detection of all possible objects on the road. Also, we have tried to improve the accuracy by tuning the algorithm.

III. CHALLENGES

A real-time collision detection system, when paired with an "Autonomous Emergency Braking (AEB)" system, has the potential to prevent road accidents by stopping the vehicle before a collision. A human brain requires only a couple of looks to scan an area and can easily identify obstacles on the road in less than a second, but it is very challenging to develop an algorithm which can perform in a similar way. A lot of study and research is required to develop an advanced algorithm for such a detection system, which can examine a wide range of surroundings within a few milliseconds and quickly respond in a real-time situation to avoid an accident. Thus, the main challenge is to develop and test such an algorithm, which can be fitted in modern vehicles and can be considered safe and reliable.

Moreover, a vehicle running on the road can come across several objects, for example, pedestrians, cyclists, bikers, other vehicles, animals etc. Preparing a complete dataset and labeling images containing all types of objects with different backgrounds, postures and light conditions is a challenging task in itself.

IV. PROPOSED METHOD

This paper proposes an improved implementation of a deep learning algorithm based on Mask-RCNN, which is applied to a completely new dataset of objects with the purpose of achieving higher accuracy and also providing deep analysis.

A. Benefits of Deep Learning technology

Deep learning has become a popular and preferred choice of researchers due to the following main benefits:
• The deep learning method is a sophisticated method that provides accurate results and is capable of handling intensive computational algorithms [6].
• Models based on the deep learning method are more robust, as they have several hidden layers of strongly interconnected neural networks, and the input to each layer can be adjusted using the associated weights and bias to predict the output. This helps to minimize the error and increases the probability of a correct decision for pedestrian detection [6].
• Deep learning models can be easily trained on a computer with a good Graphics Processing Unit (GPU) configuration. Thus, we can process the large volume of real-time data received from cameras and radars fitted on the vehicle for training and accordingly use it to make precise predictions [10].
• Deep learning algorithms, such as "Mask-RCNN", are easy to extend and generalize. They can be used for other detection tasks, such as human pose detection, cyclist detection, vehicle detection, static object detection, traffic light detection etc. [11].
• Deep learning methods based on "Convolutional Neural Networks (CNN)" are proven to give better performance in terms of accuracy and speed, as compared to traditional methods [3].
• Deep learning methods can be easily applied to unsupervised learning.
• Deep learning methods are very useful and efficient when dealing with large volumes of unstructured data.
• Deep learning algorithms can be developed on various software platforms, such as "Anaconda", which is one of the most popular Python-based development platforms. It provides built-in integrated libraries that can be easily used for developing deep learning algorithms [7].

B. Overview of a Deep Learning Network

Deep learning is an advanced technique that has delivered promising results for object detection [12]. It can be used for object detection based on a large volume of sample data [10]. It is capable of providing high accuracy and can handle highly complex computational algorithms [7].

A deep learning model is inspired by the structure of a human brain. It consists of several successive layers of "Artificial Neural Networks (ANNs)", each composed of several artificial neurons termed "perceptrons". These artificial neurons are represented by mathematical algorithms and are capable of learning automatically from the input training data. A deep learning model can have many successive layers of neural networks depending upon the volume of the training data and the complexity of the algorithm. A deep learning network can also be viewed as a multistage filtration process, where input data is filtered by successive layers and a highly purified final output is generated at the end of the process.

The main components of a perceptron are the input signal, weights, bias, net input, activation function and output signal. The input signal is an array of features, which are further processed to predict the output. Weights are scalar multiplication factors. Bias is a constant input value of "1", which is multiplied by a weight. The net input can be described as a transfer function, which is the result of the summation of the weights multiplied by the feature values and the bias multiplied by a weight. The activation function provides non-linearity to the net input value in each perceptron. The main activation functions used are Sigmoid, ReLU and Softmax. The output signal is the final processed value from a perceptron, which is then passed to the next successive layer. The structure of a perceptron (i.e. neuron) is shown in Figure 4.

Fig. 4. Structure of a perceptron (i.e. neuron)
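As an illustration of this computation, a minimal perceptron forward pass can be written in a few lines of Python; the feature values, weights and bias weight below are arbitrary example numbers and are not taken from our trained model.

import numpy as np

def perceptron(features, weights, bias_weight):
    # Net input: summation of the weights multiplied by the feature values,
    # plus the constant bias input "1" multiplied by its own weight.
    net_input = np.dot(weights, features) + 1.0 * bias_weight
    # Sigmoid activation adds non-linearity to the net input.
    return 1.0 / (1.0 + np.exp(-net_input))

features = np.array([0.5, 0.2, 0.8])   # input signal: an array of features (example values)
weights = np.array([0.4, -0.6, 0.9])   # scalar multiplication factors (example values)
print(perceptron(features, weights, bias_weight=0.1))  # output signal passed to the next layer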
A deep learning neural network is composed of an input layer (where the first level of data processing takes place), hidden layers (the intermediate layers between the input and output layers, which can range from tens to hundreds in number) and an output layer (which contains the final processed data). The information is transferred from one layer to another over connecting channels. The structure of a deep learning neural network is shown in Figure 5.

Fig. 5. Structure of a Deep Learning Neural Network
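For illustration only, this layered structure can be expressed with Keras, one of the libraries used in this work (Section V-A); the layer sizes and class count below are arbitrary choices and do not correspond to the Mask-RCNN network used here.

from tensorflow import keras
from tensorflow.keras import layers

# Arbitrary sizes, chosen only to illustrate the input / hidden / output structure.
model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(64,)),  # first hidden layer, fed by a 64-feature input layer
    layers.Dense(128, activation="relu"),                     # second hidden layer
    layers.Dense(5, activation="softmax"),                    # output layer producing 5 class probabilities
])
model.summary()  # prints the layers and the number of trainable weights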
C. Architecture of Mask-RCNN network

The proposed Mask-RCNN algorithm is based on Faster-RCNN and is a faster algorithm compared to its predecessors, such as R-CNN, SPP-NET and Fast-RCNN [13]. It can distinctly detect multiple objects of the same type or class and can accordingly create bounding boxes, labels and masks for the detected objects. It has two stages. In the first stage, the Region Proposal Network (RPN) scans the feature map of the image and generates proposals for the regions that might contain an object. In the second stage, the binary mask classifier predicts the class, refines the bounding box and generates the colored mask for each object.

Below are the main components of the proposed Mask-RCNN architecture (as shown in Figure 6):
• Backbone Layer: It uses Feature Pyramid Networks (FPN) and ResNet as the backbone for feature extraction. FPN provides the pyramid-like structure, while ResNet provides the deep convolutional network. A backbone architecture with FPN and ResNet as the feature extractor provides better accuracy and speed [7].
• Region Proposal Network (RPN) Layer: The RPN layer is a neural network that scans the feature map passed by the backbone layer in a sliding-window fashion and finds areas that contain the objects. Each scanned region is represented by an "Anchor Box". This layer generates its output in the form of an "Anchor Class" and a "Bounding Box Refinement". The bounding box refinement process generates the "Regions of Interest (RoI)", which are passed to the next layer.
• Region of Interest (RoI) Align Layer: This layer scans the RoI proposed by the RPN and generates two outputs, "Class" and "Bounding Box Refinement". Since this layer is deeper in the network, it is more specific in identifying the class of the object. The bounding box refinement further refines the location and size of the bounding boxes. The final output is generated in the form of a fixed-dimension feature map, which is used by the next layer. This layer also prevents the loss of information and improves the mask accuracy relatively by 10% to 50% [11].
• Network Head Layer: The feature map from the RoI align layer is fed to the binary mask classifier with two convolutional layers. The first convolutional layer is a fully connected layer having a "Classifier" with a Softmax activation function and a "Bounding Box Regressor". The activation function identifies the class of the object, while the regressor predicts the bounding box. The second convolutional layer generates the colored pixel mask of the object using a Sigmoid activation function. The colored mask is then applied to each detected object in an image.

Fig. 6. Architecture of a Mask-RCNN network
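These components map onto configuration options if the network is built with the widely used open-source Matterport Keras implementation of Mask-RCNN. The sketch below is indicative only; the class and the attribute values shown are assumptions for illustration, not the exact settings of our model.

from mrcnn.config import Config  # Matterport Mask_RCNN package (assumed implementation)

class RoadObjectConfig(Config):
    # Illustrative values; they are assumptions, not the tuned settings reported in this paper.
    NAME = "road_objects"
    BACKBONE = "resnet101"                        # ResNet backbone combined with an FPN
    RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)   # anchor boxes scanned by the RPN layer
    RPN_ANCHOR_RATIOS = [0.5, 1, 2]
    NUM_CLASSES = 1 + 5                           # background + pedestrian, cyclist, biker, vehicle, animal
    IMAGES_PER_GPU = 1
    DETECTION_MIN_CONFIDENCE = 0.9                # keep detections whose prediction score exceeds 0.90

config = RoadObjectConfig()
config.display()                                  # prints the full set of derived parameters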
V. EXPERIMENT

Below are the main steps involved in developing the Mask-RCNN model, as illustrated in Figure 7:
• Preparation of a complete custom training dataset using images of pedestrians, cyclists, bikers, cars and animals on the road.
• Implementation of the Mask-RCNN algorithm for training the model.
• Testing sample images and videos using the trained Mask-RCNN model.

Fig. 7. Development process for a Deep Learning Mask-RCNN model

A. Required Python Libraries

We used Python programming to develop the Mask-RCNN algorithm. Python is simple, stable, flexible and provides supporting libraries, such as TensorFlow and Keras, for developing the algorithm. The Python libraries used are Cv2 4.4.0, Keras 2.4.3, Matplotlib 3.3.3, Numpy 1.18.5, Pandas 1.2.0, Scipy 1.5.2, Skimage 0.18.1, Sklearn 0.24.0 and Tensorflow 2.4.1.
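A quick way to confirm that these versions are available in the environment is to print them from Python; this is only a convenience check and is not part of the detection algorithm itself.

import cv2, keras, matplotlib, numpy, pandas, scipy, skimage, sklearn, tensorflow

# Print the installed version of each library used in this work.
for name, module in [("cv2", cv2), ("keras", keras), ("matplotlib", matplotlib),
                     ("numpy", numpy), ("pandas", pandas), ("scipy", scipy),
                     ("skimage", skimage), ("sklearn", sklearn), ("tensorflow", tensorflow)]:
    print(name, module.__version__)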

B. Preparation of Training Dataset

A custom training dataset is prepared from scratch using images downloaded from the Penn-Fudan dataset [14], the COCO dataset [15] and public domains such as Google Photos. A total of 650 images are used to prepare the training dataset that is used to train the developed Mask-RCNN model. The training dataset is split into around 80% of the images for training and around 20% of the images for validation. We labeled the images and created the annotations using the "VGG Image Annotator (VIA)" tool [16].
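For reference, the VIA tool exports the polygon annotations as a single JSON file; a minimal loader that applies the roughly 80/20 split might look like the sketch below. The file name, attribute keys and region layout are assumptions for illustration and should be adjusted to the actual export.

import json
import random

# Assumed export file name; VIA saves one JSON file for the whole project.
with open("via_region_data.json") as f:
    annotations = list(json.load(f).values())

# Keep only the images that actually have polygon regions drawn on them.
annotations = [a for a in annotations if a.get("regions")]

random.seed(42)
random.shuffle(annotations)
split = int(0.8 * len(annotations))                 # roughly 80% training, 20% validation
train_set, val_set = annotations[:split], annotations[split:]
print(len(train_set), "training images,", len(val_set), "validation images")

# Each region carries the polygon points and the object label assigned in VIA.
for region in train_set[0]["regions"]:
    shape = region["shape_attributes"]              # "all_points_x" / "all_points_y" polygon points
    label = region["region_attributes"]             # e.g. {"class": "pedestrian"} (assumed key)
    print(train_set[0]["filename"], label, len(shape.get("all_points_x", [])), "points")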

C. Training of Mask-RCNN model


The developed Mask-RCNN deep learning algorithm is trained and tuned with the following parameters (a configuration sketch is given after this list):
• Number of images in the training dataset equal to 650.
• Number of "Steps per Epoch" equal to 10, 20, 30 and 50.
• Number of "Validation Steps" equal to 50 and 100.
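Assuming, as in the earlier sketch, that the Matterport implementation of Mask-RCNN is used, these two tuning parameters correspond directly to configuration fields, and training writes its loss logs under the model directory so that they can be plotted with TensorBoard. The sketch below is indicative only; the dataset objects, weight file and epoch count are placeholders, not the exact setup of our experiments.

import mrcnn.model as modellib
from mrcnn import utils
from mrcnn.config import Config

class TrainingConfig(Config):
    NAME = "road_objects"
    NUM_CLASSES = 1 + 5        # assumed class count (background + five object types)
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 30       # tuned over 10, 20, 30 and 50
    VALIDATION_STEPS = 50      # tuned over 50 and 100

config = TrainingConfig()

# In practice these are Dataset subclasses that load the 650 VIA-annotated images;
# bare instances are used here only to keep the sketch self-contained.
dataset_train, dataset_val = utils.Dataset(), utils.Dataset()
dataset_train.prepare()
dataset_val.prepare()

model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")
model.load_weights("mask_rcnn_coco.h5", by_name=True,   # COCO weights as a starting point (assumed)
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE, epochs=30, layers="heads")
# The class, bounding box and mask losses logged under "logs" can then be viewed in TensorBoard.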

Fig. 8. Loss values when “Steps per Epoch” is variable


TensorFlow's visualization toolkit "TensorBoard" is used to plot graphs of the loss parameters for analysis. The plots for the loss parameters are shown in Figures 8 and 9. The X-axis indicates the number of training cycles and the Y-axis the value of the loss.

The analysis of the trained model is done on the basis of the following loss parameters:
• Class Loss: This indicates the loss due to the improper
classification of the object by the output layer of the
Mask-RCNN model.
• Bounding Box Loss: This indicates the loss associated
with the localization of the bounding box of the object
class detected by the output layer of the Mask-RCNN
model.
• Mask Loss: This indicates the accuracy of the masks created on the objects detected by the output layer of the Mask-RCNN model. The higher the loss, the lower the accuracy of the masks.

In the first step of model tuning, the number of "Validation Steps" is fixed to 50, while the number of "Steps per Epoch" is set to different values of 10, 20, 30 and 50. The plots we obtained for the loss parameters are shown in Figure 8. Analysis of the graphs in Figure 8 indicates that the losses are lowest when the number of "Steps per Epoch" is equal to 30. Also, the model is under-trained when "Steps per Epoch" is equal to 10 or 20, while the model is over-trained when "Steps per Epoch" is equal to 50.

In the second step of model tuning, the number of "Steps per Epoch" is fixed to 30, while the number of "Validation Steps" is set to 50 and 100. The plots we obtained for the loss parameters are shown in Figure 9. Analysis of the graphs in Figure 9 indicates that the losses are lowest when "Validation Steps" is equal to 50.

Fig. 9. Loss values when "Validation Steps" is variable

As per the above analysis, we trained the developed Mask-RCNN model with "Steps per Epoch" set to 30 and "Validation Steps" set to 50 to get the most accurate results, since the values of the loss parameters are lowest.

D. Testing the pre-trained Mask-RCNN model

The developed pre-trained Mask-RCNN model is tested using random images and videos. The test images and videos contained unseen pictures of pedestrians, cyclists, bikers, riders, animals and vehicles on the road. Overall, more than 95% accuracy is achieved in detecting objects. The algorithm is able to detect objects in complex scenarios as well, such as in low-light conditions, in costumes, in different postures and clothing, and pedestrians with a bag or trolley. The results with prediction scores are shown in Figure 10.

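Under the same assumption that the Matterport implementation is used, testing the trained weights on an unseen image reduces to a single call to model.detect; the weight file, test image and class list below are placeholders for illustration, not the exact artifacts of our experiments.

import skimage.io
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "road_objects"
    NUM_CLASSES = 1 + 5        # assumed class count, matching the training configuration
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1         # detect one image at a time

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("mask_rcnn_road_objects.h5", by_name=True)   # trained weights (placeholder name)

image = skimage.io.imread("test_street_scene.jpg")              # unseen test image (placeholder name)
result = model.detect([image], verbose=0)[0]

# The result holds the bounding boxes, class ids, prediction scores and per-object masks.
class_names = ["BG", "pedestrian", "cyclist", "biker", "vehicle", "animal"]  # assumed label order
visualize.display_instances(image, result["rois"], result["masks"],
                            result["class_ids"], class_names, result["scores"])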
Table I provides a comparison of the prediction scores between the results obtained from the implemented Mask-RCNN algorithm and the results from another Convolutional Neural Network (CNN) algorithm [17]. As per the comparison data in Table I, it is evident that the detection results obtained from our implemented Mask-RCNN algorithm are more accurate, with higher prediction scores. We achieved prediction scores in the range of 0.914 to 0.994, as compared to prediction scores in the range of 0.731 to 0.919 with the other CNN algorithm [17]. Our algorithm is also successfully able to detect other objects, such as bikers, horse riders and animals, and it does not require any pre-processing of the input images used for training or testing. The implemented algorithm can be easily trained to detect various objects by using the relevant training datasets. Moreover, the algorithm provides better accuracy and speed and can be used in real-time scenarios.

Fig. 10. Detection results from the implemented Mask-RCNN model

TABLE I. COMPARISON OF PREDICTION SCORES

Object        Mask-RCNN        CNN [17]
Pedestrian    0.969 to 0.990   0.731
Cyclists      0.968 to 0.988   0.781
Bikers        0.943 to 0.987   No Data
Horse Rider   0.937 to 0.989   No Data
Animals       0.985 to 0.994   No Data
Vehicles      0.914 to 0.992   0.872 and 0.919

CONCLUSION

In this paper, we implemented a deep learning algorithm based on Mask-RCNN for designing a real-time detection system. The developed algorithm is trained with a custom dataset. We achieved more than 95% accuracy in detecting objects when we tested the algorithm with static images and videos. Mask-RCNN is a promising technology for developing neural network-based machine learning algorithms for object detection. It can be used for advanced features in modern vehicles, such as a Pedestrian Detection System, Obstacle Detection System, Cyclist Detection System, Driver Alert System, Collision Avoidance System, Park Assist System and Blind Spot Detection System.

REFERENCES

[1] CDC (2020) Pedestrian Safety. [online]. Available: https://www.cdc.gov/transportationsafety
[2] NHTSA (2019) 2018 Fatal Motor Vehicle Crashes: Overview. [online]. Available: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812826
[3] C. Ning, L. Menglu, Y. Hao, S. Xueping, and L. Yunhong, "Survey of pedestrian detection with occlusion," Complex and Intelligent Systems, 2020, 7(1), pp. 577-587.
[4] M. E. Abari, "A Novel Pedestrian Detection Method Based on Combination of LBP, HOG, and Haar-Like Features," 2018 IEEE International Conference on Electro/Information Technology (EIT), 2018, pp. 0055-0066, doi: 10.1109/EIT.2018.8500235.
[5] W. Lan, J. Dang, Y. Wang and S. Wang, "Pedestrian Detection Based on YOLO Network Model," 2018 IEEE International Conference on Mechatronics and Automation (ICMA), 2018, pp. 1547-1551, doi: 10.1109/ICMA.2018.8484698.
[6] Y. Dong and G. Liang, "Research and Discussion on Image Recognition and Classification Algorithm Based on Deep Learning," 2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), 2019, pp. 274-278, doi: 10.1109/MLBDBI48998.2019.00061.
[7] Z. Ahmed, R. Iniyavan, and M. P. Madhan, "Enhanced Vulnerable Pedestrian Detection using Deep Learning," 2019 International Conference on Communication and Signal Processing (ICCSP), 2019, pp. 0971-0974, doi: 10.1109/ICCSP.2019.8697978.
[8] C. B. Murthy and M. F. Hashmi, "Real Time Pedestrian Detection Using Robust Enhanced Tiny-YOLOv3," 2020 IEEE 17th India Council International Conference (INDICON), 2020, pp. 1-5, doi: 10.1109/INDICON49873.2020.9342082.
[9] Z. Wang, C. Li and X. Wang, "Convolutional Neural Network Pruning with Structural Redundancy Reduction," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 14908-14917, doi: 10.1109/CVPR46437.2021.01467.
[10] A. Begum, F. Fatima, and A. Sabahath, "Implementation of Deep Learning Algorithm with Perceptron using TenzorFlow Library," 2019 International Conference on Communication and Signal Processing (ICCSP), 2019, pp. 0172-0175, doi: 10.1109/ICCSP.2019.8697910.
[11] K. He, G. Gkioxari, P. Dollár and R. Girshick, "Mask R-CNN," 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980-2988, doi: 10.1109/ICCV.2017.322.
[12] S. Ghosh, P. Amon, A. Hutter, and A. Kaup, "Reliable pedestrian detection using a deep neural network trained on pedestrian counts," 2017 IEEE International Conference on Image Processing (ICIP), 2017, pp. 685-689, doi: 10.1109/ICIP.2017.8296368.
[13] Towards Data Science Inc. (2018) R-CNN, Fast R-CNN, Faster R-CNN, YOLO — Object Detection Algorithms. [online]. Available: https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e
[14] Penn Engineering, University of Pennsylvania (n.d.) Penn-Fudan Database for Pedestrian Detection and Segmentation. [online]. Available: https://www.cis.upenn.edu
[15] COCO Consortium (2020) COCO Common Objects in Context. [online]. Available: https://cocodataset.org
[16] A. Dutta and A. Zisserman, "The VIA Annotation Software for Images, Audio and Video," in Proceedings of the 27th ACM International Conference on Multimedia (MM '19), 2019, pp. 2276-2279, doi: 10.1145/3343031.3350535.
[17] P. Kaur and R. Sobti, "Pedestrian and Vehicle detection in automotive embedded systems using deep neural networks," 2018 International Conference on Recent Innovations in Electrical, Electronics and Communication Engineering (ICRIEECE), 2018, pp. 2105-2109, doi: 10.1109/ICRIEECE44171.2018.9008972.

