Evolutionary Deep Learning for Car Park Occupancy Prediction in Smart Cities
1 Introduction
Nowadays, most of the world population lives in urban areas, and city dwellers are expected to account for 75% of the world's population by 2050 [3]. Thus, city stakeholders have to face a wide range of challenges in order to mitigate the negative effects of such rapid urban growth. Through new forms of computing and the technological innovation of critical infrastructure and services, the concept of the Smart City emerges as a means to address big-city challenges efficiently.
One of the main concerns in modern cities is mobility. The vast increase in urban road traffic experienced over the last decades causes serious issues that have to be confronted with new tools. Traffic jams disturb the daily life of the population, mainly because congestion causes longer trip times and increased pollution, not to mention the economic losses due to delays and other transport problems. Thus, great efforts are being devoted to one of the dimensions of the Smart City initiative, Smart Mobility, which focuses on providing sustainable transport systems and logistics that enable smooth urban traffic and commuting, mainly by applying information and communication technologies [4, 7, 19, 22, 33].
The search for free parking spaces is an important activity that negatively affects road traffic flows: it is responsible for up to 40% of the total traffic within cities [10]. This is mainly because drivers often do not make the most efficient parking decisions, since these are typically based on partial on-road perceptions and past personal experience. During the last few years, a number of systems have emerged with the aim of simplifying the search for free parking spaces [19]. These systems have a positive impact on traffic operations in urban areas. The main advantages of using such systems are [34]:
– they reduce the driver’s frustration since they increase the probability of
finding free parking spaces,
– they improve the global road traffic efficiency (e.g., fuel/energy consumption,
generated gas emissions, travel times, etc.) because they reduce the total
distance traveled by vehicles,
– and they help road users optimize their trips by taking into account the
expected free parking space information to decide where to park in advance.
These systems involve learning, prediction, and the exploitation of cloud-based architectures. In this article, we aim to provide a new technique for learning and prediction in car park systems, which can also be applied to other forecasting problems. Specifically, we propose a new Deep Learning (DL) technique based on Recurrent Neural Networks (RNN) and Evolutionary Algorithms: a deep neuroevolutionary optimization of the hyper-parameters that define the network architecture. We compare the performance of our proposal against other machine learning (ML) techniques applied to predict the occupancy rate of several car parks in Birmingham (U.K.) [32].
When dealing with DL, efficiency is an important issue, mainly because of the required computational cost, which hinders the use of such a powerful tool. Neural networks are often a solution to practical problems similar to this one, but one of the main aspects affecting their efficiency is the appropriateness of the chosen network architecture [24], that is, how good the defined set of layers and links is, and which method is used to obtain its best design.
In this study, we define an optimization problem that consists in determining the optimal design of an RNN, i.e., an architecture that minimizes the required computational cost while keeping a high accuracy. In general terms, an optimization problem is defined by a search space and a quality or fitness function. The search space defines (and thus restricts) the possible configurations of a solution vector, and the fitness function associates each configuration with a numerical cost. Solving an optimization problem therefore consists in finding the least-cost configuration of a solution vector (assuming minimization).
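Written compactly, with symbols of our own choosing (they are not part of the original notation), the generic problem reads

s^{*} = \arg\min_{s \in S} F(s),

where S denotes the search space of feasible solution vectors and F is the fitness function that assigns a numerical cost to each configuration.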
The vast number of possible RNN architectures that can be defined makes this task very hard. Hence, the use of automatic, intelligent tools seems mandatory when addressing it. In this sense, Evolutionary Algorithms (EA) [2, 11] emerge as efficient stochastic techniques able to solve hard optimization problems. Indeed, these algorithms are currently employed in a multitude of hard-to-solve problems, e.g., in the Smart City domain [20, 33], showing a successful performance. Nevertheless, the use of such a methodology in the domain of DL is still limited [21].
Therefore, this study makes two main contributions: first, it defines a general technique to automatically design efficient RNNs by using EAs, and second, it applies this tool to develop an efficient parking prediction system for smart urban areas. The remainder of this paper is organized as follows. The next section reviews related work on predicting car park occupancy rates. Section 3 presents our proposal. Section 4 presents the experiments carried out, the results, a benchmark, and the analysis. Finally, conclusions and future work are given in Section 5.
2 Related Work
The prediction of car park availability is an in-demand subject being studied in the context of smart cities [19], especially now that most car parking spaces have a sensing infrastructure connected to a cloud-based system. Smart parking services based on parking prediction allow drivers to plan their trips before departure or while on the road. Such services are a common way of forecasting the occupancy rate of parking spaces, and they could even provide the possibility of booking a parking spot in advance.
Some of the most popular prediction approaches assume that vehicles arrive at car parks following a Poisson distribution, and then predict the available capacity by using a Markov chain [16, 25]. However, the efficacy of these methods is limited because the demand for parking spaces depends on different factors, including the time of day, the day of the week, weather conditions, etc., which are not considered by these models.
Smart City projects are promoting the collection of large-scale car park in-
formation in urban areas. Therefore, researchers and practitioners have access
to realistic car park data sets. The analysis of SmartSantander on-street parking data showed that the occupancy and parking periods of different parking areas followed a Weibull distribution [34]. In [37], three different ML methods were applied to predict the car park occupancy rate over two data sets from San Francisco (SFpark) and Melbourne: a regression tree, a neural network, and support vector machines, in order to show their relative strengths and weaknesses. SFpark was also used as a use case to compare a number of spatio-temporal clustering strategies [29]. These methods reduced the storage required by other prediction methods while providing fits of similar accuracy to seven-day models. A multivariate regression model capturing the spatial and temporal correlations of car park availability has also been applied to real-time data from San Francisco and Los Angeles [26, 27].
In this study, we focus on the use of DL based on a special type of neural network, the RNN. Neural networks have been applied by different authors in this domain. The main idea is to study the relation between aggregating parking lots and predicting car park availability by applying feed-forward networks [18]. This type of approach improves when used together with Internet of Things (IoT) systems, because the accuracy of the occupancy predictions can be continuously improved by using back-propagation [34, 35].
Like the aforementioned studies, this work considers real-world data from Birmingham (U.K.) retrieved by our smart-parking data-collection system [33]. Thus, we analyze the application of DL to predicting the occupancy rate and compare its performance against other previously used ML methods [33].
DL enables learning processes with multiple layers of abstraction, taking advantage of the high computational resources currently available [12]. DL has dramatically improved the results previously provided by ML [17]. Nevertheless, the efficient design of DL methods is still an open problem and there is room for improvement [24]. In RNNs, the efficiency issue is even harder, because they are updated or rebuilt repeatedly to capture the temporal structure of the data. Thus, special care must be taken with the efficiency of the training process, the efficacy of the training method, and the appropriateness of the architecture.
In order to find an architecture suited to the car park prediction problem (a very influential decision for the final quality of the prediction), we propose an automatic, intelligent procedure based on EAs. A few authors have already studied neuroevolution, i.e., the optimization of artificial neural networks by means of EAs [1, 36]. However, their solutions cannot be directly applied to DL due to the high complexity of the neural networks involved [24]. Therefore, we propose to improve and extend these ideas to DL, giving rise to deep neuroevolution.
3 Our Proposal
In this section, we present the details of our proposal. First, we comment on the training of the RNN, then we introduce the fitness function, and finally we outline two evolutionary approaches to optimize the architecture of the RNN.
3.1 Learning
Artificial neural networks (ANN) are computational models inspired by the human brain and, like the brain, they are capable of learning. In our particular problem, we are interested in an iterative type of learning process referred to as supervised learning [13]. Supervised learning consists in supplying training data of N input-output pairs (X, Y). Then, for each input X the ANN produces an output Z, which is compared to Y using an error (cost or distance) function. Finally, this error is minimized in an iterative manner.
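Writing the network as a parametric function f_\theta (our symbol for the trainable weights), the iterative process minimizes an empirical error of the form

E(\theta) = \frac{1}{N} \sum_{i=1}^{N} d\left( f_{\theta}(x_i),\, y_i \right),

where d is the chosen error function; in Section 3.2 the mean absolute error plays this role.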
Minimizing the error is a tough task. Several approaches have been proposed for this, including gradient-descent-based [13] and metaheuristic-based ones [1, 24], but to date the most widely used method is a first-order gradient descent algorithm named backpropagation (BP) [30]. It is very important to notice that, in order to use BP to train an RNN, the network has to be unfolded [15] (so the error is propagated backwards). This means that the network is copied and connected in series, building an unrolled or unfolded version of the RNN. The number of times that the RNN is unfolded is usually referred to as the look back, because we are allowing the RNN to look back a finite number of time steps.
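In practice, the look back also determines how training samples are built from the series: each input is a window of the previous look_back observations and the target is the next value. A minimal NumPy sketch (function and variable names are ours, not the authors'):

import numpy as np

def make_windows(series, look_back):
    """Turn a 1-D time series into (samples, look_back, 1) inputs and
    next-step targets, matching the number of unfolding steps."""
    xs, ys = [], []
    for i in range(len(series) - look_back):
        xs.append(series[i:i + look_back])       # the last `look_back` observed values
        ys.append(series[i + look_back])         # the value to predict
    x = np.array(xs).reshape(-1, look_back, 1)   # RNNs expect 3-D input
    y = np.array(ys)
    return x, y

# Example: synthetic placeholder series sampled every 30 minutes, looking back 4 steps (2 hours)
occupancy = np.random.rand(100)
x_train, y_train = make_windows(occupancy, look_back=4)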
Once an ANN is trained, we are interested in its generalization capability, i.e., its proficiency to offer general solutions rather than overfitting to the training data set. This issue is usually tackled by adding a stochastic component to the training process, a technique called dropout [31]. This technique has proved to be very useful to reduce overfitting; however, Reed et al. [28] showed that "large networks generally learn rapidly" but "they tend to generalize poorly", suggesting that we should also consider the architecture to improve this capability.
Therefore, we propose to look for an RNN that better adapts to our problem
by optimizing its generalization ability, including its architecture (the number
of hidden layers and neurons), and its unfolded representation.
Since training an RNN is costly (in terms of computational resources) and the number of RNN architectures is infinite (or extremely large if we impose restrictions on the number of hidden layers or neurons), we are forced to define a smart search strategy to find an optimal RNN.
Among the many potential optimization techniques, metaheuristics are well known for their ability to combine exploration and exploitation strategies. Thus, they are suitable for addressing complex, nonlinear, and non-differentiable problems [2, 24]. Having said that, we decided to adapt two metaheuristic algorithms to our problem of finding an RNN architecture that allows us to minimize the generalization error.
3.2 Fitness
The fitness that guides the search is the mean absolute error (MAE) of the forecast, computed over a test data set while feeding the network with its own previous prediction:

\begin{align}
\text{minimize} \quad & \text{Fitness} = \frac{1}{N} \sum_{i=1}^{N} MAE(z_i, y_i) & (1)\\
\text{subject to} \quad & H \leq \textit{max hidden layers} & (2)\\
& L \leq \textit{max neurons per layer} & (3)\\
& \hat{x}_i = \begin{cases} x_0 & \text{if } i = 0 \\ z_{i-1} & \text{if } i > 0 \end{cases} & (4)
\end{align}

where $z_i$ is the output of the RNN for the input $\hat{x}_i$, $y_i$ is the expected value, $H$ is the number of hidden layers, and $L$ is the number of neurons per layer.
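A minimal Python sketch of how Eqs. (1) and (4) can be evaluated: the trained RNN (represented here by the placeholder callable predict_one, a name of ours) receives the observed value only at the first step and its own previous prediction afterwards, and the fitness is the MAE over the test values.

import numpy as np

def fitness(predict_one, x0, y_true):
    """Mean absolute error of an iterated forecast (Eqs. 1 and 4)."""
    n = len(y_true)
    z = np.empty(n)
    x_hat = x0
    for i in range(n):
        z[i] = predict_one(x_hat)          # z_i: RNN output for the input x_hat_i
        x_hat = z[i]                       # Eq. (4): the next input is the last prediction
    return float(np.abs(z - np.asarray(y_true)).mean())   # Eq. (1): MAE over the N test values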
3.3 GA-Based RNN Optimizer
In order to find an RNN architecture that fits the series, we designed a genetic algorithm (GA) [14] that evolves an initial population of RNN candidates (Algorithm 1). Particularly, we encoded the architecture of the RNN and two training features as an integer vector of variable length (the solution representation). The first position of the solution corresponds to the dropout (a learning parameter that mitigates over-fitting), the second to the look back (how many times the net is unfolded during training), and the third and successive positions correspond to the number of neurons of the i-th hidden layer (the architecture proper). Thus, the number of hidden layers is defined by the length of the solution. Note that the input and output layers are determined by the time series, so we did not include them as part of the solution.
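One possible decoding of this representation, assuming a Keras-style stacked LSTM network (the use of LSTM cells, the scaling of the dropout gene, and all names below are our assumptions for illustration; the authors' implementation lives in the DLOPT library [6]):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

def decode(solution, n_features=1, n_outputs=1):
    """solution = [dropout, look_back, neurons_layer_1, ..., neurons_layer_H]."""
    dropout = solution[0] / 100.0       # integer gene mapped to [0, 1) (assumed scaling)
    look_back = solution[1]             # number of unfolding steps
    hidden = solution[2:]               # one gene per hidden layer
    model = Sequential()
    for i, units in enumerate(hidden):
        kwargs = {"return_sequences": i < len(hidden) - 1}
        if i == 0:
            kwargs["input_shape"] = (look_back, n_features)
        model.add(LSTM(units, **kwargs))
        model.add(Dropout(dropout))
    model.add(Dense(n_outputs))         # output layer fixed by the time series
    model.compile(loss="mae", optimizer="adam")
    return model, look_back

# e.g. decode([20, 4, 32, 16]) -> 20% dropout, look back 4, hidden layers of 32 and 16 units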
First, an initial population of pop_size individuals is created at random (Initialize) and evaluated (Evaluate), i.e., each solution is decoded into an RNN architecture, the network is trained (using a training data set), and finally the fitness is computed over a test data set.
Then, while the number of evaluations is smaller than max_evaluations, the population is evolved by selecting a subset of parents using binary tournament (Selection). After this, the parents are recombined into offspring (of size offspring_size) using single-point crossover (Recombination) with probability cx_prob. It is important to remark that, with probability 1 - cx_prob, one of the parents is returned unmodified.
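A short sketch of this recombination step (how parents of different lengths are handled is our assumption):

import random

def single_point_crossover(parent_a, parent_b, cx_prob):
    """Recombine two variable-length integer vectors; with probability
    1 - cx_prob one parent is returned unmodified."""
    if random.random() >= cx_prob:
        return list(parent_a)
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)  # crossover point
    return parent_a[:cut] + parent_b[cut:]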
Once the recombination is done, each offspring is mutated by a two-phase process (Mutation). In the first step, a uniformly distributed value in the range [1, mut_max_step] is added to or subtracted from the i-th component of the solution with probability mut_prob. In the second step, a hidden layer is added to (copied) or removed from the solution with probability mut_x_prob.
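A sketch of this two-phase mutation (the bound handling and the add/remove choice are our assumptions):

import random

def mutate(sol, mut_prob, mut_x_prob, mut_max_step, max_layers):
    """Two-phase mutation over [dropout, look_back, layer_1, ..., layer_H]."""
    sol = list(sol)
    # Phase 1: perturb each gene by +/- U[1, mut_max_step] with probability mut_prob
    for i in range(len(sol)):
        if random.random() < mut_prob:
            step = random.randint(1, mut_max_step)
            sol[i] += step if random.random() < 0.5 else -step
            sol[i] = max(1, sol[i])            # keep genes positive (assumed bound handling)
    # Phase 2: copy or remove a hidden layer with probability mut_x_prob
    if random.random() < mut_x_prob:
        layer = random.randrange(2, len(sol))  # hidden-layer genes start at index 2
        if random.random() < 0.5 and len(sol) - 2 < max_layers:
            sol.insert(layer, sol[layer])      # add a (copied) hidden layer
        elif len(sol) - 2 > 1:
            sol.pop(layer)                     # remove a hidden layer, keeping at least one
    return sol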
Finally, the offspring are evaluated, and the offspring_size worst solutions (in terms of fitness) of the population are replaced by the offspring (Replace).
3.4 ES-Based RNN Optimizer
In line with the GA-based RNN optimizer, we designed an RNN optimizer based on a (1+1) Evolution Strategy (ES) [2]. Note that we selected small values of μ and λ because of the high computational cost of training an RNN. Our proposal (Algorithm 2) evolves a single solution using the encoding and mutation already defined in Section 3.3, and a plus replacement criterion, i.e., if the fitness of the new candidate solution (the mutant) is at least as good as that of the old solution, the new candidate replaces the old one.
To improve the performance of the ES-based algorithm, we included a procedure to self-adjust the parameters [8]. Particularly, if the new candidate solution (the mutant) improves the fitness with respect to the old solution, then the mut_prob and mut_x_prob values are multiplied by 1.5; otherwise, these values are divided by 4. Therefore, while the solution keeps improving, we widen the local search.
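A compact sketch of this (1+1)-ES loop (the evaluate and mutate callables are placeholders, and capping the probabilities at 1.0 is our assumption):

def es_one_plus_one(initial, evaluate, mutate, max_evaluations,
                    mut_prob=0.2, mut_x_prob=0.2):
    """(1+1)-ES with the self-adjusting rule described above. `evaluate`
    returns the fitness (lower is better) and `mutate(sol, p, px)` returns
    a mutated copy of the solution; both are placeholders."""
    best = initial
    best_fit = evaluate(best)
    for _ in range(max_evaluations - 1):
        cand = mutate(best, mut_prob, mut_x_prob)
        cand_fit = evaluate(cand)
        if cand_fit < best_fit:                    # strict improvement: widen the local search
            mut_prob = min(1.0, mut_prob * 1.5)
            mut_x_prob = min(1.0, mut_x_prob * 1.5)
        else:                                      # no improvement: narrow the search
            mut_prob /= 4
            mut_x_prob /= 4
        if cand_fit <= best_fit:                   # plus replacement: at least as good
            best, best_fit = cand, cand_fit
    return best, best_fit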
4 Experimental Study
4.1 Data Set
The data set analyzed in this article is the one used in [32], comprising valid occupancy rates of 29 car parks operated by NCP (National Car Parks) in the city of Birmingham, U.K.
1 https://github.com/acamero/dlopt
Birmingham is a major city in the West Midlands of England, standing on the small River Rea. It is the largest and most populous British city outside London, with an estimated population of 1,124,569 as of 2016 [23].
Several cities in the U.K. have been publishing their open data, to be used not only by researchers and companies but also by citizens, so that they can get to know better the place where they live. The Birmingham data set is licensed under the Open Government Licence v3.0 and is updated every 30 minutes from 8:00 to 16:30 (18 occupancy values per car park and day). In our study, we worked with data collected from Oct 4, 2016 to Dec 19, 2016 (11 weeks).
Figure 1(a) shows the occupancy data available for all the car parks and dates. Figure 1(b) presents a box plot showing the distribution of car park occupancy by weekday. We can see in the former that almost all car parks begin the day with at least 50% of free spaces and that they are progressively occupied during the day, with a clear peak between 13:00 and 14:00. The latter figure shows that car parks have more available spaces on Saturdays and Sundays, which was to be expected, as these are not working days.
The numeric characteristics of the data set are: 77 days of occupancy data for 29 car parks, which account for 33,292 values for training plus 3,425 for testing.
Fig. 1. Occupancy data of the 29 car parks and their distribution on weekdays.
We hyper-parameterized the GA-based approach using the fitness defined in Section 3.2 (computed over the test data set defined in Section 4.1) and parameter values drawn from the ranges defined in Table 2. Note that we included three training parameters in the hyper-parameterization process (batch_size, min_delta, and patience), because we wanted to explore the impact of discarding an RNN earlier (i.e., increasing min_delta and decreasing patience) and of augmenting the amount of data available on each BP iteration.
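For illustration, min_delta and patience act like an early-stopping rule during the training of each candidate network; a hedged sketch using the standard Keras callback (the monitored quantity and the values below are ours):

from tensorflow.keras.callbacks import EarlyStopping

# Stop training a candidate RNN once the loss stops improving by at least
# `min_delta` for `patience` consecutive epochs (illustrative values).
early_stop = EarlyStopping(monitor="loss", min_delta=0.001, patience=5)
# model.fit(x_train, y_train, batch_size=64, epochs=100, callbacks=[early_stop])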
The best configuration found (using the referred method) for the GA-based approach is shown in Table 3. We inferred from this configuration that discarding solutions a 'little bit earlier' than with the original configuration improves the final result.
On the other hand, the ES-based approach does not require hyper-parameterization, because it is a self-adjusting algorithm. Hence, we initialized the algorithm using an off-the-shelf configuration (presented in Table 4).
Table 4. Off-the-shelf configuration of the ES-based approach.
Parameter       Value
mut_max_step    3
mut_prob        0.2
mut_x_prob      0.2
4.3 RNN Optimization Performance
In order to benchmark the performance of the optimized RNNs against expert-defined architectures, we compared the predictions of both approaches.
First, we executed 30 independent runs of both RNN optimization algorithms (GA-based and ES-based), using the parameter configurations defined in Tables 3 and 4, the data set defined in Section 4.1, and the fitness defined in Section 3.2.
Then, we trained three expert-defined RNN architectures (refer to Table 5 for details), using the training parameters defined in Table 1 and the same data set and fitness measure mentioned above. Note that these configurations are based on Google TensorFlow sample models2.
To continue with our analysis, we studied the architectures of the first decile. Figure 3 shows the fitness of the best architectures evaluated. A smaller fitness (darker dots) implies a more accurate prediction.
The results suggest that, for this particular problem, there is an archetype (i.e., a specific shape) of RNN architecture, because the majority of the architectures that belong to the first decile present a similar number of neurons and hidden layers. Therefore, we envision that this kind of information should be considered in the design of future architecture optimization algorithms.
Since the final goal of our work is to obtain an accurate prediction of the car park occupancy, we selected the best of the trained RNNs (considering both algorithms) and used it to predict the car park occupancy on the testing data set. Then, we compared these predictions against the ones presented by Stolfi et al. [32]. In that article, the authors trained six predictors to forecast the future occupancy rates of the same data set analyzed here. Concretely, they used polynomials (P), Fourier series (F), k-means clustering (KM), polynomials fitted to the k-means centroids (KP), shift and phase modifications to the KP polynomials (SP), and time series (TS).
Fig. 3. Fitness of the first decile. A lower fitness is desirable.
Table 7 presents the MAE measured for each car park (over the entire predicted period), as well as summary statistics. The results of the RNN do not surpass all of its competitors (in terms of the mean or median); however, there is no significant difference between its predictions and the ones made using polynomials (Wilcoxon test, p-value = 0.096) or time series (Wilcoxon test, p-value = 0.099).
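Given the per-car-park MAE values of two predictors, such a comparison corresponds to a paired Wilcoxon signed-rank test; a SciPy sketch with placeholder data (not the published values):

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(seed=1)
mae_rnn = rng.uniform(0.01, 0.10, size=29)    # placeholder per-car-park MAE values
mae_poly = rng.uniform(0.01, 0.10, size=29)   # placeholder per-car-park MAE values
stat, p_value = wilcoxon(mae_rnn, mae_poly)   # paired, two-sided by default
print(f"Wilcoxon signed-rank p-value: {p_value:.3f}")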
Therefore, we consider the results useful, not only because there is no significant difference between the RNN forecasts and the polynomial or time-series ones, but also because the predictions were made by a single predictor, while all of its competitors consist of multiple predictors (one per car park and day of the week, i.e., 203 predictors!). Moreover, the predictions were made based on already predicted data, hence the results could be improved in a real setting by periodically updating the forecast with fresh data.
Acknowledgements
This research was partially funded by Ministerio de Economía, Industria y Competitividad, Gobierno de España, and European Regional Development Fund grant numbers TIN2014-57341-R (http://moveon.lcc.uma.es), TIN2016-81766-REDT (http://cirti.es), and TIN2017-88213-R (http://6city.lcc.uma.es). Daniel H. Stolfi is supported by an FPU grant (FPU13/00954) from the Spanish Ministry of Education, Culture and Sports. Universidad de Málaga, Campus Internacional de Excelencia, Andalucía TECH.
References
1. Alba, E., Martí, R.: Metaheuristic Procedures for Training Neural Networks, vol. 35. Springer Science & Business Media (2006)
2. Back, T.: Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press (1996)
3. Bakici, T., Almirall, E., Wareham, J.: A Smart City Initiative: the Case of
Barcelona. Journal of the Knowledge Economy 4(2), 135–148 (2013)
4. Benevolo, C., Dameri, R.P., D’Auria, B.: Smart mobility in smart city. In: Torre, T.,
Braccini, A.M., Spinelli, R. (eds.) Empowering Organizations. pp. 13–28. Springer
Intl. Pub. (2016)
5. Bergstra, J., Yamins, D., Cox, D.: Making a science of model search: Hyperparam-
eter optimization in hundreds of dimensions for vision architectures. In: Interna-
tional Conference on Machine Learning. pp. 115–123 (2013)
6. Camero, A., Toutouh, J., Alba, E.: DLOPT: Deep learning optimization library. arXiv preprint arXiv:1807.03523 (Jul 2018)
7. Cintrano, C., Stolfi, D.H., Toutouh, J., Chicano, F., Alba, E.: CTPATH: A real-world system to enable green transportation by optimizing environmentally friendly routing paths. In: Alba, E., Chicano, F., Luque, G. (eds.) Smart Cities. pp. 63–75. Springer Intl. Pub. (2016)
8. Doerr, C.: Non-static parameter choices in evolutionary computation. In: Genetic
and Evolutionary Computation Conference, GECCO 2017, Berlin, Germany, July
15-19, 2017, Companion Material Proceedings. ACM (2017)
9. Fortin, F.A., De Rainville, F.M., Gardner, M.A., Parizeau, M., Gagné, C.: DEAP:
Evolutionary algorithms made easy. Journal of Machine Learning Research 13,
2171–2175 (jul 2012)
10. Giuffré, T., Siniscalchi, S.M., Tesoriere, G.: A novel architecture of parking management for smart cities. Procedia - Social and Behavioral Sciences 53, 16–28 (2012), SIIV - 5th Intl. Congress - Sustainability of Road Infrastructures 2012
11. Goldberg, D.E., Holland, J.H.: Genetic algorithms and machine learning. Machine
learning 3(2), 95–99 (1988)
12. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016)
13. Haykin, S.: Neural networks and learning machines, vol. 3. Pearson (2009)
14. Holland John, H.: Adaptation in natural and artificial systems: an introductory
analysis with applications to biology, control, and artificial intelligence. USA: Uni-
versity of Michigan (1975)
15. Jaeger, H.: Tutorial on training recurrent neural networks, covering BPPT, RTRL,
EKF and the echo state network approach, vol. 5. GMD (2002)
16. Klappenecker, A., Lee, H., Welch, J.L.: Finding available parking spaces made easy.
Ad Hoc Networks 12, 243 – 249 (2014)
17. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
18. Lin, T.: Smart parking: Network, infrastructure and urban service. Ph.D. thesis,
Lyon, INSA (2015)
19. Lin, T., Rivano, H., Mouël, F.L.: A survey of smart parking solutions. IEEE Trans-
actions on Intelligent Transportation Systems 18(12), 3229–3253 (Dec 2017)
20. Massobrio, R., Toutouh, J., Nesmachnow, S., Alba, E.: Infrastructure deployment
in vehicular communication networks using a parallel multiobjective evolutionary
algorithm. International Journal of Intelligent Systems 32(8), 801–829 (2017)
21. Morse, G., Stanley, K.O.: Simple evolutionary optimization can rival stochastic
gradient descent in neural networks. In: Proc. of the Genetic and Evolutionary
Computation Conf. 2016. pp. 477–484. GECCO ’16, ACM (2016)
22. Nesmachnow, S., Rossit, D., Toutouh, J.: Comparison of multiobjective evolutionary algorithms for prioritized urban waste collection in Montevideo, Uruguay. Electronic Notes in Discrete Mathematics (2018), in press
23. Office for National Statistics: Population Estimates for UK.
http://www.nomisweb.co.uk/articles/747.aspx (2016), accessed: 2017-12-16
24. Ojha, V.K., Abraham, A., Snášel, V.: Metaheuristic design of feedforward neu-
ral networks: A review of two decades of research. Engineering Applications of
Artificial Intelligence 60, 97 – 116 (2017)
25. Pullola, S., Atrey, P.K., Saddik, A.E.: Towards an intelligent gps-based vehicle
navigation system for finding street parking lots. In: 2007 IEEE International Con-
ference on Signal Processing and Communications. pp. 1251–1254 (Nov 2007)
26. Rajabioun, T., Foster, B., Ioannou, P.A.: Intelligent parking assist. In: Control &
Automation (MED), 2013 21st Mediterranean Conf. pp. 1156–1161. IEEE (2013)
27. Rajabioun, T., Ioannou, P.A.: On-street and off-street parking availability predic-
tion using multivariate spatiotemporal models. IEEE Transactions on Intelligent
Transportation Systems 16(5), 2913–2924 (Oct 2015)
28. Reed, R., Marks, R., Oh, S.: Similarities of error regularization, sigmoid gain scal-
ing, target smoothing, and training with jitter. IEEE Transactions on Neural Net-
works 6(3), 529–538 (1995)
29. Richter, F., Martino, S.D., Mattfeld, D.C.: Temporal and spatial clustering for a
parking prediction service. In: 2014 IEEE 26th International Conference on Tools
with Artificial Intelligence. pp. 278–282 (Nov 2014)
30. Rumelhart, D., Hinton, G.E., Williams, R.J.: Learning Internal Representations by Error Propagation. Tech. Rep. No. ICS-8506, California Univ San Diego La Jolla Inst for Cognitive Science (1985)
31. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.:
Dropout: A simple way to prevent neural networks from overfitting. The Jour-
nal of Machine Learning Research 15(1), 1929–1958 (2014)
32. Stolfi, D.H., Alba, E., Yao, X.: Predicting car park occupancy rates in smart cities.
In: International Conference on Smart Cities. pp. 107–117. Springer (2017)
33. Stolfi, D.H., Armas, R., Alba, E., Aguirre, H., Tanaka, K.: Fine tuning of traffic in
our cities with smart panels: The quito city case study. In: Proc. of the Genetic and
Evolutionary Computation Conf. 2016. pp. 1013–1019. GECCO ’16, ACM (2016)
34. Vlahogianni, E.I., Kepaptsoglou, K., Tsetsos, V., Karlaftis, M.G.: A real-time park-
ing prediction system for smart cities. Journal of Intelligent Transportation Sys-
tems 20(2), 192–204 (2016)
35. Vlahogianni, E., Kepaptsoglou, K., Tsetsos, V., Karlaftis, M.G.: Exploiting new
sensor technologies for real-time parking prediction in urban areas. In: Transp.
Research Board 93rd Annual Meeting Compendium of Papers. pp. 14–1673 (2014)
36. Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87(9), 1423–
1447 (1999)
37. Zheng, Y., Rajasegarar, S., Leckie, C.: Parking availability prediction for sensor-
enabled car parks in smart cities. In: 2015 IEEE 10th Intl. Conf. on Intelligent
Sensors, Sensor Networks and Information Processing (ISSNIP). pp. 1–6 (2015)