Supporting users in finding successful matches in reciprocal recommender systems

Published in: User Modeling and User-Adapted Interaction

Abstract

Online platforms that assist users in finding a suitable match, such as online-dating and job-recruiting environments, have become increasingly popular in the last decade. Many of these environments include recommender systems which, for instance in online dating, aim at helping users discover a suitable partner who will likely be interested in them. Generating successful recommendations in such systems is challenging, as the system must balance two objectives: (1) recommending users with whom the recommendation receiver is likely to initiate an interaction and (2) recommending users who are likely to reply positively to an interaction initiated by the recommendation receiver. Unfortunately, these objectives are partially conflicting, since very often the recommendation receiver is likely to contact users who are unlikely to respond positively, and vice versa. Furthermore, users in these environments vary in the extent to which they contemplate the other side’s preferences before initiating an interaction. Therefore, an effective recommender system must model each user and balance these objectives accordingly. In our work, we tackle this challenge through two novel components: (1) an explanation module, which leverages an estimate of why the recommended user is likely to respond positively to the recommendation receiver; and (2) a novel reciprocal recommendation algorithm, which finds an optimal balance, individually tailored to each user, between the partially conflicting objectives mentioned above. In an extensive empirical evaluation, in both simulated and real-world dating web platforms with 1204 human participants, we find that both components contribute to attaining these objectives and that their combination is more effective than either one on its own.


Notes

  1. http://ec.europa.eu/justice/data-protection/.

  2. https://www.xing.com.

  3. In Xia et al. (2015), \(ReFrom_{x}\) is denoted as \(Se_{x}\) and \(SentTo_{x}\) is denoted as \(Re_{x}\).

  4. This method utilizes user-to-user similarities. Another option for estimating the mutual interest is to use item-to-item similarities, that is, the attractiveness similarity of the recommended user to the group of users who received messages from the service user. This option was also examined in Xia et al. (2015). Both of these methods significantly outperformed RECON, and there was no significant difference between them. We chose the first method because it performed slightly better than the second (a sketch of both options appears after these notes).

  5. Participants were aware that the profiles were simulated (although based on real data) and that the messages were not actually sent to recipients. They were guided to send simulated messages to profiles they viewed as relevant matches for them.

  6. We manually classified all of the samples that included a response into two classes: (1) positive response and (2) negative response.

  7. For comparison, the following are the best AUC scores achieved by the other prediction models we tested: (1) random forest classifier: 0.798; (2) logistic regression: 0.795; (3) multi-layer perceptron classifier: 0.791; (4) Gaussian naïve Bayes classifier: 0.672 (a sketch of such a comparison appears after these notes).

  8. In all top-k recommendations except the top-50.

  9. Some users did not view all of the recommendations, either because they did not log in during the week following the recommendations or because they did not view their inbox.

  10. We only focus on the active recommended users, since non-active users receive fewer messages regardless of their popularity and attractiveness.
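
To make the two similarity options in note 4 concrete, here is a minimal Python sketch under assumed representations: users as binary attribute vectors compared by cosine similarity, which only approximates the formulation in Xia et al. (2015). The helper names and data layout are hypothetical.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

def user_to_user_interest(x_vec, y_id, others):
    """Interest of x in y inferred from users similar to x: a similarity-weighted
    vote over whether each other user messaged y. `others` is a list of
    (attribute_vector, set_of_messaged_user_ids) pairs."""
    num = sum(cosine(x_vec, u_vec) for u_vec, sent in others if y_id in sent)
    den = sum(cosine(x_vec, u_vec) for u_vec, _ in others)
    return num / den if den else 0.0

def item_to_item_interest(y_vec, messaged_vecs):
    """Interest of x in y inferred from y's similarity to the users x messaged."""
    if not messaged_vecs:
        return 0.0
    return float(np.mean([cosine(y_vec, m) for m in messaged_vecs]))
```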
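Similarly, the model comparison reported in note 7 can be reproduced in outline with scikit-learn. The feature matrix here is synthetic stand-in data (the real features are listed in "Appendix 3"), so the scores will not match the reported AUCs.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced stand-in for the message/reply data set.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "multi-layer perceptron": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
    "Gaussian naive Bayes": GaussianNB(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")  # five-fold cross-validated AUC
```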

References

  • Abdi, H., Williams, L.J.: Tukey’s honestly significant difference (HSD) test. Encyclopedia of Research Design. Sage, Thousand Oaks, pp. 1–5 (2010)

  • Abdollahpouri, H., Adomavicius, G., Burke, R., Guy, I., Jannach, D., Kamishima, T., Krasnodebski, J., Pizzato, L.: Multistakeholder recommendation: survey and research directions. User Model. User Adapt. Interact. 30(1), 127–158 (2020)


  • Abel, F., Benczúr, A., Kohlsdorf, D., Larson, M., Pálovics, R.: Recsys challenge 2016: job recommendations. In: Proceedings of the 10th ACM Conference on Recommender Systems, pp. 425–426. ACM (2016)

  • Akehurst, J., Koprinska, I., Yacef, K., Pizzato, L., Kay, J., Rej, T.: Ccr—a content-collaborative reciprocal recommender for online dating. In: Twenty-Second International Joint Conference on Artificial Intelligence (2011)

  • Batista, G.E.A.P.A., Prati, R.C., Monard, M.C.: A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD explorations newsletter 6(1), 20–29 (2004)


  • Benesty, J., Chen, J., Huang, Y., Cohen, I.: Pearson correlation coefficient. In: Noise reduction in speech processing, pp. 1–4. Springer, Berlin (2009)

  • Brent, R.P.: An algorithm with guaranteed convergence for finding a zero of a function. The Computer Journal 14(4), 422–425 (1971)


  • Brozovsky, L., Petricek, V.: Recommender system for online dating service. arXiv preprint arXiv:cs/0703042 (2007)

  • Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., Aroyo, L., Wielinga, B.: The effects of transparency on trust in and acceptance of a content-based art recommender. User Model. User Adapt. Interact. 18(5), 455–496 (2008)


  • Dixon, W.J., Massey, F.J.: Introduction to Statistical Analysis. McGraw-Hill Book Company, Inc., New York (1950)

  • Gedikli, F., Jannach, D., Ge, M.: How should I explain? A comparison of different explanation types for recommender systems. Int. J. Hum. Comput. Stud. 72(4), 367–382 (2014)


  • Girden, E.R.: ANOVA: repeated measures. Number 84. Sage (1992)

  • Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a “right to explanation”. In: Workshop on Human Interpretability in Machine Learning at the International Conference on Machine Learning (2016)

  • Goodman, R.: Psychometric properties of the strengths and difficulties questionnaire. J. Am. Acad. Child Adolesc. Psychiatry 40(11), 1337–1345 (2001)


  • Gunning, D.: Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA) (2017)

  • Guy, I., Ronen, I., Wilcox, E.: Do you know?: recommending people to invite into your social network. In: Proceedings of the 14th International Conference on Intelligent User Interfaces, pp. 77–86. ACM (2009)

  • Hall, M.A.: Feature selection for discrete and numeric class machine learning (1999)

  • Hall, M.A.: Correlation-based feature selection for machine learning. PhD thesis, University of Waikato, Hamilton (1999)

  • Hartley, J.: Some thoughts on Likert-type scales. Int. J. Clin. Health Psychol. 14(1), 83–86 (2014)

  • Herlocker, J.L., Konstan, J.A., Riedl, J.: Explaining collaborative filtering recommendations. In: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, pp. 241–250. ACM (2000)

  • Hitsch, G.J., Hortaçsu, A., Ariely, D.: Matching and sorting in online dating. Am. Econ. Rev. 100(1), 130–163 (2010)

  • Hitsch, G.J., Hortaçsu, A., Ariely, D.: What makes you click?—Mate preferences in online dating. Quant. Market. Econ. 8(4), 393–427 (2010)


  • Hong, W., Zheng, S., Wang, H., Shi, J.: A job recommender system based on user clustering. JCP 8(8), 1960–1967 (2013)


  • Kleinerman, A., Rosenfeld, A., Kraus, S.: Providing explanations for recommendations in reciprocal environments. In: Proceedings of the 12th ACM Conference on Recommender Systems. ACM (2018)

  • Kleinerman, A., Rosenfeld, A., Ricci, F., Kraus, S.: Optimally balancing receiver and recommended users’ importance in reciprocal recommender systems. In: Proceedings of the 12th ACM Conference on Recommender Systems. ACM (2018)

  • Knijnenburg, B.P., Willemsen, M.C., Gantner, Z., Soncu, H., Newell, C.: Explaining the user experience of recommender systems. User Model User Adapt Interact 22(4–5), 441–504 (2012)


  • Koller, D., Sahami, M.: Toward optimal feature selection. Technical report, Stanford InfoLab (1996)

  • Komiak, S.Y.X., Benbasat, I.: The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Q. 941–960 (2006)

  • Krzywicki, A., Wobcke, W., Cai, X., Mahidadia, A., Bain, M., Compton, P., Kim, Y.S.: Interaction-based collaborative filtering methods for recommendation in online dating. In: International Conference on Web Information Systems Engineering, pp. 342–356. Springer, Berlin (2010)

  • Krzywicki, A., Wobcke, W., Kim, Y.S., Cai, X., Bain, M., Mahidadia, A., Compton, P.: Collaborative filtering for people-to-people recommendation in online dating: data analysis and user trial. Int. J. Hum. Comput. Stud. 76, 50–66 (2015)


  • McNee, S.M., Riedl, J., Konstan, J.A.: Being accurate is not enough: how accuracy metrics have hurt recommender systems. In: CHI’06 extended abstracts on Human factors in computing systems, pp. 1097–1101. ACM (2006)

  • National Science and Technology Council: The National Artificial Intelligence Research and Development Strategic Plan (2016)

  • OkCupid: OkCupid blog: a woman’s advantage. https://theblog.okcupid.com/a-womans-advantage-82d5074dde2d (2015). Accessed 25 Apr 2018

  • Özcan, G., Ögüdücü, S.G.: Applying different classification techniques in reciprocal job recommender system for considering job candidate preferences. In: 2016 11th International Conference for Internet Technology and Secured Transactions (ICITST), pp. 235–240. IEEE (2016)

  • Pizzato, L., Rej, T., Chung, T., Koprinska, I., Kay, J.: RECON: a reciprocal recommender for online dating. In: Proceedings of the fourth ACM conference on Recommender systems, pp. 207–214. ACM (2010)

  • Pu, P., Chen, L.: Trust building with explanation interfaces. In: Proceedings of the 11th International Conference on Intelligent User Interfaces, pp. 93–100. ACM (2006)

  • Pu, P., Chen, L.: Trust-inspiring explanation interfaces for recommender systems. Knowl. Based Syst. 20(6), 542–556 (2007)

  • Pu, P., Chen, L., Hu, R.: A user-centric evaluation framework for recommender systems. In: Proceedings of the fifth ACM Conference on Recommender Systems, pp. 157–164. ACM (2011)

  • Quadrana, M., Cremonesi, P., Jannach, D.: Sequence-aware recommender systems. ACM Comput. Surv. 51(4), 66:1–66:36 (2018)


  • Razali, N.M., Wah, Y.B., et al.: Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests. J. Stat. Model. Anal. 2(1), 21–33 (2011)

  • Rosenfeld, A., Kraus, S.: Predicting human decision-making: from prediction to action. Synth. Lect. Artif. Intell. Mach. Learn. 12(1), 1–150 (2018)


  • Shani, G., Gunawardana, A.: Evaluating recommendation systems. In: Recommender Systems Handbook, pp. 257–297. Springer, Berlin (2011)

  • Sharma, A., Cosley, D.: Do social explanations work?: studying and modeling the effects of social explanations in recommender systems. In: Proceedings of the 22nd International Conference on World Wide Web, pp. 1133–1144. ACM (2013)

  • Sinha, R., Swearingen, K.: The role of transparency in recommender systems. In: CHI’02 Extended Abstracts on Human Factors in Computing Systems, pp. 830–831. ACM (2002)

  • Sutherland, S.C., Harteveld, C., Young, M.E.: Effects of the advisor and environment on requesting and complying with automated advice. ACM Trans. Interact. Intell. Syst. (TiiS) 6(4), 27 (2016)


  • Symeonidis, P., Nanopoulos, A., Manolopoulos, Y.: Moviexplain: a recommender system with explanations. In: Proceedings of the Third ACM Conference on Recommender Systems, pp. 317–320. ACM (2009)

  • Tintarev, N., Masthoff, J.: Effective explanations of recommendations: user-centered design. In: Proceedings of the 2007 ACM Conference on Recommender Systems, pp. 153–156. ACM (2007)

  • Tintarev, N., Masthoff, J.: Designing and evaluating explanations for recommender systems. In: Recommender Systems Handbook, pp. 479–510. Springer, Berlin (2011)

  • Xia, P., Liu, B., Sun, Y., Chen, C.: Reciprocal recommendation system for online dating. In: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 234–241. ACM (2015)

  • Zheng, Y., Dave, T., Mishra, N., Kumar, H.: Fairness in reciprocal recommendations: a speed-dating study. In: Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, pp. 29–34. ACM (2018)


Acknowledgements

This article extends our preceding papers, Kleinerman et al. (2018b) and Kleinerman et al. (2018a), with the following additions: (1) In Sect. 7.1, we present an extensive offline evaluation, based on data from 7668 users, of our novel recommendation method and additional variations of RWS. The evaluation results demonstrate the effectiveness of RWS and justify our decision to use RWS in the online evaluation. (2) In Sect. 8, we describe an additional large-scale live experiment, including 488 participants, in which we investigate the integration of our novel recommendation generation method with our explanation method. This experiment strengthens the conclusions from our previous experiments and supports the use of both methods together. (3) In "Appendix 2", we present the full process that led us to use the correlation-based explanation method for the evaluation of the reciprocal explanation. This process comprised a sequence of experiments, involving 114 participants, in which we compared several explanation methods and found the correlation-based method superior to all other methods investigated.

Author information

Corresponding author

Correspondence to Akiva Kleinerman.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper or a similar version is not currently under review by a journal or conference. This paper is void of plagiarism or self-plagiarism as defined by the Committee on Publication Ethics and Springer Guidelines.

Appendices

Appendix 1: User experience questionnaire for evaluation of reciprocal explanations

Our questionnaire, which evaluated the effect of explanations on the user experience (Sect. 5), included 5 Likert-scale questions, with a scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). These questions measured five prominent factors of user experience in recommender systems: user satisfaction with the recommendations, perceived competence of the system, perceived transparency of the system, and trust in the system (Cramer et al. 2008; Pu et al. 2011; Knijnenburg et al. 2012), as well as explanation usefulness, namely the extent to which the users considered the explanations to be helpful. The questions are presented in Table 5. The second question, which is ‘negatively worded’, was reverse-scored (Hartley 2014). In order to verify that the different questions actually evaluate different measures, we calculated the cross-scale Pearson correlation coefficients (Goodman 2001), which show that the answers to the questions are not strongly correlated. The full correlation table is presented in Table 6, and a sketch of this analysis follows it.

Table 5 User experience questionnaire
Table 6 Cross-scale Pearson correlation coefficients
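
For concreteness, here is a minimal sketch of the two analysis steps just described, on synthetic stand-in answers (the real responses underlie Tables 5 and 6): reverse-scoring the negatively worded second item and computing the cross-scale Pearson correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
answers = rng.integers(1, 6, size=(40, 5)).astype(float)  # 40 respondents x 5 items, scale 1..5

# Reverse-score the second (negatively worded) item: 1 <-> 5, 2 <-> 4.
answers[:, 1] = 6 - answers[:, 1]

# Cross-scale Pearson correlation coefficients between the five items (cf. Table 6).
corr = np.corrcoef(answers, rowvar=False)
print(np.round(corr, 2))
```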

1.1 Questionnaire results in the simulated environment

In this section, we present the results of the questionnaire, which evaluated the user experience in the simulated environment, for both the negligible-cost and explicit-cost settings.

1.1.1 Negligible cost

In the negligible cost setting, the one-sided condition outperformed the reciprocal condition in the user experience as well, as it did in the relevance measure (Sect. 5.2.1). Specifically, satisfaction (\({\mathrm{mean}}= 4\), \({\mathrm{s.d.}}= 0.85\) vs. \({\mathrm{mean}}=3.57\), \({\mathrm{s.d.}}=0.86\), \(p\le 0.05\)) and perceived competence (\({\mathrm{mean}}= 4.13\), \({\mathrm{s.d.}}= 0.83\) vs. \({\mathrm{mean}}= 3.27\), \({\mathrm{s.d.}}= 0.9\), \(p\le 0.01\)) were found to be significantly higher in the one-sided explanation condition. No statistically significant difference was found between the conditions for the remaining measures. The results are presented in Fig. 13.

Fig. 13

Reciprocal versus one-sided explanations in MM with negligible cost. Error bars represent the standard error

1.1.2 Explicit cost

In the explicit cost setting, in addition to acceptance (Sect. 5.2.2), the participants’ trust in the system was found to be higher under the reciprocal explanation condition (one-sided: \({\mathrm{mean}}=2.93\), \({\mathrm{s.d.}}=1.14\) vs. reciprocal: \({\mathrm{mean}}=3.38\), \({\mathrm{s.d.}}=1.01\), \(p\le 0.05\)). No statistically significant difference was found between the conditions for the remaining measures. The results are presented in Fig. 14.

Fig. 14

Reciprocal versus one-sided explanations in MM with explicit cost. Error bars represent the standard error

Appendix 2: Choosing the explanation method

Before evaluating one-sided and reciprocal explanations in RRSs, we performed a preliminary investigation in order to find the explanation method best suited to online dating, the domain on which we focus throughout this paper.

1.1 Comparison of correlation-based and transparent explanation methods

In addition to the correlation-based explanation method, which is described in Sect. 4, we designed a similar explanation method based on the same guidelines (described in Sect. 2.1), which we call the “transparent” explanation method.

The transparent explanation method, which aims to reflect the actual reasoning behind the recommendations provided by the RECON algorithm, works as follows: to explain to user x a recommendation of user y, the method returns the top-k attributes of y that are the most prominent among the users who received a message from user x.


To illustrate the difference between the transparent and the correlation-based explanation methods, we revisit Example 1. Assume an RRS has decided to recommend Alice, who never smokes and is slim, to Bob. Recall that Bob sent 6 messages to users who never smoke and 4 to slim users. For \(k=1\), the transparent explanation method would provide “never smokes” as an explanation, because Bob sent more messages to users who never smoke than to slim users. Now say Bob viewed a total of 25 users, of whom 18 never smoke and 4 are slim. In other words, Bob sent messages to only a third of the viewed users who never smoke, but to all of the viewed users who are slim. Thus, the correlation-based method would find a stronger correlation between the presence of “slim body” and Bob’s messaging behavior; hence, “slim body” would be provided as an explanation. The sketch below reproduces this difference numerically.
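
One assumption in the sketch is made explicit because the text does not specify it: among the 25 profiles Bob viewed, the never-smoker and slim groups are taken to be disjoint, so Bob messaged 6 of the 18 never-smokers and all 4 slim users. The transparent score is the raw message count per attribute; the correlation score is the Pearson correlation between attribute presence and messaging across viewed profiles.

```python
import numpy as np

viewed = 25
never_smokes = np.zeros(viewed)
never_smokes[:18] = 1            # 18 of the 25 viewed profiles never smoke
slim = np.zeros(viewed)
slim[18:22] = 1                  # 4 viewed profiles are slim (assumed disjoint)
messaged = np.zeros(viewed)
messaged[:6] = 1                 # 6 messages to never-smokers
messaged[18:22] = 1              # 4 messages to slim users

def transparent_score(attr: np.ndarray) -> int:
    """Raw count of messages sent to holders of the attribute."""
    return int((attr * messaged).sum())

def correlation_score(attr: np.ndarray) -> float:
    """Pearson correlation between attribute presence and messaging."""
    return float(np.corrcoef(attr, messaged)[0, 1])

for name, attr in [("never smokes", never_smokes), ("slim body", slim)]:
    print(f"{name}: count={transparent_score(attr)}, r={correlation_score(attr):.2f}")
# Transparent picks "never smokes" (6 > 4 messages); the correlation-based
# method picks "slim body" (r ~ 0.53 vs r ~ -0.22 under the assumed overlap).
```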

In order to compare the transparent and the correlation-based explanation methods, we used the MM simulated system discussed in Sect. 5.1. We asked 59 of the 118 participants who took part in the data collection phase but did not take part in the negligible-cost experiment (Sect. 5) to enter the MM platform, where each participant received a list of five personal recommendations generated by the RECON algorithm, along with either transparent explanations (30 participants) or correlation-based explanations (29 participants). Participants were randomly assigned to one of the two conditions. As in the negligible-cost experiment, participants were asked to rate the relevance of each recommendation separately, on a five-point Likert scale from 1 (extremely irrelevant) to 5 (extremely relevant). Next, participants answered the questionnaire (available in Appendix 1) about their user experience.

1.1.1 Results

All collected data were found to be approximately normally distributed according to the Anderson–Darling normality test (Razali and Wah 2011). All reported results were compared using an unpaired t test. The results from the questionnaire showed that participants in the correlation-based condition were more satisfied than those in the transparent explanation condition and perceived the system as more useful (\(p\le 0.05\)). We did not find a significant difference in the way participants rated the relevance of the provided recommendations, nor did we find a significant difference in the reported trust in the system. A sketch of this statistical procedure follows.
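
Here is a minimal SciPy sketch of the procedure, on synthetic stand-in ratings for the two conditions (29 and 30 participants, as above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
correlation_based = rng.normal(4.0, 0.8, size=29)  # stand-in satisfaction ratings
transparent = rng.normal(3.5, 0.8, size=30)

# Anderson-Darling check for approximate normality of each sample.
for name, sample in [("correlation-based", correlation_based), ("transparent", transparent)]:
    result = stats.anderson(sample, dist="norm")
    print(name, "A^2 =", round(result.statistic, 3))  # compare against result.critical_values

# Unpaired (two-sample) t test between the conditions.
t, p = stats.ttest_ind(correlation_based, transparent)
print("t =", round(t, 2), "p =", round(p, 4))
```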

1.2 Additional explanation methods evaluated in the MM environment

1.2.1 Comparison to baseline

Prior to our main experiment, we first compared the correlation-based explanations with a baseline condition: recommendations without any explanation. We recruited an additional group of 30 participants who were asked to enter the MM environment. We used the same experimental methodology described in Sect. 5 and recorded all evaluation measures except explanation usefulness (which was not relevant to the baseline condition). We found that the correlation-based condition significantly outperformed the baseline condition in the relevance measure (\(p\le 0.05\)).

1.2.2 Comparison to collaborative filtering explanation style

We further examined another explanation method, similar to a method presented in previous work (Herlocker et al. 2000). This explanation method justifies the recommendation by simply stating that users “similar” to the service user have shown interest in the recommended match. We call this explanation style “collaborative filtering,” because the explanation indicates that the recommendation was generated using collaborative filtering methodology, where recommendations are based on similarity measures. Unlike the previous methods, these explanations do not include any information about the attributes of the recommended users. Of course, this explanation does not reflect the actual reasoning behind the recommendation, since the underlying algorithm is content-based. Nevertheless, previous work has shown that explanations which are not related to the underlying algorithm can also be highly effective (Herlocker et al. 2000).

For the evaluation of this explanation method, we recruited an additional group of 25 subjects. All of the experimental setup was identical to the setup in the previous experiment, with the only difference being the explanation method.

Our results show that the correlation-based explanation method was significantly superior to the collaborative filtering explanation style. Specifically, the relevance rate in the correlation-based condition was significantly higher than in the collaborative filtering condition (correlation: mean = 3.34 vs. collaborative filtering: mean = 2.36, \(p\le 0.01\)). In addition, participants in the correlation-based condition were significantly more satisfied than those in the collaborative filtering condition (correlation: mean = 4 vs. collaborative filtering: mean = 3.28, \(p\le 0.01\)).

Appendix 3: Features in the reply prediction model

Public Profile Features of Sender and Receiver:

These features are part of the users’ profile and are public to all the users in the environment.

  1. Age
  2. Gender
  3. Marital status
  4. Number of children
  5. Height
  6. Smoking habits
  7. Number of pictures in profile
  8. Are pictures public? (The users on the site had an option of keeping their pictures private)
  9. Religious observance level
  10. Dating goal
  11. Living area
  12. Self-description length (number of characters in the user’s self-description)
  13. Preferences description length (number of characters in the user’s description of his/her preferences)
  14. Economic status
  15. Ethnic background

Each feature corresponds to two features in the model—one for the sender and one for the receiver.

Interaction and Activity features of Sender:

  1. Number of profiles he/she viewed.
  2. Number of users who viewed him/her.
  3. Number of users he/she liked.
  4. Number of users who liked him/her.
  5. Number of messages he/she sent.
  6. Number of messages he/she received.
  7. Number of his/her messages which were positively replied to before the current message.
  8. Percentage of positively replied messages before the current message.
  9. Number of received messages which he/she did not view.
  10. Number of users who viewed him/her whom he/she did not view.
  11. Number of users who liked him/her whom he/she did not view.

Interaction and Activity features of Recipient:

  1. Number of users who viewed him/her.
  2. Number of users he/she viewed.
  3. Number of users who liked him/her.
  4. Number of users he/she liked.
  5. Percentage of received messages he/she replied to positively.
  6. Number of messages he/she received.
  7. Did he/she send a message to the sender before?
  8. Did he/she like the sender before?
  9. Has he/she replied positively to any message before?
  10. Was he/she logged in when the message was received?
  11. Number of log-ins to the environment in the week before the message.
  12. Average duration of log-ins in the previous week (before the message).
  13. Number of messages sent in the previous week (before the message).
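
To suggest how these features might feed the reply prediction model, here is a minimal sketch that flattens sender and recipient records into one numeric row. The field names and encodings are hypothetical, not the paper’s actual schema; categorical attributes are assumed to be pre-encoded as numbers.

```python
PROFILE_KEYS = [
    "age", "gender", "marital_status", "num_children", "height", "smoking",
    "num_pictures", "pictures_public", "religiosity", "dating_goal",
    "living_area", "self_desc_len", "pref_desc_len", "economic_status", "ethnicity",
]
SENDER_ACTIVITY_KEYS = [
    "profiles_viewed", "viewed_by", "liked", "liked_by", "msgs_sent",
    "msgs_received", "pos_replies", "pos_reply_rate", "unviewed_msgs",
    "unviewed_viewers", "unviewed_likers",
]
RECIPIENT_ACTIVITY_KEYS = [
    "viewed_by", "profiles_viewed", "liked_by", "liked", "pos_reply_rate",
    "msgs_received", "msged_sender_before", "liked_sender_before",
    "any_pos_reply", "logged_in_at_msg", "logins_last_week",
    "avg_login_duration", "msgs_sent_last_week",
]

def build_feature_row(sender: dict, recipient: dict) -> list[float]:
    """Concatenate both public profiles with each side's activity features."""
    row = [float(sender[k]) for k in PROFILE_KEYS]                  # sender profile (15)
    row += [float(recipient[k]) for k in PROFILE_KEYS]              # receiver profile (15)
    row += [float(sender[k]) for k in SENDER_ACTIVITY_KEYS]         # sender activity (11)
    row += [float(recipient[k]) for k in RECIPIENT_ACTIVITY_KEYS]   # recipient activity (13)
    return row
```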

About this article

Cite this article

Kleinerman, A., Rosenfeld, A., Ricci, F. et al. Supporting users in finding successful matches in reciprocal recommender systems. User Model User-Adap Inter 31, 541–589 (2021). https://doi.org/10.1007/s11257-020-09279-z
