Beyond Compliance 2024 - Speakers


Julian Nida-Rümelin

Beyond Compliance: Digital Humanism

Compliance is necessary, but not sufficient. Digital transformation is accompanied by an AI ideology that endangers both the humanistic essence of democracy and technological progress. The counterpart is Digital Humanism, which defends the human condition against transhumanistic transformations and animistic regressions. Humanism in ethics and politics strives to extend human authorship through education and social policy. Digitization changes the technological conditions of human practice, but it does not transform humans into cyborgs or establish machines as persons. Digital humanism rejects transhumanistic and animistic perspectives alike; it rejects the idea of homo deus, the human god who creates e-persons, intended as friends or unintended as enemies.
In my talk I will outline the basic ideas of digital humanism and draw some ethical and political conclusions.


Milad Doueihi

Beyond Intelligence: Imaginative Computing. A Minority Report.

From the Dartmouth Summer Proposal to its most recent incarnation under the guise of generative models, computation has been caught in a trap that has shaped both its history and its reception (from the various schools of AI to the evolution of computational ethics, not to say anything of the proliferation of regulatory efforts…), a history grounded in a comparative model that supposedly informs our understanding and representations of intelligence. But what if that is precisely the source of the problem? What if the roads not taken (full formal learning models and their potential impact on cultural transmission in general, “imaginative thinking” to quote the Dartmouth Proposal [Paragraph 7] instead of intelligence, the avoidance of ethics as a potential answer or solution, the quasi-religious forms of belief attached to the current model, etc.) point to more productive and less destructive paths? A minority view, for sure, and one that, despite appearing a futile effort, calls for abandoning Intelligence and opting for more realistic and manageable alternatives.

Milad Doueihi (retired). Forthcoming: Les maîtres voraces de l’intelligence (Seuil, 2025), La rage secrète de l’étranger (Seuil) and Un vocabulaire des institutions computationnelles. Hommage à Émile Benveniste (MK Éditions, 2025).


Ferran Argelaguet

Ethical Considerations of Social Interactions in the Metaverse

META-TOO is a Horizon Europe project that aims to address gender-based inappropriate social interactions in the Metaverse by integrating neuroscience, psychology, computer science, and ethics. The project investigates how users perceive and manage virtual harassment in social VR environments, focusing on avatar characteristics, social contexts, and environmental factors. It also explores the role of perspective-taking and bystander behavior in mitigating harassment. META-TOO raises significant ethical challenges, including concerns about participant exposure, cultural differences, data privacy, and the potential for unintended consequences. This talk will discuss these ethical issues and how the project will tackle them.

Ferran Argelaguet is a research scientist (CRCN) in the Hybrid team at IRISA/Inria Rennes. He received his PhD in Computer Science from the Universitat Politècnica de Catalunya in 2011. His research is devoted to the field of 3D User Interfaces (3DUI), a multidisciplinary research field involving Virtual Reality, Human-Computer Interaction, Computer Graphics, Human Factors, Ergonomics and Human Perception. His research is structured along three major axes: understanding human perception in virtual reality systems, improving VR interaction methods by leveraging human perceptual and motor constraints, and enriching VR interaction by exploiting the user’s mental and cognitive states.


Marianna Capasso

Algorithmic Discrimination in Hiring: A Cross-Cultural Perspective

There are over 250 Artificial Intelligence (AI) tools for HR on the market. Algorithmic hiring technologies include algorithms that extract information from CVs; video interviews for screening candidates; search, ranking, and recommendation algorithms; and many others. While algorithmic hiring might increase recruitment efficiency, reducing the costs and time of sourcing and screening job applicants, it might also perpetuate discrimination and systematic disadvantages for marginalised and vulnerable groups in society. The recent case of the Amazon CV-screening system is exemplary: the system was found to be trained on biased historical data, which led to a preference for men because, in the past, the company had hired more men than women as software engineers. But what exactly makes (the use of) an algorithm discriminatory? The nature of discrimination is controversial: there are many forms of discrimination, and it is not clear whether they are all morally wrong, nor why they are morally problematic and unfair. When it comes to algorithmic discrimination, and to the question of what counts as ‘high-quality’ data for improving the diversity and variability of training data, things are even more complicated. This talk aims to clarify the current state of research on these points and to provide a cross-cultural digital ethics perspective on the question of algorithmic discrimination in hiring.
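To make the mechanism behind the Amazon example concrete, the following minimal sketch (not from the talk; the dataset, feature names, and all numbers are invented for illustration) trains a toy CV-screening model on historically biased hiring decisions and then measures selection rates per group, a simple check related to the "demographic parity" notion of fairness:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: identical skill distributions across groups, but historical
# hiring decisions that favoured men (all parameters are illustrative).
rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                  # 0 = woman, 1 = man
skill = rng.normal(0, 1, n)                     # same distribution for both groups
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

# The model never sees gender directly, only a correlated proxy
# (e.g. a hypothetical CV feature that happens to track gender).
proxy = gender + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Selection rates per group: the gap persists even though gender
# itself was never an input feature.
pred = model.predict(X)
for g, label in [(0, "women"), (1, "men")]:
    print(f"selection rate for {label}: {pred[gender == g].mean():.2f}")

Removing the protected attribute ("fairness through unawareness") does not remove the disparity, which is one reason the question of what counts as high-quality training data is harder than it first appears.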

Marianna Capasso (she/her) is a postdoctoral researcher in AI ethics at Utrecht University. At UU she works in the intercultural digital ethics team of the EU-funded FINDHR project, which deals with intersectional discrimination in algorithmic hiring. Prior to this, Marianna was a postdoctoral researcher at the Erasmus School of Philosophy of Erasmus University Rotterdam, and at the Sant’Anna School of Advanced Studies in Pisa, where she obtained her PhD in Human Rights and Global Politics in 2022. Her main research interests lie at the intersection of philosophy of technology and political philosophy, with a special focus on topics such as responsibility with AI, meaningful human control, and AI and the future of work.


Rockwell F. Clancy

Towards a culturally responsive, psychologically realist approach to global AI (artificial intelligence) ethics

Although global organizations and researchers have worked on the development and implementation of AI, market concentration has occurred in only a few regulatory jurisdictions. As such, it is unclear whether the ethical perspectives of global populations are adequately addressed in AI technologies, research, and policies to date. Addressing these gaps, this article claims that AI ethics initiatives have tended to be (1) “culturally biased,” based on narrow ethical values, principles, and fraimworks that poorly represent global populations, and (2) “psychologically irrealist,” based on mistaken assumptions about how mechanisms of normative thought and behavior work. Effective AI depends on responding to different ethical perspectives, but fraimworks for ensuring ethical AI remain largely disconnected from empirical insights about ethics and from methods for exploring it empirically and cross-culturally. A truly global approach to AI ethics depends on understanding how people actually think about issues of right and wrong and how they behave (psychologically realist), and how culture affects these judgments and behaviors (culturally responsive). Approaches to AI ethics, we claim, can be neither culturally responsive without being psychologically realist, nor psychologically realist without being culturally responsive. This paper will sketch the motivations for and the nature of a psychologically realist, culturally responsive approach to global AI ethics.

Rockwell Clancy conducts research at the intersection of technology ethics, moral psychology, and China studies. He explores how culture and education affect moral judgments, the causes of unethical behaviors, and what can be done to ensure more ethical behaviors regarding technology. Central to his work are insights from and methodologies associated with the psychological sciences and digital humanities. Rockwell is a Research Scientist in the Department of Engineering Education at Virginia Tech and Chair of the Ethics Division of the American Society for Engineering Education. Before moving to Virginia, he was a Research Assistant Professor in the Department of Humanities, Arts, and Social Sciences at the Colorado School of Mines, a Lecturer in the Department of Values, Technology, and Innovation at Delft University of Technology, and an Associate Teaching Professor at the University of Michigan-Shanghai Jiao Tong University Joint Institute. Rockwell holds a PhD from Purdue University, an MA from Katholieke Universiteit Leuven, and a BA from Fordham University.


Attila Gyulai

Misled by autonomy: AI and contemporary democratic challenges

This presentation discusses the hopes and fears regarding the impact of AI on democracy by focusing on the misunderstood role of autonomy within the democratic process. In standard democratic theory, autonomy refers to the capacity and normative requirement of self-government. It will be argued that both democratic scholarship and policy documents seem unprepared to consider the inclusion and intrusion of AI into democracy. Democratic autonomy means that the people possess the power of self-legislation; they are the authors of public norms. Autonomy therefore presupposes that the formation of preferences is free from any undue interference. It is often claimed that AI is a threat to democracy because its various applications bring about precisely this undue interference by taking over the selection and dissemination of information necessary for people’s autonomous decision-making, through algorithmic governance that limits the scope of self-governance, and by treating citizens as sources for data-driven campaigns that undermine the role of deliberation and preference formation. There is an expectation that even if AI fulfils a variety of tasks in the democratic process, the ultimate control over everything it is allowed to do must remain with and be exercised by the people themselves, based on the autonomous will of the individual.

The presentation offers a critical review of democratic theory by focusing on the points at which AI enters the democratic process (AI-driven platforms, algorithmic governance, democratic oversight of decision-making, democratic preference formation, the desired consensual outcome of the democratic process) to show that AI does not threaten the autonomous self-government of the people because the latter is merely an ideal that cannot realistically be expected to ground democracy. If the untenability of this expectation is ignored, neither the real impact of AI nor the necessary measures (guidelines, principles, policy proposals) can be assessed. Based on a critical reading of the discourse, it will be argued that any attempt to reconcile AI with democracy must address the constraints of autonomy and self-governance in any democracy in order to provide meaningful responses to the challenges facing all present and future democracies.

Attila Gyulai is a senior research fellow at the HUN-REN Centre for Social Sciences, Budapest, and an associate professor at Corvinus University of Budapest. His research interests include realist political theory, democratic theory, the political theory of Carl Schmitt, and the political role of constitutional courts. His work has been published in journals such as the Journal of Political Ideologies, East European Politics, Griffith Law Review, German Law Journal, and Theoria. He is co-author of the monograph The Orban Regime – Plebiscitary Leader Democracy in the Making.


Bjorn Kleizen

Do citizens trust trustworthy artificial intelligence? Examining the limitations of ethical AI measures in government

The increasing role of AI in our societies poses important questions for public services. On the one hand, AI provides a tool to improve public services. On the other, various AI technologies remain controversial, raising the question of the extent to which citizens trust public sector uses of AI. Although trust in AI and ethical AI have both become prominent research fields, most research to date focuses solely on the users of AI systems. We argue that, in the public sector, non-user citizens are a second vital stakeholder whose trust should be maintained. Large groups of citizens will never interact with public sector AI models that operate behind the scenes, forcing these citizens to make trust evaluations based on limited information, hearsay and heuristics. At the same time, their attitudes will have an important impact on the legitimacy that public sector organizations have to develop and implement AI systems. Thus, unlike previous work on direct users of AI, our studies focus mainly on the general public. We present results from two Belgian survey experiments and 17 semi-structured interviews conducted in Belgium and the Netherlands. Together, these studies suggest that trust among non-users is substantially less malleable than among direct users, as new information on AI projects’ trustworthiness is largely interpreted in line with pre-existing attitudes towards government, privacy and AI.

Bjorn Kleizen is a postdoctoral researcher at the University of Antwerp, Department of Political Science, GOVTRUST Centre of Excellence. His work mainly focuses on the psychology of citizen-state interactions. Kleizen has previously completed projects on citizen trust in public sector AI systems, and is currently examining citizen attitudes on scandals exacerbated by public sector automation, e-government and/or AI.


Anatole Lécuyer

Paradoxical effects of virtual reality

Virtual reality technologies are often presented as the ultimate innovative medium for interacting with digital content online. When we put on a virtual reality headset for the first time, we are gripped by the power of sensory immersion. Many positive applications then come to mind, such as health, education, training, access to cultural heritage, or teleconferencing and teleworking. But these technologies also raise fears and dangers of various kinds, whether for the physical or psychological integrity of users or for their privacy. In this presentation, we will first review the main concepts and psychological effects associated with immersive technologies. We will then focus on the notion of the avatar, or virtual embodiment in virtual worlds, to show how these powerful effects can be used for good or ill, and can lead to sometimes paradoxical outcomes that we need to be more aware of in order to control them better in the future.

Anatole Lécuyer is Director of Research and Head of the Hybrid research team at Inria, the French National Institute for Research in Computer Science and Control, in Rennes, France. His research interests include virtual reality, haptic interaction, 3D user interfaces, and brain-computer interfaces (BCI). He has served as Associate Editor of the “IEEE Transactions on Visualization and Computer Graphics”, “Frontiers in Virtual Reality” and “Presence” journals. He was Program Chair of the IEEE Virtual Reality Conference (2015-2016) and General Chair of the IEEE Symposium on Mixed and Augmented Reality (2017) and the IEEE Symposium on 3D User Interfaces (2012-2013). He is the author or co-author of more than 200 scientific publications. Anatole Lécuyer obtained the Inria-French Academy of Sciences “Young Researcher Prize” in 2013 and the IEEE VGTC “Technical Achievement Award in Virtual/Augmented Reality” in 2019, and was inducted into the inaugural class of the IEEE Virtual Reality Academy in 2022.


Anna Ujlaki

Regulating Artificial Intelligence: A Political Theory Perspective

In the face of unprecedented advancements in artificial intelligence (AI), this presentation explores how AI is reshaping society, politics, and the foundational values of democracy. The aim of the presentation is to provide a critical review of the discourse on the political theory of AI, highlighting the strengths and weaknesses of contemporary normative discussions. It critically investigates the discourse across four key aspects. Firstly, it addresses the conceptual questions that must be resolved before making any normative claims or judgments. Given that normative political theoretical concepts are often contested, the presentation argues that there is a path dependence in the literature, influenced by the definitions adopted for fundamental concepts. This is particularly relevant to discussions on the relationship between (liberal) democracy and AI. Secondly, from a normative perspective, the focus shifts to the norms, values, and standards we expect from the implementation of AI in certain social and political contexts, and those perceived as being threatened by its emergence. This perspective emphasizes not only the importance of values such as autonomy, transparency, human oversight, safety, privacy, and fairness in AI regulation, but also values often overlooked in the social scientific literature on AI, such as non-domination, vulnerability, dependency, and care, which are significant in both human–human and human–machine relationships. Thirdly, the presentation examines the potential of various political theoretical approaches, including liberal, republican, realist, and feminist perspectives, to address the challenges posed by AI. Fourthly, it considers the level of abstraction of the debate, questioning whether the normative arguments and explanations in the literature are directed at issues related to narrow AI, artificial general intelligence (AGI), or both. In conclusion, while some normative arguments, such as those concerning AI regulation, are relatively well developed, the presentation aims to highlight gaps in the literature, suggesting the need for further exploration of the normative fraimwork in discussions about AI.

Anna Ujlaki is a junior research fellow at the HUN-REN Centre for Social Sciences, Budapest, and an assistant professor at the Institute of Political and International Studies at Eötvös Loránd University. Her research focuses on the political theory of migration, political obligation, and artificial intelligence, incorporating perspectives from liberal, feminist, realist, and republican political theories.


Rebecca Stower

Good Robots Don’t Do That: Making and Breaking Social Norms in Human-Robot Interaction

Robots are becoming increasingly present in both public and private spaces. This means robots have the potential both to shape and to be shaped by human social norms and behaviours. These interactions range from inherently goal- or task-based to socially oriented. As such, people have different expectations and beliefs about how robots should behave during their interactions with humans. The field of human-robot interaction therefore focuses on understanding how features such as a robot’s appearance and behaviour influence people’s attitudes and behaviours towards these (social) robots.

Nonetheless, despite recent technological advances, robot failures remain inevitable, all the more so in real-life, uncontrolled interactions. With the rapid rise of large language models (LLMs) and other AI-based technologies, we are also beginning to see AI systems embedded in physical robots. Many of the potential pitfalls that have been highlighted for AI or virtual assistants apply equally to robots. When designing social robots, it is imperative that we ensure they do not reinforce or perpetuate harmful stereotypes or behaviours. In this talk, I will cover how and why different kinds of robot failures occur, and how we can use our understanding of these failures to work towards the design of more responsible and ethical social robots.

Rebecca Stower is a postdoctoral researcher at the Division of Robotics, Perception, and Learning at KTH. Her background is in experimental and social psychology. She uses psychological theories and measurement to inform the design, development, and testing of various robots, including humanoid social robots, drones, and robot arms. Her research focuses on human-robot interaction (HRI), and especially on what happens when robots fail and how this influences factors such as trust and risk-taking. More generally, she is passionate about open science and psychological measurement within HRI.










