
COVER FEATURE: TECHNOLOGY PREDICTIONS

AI Sensors and Dashboards

Huber Flores, University of Tartu

Measurements are fundamental to our understanding and control of technology. We predict AI sensors and dashboards that monitor AI's inference capabilities and performance and that enable users to interact with AI, promoting responsible usage, building trust, and ensuring compliance with ethical and regulatory standards.

The adoption of artificial intelligence (AI) in our society is imminent. Despite its enormous economic impact, the lack of human-perceived control and safety is redefining the way in which emerging AI-based technologies are developed and deployed in systems and end applications. New regulatory requirements to make AI trustworthy and responsible are transforming the role that humans play when interacting with AI; consequently, AI is now not just creating new opportunities and markets but is doing it while preserving the fundamental rights and liberties of individuals. In this article, AI sensors and dashboards are predicted to become an integral part of AI solutions. AI sensors can gauge the inference capabilities of the technology, whereas AI dashboards can allow individuals to monitor and tune it transparently.

AI TRUSTWORTHINESS

The AI market value is expected to increase from US$100 billion to US$2 trillion by 2030, according to reports from Statista and numerous other sources.1 This exponential growth emphasizes the imminent adoption of AI in everyday applications.
AI's disruptive inference process has baffled the world as an increasing number of users reported and perceived human-like reasoning when interacting with powerful AI-based models available online,2 for example, ChatGPT, Ernie, and Gemini. This advanced performance seemed incomprehensible at first hand, leading to the release of an open global petition in March 2023 for slowing down AI developments for at least six months.3 Indeed, the opacity and black-box characteristics of machine and deep learning models have demonstrated high inference capabilities when trained at scale, but since their internal mechanics are obfuscated and unclear, the use of AI models fostered distrust and a sense of unsafety among human operators and developers.3 Current development practices that ensure the trustworthiness of software, for example, formal verification, are not applicable to the construction of AI models.4 Thus, new methods for gauging and controlling the capabilities of AI are key to making the technology trustworthy and fostering responsible deployments of AI in everyday applications and interactions with humans.

All economic and regulatory systems worldwide recognize the need to cultivate trustworthiness in digital technologies, and AI is the key one to focus on. The lack of transparency, accountability, and resilience in emerging AI-based technologies is a global concern, which has led to the imposition of strict regulations for their development. National and international sovereignty over AI-based applications and services aims to ensure public trust in AI usage. As a result, the European Union (EU) strategic plan for AI adoption, outlined in the EU General Data Protection Regulation 2016/679 and the EU AI Act,5 has emerged and become an international benchmark since the early stages of AI developments. Likewise, the United States has acknowledged the significance of regulating AI usage through its AI Executive Orders 13859/13960.6 China has also emphasized the importance of regulating generative AI developments as a crucial step in developing trustworthy AI technology.7

AI's inference capabilities and performance can be characterized through the use of different trustworthy properties. AI trustworthiness is defined by extending the properties of trustworthy computing software with new considerations that take into account the probabilistic and opaque nature of AI algorithms and the quality of training data.8 Trustworthy AI is valid, reliable, safe, fair, free of biases, secure, robust, resilient, privacy preserving, accountable, transparent, explainable, and interpretable.4 Notice, however, that AI trustworthiness is an ongoing process whose definition is evolving continuously and that involves collaboration among technologists, developers, scientists, policymakers, ethicists, and other stakeholders. Moreover, the mapping and implications of the ethical and legal requirements for technical solutions remain unclear.

In this article, we predict AI sensors and dashboards as a research vision that is an integral part of the adoption of AI and its interactions with individuals. An AI sensor can aid in monitoring a specific property of trustworthiness, whereas an AI dashboard can provide visual insights that allow humans to gauge and control the inherent properties of AI based on human feedback. Moreover, it has been demonstrated that trustworthy properties involve tradeoffs when implemented in practice,9,10 suggesting that modifying one property can impact others, for example, robustness versus privacy, accuracy versus fairness, and transparency versus security. Thus, AI sensors are envisioned to interact and establish negotiations between them to obtain a balanced level of trust based on the type of application at hand.11 Our prediction is that all applications and systems implementing AI-based functionality will provide a dashboard and will be instrumented with sensors that measure, adjust, and guarantee trustworthiness, such that individuals interacting with AI can be aware of its trust level. We highlight the technical challenges, current technological enablers to build upon, and implications of realizing this vision.

CONCEPTUAL BACKGROUND

The responsible deployment of AI in everyday applications is key to scaling up the adoption of the technology. To analyze this, we first reflect on current AI regulations and their implications for software development practices. After this, we highlight existing solutions aimed at characterizing the inference process of AI. With this information, we introduce the concept of AI sensors and dashboards.

Control over AI via regulations

Regulations over AI seek to promote the responsible development and deployment of AI technologies. Europe has crafted an extensive and comprehensive legislative proposal that highlights the possible risks and unwanted practices in the development of AI models. Moreover, it also emphasizes the assessment of AI-based technologies to verify transparency and adherence to human rights as a way to foster trust in society.5 To fulfill these goals, regulations provide guidelines and compliance support for handling data and developing software architectures.


Consequently, software engineers and other practitioners must consider new requirements, such as data traceability, minimization, rectification, and erasure. They must also address system security, privacy, and risk management. Similar and overlapping principles are also described in the U.S. AI Act,6 China's regulations over generative AI,7 and those of other countries, like Japan, Brazil, and Canada.

Modern applications and AI

Modern applications have evolved significantly beyond classical client-server architectures. Currently, modern architectures incorporate machine and deep learning pipelines (AI components) that collect data from user interactions and exploit them to train AI models, using either centralized or distributed approaches.12 In practice, analyzing the inference capabilities of AI thus involves evaluating 1) the trained AI model itself, 2) the training data, and 3) the overall AI pipeline that constructed the model. However, modern applications with integrated AI lack features to monitor the inference capabilities of AI effectively. As a result, they fall short of complying with AI regulations. Ongoing efforts to communicate the internal logic of AI models have led to the development of monitoring solutions, where the performance characteristics of AI models can be quantified and visualized in terms of metrics, such as accuracy and F1-score. Examples of this include TensorLeap (https://tensorleap.ai/), Neptune AI (https://neptune.ai/), and Comet ML (https://www.comet.com/site/).
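For concreteness, the two metrics named above can be computed in a few lines with scikit-learn; the labels below are toy values, not output from any of the tools listed here.

```python
# Accuracy and F1-score over toy labels, using scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print(accuracy_score(y_true, y_pred))  # fraction of correct predictions
print(f1_score(y_true, y_pred))        # harmonic mean of precision and recall
```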
Advanced monitoring tools that facilitate the comprehensive characterization of AI trustworthiness are a promising approach to engaging humans in the tuning of AI as well as verifying its internal inference behavior.

Toward AI sensors and dashboards

Sensors are commonly instrumented within applications to enable the monitoring of their functionality during runtime. Sensors are fundamental mechanisms for data collection and measurement. AI sensors are envisioned as software-based mechanisms, for example, virtual sensors.13 A virtual sensor is thus a program that characterizes or continuously profiles the behavior of certain implemented functionality. Since AI models are updated over time (retrained as new data are obtained), AI sensors observe how these changes influence different characteristics of the models, for example, resilience, accuracy, and fairness, to mention a few. AI sensors can also potentially learn from these observations to determine when models have been altered drastically by contributions, for example, possible attacks. In turn, an AI dashboard communicates through visual insights the measurements collected by the AI sensors, such that individuals can inspect, assess, and tune the behavior of AI.

ENABLING TECHNOLOGIES

AI sensors and dashboards simplify the complexity of advancing the monitoring tools of AI trustworthiness. Building these tools, however, requires building upon existing technologies. Thus, we continue by describing the technological enablers supporting the implementation of AI sensors and dashboards in practice.

Path to AI sensors

AI sensors are envisioned to be instrumented within modern applications at the code level, such that it is possible to analyze the (serialized) AI model (in JSON/YAML), the dataset, and its respective pipeline. Functioning as application programming interfaces (APIs), AI sensors leverage standard technologies for system integration and interoperability. AI sensors are designed with a clear separation between their interface (client API) and functionality (deployed in a back end), ensuring lightweight instrumentation routines and reducing processing costs in end applications. At the same time, this clear separation allows for changing the functionality of the AI sensors without modifying the end application. This is useful, as currently there is a mismatch between technical and legal trustworthiness. Upgrading the functionality of an AI sensor can then become simple by adopting system architecture patterns like microservices.

In addition, another important reason to separate interface and functionality is that several AI sensors are required to be instrumented within an application, such that it is possible to characterize different trustworthy properties. This can cause the processing requirements of applications to become higher. Thus, outsourcing the functionality to remote infrastructure can be helpful to avoid introducing extra processing overhead in applications. Furthermore, AI sensors instrumented in an application are meant to interact with each other, such that the autonomous tuning of trustworthiness can be achieved based on the type of application or context at hand. This autonomous tuning (or negotiation) also requires further processing capabilities that allow AI sensors to reach an agreement regarding the level of trust to be provisioned to users. This is particularly helpful in dynamic situations where the use of data becomes context dependent,14 requiring, in some cases, consent from surrounding individuals to use their data. In such cases, AI sensors can act on behalf of users to aid in automating the process of data handling and management. Notice, however, that users are required to be aware of their preferences and how these are configured within applications.
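To make the interface/back-end split concrete, here is a minimal Python sketch. All names in it (SensorBackend, FairnessBackend, TrustSensor) are hypothetical illustrations of the envisioned design, not an API defined by the article.

```python
# Hypothetical sketch of the client-API/back-end split for an AI sensor.
import json
from abc import ABC, abstractmethod


class SensorBackend(ABC):
    """Back-end functionality: analyzes the serialized model (JSON/YAML),
    the dataset, and the pipeline. Can be upgraded, e.g., as a
    microservice, without touching the instrumented application."""

    @abstractmethod
    def measure(self, model_json: str, dataset_ref: str) -> dict:
        ...


class FairnessBackend(SensorBackend):
    def measure(self, model_json: str, dataset_ref: str) -> dict:
        _ = json.loads(model_json)  # parse the serialized model
        # ... heavy fairness analysis would run here, server side ...
        return {"property": "fairness", "score": 0.93}


class TrustSensor:
    """Client API embedded in the end application: a thin stub that
    delegates all analysis, keeping instrumentation lightweight."""

    def __init__(self, backend: SensorBackend) -> None:
        self._backend = backend

    def read(self, model_json: str, dataset_ref: str) -> dict:
        return self._backend.measure(model_json, dataset_ref)


sensor = TrustSensor(FairnessBackend())
print(sensor.read('{"layers": 12}', "s3://bucket/training-data"))
```

Keeping the client stub this thin is what lets the back end be swapped, for example, redeployed behind a microservice, without modifying the instrumented application.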


Path to AI dashboards

An AI dashboard communicates through concise visual insights the measurements collected by the AI sensors, such that individuals can inspect, assess, and tune the behavior of AI. Notice that while the quantified information of all trustworthy properties can be presented, the type of application from which trustworthiness is estimated can play a role in how the results are presented in the AI dashboard. As an example, fairness can be an important factor for employment-, healthcare-, and finance-related applications, but it may be of less importance for autonomous applications like self-driving cars and drone delivery. This suggests that visualization through an AI dashboard depends on the type of application, requiring methods to reorganize content, such as hierarchy analysis or progressive disclosure mechanisms.15 Once information is available in the AI dashboard, tuning or providing feedback to enhance AI inference capabilities is not an individualized process but requires specific stakeholders, such as domain- or application-specific experts, to adjust AI models based on user insights.

AI dashboards facilitate model tuning for experts and provide insights into inference capabilities for all users. For example, in an AI model for bank loans, end users can assess the fairness of the model through the dashboard, but only designated expert stakeholders can apply user feedback to refine the model. Tuning of AI models can be achieved through several existing open source and proprietary tools and libraries, including Ray Tune (https://ray.io/), Optuna (https://optuna.org/), Hyperopt (https://hyperopt.github.io), Vizier (https://github.com/vizier-db), Microsoft NNI (Neural Network Intelligence, https://nni.readthedocs.io), Keras Tuner (https://keras-team.github.io/keras-tuner/), and SigOpt (https://sigopt.com/). Naturally, model tuning may compromise AI developments, requiring the use of secure technologies to ensure that AI models are not hampered intentionally.
tance for autonomous applications like types of applications. areas, for example, delivery drones
self-driving cars and drone delivery. and autonomous cars.16 The account-
This suggests that visualization through Existing real-world applications ability of these technologies when fac-
an AI dashboard depends on the type of Currently, online applications already ing unexpected crashes and abnormal
application, requiring methods to reor- implement AI models to some extent, behaviors remains a key challenge for
ganize content, such as hierarchy anal- in the form of either recommendations their safe adoption.17 Besides this, the
ysis or progressive disclosure mecha- or personal guidance for individuals. lack of visual human operators causes
nisms.15 Once information is available These applications request that users distrust in users. AI dashboards run-
in the AI dashboard, tuning or provid- enable their history interactions with ning on the personal devices of users
ing feedback to enhance AI inference applications to improve their recom- can potentially retrieve general infor-
capabilities is not an individualized pro- mendation logic, providing better sug- mation about AI in cars and drones,
cess but requires specific stakeholders, gestions that match users’ interests. such that users can decide whether
such as domain- or application-specific Several existing applications provide to use it or not. This information can
experts to adjust AI models based on coarse-grained estimates about this include safety and performance trust-
user insights. interest-matching characterization; for worthiness metrics, highlighting the
AI dashboards facilitate model tun- example, Netflix provides a matching effective operations of the autono-
ing for experts and provide insights score for movie recommendations. AI mous decision models. These dash-
into inference capabilities for all users. sensors and dashboards can provide boards can also provide and collect
For example, in an AI model for bank additional benefits for these applica- feedback over time from other users,
loans, end users can assess the fairness tions, providing fine-grained details increasing the usability and comfort
of the model through the dashboard, on the considerations taken to reach of the technologies.

58 COMPUTER  W W W.CO M P U T E R .O R G /CO M P U T E R


Personalized applications

Federated learning as a service has been proposed to build personalized applications for personal devices.12 These applications train robust AI models over time in a collaborative manner as users encounter other individuals with similar preferences and interests. Since not all updates to AI models are beneficial,4 AI dashboards can provide insights on whether aggregation is beneficial or detrimental to personalized model performance. For instance, it may be that the data contributions and features are irrelevant for certain users. As a result, users can proactively decide whether to accept or reject certain contributions from others through the AI dashboard.

FIGURE 1. A vision of AI sensors and dashboards for modern applications.

Metaverse applications

Augmented reality/virtual reality technologies exploit AI to provide advanced immersive experiences to users.18 Indeed, generative AI can easily construct a large variety of different digital environments for users to experience. However, this adaptive functionality can hamper other functionality in the digital environment. For instance, the behavior of AI models in other objects can change significantly, reducing their robustness levels. Thus, AI sensors can characterize and monitor over time the resilience and robustness of these objects when facing different environments. The AI dashboard can then provide this information to users to determine the level of operational immersive experience that a particular digital environment can provide without failures. AI dashboards can be presented to users as a part of their immersive experience and the description of their virtual environment.


Generative applications

Generative data produced by AI models is key for augmenting and enriching scarce datasets.19 This incidentally can influence the explainability and interpretability of models. Synthetically generated data can introduce biases in model inference. AI sensors can monitor the performance of models and their relationship with generated data. Potentially, AI sensors can adjust and balance the difference between real and synthetic data. Likewise, the AI dashboard can provide detailed information about how reliable the model is based on real measurements and provide insights about the amount of generative data supporting the AI model.

CHALLENGES AND FORESEEN DEVELOPMENTS

We next reflect on the current state of existing technologies and highlight the core challenges to overcome to achieve our vision.

Sensor instrumentation

By default, common practices for analyzing AI models follow a post-de facto verification approach.8 This means that the AI model is analyzed once it is fully constructed, deployed, and functional. AI models can be instrumented with AI sensors using standard API routines. However, this is not a trivial task. As shown in Figure 2, building an AI model involves multiple steps abstracted into a pipeline.

FIGURE 2. A standard machine learning pipeline instrumented with AI sensors and collecting measurements displayed in an AI dashboard. The pipeline stages are data ingestion, data preparation, model training, model deployment, and prediction, with retraining on new incremental contributions; each trustworthy property is linked to a sensor for monitoring and adjusting.


Each step influences the overall resulting model that is produced, suggesting that the overall pipeline requires the instrumentation of AI sensors. For instance, it is possible to establish the level of fairness of a model before its construction just by analyzing its raw data, for example, using statistical parity or a data imbalance method, such as resampling.9 Similarly, fairness can be derived once the model is fully operational or after each update, for example, using equal opportunity or equalized odds metrics.9 Thus, a key challenge to enable AI sensors is to develop sensors tailored to monitor each step of the AI pipeline.
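The two measurement points just described can be sketched as two small sensor functions. This is a minimal, numpy-only illustration; the variable names and the binary protected attribute are assumptions for the example, not prescriptions from the article.

```python
# Two fairness sensors at different pipeline steps (illustrative).
import numpy as np


def statistical_parity_difference(y, group):
    """Pre-training sensor: gap in positive-label rates between the
    two groups in the raw data (0 means parity)."""
    return abs(y[group == 1].mean() - y[group == 0].mean())


def equalized_odds_gap(y_true, y_pred, group):
    """Post-deployment sensor: the larger of the true-positive-rate
    and false-positive-rate gaps between groups, recomputed after
    each model update."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        rate = lambda g: y_pred[mask & (group == g)].mean()
        gaps.append(abs(rate(1) - rate(0)))
    return max(gaps)


y = np.array([1, 1, 0, 1, 0, 0])  # raw labels
g = np.array([1, 1, 1, 0, 0, 0])  # protected attribute
print(statistical_parity_difference(y, g))  # 2/3 - 1/3 = 0.33...
```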
This has two implications:

1. A trustworthy-by-design approach must be encouraged instead of a post-de facto analysis.
2. A single sensor for monitoring a specific trustworthy property may not be enough, requiring instead multiple AI sensors of the same type embedded at different steps of the pipeline.
Another challenge is to develop loose instrumentation principles, such that AI sensors can be easily equipped into pipelines. Notice, however, that this depends on the level of complexity of the method analyzing a specific trustworthy property. For example, the explainability of AI models (through methods like LIME, SHAP, and occlusion sensitivity) is measured by looking at how data inputs influence model outputs, requiring a complete overview of the whole pipeline execution. AI sensors are expected to interact with each other, suggesting that by equipping them with further autonomy, it is possible to balance the trust in applications automatically.10
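As an example of what such an explainability sensor could wrap, the following sketch uses the shap library on a toy model. The data and model are stand-ins, and the article does not mandate any particular explainability backend.

```python
# Sketch of an explainability sensor wrapping SHAP. Computing such
# attributions needs the trained model together with the inputs it
# consumes, i.e., a view over the pipeline, not the model file alone.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                  # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # stand-in labels
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)     # tree-model explainer
values = explainer.shap_values(X[:10])    # per-feature influences
print(np.abs(np.asarray(values)).mean())  # crude global-influence score
```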

AUGUST 2024  61
TECHNOLOGY PREDICTIONS

Furthermore, once instrumented, the configuration of an AI sensor plays a crucial role in determining the level of trustworthiness in monitoring. The sampling rate directly affects energy consumption and application performance, requiring optimal sampling for improved user experience. While it may seem feasible to sample the AI model every time it updates, the risk of adversarial attacks or induced changes persists over time, requiring frequent model assessment and analysis. Consequently, selecting the optimal sampling frequency for AI sensors remains an ongoing challenge, necessitating further research across various applications. Once sampled, however, the quality of data collected by AI sensors can create several commercialization opportunities. AI sensors yielding data that align well with both legal and technical requirements can gain a competitive edge in the market. This can also create opportunities for certifying AI sensors, facilitating easier auditing and accountability for trustworthy AI software. Certified AI sensors can allow developers to focus more on implementing application-specific functionality rather than evaluating trustworthiness properties.
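A hedged sketch of this sampling tradeoff is shown below: a monitor that re-reads a trustworthiness metric at a fixed period and flags large jumps. The period, threshold, and simple drift rule are illustrative assumptions only.

```python
# Periodic sampling of a trustworthiness metric (illustrative values).
import time


def monitor(read_metric, period_s=3600.0, threshold=0.05, max_samples=24):
    """Poll `read_metric` (e.g., a fairness or robustness score);
    longer periods save energy, shorter ones catch induced changes
    sooner."""
    last = read_metric()
    for _ in range(max_samples):
        time.sleep(period_s)
        current = read_metric()
        if abs(current - last) > threshold:
            print(f"Drift detected: {last:.3f} -> {current:.3f}")
        last = current
```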
Dashboard integration and usage

Once sensors are instrumented, measurements can be continuously extracted from AI models, and these can then be presented to users or any stakeholders in dashboards.10 By using the dashboards, stakeholders can visualize critical aspects that influence the inference behavior of AI models, for example, the level of fairness, robustness, and resilience, to mention a few. Through dashboard inspection, individuals relying on AI models can be aware of the limitations and scope of the decision support provided by AI models. Ultimately, dashboards can support humans in deciding whether or not to use AI to aid with a particular task. As mentioned earlier, effectively presenting trustworthiness results is crucial for communicating important AI characteristics to users. The method of presentation, however, depends on the specific type of application being used. Another key challenge that emerges when interacting with AI models through an AI dashboard is the type of device. AI dashboards have to be designed for different types of device characteristics and continuous cross-device interactions, beyond simple screen size. For example, an AI dashboard for a smartwatch may be visualized in a smartphone rather than in the smartwatch itself.15 This is to avoid users misunderstanding information in the dashboard, but it requires designing AI dashboards to fit into multidevice usage patterns. Another example is a self-driving car; a user may pair their personal device with the AI dashboard of the car temporarily, such that the user can be aware of the capabilities of the car for navigation.
interactions. This way, individuals can ments (TEEs) could be adopted in this worthiness in AI development practices.
also have a way of troubleshooting AI mat ter. Integrating t hese mecha- While there is a clear overlap
behavior just by inspecting dialogue- n isms within the architectures, how- between all these works, a key chal-
like conversations. Negotiation between ever, requires managing extra compu- lenge that remains unexplored is iden-
AI-based chatbots has been investigated tation overhead in the analysis as well tifying the essential requirements of
and demonstrated over the years.11 as solving several technological lim- trustworthiness. While the assump-
Besides this, another key challenge itations to achieve scalable solutions. tion is that the EU regulatory approach
is to determine what changes can be For instance, while TEEs are currently (properly implemented) could ensure
applied to the model by individuals: available to aid in secure computation, the trustworthiness of AI technolo-
for instance, removing personal data they have several limitations regard- gies, it is important that these solu-
Additionally, interaction between AI sensors can also be supported through LLM interfaces, meaning that negotiation happens through natural language interactions. This way, individuals can also have a way of troubleshooting AI behavior just by inspecting dialogue-like conversations. Negotiation between AI-based chatbots has been investigated and demonstrated over the years.11

Besides this, another key challenge is to determine what changes can be applied to the model by individuals: for instance, removing personal data from the training dataset, changing the machine learning algorithm, tuning the models' hyperparameters (optimizing inference performance), or simply adding/referring new data to the model, among others. This is a critical challenge to overcome, as AI models have to support the individual needs of users while preserving the general values of groups and society. Otherwise, conflicts over AI usage may arise, halting everyday activities and human processes.



Privacy-preserving and secure monitoring

AI models can be adversely affected by induced and noninduced changes at any stage of their construction pipeline. Noninduced changes emerge from unintentional situations where the data are hampered as they are collected and prepared for storage: for instance, an image corrupted by a camera failure. Similarly, induced changes arise from the intentional manipulation of the data (adversarial attacks). Since analyzing the trustworthiness of AI requires access to the AI model, its dataset, and its pipeline, it is important to protect them against intentional attacks. Thus, a key challenge is to guarantee that the continuous monitoring of trustworthy properties is conducted in a secure manner.20 Existing methods based on multiparty computation, homomorphic encryption, and trusted execution environments (TEEs) could be adopted in this matter. Integrating these mechanisms within the architectures, however, requires managing extra computation overhead in the analysis as well as solving several technological limitations to achieve scalable solutions. For instance, while TEEs are currently available to aid in secure computation, they have several limitations regarding the specific characteristics of software runtime execution, for example, programming language, dependencies, and storage, to mention the most common.
Legal and technical trustworthiness

Regulatory trustworthiness as defined differs when implemented in practice. Indeed, characterizing and measuring trustworthiness in AI is an ongoing process. Several works have developed and proposed different technical methods for quantifying each aspect of trustworthiness. For instance, several different methods have been proposed to measure the explainability (LIME, SHAP, and Grad-CAM, among others), fairness, and resilience of AI models. Currently, however, there is a clear mismatch between legal/ethical and technical requirements. The EU and U.S. AI Acts have identified requirements to ensure the trustworthiness of AI. Moreover, international initiatives and projects, such as the open source Shapash, the PwC AI trust index, Microsoft's AI trust and transparency, IBM's AI Fairness 360, and OpenAI's AI Impact Assessment, have defined trustworthiness and identified its respective properties. Likewise, EU projects, such as EU TRUST-AI (https://trustai.eu/), EU SPATIAL (https://spatial-h2020.eu/), and EU TAILOR (https://tailor-network.eu/), have also proposed principles and guidelines to ensure trustworthiness in AI development practices.

While there is a clear overlap between all these works, a key challenge that remains unexplored is identifying the essential requirements of trustworthiness. While the assumption is that the EU regulatory approach (properly implemented) could ensure the trustworthiness of AI technologies, it is important that these solutions are interoperable, acceptable, and manageable options in other legal/economic environments. More importantly, mapping legal/ethical to technical requirements is a critical challenge for identifying the limitations and implications of trustworthiness in practice. This can potentially lead to concrete procedures on how AI sensors are constructed and instrumented. Moreover, standard specifications of AI dashboards can also be adopted, such that individuals have a clear understanding of AI even in different geographical and legal/economic environments.
RISKS TO PREDICTION

AI pipelines are a part of larger systems. This suggests that not all trustworthy AI properties are achievable just by examining AI-related components. For instance, security is a property defined by trustworthiness, but securing a large system is a general task carried out for the overall underlying infrastructure, regardless of whether AI is present in the system. As a result, not all the trustworthy properties can be envisioned only within the scope of AI. In this case, AI sensors can collect measurements to determine the level of security of the entire system, meaning that trustworthy properties are not unique to AI but extend to other components of the whole system.

Foundational models are larger models built considering billions of parameters. AI sensors and dashboards embedded into the design stages of these models could easily aid in ensuring that pretrained models are free of biases, secure, and overall trustworthy. Foundational models can, however, pose a big challenge for the use of AI sensors when examining them post-de facto and verifying their regulatory compliance before using them. Currently, it is unclear to what extent foundational models can be augmented and used within applications without analyzing their retraining and dissecting their inference logic.

While AI dashboards and sensors can provide quantifiable properties about the trustworthiness of AI models, it is difficult to predict whether end users or specific stakeholders would be able to modify/tune the behavior of AI in applications. On the one hand, personalized AI models and the control of an individual's data are key to fostering EU liberties and rights. On the other hand, general models preserving the ethical values and legal/economic requirements of societal groups are key for using AI without conflicts. As a result, AI dashboards can potentially provide insights into effective AI performance, but it is foreseen that changes to tune the behavior of the model would be applicable only by defined authorities.

Furthermore, notice also that several technological enablers are currently available to aid in realizing the vision; multiple paths can be followed to build AI sensors and dashboards. However, the use of a specific technology ultimately depends on its rate of development and level of maturity.

Additionally, while it is possible for AI sensors to monitor intentional changes in data, for example, data poisoning, it is unlikely that AI sensors will be used to monitor nonintentional data changes, as those are based on situational and management factors. Collecting large volumes of real data that are free of errors and missing records is unfeasible, and extensive cleaning and preprocessing methods are available to prepare and verify data before training. In parallel to this, generative AI has transformed the use of synthetic data for the training of robust AI models. Generative AI can now be used to augment and enrich scarce datasets, improving the overall decision making of AI models. While the use of generative AI is foreseen to continue and become a standard practice in AI developments, AI sensors and dashboards can foster its safe usage by communicating to users, first, the quantifiable amount of synthetic data used in the model inference process and, second, the sources used in the generative creation of the dataset used for training: for instance, text transformed into images or vice versa.

Lastly, it is expected that any application implementing AI functionality will be equipped with AI sensors and dashboards. While AI sensors can follow standard guidelines for their instrumentation in software applications, AI dashboards require integration based on the type of application. For instance, AI dashboards in Metaverse applications can be interfaces that are part of the virtual experience, whereas wearable applications require interfaces to be designed for a variety of personal devices. Besides this, it is also possible for users to take the behavior of AI for granted over time. This means that trust in AI is expected by default, and AI dashboards are not frequently checked by individuals. AI dashboards, however, are still required to facilitate the verification and auditing of AI-based applications before they are released to the public. Moreover, AI dashboards can enable faster response times and proactive decisions when facing cyberattacks.

New regulatory requirements for the development of AI are ensuring the trustworthiness of the technology for its usage in everyday applications. To further strengthen the liberties and rights of individuals when interacting with AI, in this article, we predict a research vision of AI sensors and dashboards. The former gauges and characterizes the behavior of AI models and their evolving trustworthy properties, whereas the latter introduces human-in-the-loop supervision and control to tune and monitor the behavior of AI with human support. We highlighted how modern applications can benefit from AI sensors and dashboards and described the technical research challenges that have to be fulfilled to achieve our vision.


ABOUT THE AUTHOR

HUBER FLORES is an associate professor at the Institute of Computer Science, University of Tartu, 50090 Tartu, Estonia. His research interests are distributed, mobile, and pervasive computing systems. Flores received a Ph.D. in computer science from the University of Tartu. Contact him at huber.flores@ut.ee.

ACKNOWLEDGMENT

This research is part of the SPATIAL project, which has received funding from the European Union's Horizon 2020 research and innovation program under Grant 101021808.

REFERENCES

1. T. Babina et al., "Artificial intelligence, firm growth, and product innovation," J. Financial Econ., vol. 151, 2024, Art. no. 103745, doi: 10.1016/j.jfineco.2023.103745.
2. S. Herbold et al., "A large-scale comparison of human-written versus ChatGPT-generated essays," Sci. Rep., vol. 13, no. 1, 2023, Art. no. 18617, doi: 10.1038/s41598-023-45644-9.
3. A. Goldfarb, "Pause artificial intelligence research? Understanding AI policy challenges," Canadian J. Econ./Revue Canadienne D'économique, early access, 2024, doi: 10.1111/caje.12705.
4. B. Li et al., "Trustworthy AI: From principles to practices," ACM Comput. Surv., vol. 55, no. 9, pp. 1–46, 2023, doi: 10.1145/3555803.
5. "European approach to artificial intelligence." European Commission. Accessed: Mar. 1, 2024. [Online]. Available: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
6. "Executive order (EO) 13960." CIO.gov. Accessed: Mar. 1, 2024. [Online]. Available: https://www.cio.gov/policies-and-priorities/Executive-Order-13960-AI-Use-Case-Inventories-Reference
7. "Interim measures for the management of generative artificial intelligence services." CAC. Accessed: Mar. 1, 2024. [Online]. Available: http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm
8. J. M. Wing, "Trustworthy AI," Commun. ACM, vol. 64, no. 10, pp. 64–71, 2021, doi: 10.1145/3448248.
9. A. H. Celdran et al., "A framework quantifying trustworthiness of supervised machine and deep learning models," in Proc. AAAI SafeAI 2023 Workshop, 2023, pp. 2938–2948.
10. Y. Wang, "Balancing trustworthiness and efficiency in artificial intelligence systems: An analysis of tradeoffs and strategies," IEEE Internet Comput., vol. 27, no. 6, pp. 8–12, Nov./Dec. 2023, doi: 10.1109/MIC.2023.3303031.
11. S. Chen et al., "An intelligent chatbot for negotiation dialogues," in Proc. IEEE Smartworld, Ubiquitous Intell. Comput., Scalable Comput. Commun., Digit. Twin, Privacy Comput., Metaverse, Auton. Trusted Vehicles (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta), Haikou, China, 2022, pp. 1172–1177, doi: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00168.
12. K. Katevas et al., "FLaaS—Enabling practical federated learning on mobile environments," in Proc. ACM 20th Annu. Int. Conf. Mobile Syst., Appl., Services (MobiSys), 2022, pp. 605–606, doi: 10.1145/3498361.3539693.
13. D. Martin, N. Kühl, and G. Satzger, "Virtual sensors," Bus. Inform. Syst. Eng., vol. 63, no. 3, pp. 315–323, 2021, doi: 10.1007/s12599-021-00689-w.
14. C. B. Fernandez et al., "Implementing GDPR for mobile and ubiquitous computing," in Proc. ACM 23rd Annu. Int. Workshop Mobile Comput. Syst. Appl. (HotMobile), 2022, pp. 88–94, doi: 10.1145/3508396.3512880.
15. S. Park et al., "AdaM: Adapting multi-user interfaces for collaborative environments in real-time," in Proc. ACM CHI Conf. Human Factors Comput. Syst., 2018, pp. 1–14, doi: 10.1145/3173574.3173758.
16. E. Frachtenberg, "Practical drone delivery," Computer, vol. 52, no. 12, pp. 53–57, Dec. 2019, doi: 10.1109/MC.2019.2942290.
17. B. S. Miguel et al., "Putting accountability of AI systems into practice," in Proc. 29th Int. Joint Conf. Artif. Intell. (IJCAI), 2021, pp. 5276–5278, doi: 10.24963/ijcai.2020/768.
18. H. Ning et al., "A survey on the metaverse: The state-of-the-art, technologies, applications, and challenges," IEEE Internet Things J., vol. 10, no. 16, pp. 14,671–14,688, Aug. 2023, doi: 10.1109/JIOT.2023.3278329.
19. K. Cui et al., "GenCo: Generative co-training for generative adversarial networks with limited data," in Proc. AAAI Conf. Artif. Intell., vol. 36, no. 1, 2022, pp. 499–507, doi: 10.1609/aaai.v36i1.19928.
20. F. Mo et al., "DarkneTZ: Towards model privacy at the edge using trusted execution environments," in Proc. 18th Int. Conf. Mobile Syst., Appl., Services (MobiSys), 2020, pp. 161–174, doi: 10.1145/3386901.3388946.

