
Autonomous Industrial Control using an Agentic Framework with Large Language Models

Javal Vyas, Mehmet Mercangöz

Autonomous Industrial Systems Lab, Imperial College London, Imperial College Rd, South Kensington Campus, London, SW7 2AZ, United Kingdom

arXiv:2411.05904v1 [cs.MA] 8 Nov 2024

Abstract: As chemical plants evolve towards full autonomy, the need for effective fault handling and control in dynamic, unpredictable environments becomes increasingly critical. This paper proposes an innovative approach to industrial automation, introducing validation and reprompting architectures utilizing large language model (LLM)-based autonomous control agents. The proposed agentic system—comprising operator, validator, and reprompter agents—enables autonomous management of control tasks, adapting to unforeseen disturbances without human intervention. By utilizing validation and reprompting architectures, the framework allows agents to recover from errors and continuously improve decision-making in real-time industrial scenarios. We hypothesize that this mechanism will enhance performance and reliability across a variety of LLMs, offering a path toward fully autonomous systems capable of handling unexpected challenges and paving the way for robust, adaptive control in complex industrial environments. To demonstrate the concept's effectiveness, we created a simple case study involving a temperature control experiment embedded on a microcontroller device, validating the proposed approach.

Keywords: Autonomous systems, industrial AI, generative AI, multi-agent systems.

1. INTRODUCTION

Chemical plants are moving towards autonomous operations. Especially for routine operations that follow well-defined procedures, autonomous operation is considered technically feasible with currently available technologies (Borghesan et al. (2022)). However, a significant challenge in developing autonomous control systems is the need to account for long-tail events, which are rare, unpredictable occurrences that fall outside the scope of typical operational scenarios. In industrial contexts, these long-tail events can range from unexpected equipment failures to highly unusual process disturbances. Traditional automation approaches struggle to handle such events, as they rely heavily on predefined rules and algorithms, rendering them overly rigid and poorly adapted to situations that deviate from expected patterns. Solutions leveraging machine learning models have made some progress in handling known unknowns, such as known disturbances or possible plant-model mismatch, but they tend to fail in handling anomalies. This is primarily because these models are trained on majority-class data, as anomaly data is scarce or available in too few samples. As a result, these solutions struggle to detect and react to anomalies in real time, particularly in scenarios involving unknown unknowns—unforeseen disturbances that the system was not designed to handle.

Currently, human operators play a key role in managing the type of unknown unknowns discussed previously. Leveraging their reasoning abilities and domain knowledge, human operators can dynamically assess a situation and adjust their actions based on real-time feedback. The overarching goal of this work is to bring these reasoning and knowledge-use abilities to autonomous systems using generative machine learning models as intelligent control agents. We particularly focus on the use of Large Language Models (LLMs) for this purpose.

LLMs, with their extensive knowledge bases and reasoning capabilities, represent a promising avenue for developing intelligent control agents capable of autonomously analyzing incoming data, diagnosing anomalies, and making informed control decisions in a zero-shot manner, making inferences and offering solutions to scenarios they have not explicitly encountered in training (Pantelides et al., 2024). The challenge is transitioning to a fully automated system that can evaluate responses and adjust actions independently. To address this, we propose a reprompting architecture that empowers LLMs to function as autonomous control agents. This architecture enables agents to validate their actions against a digital twin, implementing them in the physical system if they pass validation; if not, the agent is prompted to revise its approach. This iterative process significantly enhances decision-making capabilities and improves system performance in real time.

⋆ The authors gratefully acknowledge the financial support provided by the Department of Chemical Engineering at Imperial College London for this work.
2. RELATED WORK

2.1 Evolution of Autonomous Systems for Industrial Control

Autonomous systems have been defined in various ways across the literature, with some emphasizing their capability to solve tasks independently of specific programming instructions (Hrabia et al. (2015), Abbass et al. (2018)) and others noting their ability to achieve goals without step-by-step guidance (Beer et al. (2014), Watson and Scheidt (2005)). Another perspective highlights autonomy as the capacity to make decisions under incomplete information (Abbass et al. (2018), Aniculaesei et al. (2018)). These definitions underscore the growing role of artificial intelligence (AI) in industrial control systems, positioning Autonomous Industrial Systems as a key area within Industrial AI, intersecting with fields like Machine Learning, Natural Language Processing, and Robotics (Peres et al. (2020)).

Initial approaches in agent-based systems utilized rule-based agents for tasks such as intrusion detection (Jha and Hassan (2002)) or decision support (Gao et al. (2009)). Despite some success, rule-based agents are inherently limited to predefined situations, making them less adaptable to novel scenarios (Siu et al. (2021)). This constraint led researchers to explore machine learning and deep learning (DL) agents, which can adapt based on data. For instance, DL-based multi-agent systems have shown effectiveness in intrusion detection (Louati and Ktata (2020)), yet the complexities of data collection and validation in distributed environments create substantial challenges in many industrial applications (Hanga and Kovalchuk (2019)).

To address these limitations, researchers turned to reinforcement learning (RL) agents. RL agents, while highly effective for specialized tasks, face challenges in sample efficiency, generalizability, and lengthy training times (Cheng et al. (2024)). Despite their effectiveness in specific applications like process control in crystallization (Meng et al. (2023)) and inventory management (Mousa et al. (2024)), RL-based approaches often require well-defined problem settings and reward functions, which can limit their scalability in complex, dynamic environments (Nian et al. (2020)) and may not be well suited for handling anomalous conditions.

2.2 Large Language Models (LLMs) in Industrial Control

Recently, large language models (LLMs) have emerged as a promising tool in agent-based systems due to their adaptability and generalization abilities. LLMs have made significant inroads in chemical engineering, such as predicting material properties (Jia et al. (2024), Balaji et al. (2023)) and process decision-making (Chen et al. (2024a), Schweidtmann (2024)). LLMs have also been employed for tasks like fault detection and flowsheet generation, where they assist in complex problem-solving by completing, correcting, or even generating flowsheets autonomously (Balhorn et al. (2024), Hirtreiter et al. (2023)).

LLMs have also been considered for industrial control. Song et al. (2023) proposed a framework to control the HVAC system in a building using an LLM, supplying historical demonstrations alongside the prompt to improve the LLM's control performance; they demonstrated that an LLM performs equivalently to, or surpasses, the RL baseline. Researchers have also used LLMs for the modular production and control of autonomous industrial systems, where the LLMs are connected to digital twins and adapt through their interactions with the digital twin for a specific task (Xia et al. (2023)). Xia et al. (2024) propose a framework to achieve an end-to-end industrial automation system. Their framework supplies LLMs with real-time events at different context semantic levels, allowing them to interpret the information, generate production plans, and control operations on the automation system. In this work, the researchers propose to use a digital twin of the industrial system for generating context for the automation agents, but they do not consider a validation or reprompting scheme, or the generation of any kind of feedback or critique for the LLM actions.

2.3 Prompting Strategies for Enhanced Control

Prompting strategies have become central to improving LLMs' decision-making abilities in complex tasks. Among these, the Chain of Thought (CoT) approach prompts LLMs to break down tasks into intermediate reasoning steps before producing a final response (Wei et al. (2022)). Building on this, the Tree of Thought (ToT) approach expands on CoT by allowing LLMs to explore multiple paths in their reasoning (Yao et al. (2023)), and the Graph of Thought (GoT) consolidates LLM reasoning paths to enhance task completion accuracy (Besta et al. (2024)).

Other prompting frameworks adapt feedback-driven approaches to improve LLM behavior in agent-based settings. REACT (Yao et al. (2022)) enables agents to process thoughts before taking actions, while REFLECTION (Shinn et al. (2023)) allows agents to interact with the system, reflect on actions, and store the interaction history for iterative learning. Related approaches, including self-refine (Madaan et al. (2023)), RCI (Kim et al. (2023)), and self-debugging (Chen et al. (2023)), utilize feedback for error-correction and optimization in domain-specific tasks. For iterative tasks, Chen et al. (2024b) introduce a "gradient descent" style reprompting method to refine prompts based on interaction history, while Xu et al. (2024) use automatic reprompting with CoT to improve task accuracy.

This progression toward LLMs highlights their potential to address the limitations of earlier methods, providing a versatile approach that combines interpretability and robustness for industrial automation.

2.4 Contribution

In this work, we introduce the concept of using reprompting architectures for industrial control, where LLMs operate autonomously in complex process environments. The idea is to have an agent-based system which can carry out tasks autonomously, with a large language model (LLM) serving as the engine for each agent. We hypothesize that with the reprompting architecture, the performance of the system will improve.
Fig. 1. Schematic of an agentic framework for monitoring and controlling a process plant during anomalous conditions

LLMs are prone to hallucinations as an inherent model characteristic, which may result in erroneous actions that can be hazardous in safety-critical systems. Thus, having another agent that acts as a critic, steering the system toward a safe-to-optimal response, would decrease the likelihood of such errors. More specifically, we introduce validation agents that use a simulation capability, e.g. a digital twin, to check the utility of the actions generated by the LLM agents, and a reprompting agent that provides feedback to the actor agent for improving the action in case the previously suggested action does not pass the validation check.

To illustrate the potential of this approach, we present a case study focused on temperature control using a physical microcontroller. We argue that this methodology aligns with the growing trend towards adaptive, fully autonomous systems and establishes a new pathway for intelligent industrial automation.

The following sections delve deeper into the components of the proposed framework. Section 3 provides an overview of the framework and its components. In Section 4 we present the temperature control case study and its architecture. Section 5 discusses the results of the case study and its findings. Finally, in Section 6 we touch upon future work.

3. METHODOLOGY

The proposed framework introduces a modular and adaptive LLM-based multi-agent system, with a focus on programmatically leveraging a reprompting step via a Reprompter Agent to guide an Actor Agent toward safe and effective solutions. Each agent is assigned a specific role, equipped with tools, and tasked with distinct actions that contribute to the overarching system objectives. This section outlines the framework's role in enhancing system reliability and responsiveness through a coordinated agent-based approach.

3.1 Framework Overview

The core of this framework is built around four principal agents—the Monitoring Agent, Actor Agent, Validator Agent, and Reprompter Agent—that interact with a simulated digital twin environment (see Fig. 1). This digital twin serves as a proxy for the physical system, enabling safe validation of actions and structured feedback loops before passing actions to the physical plant.

• Monitoring Agent: The Monitoring Agent gathers the state from the plant and can act as a versatile agent, usable for both continuous control and anomaly detection. In the case of continuous control, it keeps track of the performance of the system and allows for planned actions in a continuous manner. In the case of anomaly detection, the Monitoring Agent only triggers the subsequent agents if it detects an anomaly.
• Actor Agent: The Actor Agent initiates actions aimed at achieving control objectives, such as modifying parameters or toggling operational states. It operates based on predefined goals, and once it formulates an action, the Actor Agent passes this decision to the digital twin. This simulation evaluates the potential effects of the action, minimizing the risk of unsafe interventions on the physical system.
• Digital Twin Simulation: The digital twin emulates the behavior of the physical system in response to the Actor Agent's actions, enabling real-time assessment in a no-risk environment. This simulated feedback captures anticipated system responses, allowing agents to test actions safely before deployment.
• Validator Agent: Following the simulation, the Validator Agent assesses the Actor Agent's proposed action based on safety and operational criteria. If the action meets these criteria, it is ready for physical deployment. However, if it is deemed unsafe or suboptimal, the Validator Agent flags the action, prompting the Reprompter Agent to intervene for a predefined number of iterations, after which, if the actions remain unsafe, the safety system overrides them.
• Reprompter Agent: The Reprompter Agent is a pivotal component in ensuring system safety and refinement. When an action fails validation, the Reprompter Agent collaborates with the Actor Agent to adjust the initial decision. Using alternative prompts generated by processing the digital-twin outputs, the Reprompter Agent conditions the Actor Agent until it aligns with the Validator Agent's criteria. This process forms a feedback loop in which each iteration is tested in the digital twin and validated again, ensuring the action is both safe and optimized. The loop persists until the action either satisfies validation standards or reaches a predefined limit on iterations, safeguarding stability in the control process.

This structured interaction between agents, anchored by the Reprompter Agent's corrective capabilities, enables the system to autonomously navigate complex control environments. By leveraging programmatic refinement, the Reprompter Agent helps the Actor Agent reach safe and effective solutions, ensuring robust and adaptive control in dynamic industrial settings.
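To make the loop concrete, the sketch below shows one way the validate-and-reprompt cycle described above could be coded. It is a minimal illustration only: actor, validator, reprompter, digital_twin, and safety_override are hypothetical stand-ins for the LLM-backed agents, the simulation, and the hard-coded safety layer, and the iteration limit is an assumed value rather than one reported in this paper.

MAX_REPROMPTS = 3  # predefined iteration limit (assumed value)

def control_step(state, actor, validator, reprompter, digital_twin, safety_override):
    # Actor Agent proposes an action for the current plant state
    action = actor(state)
    for _ in range(MAX_REPROMPTS):
        # Test the action on the digital twin before touching the plant
        predicted_response = digital_twin(state, action)
        verdict, feedback = validator(state, action, predicted_response)
        if verdict == "safe":
            return action  # passes validation: deploy to the physical system
        # Otherwise the Reprompter turns the validator feedback into a revised prompt
        action = reprompter(state, action, feedback)
    # Iteration limit reached without a safe action: hand over to the safety system
    return safety_override(state)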

3.2 Components of the Framework

• Agents: Each agent functions as a specialized LLM-driven entity with a distinct role in the feedback loop, contributing to adaptive control:
· Role: Defines the purpose of each agent.
· Goal: Provides clarity on what each agent should achieve.
· LLM: Serves as the core reasoning engine, enabling agents to analyze, evaluate, and adapt.
· Tools: Specifies the tools the agent has access to in the decision-making process.
• Tools: These serve as specialized utility functions that support agents during decision-making. Tools enable agents to handle tasks that are beyond the core capabilities of an LLM, such as performing complex calculations or accessing specific lookup tables. By supplementing the LLM's reasoning with precise computational and data-access functions, the tools enhance the agents' ability to make informed, accurate decisions.
• Tasks: These are targeted assignments given to each agent, ranging from concise directives to detailed instructions that guide the LLM in executing specific actions. Tasks are carefully assigned to agents equipped with the necessary expertise or context, ensuring the agent's background aligns with the requirements of the task. Each task description provides clear guidance to optimize agent performance and streamline the overall decision-making process.

In summary, this methodology outlines a structured, iterative framework designed to leverage the capabilities of Large Language Model (LLM)-based agents in autonomous industrial control. Each component, from specialized agents to supporting tools and defined tasks, works in concert to ensure safe, adaptive, and effective control actions within a digital twin environment. The introduction of a Reprompter Agent strengthens the system's resilience, facilitating a feedback-driven refinement process that iteratively adjusts actions until they meet safety and efficacy standards.

To demonstrate the practical application of this framework, we present a case study in temperature regulation. This case study illustrates the roles and interactions of the Actor, Validator, and Reprompter agents in real-world scenarios, showcasing the framework's capability to autonomously navigate complex control challenges.

4. CASE STUDY

Fig. 2. Case Study Schematic

This case study demonstrates the application of the proposed LLM-based multi-agent framework to autonomously control a physical Arduino microcontroller known as TCLab (Oliveira and Hedengren (2019)). The setup aims to manage heater operations based on specific temperature thresholds: heaters are turned off when the temperature exceeds 27°C and turned on when it falls below 25°C. This creates a cyclical oscillation within these thresholds, with the control sequence monitored over a 40-minute period. The goal of this case study is to assess how effectively reprompting improves the proposed multi-agent framework for autonomous control under real-world conditions.

Structure of the Case Study: The case study employs a three-agent structure (Fig. 2), with each agent having a distinct role to facilitate intelligent decision-making and control processes:

• Operator Agent: The Operator Agent initiates an action based on real-time temperature readings. It determines when to activate or deactivate the heater based on the predefined thresholds (Fig. 4). By leveraging its role-specific prompts (Fig. 3), the Operator Agent interprets data and issues commands to maintain the target temperature range.
• Validator Agent: The Validator Agent assesses the actions proposed by the Operator Agent. It verifies whether the action aligns with the control logic—specifically, maintaining temperatures within the desired range. If the proposed action does not meet the criteria, the Validator flags it for reevaluation, preventing potentially unsafe or incorrect responses from being implemented.
• Reprompter Agent: Upon a validation failure, the Reprompter Agent is activated to analyze and refine the action suggested by the Operator Agent. The Reprompter Agent recalibrates the initial action to ensure it aligns with the system's predefined requirements. The refined action undergoes a secondary validation before being deployed.

Fig. 3. Operator Agent Description

Fig. 4. Operator Agent Task Description

While the framework is generally designed to work with a digital twin model, the simplicity of this control task allows us to embed it into the Validator Agent in this case study. This approach demonstrates the framework's adaptability to different control tasks, highlighting each agent's contribution to maintaining stable, autonomous control. Future work will integrate a digital twin, particularly for complex, safety-critical scenarios like fault detection, enhancing the framework's robustness for advanced industrial control. This case study underscores the potential of this multi-agent configuration to autonomously manage and correct actions in real-world applications.
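For reference, the sketch below shows the underlying on/off threshold rule that the Operator Agent is asked to reproduce and the Validator Agent checks, wired to the hardware through the tclab Python package. The helper names and the 5-second sampling period are illustrative assumptions; only the 27°C/25°C thresholds and the 40-minute horizon come from the case study.

import time
from tclab import TCLab  # Python interface to the TCLab Arduino kit

T_HIGH, T_LOW = 27.0, 25.0  # thresholds from the case study (degrees C)

def operator_rule(temp, heater_on):
    # Rule the Operator Agent is prompted to follow:
    # off above 27 C, on below 25 C, otherwise keep the current state
    if temp > T_HIGH:
        return False
    if temp < T_LOW:
        return True
    return heater_on

def validator_check(temp, heater_on):
    # Control logic the Validator Agent enforces on a proposed action
    return not (temp > T_HIGH and heater_on) and not (temp < T_LOW and not heater_on)

with TCLab() as lab:
    heater_on = False
    for _ in range(40 * 60 // 5):            # 40-minute run, assumed 5 s sampling period
        temp = lab.T1                        # heater-1 temperature reading
        proposal = operator_rule(temp, heater_on)
        if validator_check(temp, proposal):  # in the paper an LLM agent makes this call
            heater_on = proposal
        lab.Q1(100 if heater_on else 0)      # heater fully on or off
        time.sleep(5)

In the paper the proposal comes from an LLM-backed Operator Agent rather than this deterministic rule, so a failed check would trigger the Reprompter Agent instead of simply keeping the previous heater state.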

5. RESULTS

The case study framework was realized using the CrewAI multi-agent platform (CrewAI, 2024). The framework was created and executed locally, while the agents' engines utilized LLMs from the OpenAI suite (OpenAI, 2024), accessed in the cloud via the OpenAI APIs for the corresponding models. Communication between TCLab and the framework was achieved via a Python-based wrapper. The performance of the proposed framework, leveraging OpenAI's large language model (LLM) suite as control agents, was evaluated within the temperature regulation case study. This evaluation centered on the agents' accuracy in executing control actions and their control performance. We measured accuracy in two settings—initial-pass accuracy and accuracy post-reprompting—to analyze the models' ability to correct missteps autonomously.

Fig. 5. Temperature Profile for GPT 3.5
Fig. 6. Temperature Profile for GPT 4o-mini
Fig. 7. Temperature Profile for GPT 4o
Fig. 8. Temperature Profile for GPT 4

Table 1. Accuracy Performance of Language Models in the proposed framework

Metric                          GPT 3.5   GPT 4o-mini   GPT 4o    GPT 4
Accuracy, first pass (%)        60.04     72.49         99.63     93.75
Accuracy, after reprompts (%)   85.34     89.97         99.81     96.09
Samples                         423       394           554       128
Passes                          254       253           552       120
Fails                           169       61            2         8
Passes after reprompts          107       96            1         3

The results in Table 1 tell a twofold story: one part concerns the sampling rates and the other the accuracy. The sampling rate for GPT 4o is the highest, while GPT 4 has the lowest; this is a result of the inference times of these models, which directly affect the achievable sampling rates. Although of lesser importance for this application, it indicates that autonomous systems with LLM-based agents may not be suitable for systems that require fast dynamics.
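For readers who want to reproduce the accuracy columns, the counts in Table 1 appear to combine as first-pass accuracy = passes / samples and post-reprompt accuracy = (passes + passes after reprompts) / samples; this reading matches the GPT 3.5, GPT 4o, and GPT 4 columns to within rounding and is an inference, not a formula stated in the paper. A minimal check:

# Hypothetical reconstruction of the Table 1 accuracy metrics from the raw counts.
def accuracies(samples, passes, passes_after_reprompts):
    first_pass = 100 * passes / samples
    with_reprompts = 100 * (passes + passes_after_reprompts) / samples
    return round(first_pass, 2), round(with_reprompts, 2)

print(accuracies(423, 254, 107))  # GPT 3.5 -> (60.05, 85.34); Table 1 reports 60.04 / 85.34
print(accuracies(554, 552, 1))    # GPT 4o  -> (99.64, 99.82); Table 1 reports 99.63 / 99.81
print(accuracies(128, 120, 3))    # GPT 4   -> (93.75, 96.09); Table 1 reports 93.75 / 96.09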
In terms of accuracy, GPT 4o outperforms the other OpenAI models, with GPT 4 following as the second-best performer. Notably, when reprompting is applied, the system's performance increases across all models, with the most significant improvement observed for GPT 3.5, rising from 60.04% to 85.34%. These results highlight the potential of reprompting architectures to enhance model performance significantly, enabling even less capable models to approach the accuracy of more advanced ones.

The control performance of these models was evaluated using the average temperature deviation from the midpoint of the temperature range. Table 2 shows that GPT 4o-mini has the best control performance, with minimal overshoots and undershoots, whereas GPT 4, despite being highly accurate, performs the worst in control performance amongst the OpenAI LLM suite. This is attributed to the inference time of the model. For GPT 4 the inference time was high, resulting in the previous action being implemented for an extended period of time. Since the system does not have active cooling, even after the heaters are switched off, residual heat continues to raise the temperature further. Thus, it is important to note that LLM inference times do influence the control performance of the system.

Table 2. Control Performance of Language Models in the proposed framework

Metric                     GPT 3.5   GPT 4o-mini   GPT 4o    GPT 4
Average deviation (°C)     0.832     0.077         0.582     1.469
Time above 27°C (s)        949       499.10        887.70    1163.09
Time below 25°C (s)        0         432.30        285.87    173.40
Time outside range (s)     949.24    931.40        1173.58   1336.50

These results confirm that the proposed framework effectively utilizes LLMs as control agents, and that the reprompting mechanism significantly enhances accuracy and reliability, especially for models with initially lower performance.

6. CONCLUSION

In conclusion, this paper highlights the promising potential of large language models (LLMs) as autonomous control agents in industrial applications. The proposed framework, enhanced by a reprompting architecture, demonstrates a significant capability for agents to autonomously correct their actions, leading to improved reliability and accuracy in control tasks. Our results indicate that even earlier models like GPT 3.5-turbo can achieve substantial performance gains through reprompting, with accuracy improving from 60.04% to 85.34%. More advanced models, such as GPT 4o, reached near-perfect accuracy exceeding 99%, showcasing the framework's effectiveness in harnessing LLMs for control tasks.

These findings validate the viability of LLM-based systems in autonomous industrial control, where rapid and precise decision-making is essential. While this case study focused on a relatively straightforward task, the adaptability of the framework positions it well for application in more complex control scenarios. Future work may explore its deployment in fault handling and digital twin environments, where real-time decision-making is critical in dynamic and unpredictable settings. Overall, this research supports the integration of LLMs with reprompting architectures as a vital component toward realizing fully autonomous and intelligent industrial systems.

REFERENCES

Abbass, H.A., Scholz, J., and Reid, D.J. (2018). Foundations of trusted autonomy. Springer Nature.
Aniculaesei, A., Grieser, J., Rausch, A., Rehfeldt, K., and Warnecke, T. (2018). Towards a holistic software systems engineering approach for dependable autonomous systems. In Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems, 23–30.
Balaji, S., Magar, R., Jadhav, Y., and Farimani, A.B. (2023). GPT-MolBERTa: GPT molecular features language model for molecular property prediction.
Balhorn, L.S., Caballero, M., and Schweidtmann, A.M. (2024). Toward autocorrection of chemical process flowsheets using large language models, 3109–3114. Elsevier. doi:10.1016/b978-0-443-28824-1.50519-6.
Beer, J.M., Fisk, A.D., and Rogers, W.A. (2014). Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction, 3(2), 74.
Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Podstawski, M., Gianinazzi, L., Gajda, J., Lehmann, T., Niewiadomski, H., Nyczyk, P., and Hoefler, T. (2024). Graph of thoughts: Solving elaborate problems with large language models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17682–17690. doi:10.1609/aaai.v38i16.29720.
Borghesan, F., Zagorowska, M., and Mercangöz, M. (2022). Unmanned and autonomous systems: Future of automation in process and energy industries. IFAC-PapersOnLine, 55(7), 875–882.
Chen, H., Constante-Flores, G.E., and Li, C. (2024a). Diagnosing infeasible optimization problems using large language models. INFOR: Information Systems and Operational Research, 1–15.
Chen, W., Koenig, S., and Dilkina, B. (2024b). RePrompt: Planning by automatic prompt engineering for large language models agents. doi:10.48550/ARXIV.2406.11132.
Chen, X., Lin, M., Schärli, N., and Zhou, D. (2023). Teaching large language models to self-debug. doi:10.48550/ARXIV.2304.05128.
Cheng, Y., Zhang, C., Zhang, Z., Meng, X., Hong, S., Li, W., Wang, Z., Wang, Z., Yin, F., Zhao, J., and He, X. (2024). Exploring large language model based intelligent agents: Definitions, methods, and prospects. doi:10.48550/ARXIV.2401.03428.
CrewAI (2024). CrewAI: An autonomous control framework. URL https://github.com/crewai/crewai. Accessed: 2024-11-08.
Gao, Y., Shang, Z., and Kokossis, A. (2009). Agent-based intelligent system development for decision support in chemical process industry. Expert Systems with Applications, 36(8), 11099–11107. doi:10.1016/j.eswa.2009.02.078.
Hanga, K.M. and Kovalchuk, Y. (2019). Machine learning and multi-agent systems in oil and gas industry applications: A survey. Computer Science Review, 34, 100191.
Hirtreiter, E., Schulze Balhorn, L., and Schweidtmann, A.M. (2023). Toward automatic generation of control structures for process flow diagrams with large language models. AIChE Journal, 70(1). doi:10.1002/aic.18259.
Hrabia, C.E., Masuch, N., and Albayrak, S. (2015). A metrics framework for quantifying autonomy in complex systems. In Multiagent System Technologies: 13th German Conference, MATES 2015, Cottbus, Germany, September 28-30, 2015, Revised Selected Papers 13, 22–41. Springer.
Jha, S. and Hassan, M. (2002). Building agents for rule-based intrusion detection system. Computer Communications, 25(15), 1366–1373. doi:10.1016/s0140-3664(02)00038-5.
Jia, S., Zhang, C., and Fung, V. (2024). LLMatDesign: Autonomous materials discovery with large language models.
Kim, G., Baldi, P., and McAleer, S. (2023). Language models can solve computer tasks. In Advances in Neural Information Processing Systems, volume 36, 39648–39677. Curran Associates, Inc.
Louati, F. and Ktata, F.B. (2020). A deep learning-based multi-agent system for intrusion detection. SN Applied Sciences, 2(4), 675.
Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S., Yang, Y., Gupta, S., Majumder, B.P., Hermann, K., Welleck, S., Yazdanbakhsh, A., and Clark, P. (2023). Self-refine: Iterative refinement with self-feedback. doi:10.48550/ARXIV.2303.17651. URL https://arxiv.org/abs/2303.17651.
Meng, Q., Anandan, P.D., Rielly, C.D., and Benyahia, B. (2023). Multi-Agent Reinforcement Learning and RL-Based Adaptive PID Control of Crystallization Processes, 1667–1672. Elsevier. doi:10.1016/b978-0-443-15274-0.50265-1.
Mousa, M., van de Berg, D., Kotecha, N., del Rio Chanona, E.A., and Mowbray, M. (2024). An analysis of multi-agent reinforcement learning for decentralized inventory control systems. Computers and Chemical Engineering, 188, 108783. doi:10.1016/j.compchemeng.2024.108783.
Nian, R., Liu, J., and Huang, B. (2020). A review on reinforcement learning: Introduction and applications in industrial process control. Computers and Chemical Engineering, 139, 106886. doi:10.1016/j.compchemeng.2020.106886.
Oliveira, P.M. and Hedengren, J.D. (2019). An APMonitor temperature lab PID control experiment for undergraduate students. In 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 790–797. IEEE.
OpenAI (2024). GPT-3.5, GPT-4, or other OpenAI models. URL https://openai.com. Accessed: 2024-11-08.
Pantelides, C., Baldea, M., Georgiou, A.T., Gopaluni, B., Mehmet, M., Sheth, K., Zavala, V.M., and Georgakis, C. (2024). From automated to autonomous process operations. doi:10.2139/ssrn.4963632.
Peres, R.S., Jia, X., Lee, J., Sun, K., Colombo, A.W., and Barata, J. (2020). Industrial artificial intelligence in industry 4.0 - systematic review, challenges and outlook. IEEE Access, 8, 220121–220139. doi:10.1109/ACCESS.2020.3042874.
Schweidtmann, A.M. (2024). Generative artificial intelligence in chemical engineering. Nature Chemical Engineering, 1(3), 193–193. doi:10.1038/s44286-024-00041-5.
Shinn, N., Cassano, F., Berman, E., Gopinath, A., Narasimhan, K., and Yao, S. (2023). Reflexion: Language agents with verbal reinforcement learning. doi:10.48550/ARXIV.2303.11366.
Siu, H.C., Peña, J., Chen, E., Zhou, Y., Lopez, V., Palko, K., Chang, K., and Allen, R. (2021). Evaluation of human-AI teams for learned and rule-based agents in Hanabi. In Advances in Neural Information Processing Systems, volume 34, 16183–16195. Curran Associates, Inc.
Song, L., Zhang, C., Zhao, L., and Bian, J. (2023). Pre-trained large language models for industrial control. doi:10.48550/ARXIV.2308.03028.
Watson, D.P. and Scheidt, D.H. (2005). Autonomous systems. Johns Hopkins APL Technical Digest, 26(4), 368–376.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., and Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. doi:10.48550/ARXIV.2201.11903.
Xia, Y., Jazdi, N., Zhang, J., Shah, C., and Weyrich, M. (2024). Control industrial automation system with large language models. doi:10.48550/ARXIV.2409.18009.
Xia, Y., Shenoy, M., Jazdi, N., and Weyrich, M. (2023). Towards autonomous system: Flexible modular production system enhanced with large language model agents. In 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), 1–8. doi:10.1109/ETFA54631.2023.10275362.
Xu, W., Banburski, A., and Jojic, N. (2024). Reprompting: Automated chain-of-thought prompt inference through Gibbs sampling.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T.L., Cao, Y., and Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. doi:10.48550/ARXIV.2305.10601.
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. (2022). ReAct: Synergizing reasoning and acting in language models. doi:10.48550/ARXIV.2210.03629.
