Literasi Bahasa Inggris

1.

Passage 1
What role should text-generating large language models (LLMs) have in the scientific research process? According
to a team of Oxford scientists, the answer—at least for now—is: pretty much none.
In a new essay, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-
powered tools like chatbots to assist in scientific research on the grounds that AI's penchant for hallucinating and
fabricating facts, combined with the human tendency to anthropomorphize the human-mimicking word engines,
could lead to larger information breakdowns—a fate that could ultimately threaten the fabric of science itself.
The scientists' argument hinges on the reality that LLMs and the many bots that the technology powers aren't
primarily designed to be truthful. As they write in the essay, sounding truthful is but "one element by which the
usefulness of these systems is measured." Characteristics including "helpfulness, harmlessness, technical efficiency,
profitability, [and] customer adoption" matter, too.
"LLMs are designed to produce helpful and convincing responses," they continue, "without any overriding
guarantees regarding their accuracy or alignment with fact."
Put simply, if a large language model—which, above all else, is taught to be convincing—comes up with an answer
that's persuasive but not necessarily factual, the fact that the output is persuasive will override its inaccuracy. In an
AI's proverbial brain, simply saying "I don't know" is less helpful than providing an incorrect response.
But as the Oxford researchers lay out, AI's hallucination problem is only half the problem. The human tendency to
read way too far into human-sounding AI outputs, due to our deeply mortal proclivity to anthropomorphize
everything around us, is a well-documented phenomenon. Because of this effect, we're already primed to put a
little too much trust in AI; couple that with the confident tone these chatbots so often take, and you have a perfect
recipe for misinformation.
Importantly, the scientists do note "zero-shot translation" as a scenario in which AI outputs might be a bit more
reliable. This, as Oxford professor and AI ethicist Brent Mittelstadt told EuroNews, refers to when a model is given
"a set of inputs that contain some reliable information or data, plus some request to do something with that data."
"It's called zero-shot translation because the model has not been trained specifically to deal with that type of
prompt," Mittelstadt added. So, in other words, a model is more or less rearranging and parsing through a very
limited, trustworthy dataset, and not being used as a vast, internet-like knowledge center. But that would certainly
limit its use cases, and would demand a more specialized understanding of AI tech—much different from just
loading up ChatGPT and firing off some research questions.
And elsewhere, the researchers argue, there's an ideological battle at the core of this automation debate. After all,
science is a deeply human pursuit. To outsource too much of the scientific process to automated AI labor, the
Oxforders say, could undermine that deep-rooted humanity. And is that something we can really afford to lose?
Adapted from futuristic.com
Passage 2
Machine learning systems have become increasingly popular in the world of scientific research. The algorithms can
save a great deal of person-hours, and many hope they'll even be able to find patterns that humans, through more
traditional methods of data analysis, can't.
Impressive, yes. But machine learning models are so complex that it's notoriously difficult even for their creators to
explain their outputs. In fact, they've even been known to cheat in order to arrive at a tidy solution.
Add that reality to the fact that many scientists now leveraging the tech aren't experts in machine learning, and you
have a recipe for scientific disaster. As Princeton professor Arvind Narayanan and his PhD student Sayash Kapoor
explained to Wired, a surprising number of scientists using these systems may be making grave methodological
errors—and if that trend continues, the ripple effects in academia could be pretty severe.
The duo became concerned when they came across a political science study that, using machine learning-produced
data, claimed it could predict the next civil war with a staggering 90 percent accuracy. But when Narayanan and
Kapoor took a closer look, they discovered that the paper was riddled with false outcomes—a result of something
called "data leakage."
In short, data leakage occurs when a machine learning system is using numbers that it shouldn't. It usually happens
when users mishandle data pools, skewing the way the model "learns."
After discovering the data leakage in the civil war paper, the Princeton researchers started searching for similar
machine learning mistakes in other published studies—and the results were striking. They found data leakage in a
grand total of 329 papers across a number of fields.
As they explain in the research, the proliferation of machine learning is resulting in something they're calling a
"reproducibility crisis," which basically means that the results of a study can't be reproduced by followup research.
The claim raises the specter that a sequel could be looming to another serious replication crisis that's shaken the
scientific establishment over the past decade, in which researchers misused statistics to arrive at sweeping
conclusions that amounted to nothing more than statistical noise in large datasets.
If it holds up to further scrutiny, it'd be an extremely concerning revelation. Dead spider robots aside, most research
isn't done for no reason. The goal of most science is to eventually apply it to something, whether it's used to carry
out some kind of immediate action or to inform future study. A mistake in an information pipeline anywhere will
frequently lead to follow-up errors down the road—and that could have some pretty devastating consequences.
That's not to say that AI can't be useful for scientific study. We're sure that in many cases it has been, and it will
probably continue to be. Clearly, though, researchers who use it need to be careful, and really ask themselves if
they actually know what they're doing. Because in the end, these aren't machine errors—they're human ones.
Adapted from futuristic.com

How does the author of Passage 1 utilize the concept of 'zero-shot translation' to highlight the limitations of large
language models (LLMs) in scientific research?
A. To indicate a specific scenario where LLMs might provide more reliable outputs.
B. As an example where LLMs outperform human researchers in certain tasks.
C. To showcase an advanced but limited AI capability that is still not fully exploitable.
D. To advocate for the enhancement and broader application of AI in scientific research.
E. As evidence of the prevalent but uncritical acceptance of AI in the scientific realm.
2.

Which piece of evidence from Passage 2 most directly demonstrates how machine learning has led to significant
errors in scientific research?
A. The mention of machine learning models’ complexity causing difficulties in understanding and interpreting their
outputs.
B. The reference to a political science study’s false claim of predicting civil wars with high accuracy due to flawed AI
data.
C. The Princeton researchers’ finding of data leakage in numerous studies, highlighting the prevalence of AI errors.
D. The challenge faced by creators in explaining how AI models arrive at their conclusions, leading to potential
misinterpretations.
E. The mention of researchers’ grave methodological errors while using AI, reflecting a lack of understanding in its
application.
3.

Based on the information in both passages, which TWO of the following statements align with the authors'
perspectives?

• AI is increasingly used in scientific research for pattern analysis, data handling, and saving person-hours.

• AI models, especially large language models, are often designed to prioritize persuasive over accurate
responses.

• The scientific community is unanimously positive about AI's role in research efficiency and reliability.

• Complex operations of machine learning models often lead to methodological errors in scientific studies.

• The passages advocate for a complete shift from traditional research methods to sole reliance on AI.
4.

In Passage 1, how is the term "hallucinating" metaphorically used to describe a specific flaw in AI's functioning?
A. Creating responses based on hypothetical scenarios.
B. Generating responses not based on accurate data.
C. Producing outputs that are imaginative but irrelevant.
D. Interpreting data in a way that distorts its meaning.
E. Compiling responses based on outdated information.
5.

What can be inferred from Passage 2 regarding the future use of machine learning in scientific research?
A. Researchers will increasingly prioritize understanding and correctly applying machine learning techniques.
B. Machine learning will be abandoned in favor of more traditional methods due to its complexity.
C. There will be a significant increase in the use of machine learning across all areas of scientific research.
D. Efforts will be made to simplify machine learning algorithms for easier comprehension and application.
E. The focus will shift towards using machine learning exclusively in data-intensive fields.
6.

Both passages agree that ….


A. AI has no place in scientific research at this stage
B. scientists need careful training before using AI tools
C. data transparency is essential for reliable scientific results
D. the human element remains crucial for scientific progress
E. AI is inherently flawed and should not be trusted
7.

The main arguments in both passages focus on the potential risks of AI in science. However, their underlying
concerns differ. Which of the following best describes this difference?
A. Passage 1 worries about AI replacing scientists, while Passage 2 highlights the challenges of interpreting AI
outputs.
B. Passage 1 critiques the potential for AI to deceive researchers, while Passage 2 focuses on its limited ability to
discover patterns.
C. Passage 1 is concerned about the ethical implications of AI, while Passage 2 highlights the logistical challenges
of integrating it.
D. Passage 1 emphasizes the inherent unreliability of AI, while Passage 2 focuses on the human misuse of the
technology.
E. Passage 1 focuses on the technical limitations of AI, while Passage 2 worries about its potential to undermine the
scientific process.
8.

According to Passage 1, the human tendency to attribute human-like qualities to AI can lead to which of the
following challenges in evaluating AI's role in scientific research?
A. An exaggerated perception of AI's reliability
B. Diminished scrutiny in assessing AI's conclusions
C. Apprehension about AI surpassing human control
D. Moral conflicts surrounding AI's perceived consciousness
E. Diminishing respect for traditional scientific methods
9. International observers frequently link China’s economic success to authoritarianism.
But authoritarianism does not explain China’s economic success. If government intervention were the key to
economic growth, China would have succeeded 30 years ago, when the state governed all aspects of society. But
China began its economic reform precisely because the old system of an all-encompassing state-run economy did
not work.
The Chinese government has played an important role in promoting the country’s economic growth, but the root of
this contribution is not in authoritarianism. Instead, it is in the government’s disinterestedness toward society;
China’s policy makers have successfully taken a neutral stance when it comes to the divisions among different social
and political groups. Because of this, the government is able to allocate resources according to the productive
capacities of different groups, so economic growth can develop faster. A disinterested government can appear in
both authoritarian and democratic states, so long as the right social conditions and political arrangements are in
place.
While the Chinese political system may be authoritarian in its outlook, it still has a degree of responsiveness and
flexibility that is not entirely devoid of democratic elements. In the West, democracy is often equated with free
assembly and competitive elections. But this view disguises some of democracy’s more-substantial values, such as a
government’s level of accountability and responsiveness.
In China, the country’s officials are increasingly being held accountable for their actions — either through the formal
channels built into the establishment or through popular views in the media and over the internet. And in terms of
responsiveness, the government is undertaking initiatives to improve the quality of life for China’s 1.3 billion
people. Many authoritarian regimes also have trouble with succession, but China has managed to avoid them, as
legislation and much of the government’s decision-making process have been institutionalized. Taking this into
account, calling China an authoritarian state is an oversimplification and a result of the dichotomised approach that
has dominated Western political thinking since the Cold War.
In linking China’s economic success to authoritarianism, those observers discredit China’s current prosperity. But if
it were a result of authoritarian rule, China’s present success could not be labeled as such. Instead, the outcome
would be irrevocably tainted by repression and coercion, and detested by the people. This criticism will not hold:
Chinese people are enjoying more freedoms than ever before.
Source: East Asia Forum (with modifications)
Question ideas: Main idea, tone of the author, correct incorrect (table, inferred), conditional, coherence, reference
What main point does the passage make against the common assumption about the cause of China's economic success?
A. China's success is due to its authoritarian regime that obliges its citizens to adhere to its policies.
B. China's economic success is absolutely independent of its authoritarian political system.
C. China's economic success is actually due to the government's impartiality toward society.
D. China’s success is a result of its overlooked democratic elements that allow for certain flexibilities.
E. Contrary to common belief, China would have more substantial growth without its authoritarian regime.

10. International observers frequently link China’s economic success to authoritarianism.


But authoritarianism does not explain China’s economic success. If government intervention were the key to
economic growth, China would have succeeded 30 years ago, when the state governed all aspects of society. But
China began its economic reform precisely because the old system of an all-encompassing state-run economy did
not work.
The Chinese government has played an important role in promoting the country’s economic growth, but the root of
this contribution is not in authoritarianism. Instead, it is in the government’s disinterestedness toward society;
China’s policy makers have successfully taken a neutral stance when it comes to the divisions among different social
and political groups. Because of this, the government is able to allocate resources according to the productive
capacities of different groups, so economic growth can develop faster. A disinterested government can appear in
both authoritarian and democratic states, so long as the right social conditions and political arrangements are in
place.
While the Chinese political system may be authoritarian in its outlook, it still has a degree of responsiveness and
flexibility that is not entirely devoid of democratic elements. In the West, democracy is often equated with free
assembly and competitive elections. But this view disguises some of democracy’s more-substantial values, such as a
government’s level of accountability and responsiveness.
In China, the country’s officials are increasingly being held accountable for their actions — either through the formal
channels built into the establishment or through popular views in the media and over the internet. And in terms of
responsiveness, the government is undertaking initiatives to improve the quality of life for China’s 1.3 billion
people. Many authoritarian regimes also have trouble with succession, but China has managed to avoid them, as
legislation and much of the government’s decision-making process have been institutionalized. Taking this into
account, calling China an authoritarian state is an oversimplification and a result of the dichotomised approach that
has dominated Western political thinking since the Cold War.
In linking China’s economic success to authoritarianism, those observers discredit China’s current prosperity. But if
it were a result of authoritarian rule, China’s present success could not be labeled as such. Instead, the outcome
would be irrevocably tainted by repression and coercion, and detested by the people. This criticism will not hold:
Chinese people are enjoying more freedoms than ever before.
Source: East Asia Forum (with modifications)
What is the attitude of the author regarding the belief that China succeeds economically due to its authoritarian
regime?
A. Indifferent
B. Critical
C. Supportive
D. Sarcastic
E. Optimistic

11. Choose true or false for each statement


International observers frequently link China’s economic success to authoritarianism.
But authoritarianism does not explain China’s economic success. If government intervention were the key to
economic growth, China would have succeeded 30 years ago, when the state governed all aspects of society. But
China began its economic reform precisely because the old system of an all-encompassing state-run economy did
not work.
The Chinese government has played an important role in promoting the country’s economic growth, but the root of
this contribution is not in authoritarianism. Instead, it is in the government’s disinterestedness toward society;
China’s policy makers have successfully taken a neutral stance when it comes to the divisions among different social
and political groups. Because of this, the government is able to allocate resources according to the productive
capacities of different groups, so economic growth can develop faster. A disinterested government can appear in
both authoritarian and democratic states, so long as the right social conditions and political arrangements are in
place.
While the Chinese political system may be authoritarian in its outlook, it still has a degree of responsiveness and
flexibility that is not entirely devoid of democratic elements. In the West, democracy is often equated with free
assembly and competitive elections. But this view disguises some of democracy’s more-substantial values, such as a
government’s level of accountability and responsiveness.
In China, the country’s officials are increasingly being held accountable for their actions — either through the formal
channels built into the establishment or through popular views in the media and over the internet. And in terms of
responsiveness, the government is undertaking initiatives to improve the quality of life for China’s 1.3 billion
people. Many authoritarian regimes also have trouble with succession, but China has managed to avoid them, as
legislation and much of the government’s decision-making process have been institutionalized. Taking this into
account, calling China an authoritarian state is an oversimplification and a result of the dichotomised approach that
has dominated Western political thinking since the Cold War.
In linking China’s economic success to authoritarianism, those observers discredit China’s current prosperity. But if
it were a result of authoritarian rule, China’s present success could not be labeled as such. Instead, the outcome
would be irrevocably tainted by repression and coercion, and detested by the people. This criticism will not hold:
Chinese people are enjoying more freedoms than ever before.
Source: East Asia Forum (with modifications)
Mark the following statements based on their accuracy according to the passage.
• Statement: True or False
• China initiated economic changes due to the failure of its previous, fully state-controlled economic model. (True)
• Even if China's political system seems to have democratic aspects, it is hardly adaptable. (False)
• Associating democracy with free assembly and competitive elections doesn’t allow governments to get away with liability for their actions. (True)
• China’s authoritarian regime is facing an inevitable succession problem. (False)
• China’s label as pure authoritarian stems from the Cold War’s binary perspective that is rampant in the West. (True)
12. International observers frequently link China’s economic success to authoritarianism.
But authoritarianism does not explain China’s economic success. If government intervention were the key to
economic growth, China would have succeeded 30 years ago, when the state governed all aspects of society. But
China began its economic reform precisely because the old system of an all-encompassing state-run economy did
not work.
The Chinese government has played an important role in promoting the country’s economic growth, but the root of
this contribution is not in authoritarianism. Instead, it is in the government’s disinterestedness toward society;
China’s policy makers have successfully taken a neutral stance when it comes to the divisions among different social
and political groups. Because of this, the government is able to allocate resources according to the productive
capacities of different groups, so economic growth can develop faster. A disinterested government can appear in
both authoritarian and democratic states, so long as the right social conditions and political arrangements are in
place.
While the Chinese political system may be authoritarian in its outlook, it still has a degree of responsiveness and
flexibility that is not entirely devoid of democratic elements. In the West, democracy is often equated with free
assembly and competitive elections. But this view disguises some of democracy’s more-substantial values, such as a
government’s level of accountability and responsiveness.
In China, the country’s officials are increasingly being held accountable for their actions — either through the formal
channels built into the establishment or through popular views in the media and over the internet. And in terms of
responsiveness, the government is undertaking initiatives to improve the quality of life for China’s 1.3 billion
people. Many authoritarian regimes also have trouble with succession, but China has managed to avoid them, as
legislation and much of the government’s decision-making process have been institutionalized. Taking this into
account, calling China an authoritarian state is an oversimplification and a result of the dichotomised approach that
has dominated Western political thinking since the Cold War.
In linking China’s economic success to authoritarianism, those observers discredit China’s current prosperity. But if
it were a result of authoritarian rule, China’s present success could not be labeled as such. Instead, the outcome
would be irrevocably tainted by repression and coercion, and detested by the people. This criticism will not hold:
Chinese people are enjoying more freedoms than ever before.
Source: East Asia Forum (with modifications)
According to the passage, if government intervention were the key to economic growth, China ________ 30 years
ago, when the state governed all aspects of society.
A. would have succeeded
B. would succeed
C. wouldn’t have succeeded
D. wouldn’t succeed
E. wouldn’t have been successful

13. International observers frequently link China’s economic success to authoritarianism.


But authoritarianism does not explain China’s economic success. If government intervention were the key to
economic growth, China would have succeeded 30 years ago, when the state governed all aspects of society. But
China began its economic reform precisely because the old system of an all-encompassing state-run economy did
not work.
The Chinese government has played an important role in promoting the country’s economic growth, but the root of
this contribution is not in authoritarianism. Instead, it is in the government’s disinterestedness toward society;
China’s policy makers have successfully taken a neutral stance when it comes to the divisions among different social
and political groups. Because of this, the government is able to allocate resources according to the productive
capacities of different groups, so economic growth can develop faster. A disinterested government can appear in
both authoritarian and democratic states, so long as the right social conditions and political arrangements are in
place.
While the Chinese political system may be authoritarian in its outlook, it still has a degree of responsiveness and
flexibility that is not entirely devoid of democratic elements. In the West, democracy is often equated with free
assembly and competitive elections. But this view disguises some of democracy’s more-substantial values, such as a
government’s level of accountability and responsiveness.
In China, the country’s officials are increasingly being held accountable for their actions — either through the formal
channels built into the establishment or through popular views in the media and over the internet. And in terms of
responsiveness, the government is undertaking initiatives to improve the quality of life for China’s 1.3 billion
people. Many authoritarian regimes also have trouble with succession, but China has managed to avoid them, as
legislation and much of the government’s decision-making process have been institutionalized. Taking this into
account, calling China an authoritarian state is an oversimplification and a result of the dichotomised approach that
has dominated Western political thinking since the Cold War.
In linking China’s economic success to authoritarianism, those observers discredit China’s current prosperity. But if
it were a result of authoritarian rule, China’s present success could not be labeled as such. Instead, the outcome
would be irrevocably tainted by repression and coercion, and detested by the people. This criticism will not hold:
Chinese people are enjoying more freedoms than ever before.
Source: East Asia Forum (with modifications)
What does "them" refer to in the sentence, "China has managed to avoid them"?
A. Democratic elements
B. Political succession
C. Authoritarian regime
D. Troubles with succession
E. Institutionalized processes

14. International observers frequently link China’s economic success to authoritarianism.


But authoritarianism does not explain China’s economic success. If government intervention were the key to
economic growth, China would have succeeded 30 years ago, when the state governed all aspects of society. But
China began its economic reform precisely because the old system of an all-encompassing state-run economy did
not work.
The Chinese government has played an important role in promoting the country’s economic growth, but the root of
this contribution is not in authoritarianism. Instead, it is in the government’s disinterestedness toward society;
China’s policy makers have successfully taken a neutral stance when it comes to the divisions among different social
and political groups. Because of this, the government is able to allocate resources according to the productive
capacities of different groups, so economic growth can develop faster. A disinterested government can appear in
both authoritarian and democratic states, so long as the right social conditions and political arrangements are in
place.
While the Chinese political system may be authoritarian in its outlook, it still has a degree of responsiveness and
flexibility that is not entirely devoid of democratic elements. In the West, democracy is often equated with free
assembly and competitive elections. But this view disguises some of democracy’s more-substantial values, such as a
government’s level of accountability and responsiveness.
In China, the country’s officials are increasingly being held accountable for their actions — either through the formal
channels built into the establishment or through popular views in the media and over the internet. And in terms of
responsiveness, the government is undertaking initiatives to improve the quality of life for China’s 1.3 billion
people. Many authoritarian regimes also have trouble with succession, but China has managed to avoid them, as
legislation and much of the government’s decision-making process have been institutionalized. Taking this into
account, calling China an authoritarian state is an oversimplification and a result of the dichotomised approach that
has dominated Western political thinking since the Cold War.
In linking China’s economic success to authoritarianism, those observers discredit China’s current prosperity. But if
it were a result of authoritarian rule, China’s present success could not be labeled as such. Instead, the outcome
would be irrevocably tainted by repression and coercion, and detested by the people. This criticism will not hold:
Chinese people are enjoying more freedoms than ever before.
Source: East Asia Forum (with modifications)
Which of the following questions cannot be answered based on the information provided in the passage?
A. What is the author's view on attributing China's economic success to authoritarianism?
B. How has China managed its succession challenges compared to other similar regimes?
C. What role does the Chinese government play in the country's economic growth?
D. What specific reforms did China implement to transform its state-run economy?
E. Does the passage suggest that China's government is responsive to the needs of its citizens?

15.
Username: Comment
marimar3892: I believe it's time we start seriously discussing the destigmatization of hard drugs. The current approach of criminalization hasn't solved the problem and has led to a cycle of incarceration. We should focus on harm reduction, treatment, and addressing the root causes of addiction.
dwight_l: While I understand the intentions behind destigmatization, aren't there concerns about the potential increase in drug use if hard drugs are destigmatized? Won't this send the wrong message to society?
cubicf3rn: Destigmatization doesn't necessarily mean endorsement. It means recognizing that addiction is a medical issue rather than a criminal one. We can provide better support and resources for those struggling with addiction while maintaining strict regulations on drug distribution.
francistallebout: I'm all for destigmatization, but what about the safety concerns? How can we ensure that drugs are not contaminated or harmful if they're not regulated tightly?
finniasert: With destigmatization, there should still be a focus on regulation and quality control to protect public health. It's about finding a balance between reducing harm and ensuring safety.

What is the main argument presented by marimar3892 regarding hard drugs?

A. Marimar3892 suggests that hard drugs should remain criminalized, emphasizing the importance of law
enforcement.
B. Marimar3892 argues that the focus should be on decreasing incarceration rates, addressing the social
consequences of drug criminalization.
C. Marimar3892 contends that destigmatization is essential to effectively tackle hard drug addiction, advocating
for a shift towards a health-centered approach.
D. Marimar3892 underscores that the root causes of addiction should not be ignored, emphasizing the need for
comprehensive solutions.
E. Marimar3892 expresses doubts about the effectiveness of treatments for addiction, questioning their impact.

16.
Username: Comment
marimar3892: I believe it's time we start seriously discussing the destigmatization of hard drugs. The current approach of criminalization hasn't solved the problem and has led to a cycle of incarceration. We should focus on harm reduction, treatment, and addressing the root causes of addiction.
dwight_l: While I understand the intentions behind destigmatization, aren't there concerns about the potential increase in drug use if hard drugs are destigmatized? Won't this send the wrong message to society?
cubicf3rn: Destigmatization doesn't necessarily mean endorsement. It means recognizing that addiction is a medical issue rather than a criminal one. We can provide better support and resources for those struggling with addiction while maintaining strict regulations on drug distribution.
francistallebout: I'm all for destigmatization, but what about the safety concerns? How can we ensure that drugs are not contaminated or harmful if they're not regulated tightly?
finniasert: With destigmatization, there should still be a focus on regulation and quality control to protect public health. It's about finding a balance between reducing harm and ensuring safety.

What is dwight_l concerned about in relation to the destigmatization of hard drugs?

A. Dwight_l is concerned that destigmatization might inadvertently encourage drug use.


B. Dwight_l emphasizes the need for stricter regulations on drug distribution.
C. Dwight_l argues for a different approach to drug-related issues, but also sees the complexities involved.
D. Dwight_l questions the importance of addressing addiction as a health issue.
E. Dwight_l advocates for a more thorough examination of the underlying social causes relevant to drug use.

17.
Username: Comment
marimar3892: I believe it's time we start seriously discussing the destigmatization of hard drugs. The current approach of criminalization hasn't solved the problem and has led to a cycle of incarceration. We should focus on harm reduction, treatment, and addressing the root causes of addiction.
dwight_l: While I understand the intentions behind destigmatization, aren't there concerns about the potential increase in drug use if hard drugs are destigmatized? Won't this send the wrong message to society?
cubicf3rn: Destigmatization doesn't necessarily mean endorsement. It means recognizing that addiction is a medical issue rather than a criminal one. We can provide better support and resources for those struggling with addiction while maintaining strict regulations on drug distribution.
francistallebout: I'm all for destigmatization, but what about the safety concerns? How can we ensure that drugs are not contaminated or harmful if they're not regulated tightly?
finniasert: With destigmatization, there should still be a focus on regulation and quality control to protect public health. It's about finding a balance between reducing harm and ensuring safety.

What concern does francistallebout raise regarding the destigmatization of hard drugs?

A. Francistallebout is concerned about the potential for increased drug use due to destigmatization.
B. Francistallebout highlights the lack of resources for addiction treatment as a significant issue.
C. Francistallebout supports the focus on harm reduction and its role in addressing addiction.
D. Francistallebout emphasizes the importance of ensuring safety measures are in place.
E. Francistallebout questions the effectiveness of treatment programs in dealing with addiction.

18.
Username: Comment
marimar3892: I believe it's time we start seriously discussing the destigmatization of hard drugs. The current approach of criminalization hasn't solved the problem and has led to a cycle of incarceration. We should focus on harm reduction, treatment, and addressing the root causes of addiction.
dwight_l: While I understand the intentions behind destigmatization, aren't there concerns about the potential increase in drug use if hard drugs are destigmatized? Won't this send the wrong message to society?
cubicf3rn: Destigmatization doesn't necessarily mean endorsement. It means recognizing that addiction is a medical issue rather than a criminal one. We can provide better support and resources for those struggling with addiction while maintaining strict regulations on drug distribution.
francistallebout: I'm all for destigmatization, but what about the safety concerns? How can we ensure that drugs are not contaminated or harmful if they're not regulated tightly?
finniasert: With destigmatization, there should still be a focus on regulation and quality control to protect public health. It's about finding a balance between reducing harm and ensuring safety.

What is the primary focus of this online forum discussion?

A. The primary focus is to discuss the various perspectives on hard drug use without promoting its benefits.
B. The discussion centers on advocating for stricter regulations on drug use to address drug-related issues.
C. Participants discuss the potential consequences of destigmatizing hard drugs on society and individuals.
D. The primary debate revolves around evaluating the effectiveness of incarceration as a response to drug-related
offenses.
E. The discussion primarily emphasizes advocating for the criminalization of addiction to discourage drug use.

19.
Username: Comment
marimar3892: I believe it's time we start seriously discussing the destigmatization of hard drugs. The current approach of criminalization hasn't solved the problem and has led to a cycle of incarceration. We should focus on harm reduction, treatment, and addressing the root causes of addiction.
dwight_l: While I understand the intentions behind destigmatization, aren't there concerns about the potential increase in drug use if hard drugs are destigmatized? Won't this send the wrong message to society?
cubicf3rn: Destigmatization doesn't necessarily mean endorsement. It means recognizing that addiction is a medical issue rather than a criminal one. We can provide better support and resources for those struggling with addiction while maintaining strict regulations on drug distribution.
francistallebout: I'm all for destigmatization, but what about the safety concerns? How can we ensure that drugs are not contaminated or harmful if they're not regulated tightly?
finniasert: With destigmatization, there should still be a focus on regulation and quality control to protect public health. It's about finding a balance between reducing harm and ensuring safety.
What does cubicf3rn emphasize as the essence of destigmatization in the context of hard drugs?

A. Cubicf3rn emphasizes that destigmatization implies an endorsement of drug use.


B. Cubicf3rn highlights the importance of a shift in how addiction is perceived.
C. Cubicf3rn suggests that destigmatization primarily focuses on stricter regulations for drug use.
D. Cubicf3rn advocates for a balance between reducing harm and ensuring safety.
E. Cubicf3rn argues that destigmatization necessitates an improvement for public safety.

20.
Username: Comment
marimar3892: I believe it's time we start seriously discussing the destigmatization of hard drugs. The current approach of criminalization hasn't solved the problem and has led to a cycle of incarceration. We should focus on harm reduction, treatment, and addressing the root causes of addiction.
dwight_l: While I understand the intentions behind destigmatization, aren't there concerns about the potential increase in drug use if hard drugs are destigmatized? Won't this send the wrong message to society?
cubicf3rn: Destigmatization doesn't necessarily mean endorsement. It means recognizing that addiction is a medical issue rather than a criminal one. We can provide better support and resources for those struggling with addiction while maintaining strict regulations on drug distribution.
francistallebout: I'm all for destigmatization, but what about the safety concerns? How can we ensure that drugs are not contaminated or harmful if they're not regulated tightly?
finniasert: With destigmatization, there should still be a focus on regulation and quality control to protect public health. It's about finding a balance between reducing harm and ensuring safety.
Who among the participants in the discussion expresses the most significant concern regarding the potential
impacts of destigmatization on public health and safety?

A. marimar3892
B. dwight_l
C. cubicf3rn
D. francistallebout
E. finniasert
