Category Archives: Recommended Literature

[Canadian Journal of Philosophy] How Can We Know If You Are Serious? Ethics Washing, Symbolic Ethics Offices, and the Responsible Design of AI Systems

Year and volume: 2024 (Vol. 54).

Author(s): Justin B. Biddle; John P. Nelson; Olajide E. Olugbade.

Abstract: Many AI development organizations advertise that they have offices of ethics that facilitate ethical AI. However, concerns have been raised that these offices are merely symbolic and do not actually promote ethics. We address the question of how we can know whether an organization is engaging in ethics washing in this way. We articulate an account of organizational power, and we argue that ethics offices that have power are not merely symbolic. Furthermore, we develop a framework for assessing whether an organization has an empowered ethics office—and, thus, is not ethics washing via a symbolic ethics office.

DOI: https://doi.org/10.1017/can.2025.9

[Analysis] Artificial Achievements

Year and volume: 2024 (Vol. 84).

Author(s): Phillip Hintikka Kieval.

Abstract: State-of-the-art machine learning systems now routinely exceed benchmarks once thought beyond the ken of artificial intelligence (AI). Often these systems accomplish tasks through novel, insightful processes that remain inscrutable to even their human designers. Taking AlphaGo’s 2016 victory over Lee Sedol as a case study, this paper argues that such accomplishments manifest the essential features of achievements as laid out in Bradford’s 2015 book Achievement. Achievements like these are directly attributable to AI systems themselves. They are artificial achievements. This opens the door to a challenge that calls out for further inquiry. Since Bradford grounds the intrinsic value of achievements in the exercise of distinctively human perfectionist capacities, the existence of artificial achievements raises the possibility that some achievements might be valueless.

DOI: https://doi.org/10.1093/analys/anad052

[BJPS] Digital Literature Analysis for Empirical Philosophy of Science

Year and volume: 2023 (Vol. 74).

Author(s): Oliver M. Lean; Luca Rivelli; Charles H. Pence.

Abstract: Empirical philosophers of science aim to base their philosophical theories on observations of scientific practice. But since there is far too much science to observe it all, how can we form and test hypotheses about science that are sufficiently rigorous and broad in scope, while avoiding the pitfalls of bias and subjectivity in our methods? Part of the answer, we claim, lies in the computational tools of the digital humanities, which allow us to analyse large volumes of scientific literature. Here we advocate for the use of these methods by addressing a number of large-scale, justificatory concerns—specifically, about the epistemic value of journal articles as evidence for what happens elsewhere in science, and about the ability of digital humanities tools to extract this evidence. Far from ignoring the gap between scientific literature and the rest of scientific practice, effective use of digital humanities tools requires critical reflection about these relationships.

DOI: https://doi.org/10.1086/715049
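
The large-scale literature analysis the authors advocate can be made concrete with a small sketch. The snippet below is purely illustrative, not the authors' pipeline: the mini-corpus, the term list, and the `tokens` helper are invented here, and a real study would ingest thousands of full-text articles with far more sophisticated tooling.

```python
# Illustrative sketch of digital-humanities term-frequency analysis
# (not the authors' pipeline): how often do method-related terms
# appear across a corpus of article abstracts?
from collections import Counter
import re

# Hypothetical mini-corpus; a real study would ingest thousands of articles.
abstracts = {
    "article-1": "We model gene regulation using coupled differential equations.",
    "article-2": "Field observations of foraging behaviour were recorded over two seasons.",
    "article-3": "A simulation study compares network models of gene regulation.",
}

def tokens(text: str) -> list[str]:
    """Lowercase word tokens; a real pipeline would also lemmatize."""
    return re.findall(r"[a-z]+", text.lower())

counts = {doc_id: Counter(tokens(text)) for doc_id, text in abstracts.items()}

# Report relative frequencies so document length does not bias the comparison.
for doc_id, c in counts.items():
    total = sum(c.values())
    freqs = {term: round(c[term] / total, 3) for term in ("model", "simulation", "observations")}
    print(doc_id, freqs)
```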

[Harvard Data Science Review] Environmental Intelligence: Redefining the Philosophical Premises of AI

Year and volume: 2025 (Vol. 7).

Author(s): Sabina Leonelli.

Abstract: As an alternative to the long history of interpreting artificial intelligence (AI) as the attempt to rationalize and mechanize human ingenuity, thereby transcending nature and its perceived limits, this article proposes an interpretation of the conceptual foundations of environmental intelligence (EI) as the effort to develop digital technology and data-intensive algorithmic systems to sustain and enhance life on this planet. Thus articulated, EI provides a framework to challenge and redefine the philosophical premises of AI in ways that can explicitly spur the responsible and sustainable development of computational technologies toward public interest goals.

DOI: https://doi.org/10.1162/99608f92.ac7c1504

[New Blackfriars] The Intellectual Animal

Year and volume: 2019 (Vol. 100).

Author(s): Candace Vogler.

Abstract: Properly interpreted, Aquinas supports a transformative rather than an additive understanding of how the human intellect relates to the capacities human beings share with other animals, an understanding founded in a metaphysics. The soul (‘life-form’) is the substantial form that maintains an organism as a single being throughout life, and Aquinas holds that the human soul is the only substantial form in the human being. He respects the variety of appetitive and apprehensive capacities displayed by different animals, and has a high view of the perceptive (even inductive) powers of the higher animals: they ‘share somewhat in reason’. It is no surprise that we cannot easily identify a rigid boundary between our intellectual powers and the cognitive and conative powers we share with other animals; rather, the powers not only interact, they qualify each other. As Stephen Brock put it, ‘Rationality is a mode of intellect … intrinsically connected to the life of the senses, and therefore to the sense-organs … and to matter itself.’

DOI: https://doi.org/10.1111/nbfr.12503

[Philosophy & Technology] Technology and Neutrality

Year and volume: 2023 (Vol. 36).

Author(s): Sybren Heyndels.

Abstract: This paper clarifies and answers the following question: is technology morally neutral? It is argued that the debate between proponents and opponents of the Neutrality Thesis depends on different underlying assumptions about the nature of technological artifacts. My central argument centres around the claim that a mere physicalistic vocabulary does not suffice in characterizing technological artifacts as artifacts, and that the concepts of function and intention are necessary to describe technological artifacts at the right level of description. Once this has been established, I demystify talk about the possible value-ladenness of technological artifacts by showing how these values can be empirically identified. I draw from examples in biology and the social sciences to show that there is a non-mysterious sense in which functions and values can be empirically identified. I conclude from this that technology can be value-laden and that its value-ladenness can both derive from the intended functions as well as the harmful non-intended functions of technological artifacts.

DOI: https://doi.org/10.1007/s13347-023-00672-1

[Philosophy of Science] Degrees of Value-Ladenness and Signal-to-Noise Ratio

Year and volume: 2025 (Vol. 92).

Author(s): Torsten Wilholt.

Abstract: Although fundamental arguments have been presented to support the value-laden nature of all scientific research, they appear to be difficult to apply to basic research in physics. To explain this, I argue that basic research in physics is, in a very specific respect, often value-laden to a lesser degree. To spell this out, I refer to the different signal-to-noise ratios that can be achieved in different fields of research. I also argue that having a very low degree of value-ladenness in the very specific respect that I identify does not mean that the research is not value-laden at all.

DOI: https://doi.org/10.1017/psa.2025.10142
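
A worked definition may help fix ideas. The formulas below use the textbook signal-to-noise ratio and a simple Gaussian detection model; none of this notation is taken from Wilholt's paper, and it is meant only to illustrate why a high ratio leaves little room for value-laden threshold choices.

```latex
% Textbook signal-to-noise ratio (illustrative; not the paper's notation):
\[
  \mathrm{SNR} = \frac{\mu_{\text{signal}}}{\sigma_{\text{noise}}}
\]
% For Gaussian noise and a detection threshold $\theta$, the two error
% probabilities are
\[
  P(\text{false positive}) = 1 - \Phi\!\left(\frac{\theta}{\sigma_{\text{noise}}}\right),
  \qquad
  P(\text{false negative}) = \Phi\!\left(\frac{\theta - \mu_{\text{signal}}}{\sigma_{\text{noise}}}\right).
\]
% When the SNR is large, almost any reasonable $\theta$ drives both
% probabilities toward zero, so the value-laden choice of where to set
% $\theta$ (the locus of inductive risk) carries little practical weight.
```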

[Studies in History and Philosophy of Science Part A] Existential Risk, Creativity & Well-Adapted Science

Year and volume: 2019 (Vol. 76).

Author(s): Adrian Currie.

Abstract: Existential risks, particularly those arising from emerging technologies, are a complex, obstinate challenge for scientific study. This should motivate studying how the relevant scientific communities might be made more amenable to studying such risks. I offer an account of scientific creativity suitable for thinking about scientific communities, and provide reasons for thinking contemporary science doesn’t incentivise creativity in this specified sense. I’ll argue that a successful science of existential risk will be creative in my sense. So, if we want to make progress on those questions we should consider how to shift scientific incentives to encourage creativity. The analysis also has lessons for philosophical approaches to understanding the social structure of science. I introduce the notion of a ‘well-adapted’ science: one in which the incentive structure is tailored to the epistemic situation at hand.

DOI: https://doi.org/10.1016/j.shpsa.2018.09.008

[Minds and Machines] Risk Analysis in Automated Misinformation Detection

Year and volume: 2026 (Vol. 36).

Author(s): Adrian K. Yee.

Abstract: Machine learning models for misinformation detection are increasingly being used, yet their risks remain under-analyzed, requiring insights from philosophy of science and political philosophy. A taxonomy of risk types is provided, together with simple models for estimating those risks; both are sensitive to the value-laden features of judgments of misinformation and are weighted by a potential item of misinformation’s impact on the respective stakeholders. Failing to account for these risks is incompatible with a variety of civil virtues desired not only by liberal democratic societies but by authoritarian ones as well, suggesting the general applicability of this taxonomy of risks to contemporary and future societies.

DOI: https://doi.org/10.1007/s11023-026-09775-y
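
The "simple models" mentioned in the abstract are not spelled out there, but one minimal reading can be sketched as an expected-cost calculation weighted by stakeholder impact. Everything in the snippet below is a hypothetical reconstruction: the stakeholder groups, weights, costs, and the `expected_risk` function are invented for illustration and should not be read as Yee's actual model.

```python
# Hypothetical reconstruction (not Yee's actual model): expected risk of
# flagging vs. not flagging an item, weighted by per-stakeholder impact.

# Invented stakeholder weights: how strongly each group is affected.
stakeholder_weights = {"readers": 0.5, "platform": 0.2, "author": 0.3}

# Invented per-stakeholder costs of each error type for a single item.
cost_false_positive = {"readers": 1.0, "platform": 2.0, "author": 8.0}  # accurate item suppressed
cost_false_negative = {"readers": 6.0, "platform": 3.0, "author": 0.5}  # misinformation let through

def expected_risk(p_misinfo: float, flag: bool) -> float:
    """Expected weighted cost of a decision, given P(item is misinformation)."""
    risk = 0.0
    for group, weight in stakeholder_weights.items():
        if flag:   # an error occurs only if the item was in fact accurate
            risk += weight * (1 - p_misinfo) * cost_false_positive[group]
        else:      # an error occurs only if the item was in fact misinformation
            risk += weight * p_misinfo * cost_false_negative[group]
    return risk

p = 0.35  # a classifier's estimated probability that the item is misinformation
decision = min((True, False), key=lambda flag: expected_risk(p, flag))
print(f"flag={decision}, expected risk={expected_risk(p, decision):.2f}")
```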

[Synthese] Why Mental Metaphors Do Not Help Us Understand Chatbot Mistakes

Year and volume: 2026 (Vol. 207).

Author(s): Markus Pantsar; Regina E. Fabry.

Abstract: The function of chatbots like OpenAI’s ChatGPT is based on detecting probabilistic patterns in the training data. This makes them vulnerable to generating factual mistakes in their outputs. Recently, it has become commonplace in philosophical, scientific, and popular discourses to capture such mistakes by metaphors that draw on discourses about the human mind. The two most popular metaphors at present are hallucinating and bullshitting. In this paper, we review, discuss, and criticise these mental metaphors. By applying conceptual metaphor theory, we give numerous reasons why these metaphors fail to provide a better understanding of factual chatbot mistakes. We conclude by calling for justifications of the epistemic feasibility and fruitfulness of the metaphors at issue. Furthermore, we raise the question of what would be lost if we stopped trying to capture factual chatbot mistakes by mental metaphors.

DOI: https://doi.org/10.1007/s11229-026-05551-8
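
The mechanism the abstract starts from, generation driven by probabilistic patterns alone, can be shown with a toy model. The bigram "corpus" below is invented and vastly simpler than any real chatbot, but it makes the point: sampling by corpus statistics produces fluent output whether or not the continuation is true.

```python
# Toy bigram language model (invented corpus; far simpler than ChatGPT,
# but the same basic mechanism): continuations are sampled from corpus
# statistics, and no fact-checking step ever enters the loop.
import random
from collections import defaultdict

# Invented corpus containing one true and one false claim.
corpus = "the capital of australia is canberra . the capital of australia is sydney ."
words = corpus.split()

bigrams: dict[str, list[str]] = defaultdict(list)
for w1, w2 in zip(words, words[1:]):
    bigrams[w1].append(w2)

random.seed(2)
out = ["the", "capital"]
while out[-1] != ".":
    out.append(random.choice(bigrams[out[-1]]))  # frequency alone decides

print(" ".join(out))  # equally fluent whether it names canberra or sydney
```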