[Philosophy of Science] Degrees of Value-Ladenness and Signal-to-Noise Ratio

Year and volume: 2025 (Vol. 92).

Author(s): Torsten Wilholt.

Abstract: Although fundamental arguments have been presented to support the value-laden nature of all scientific research, they appear to be difficult to apply to basic research in physics. To explain this, I argue that basic research in physics is, in a very specific respect, often value-laden to a lesser degree. To spell this out, I refer to the different signal-to-noise ratios that can be achieved in different fields of research. I also argue that having a very low degree of value-ladenness in the very specific respect that I identify does not mean that the research is not value-laden at all.

DOI: https://doi.org/10.1017/psa.2025.10142

[Studies in History and Philosophy of Science Part A] Existential Risk, Creativity & Well-Adapted Science

Year and volume: 2019 (Vol. 76).

Author(s): Adrian Currie.

Abstract: Existential risks, particularly those arising from emerging technologies, are a complex, obstinate challenge for scientific study. This should motivate studying how the relevant scientific communities might be made more amenable to studying such risks. I offer an account of scientific creativity suitable for thinking about scientific communities, and provide reasons for thinking contemporary science doesn’t incentivise creativity in this specified sense. I’ll argue that a successful science of existential risk will be creative in my sense. So, if we want to make progress on those questions, we should consider how to shift scientific incentives to encourage creativity. The analysis also has lessons for philosophical approaches to understanding the social structure of science. I introduce the notion of a ‘well-adapted’ science: one in which the incentive structure is tailored to the epistemic situation at hand.

DOI: https://doi.org/10.1016/j.shpsa.2018.09.008

[Minds and Machines] Risk Analysis in Automated Misinformation Detection

Year and volume: 2026 (Vol. 36).

Author(s): Adrian K. Yee.

Abstract: Machine learning models for misinformation detection are increasingly being used, and yet their risks have been under-analyzed, requiring insights from philosophy of science and political philosophy. A taxonomy of types of risks is provided, together with simple models for estimating them, that is sensitive to the value-laden features of judgments of misinformation and weighted by a potential item of misinformation’s impact on respective stakeholders. Failing to account for these risks is incompatible with a variety of civil virtues that are desired not only by liberal democratic societies but by authoritarian ones as well, suggesting the general applicability of this taxonomy of risks to contemporary and future societies.

DOI: https://doi.org/10.1007/s11023-026-09775-y

[Synthese] Why Mental Metaphors Do Not Help Us Understand Chatbot Mistakes

Year and volume: 2026 (Vol. 207).

Author(s): Markus Pantsar; Regina E. Fabry.

Abstract: The function of chatbots like OpenAI’s ChatGPT is based on detecting probabilistic patterns in the training data. This makes them vulnerable to generating factual mistakes in their outputs. Recently, it has become commonplace in philosophical, scientific, and popular discourses to capture such mistakes by metaphors that draw on discourses about the human mind. The two most popular metaphors at present are hallucinating and bullshitting. In this paper, we review, discuss, and criticise these mental metaphors. By applying conceptual metaphor theory, we provide numerous reasons why they do not succeed in providing us with a better understanding of factual chatbot mistakes. We conclude by calling for justifications of the epistemic feasibility and fruitfulness of the metaphors at issue. Furthermore, we raise the question of what would be lost if we stopped trying to capture factual chatbot mistakes by mental metaphors.

DOI: https://doi.org/10.1007/s11229-026-05551-8

[Frontiers in Research Metrics and Analytics] What, Me Worry? Research Policy and the Open Embrace of Industry-Academic Relations

Year and volume: 2021 (Vol. 6).

Author(s): Bennett Holman.

Abstract: The field of research policy has conducted extensive research on partnerships between industry and academics and concluded that such collaborations are generally beneficial. Such a view stands in stark contrast to the literature in the philosophy of science, which almost wholly finds such collaborations corrosive to scientific inquiry. After reviewing the respective literatures, I propose explanations for these polarized views that support the claim that both disciplines have only a partial vantage point on the effects of industry-funded science. In closing, I outline how the research agendas of each discipline might remediate their respective shortcomings.

DOI: https://doi.org/10.3389/frma.2021.600706

[Philosophy Compass] The Promise and Perils of Industry-Funded Science

Year and volume: 2018 (Vol. 13).

Author(s): Bennett Holman; Kevin C. Elliott.

Abstract: Private companies provide by far the most funding for scientific research and development. Nevertheless, relatively little attention has been paid to the dynamics of industry-funded research by philosophers of science. This paper addresses this gap by providing an overview of the major strengths and weaknesses of industry research funding, together with the existing recommendations for addressing the weaknesses. It is designed to provide a starting point for future philosophical work that explores the features of industry-funded research, avenues for addressing concerns, and strategies for making research funded by the private sector as fruitful as possible.

DOI: https://doi.org/10.1111/phc3.12544

[Research Ethics] The Ambivalence of Multi-Purpose Design: On the Dual-Use and Misuse Risks of Humanitarian UAVs

Year and volume: 2025 (Vol. 22).

Author(s): Martina Philippi.

Abstract: The development of UAVs with life-sign detection in the search-and-rescue (SAR) context challenges the assessment and mitigation of dual-use and misuse risks. These technologies can be characterized as modularly constructed technologies (MCTs): they consist of highly specific components, such as sensors, UAV hardware, and often AI-supported software, designed to be interchangeable and easy to integrate. MCTs can therefore be adapted efficiently for different purposes, and from this situation special dual-use challenges emerge. The recently completed project UAV-Rescue serves as an example of the development and assessment of such an MCT in the SAR context. Drawing on a recent paper from the context of autonomous driving, this contribution notes some shared observations but goes further by (1) exploring the characteristics of MCTs that create special challenges for the assessment and mitigation of dual-use risks and (2) proposing a different way of dealing with these challenges. The central thesis is that MCTs cannot be addressed satisfactorily with a classic framework for dual-use classification and corresponding regulation. They are not simply new technologies that carry a dual-use risk among other risks, but a whole new type of highly complex technology designed to be adapted quickly and efficiently to different application scenarios. The paper argues that, even if it is difficult or impossible to mitigate the dual-use risks of MCTs with the methods applied so far, it remains highly important to provide a systematic analysis of the gains and losses caused by this technology: both to understand the irreversible impact of such developments in the sense of technology assessment, and to weigh costs against benefits in a circumspect and careful manner.

DOI: https://doi.org/10.1177/17470161251344321

[Synthese] Of Opaque Oracles: Epistemic Dependence on AI in Science Poses No Novel Problems for Social Epistemology

Year and volume: 2025 (Vol. 205).

Author(s): Jakob Ortmann.

Abstract: Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.

DOI: https://doi.org/10.1007/s11229-025-04930-x

[arXiv] Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges

Year and volume: 2024.

Author(s): Sherri Lynn Conklin; Sue Bae; Gaurav Sett; Michael Hoffmann; Justin B. Biddle.

Abstract: In May 2023, the Georgia Tech Ethics, Technology, and Human Interaction Center organized the Conference on Ethical and Responsible Design in the National AI Institutes. Representatives from the National AI Research Institutes that had been established as of January 2023 were invited to attend; researchers representing 14 Institutes attended and participated. The conference focused on three questions: What are the main challenges that the National AI Institutes are facing with regard to the responsible design of AI systems? What are promising lines of inquiry to address these challenges? What are possible points of collaboration? Over the course of the conference, a revised version of the first question became a focal point: What are the challenges that the Institutes face in identifying ethical and responsible design practices and in implementing them in the AI development process? This document summarizes the challenges that representatives from the Institutes in attendance highlighted.

DOI: https://doi.org/10.48550/arXiv.2407.13926