Category Archives: Recommended Readings

[Frontiers in Research Metrics and Analytics] What, Me Worry? Research Policy and the Open Embrace of Industry-Academic Relations

Year & volume: 2021 (Vol. 6).

Author(s): Bennett Holman.

Abstract: The field of research policy has conducted extensive research on partnerships between industry and academics and concluded that such collaborations are generally beneficial. Such a view stands in stark contrast to the literature in the philosophy of science which almost wholly finds such collaborations corrosive to scientific inquiry. After reviewing the respective literatures, I propose explanations for these polarized views which support the claim that both disciplines have only a partial vantage point on the effects of industry-funded science. In closing, I outline how the research agendas of each discipline might remediate their respective shortcomings.

DOI: https://doi.org/10.3389/frma.2021.600706

[Philosophy Compass] The Promise and Perils of Industry-Funded Science

Year & volume: 2018 (Vol. 13).

Author(s): Bennett Holman; Kevin C. Elliott.

Abstract: Private companies provide by far the most funding for scientific research and development. Nevertheless, relatively little attention has been paid to the dynamics of industry-funded research by philosophers of science. This paper addresses this gap by providing an overview of the major strengths and weaknesses of industry research funding, together with the existing recommendations for addressing the weaknesses. It is designed to provide a starting point for future philosophical work that explores the features of industry-funded research, avenues for addressing concerns, and strategies for making research funded by the private sector as fruitful as possible.

DOI: https://doi.org/10.1111/phc3.12544

[Research Ethics] The Ambivalence of Multi-Purpose Design: On the Dual-Use and Misuse Risks of Humanitarian UAVs

Year & volume: 2025 (Vol. 22).

Author(s): Martina Philippi.

Abstract: The development of UAVs with life-sign detection in the search-and-rescue (SAR) context challenges the assessment and mitigation of dual-use and misuse risks. Those technologies can be characterized as modularly constructed technologies (MCTs): They consist of highly specific components like sensors, UAV hardware, and often AI-supported software, and those components are designed in a way that makes them exchangeable and easy to implement. Therefore, the MCTs can be efficiently adapted for different purposes. From this situation, special dual-use challenges emerge. The recently finished project UAV-Rescue shall serve as an example for the development and assessment scenario of such an MCT from the SAR context. Referring to a recent paper from the context of autonomous driving, the contribution shows some mutual observations but goes further by (1) exploring the characteristics of MCTs that lead to special challenges in the assessment and mitigation of dual-use risks and (2) proposing a different way of dealing with these challenges. The central thesis is that MCTs cannot be addressed satisfactorily with a classic framework for dual-use classification and corresponding regulation. These are not just new types of technologies that also bear a dual-use risk among other risks, but a whole new type of highly complex technology that is designed to be adapted quickly and efficiently to different application scenarios. The paper argues that, even if it is difficult or impossible to mitigate those dual-use risks in MCTs with methods applied so far, it is highly important to provide a systematic analysis of the gains and losses that are caused by this technology. This is important to understand the irreversible impact of such developments in the sense of technology assessment on the one hand, and on the other hand to weigh costs against benefits in a circumspect and careful manner.

DOI: https://doi.org/10.1177/17470161251344321
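
The paper's notion of a modularly constructed technology (MCT) turns on components that sit behind a common interface and can be swapped without touching the rest of the system. Below is a minimal sketch of that property in Python with entirely hypothetical classes (nothing here is from the UAV-Rescue project): exchanging one payload module for another re-targets the whole pipeline, which is exactly what makes repurposing, and hence misuse, cheap.

```python
from typing import Protocol

class PayloadModule(Protocol):
    """Common interface every mission-specific component implements."""
    def process(self, sensor_frame: bytes) -> dict: ...

class LifeSignDetector:
    """SAR use: flag possible life signs in a sensor frame (toy heuristic
    standing in for AI-supported detection)."""
    def process(self, sensor_frame: bytes) -> dict:
        return {"life_signs_detected": b"\xff" in sensor_frame}

class ObjectTracker:
    """A different mission drops into the same slot; the rest of the UAV
    stack is untouched -- this exchangeability is the dual-use lever."""
    def process(self, sensor_frame: bytes) -> dict:
        return {"objects_tracked": sensor_frame.count(b"\x01")}

def uav_pipeline(frame: bytes, payload: PayloadModule) -> dict:
    # The airframe and control software see only the shared interface.
    return payload.process(frame)

print(uav_pipeline(b"\x00\xff", LifeSignDetector()))
print(uav_pipeline(b"\x01\x01", ObjectTracker()))
```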

[Synthese] Of Opaque Oracles: Epistemic Dependence on AI in Science Poses no Novel Problems for Social Epistemology

Year & volume: 2025 (Vol. 205).

Author(s): Jakob Ortmann.

Abstract: Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice, AlphaFold2. I argue that for epistemic reliance on an opaque system, trust is not necessary, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means and whether there exist trustable researchers who have performed such evaluations adequately.

DOI: https://doi.org/10.1007/s11229-025-04930-x
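
The paper's central move, that epistemic reliance on an opaque system needs empirically established reliability rather than trust in someone who understands its internals, can be pictured as a black-box benchmark evaluation. The following is a minimal sketch with hypothetical names; it does not resemble AlphaFold2's actual validation protocol.

```python
def empirical_reliability(predict, benchmark, tolerance):
    """Score an opaque predictor against held-out cases with known answers.

    predict:   any callable; its internals are never inspected
    benchmark: list of (input, ground_truth) pairs
    tolerance: maximum error for a prediction to count as correct
    """
    hits = sum(1 for x, truth in benchmark
               if abs(predict(x) - truth) <= tolerance)
    return hits / len(benchmark)

# Toy usage: the model is a black box to us, yet its reliability in this
# context is established purely by its empirical track record.
opaque_model = lambda x: 2 * x + 0.1
print(empirical_reliability(opaque_model, [(1, 2), (2, 4), (3, 6)], 0.2))  # 1.0
```

Whether a given score licenses reliance is then a contextual judgment for the researchers who designed and vetted the evaluation, which is where, on the paper's account, trustable researchers re-enter the picture.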

[arXiv] Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges

Year & volume: 2024.

Author(s): Sherri Lynn Conklin; Sue Bae; Gaurav Sett; Michael Hoffmann; Justin B. Biddle.

Abstract: In May 2023, the Georgia Tech Ethics, Technology, and Human Interaction Center organized the Conference on Ethical and Responsible Design in the National AI Institutes. Representatives from the National AI Research Institutes that had been established as of January 2023 were invited to attend; researchers representing 14 Institutes attended and participated. The conference focused on three questions: What are the main challenges that the National AI Institutes are facing with regard to the responsible design of AI systems? What are promising lines of inquiry to address these challenges? What are possible points of collaboration? Over the course of the conference, a revised version of the first question became a focal point: What are the challenges that the Institutes face in identifying ethical and responsible design practices and in implementing them in the AI development process? This document summarizes the challenges that representatives from the Institutes in attendance highlighted.

DOI: https://doi.org/10.48550/arXiv.2407.13926

[AIES 2020 Proceedings] What’s Next for AI Ethics, Policy, and Governance? A Global Overview

Year & volume: 2020.

Author(s): Daniel Schiff; Justin Biddle; Jason Borenstein; Kelly Laas.

Abstract: Since 2016, more than 80 AI ethics documents – including codes, principles, frameworks, and policy strategies – have been produced by corporations, governments, and NGOs. In this paper, we examine three topics of importance related to our ongoing empirical study of ethics and policy issues in these emerging documents. First, we review possible challenges associated with the relative homogeneity of the documents’ creators. Second, we provide a novel typology of motivations to characterize both obvious and less obvious goals of the documents. Third, we discuss the varied impacts these documents may have on the AI governance landscape, including what factors are relevant to assessing whether a given document is likely to be successful in achieving its goals.

DOI: https://doi.org/10.1145/3375627.3375804

[Nature Communications] Risks of AI Scientists: Prioritizing Safeguarding over Autonomy

Year & volume: 2025 (Vol. 16).

Author(s): Xiangru Tang; Qiao Jin; Kunlun Zhu; Tongxin Yuan; Yichi Zhang; Wangchunshu Zhou; Meng Qu; Yilun Zhao; Jian Tang; Zhuosheng Zhang; Arman Cohan; Dov Greenbaum; Zhiyong Lu; Mark Gerstein.

Abstract: AI scientists powered by large language models have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines. While their capabilities are promising, these agents also introduce novel vulnerabilities that require careful consideration for safety. However, there has been limited comprehensive exploration of these vulnerabilities. This perspective examines vulnerabilities in AI scientists, shedding light on potential risks associated with their misuse, and emphasizing the need for safety measures. We begin by providing an overview of the potential risks inherent to AI scientists, taking into account user intent, the specific scientific domain, and their potential impact on the external environment. Then, we explore the underlying causes of these vulnerabilities and provide a scoping review of the limited existing works. Based on our analysis, we propose a triadic framework involving human regulation, agent alignment, and an understanding of environmental feedback (agent regulation) to mitigate these identified risks. Furthermore, we highlight the limitations and challenges associated with safeguarding AI scientists and advocate for the development of improved models, robust benchmarks, and comprehensive regulations.

DOI: https://doi.org/10.1038/s41467-025-63913-1
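
The proposed triadic framework can be read as three independent gates that must all pass before an agent's proposed action executes. The sketch below is an illustrative assumption about how such gates might compose; none of the names or checks come from the paper itself.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    user_intent: str   # e.g. "characterize compound stability"
    domain: str        # e.g. "chemistry"
    step: str          # the concrete operation the agent wants to run

def human_regulation(action: ProposedAction, approved_intents: set) -> bool:
    # Gate 1 -- human regulation: only intents vetted by a human reviewer pass.
    return action.user_intent in approved_intents

def agent_alignment(action: ProposedAction, restricted_domains: set) -> bool:
    # Gate 2 -- agent alignment: refuse steps in high-risk scientific domains.
    return action.domain not in restricted_domains

def agent_regulation(hazard_score: float, threshold: float = 0.5) -> bool:
    # Gate 3 -- environmental feedback: abort when a monitored signal
    # (e.g. a toxicity screen) crosses a hazard threshold.
    return hazard_score < threshold

def safeguard_gate(action, approved, restricted, hazard_score) -> bool:
    # An action executes only if all three safeguards agree.
    return (human_regulation(action, approved)
            and agent_alignment(action, restricted)
            and agent_regulation(hazard_score))

act = ProposedAction("characterize compound stability", "chemistry", "run assay")
print(safeguard_gate(act, {"characterize compound stability"},
                     {"select agents"}, hazard_score=0.1))  # True
```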

[Topoi] On the Philosophical Naivety of Engineers in the Age of Machine Learning

Year & volume: 2025.

Author(s): M.Z. Naser.

Abstract: This paper examines the paradoxical decline in engagement with philosophy of science among engineers precisely when machine learning (ML) systems are increasingly performing complex epistemological functions in engineering practice. We identify how philosophical naivety, characterized by the uncritical adoption of reductive frameworks regarding consciousness, intelligence, and ethics, creates tangible organizational and technical liabilities. We then demonstrate how conceptual limitations in engineers’ philosophical foundations lead to three primary flaws: 1) ontological misclassification of system capabilities, 2) ethical blind spots in ML system design and application, and 3) inadequate epistemological approaches and hidden philosophical commitments for interpreting model outputs. Thus, we argue that renewed engagement with the philosophy of science is not merely academic but necessary for engineers to maintain epistemic authority and responsibility in an era where engineering judgment is increasingly delegated to or mediated by ML systems. In response, we propose a technical-philosophical framework integrating perspectives from philosophy of mind, ethics, epistemology, and engineering to address these shortcomings systematically.

DOI: https://doi.org/10.1007/s11229-025-05044-0

[Erkenntnis] Reverse-Engineering Risk

Year & volume: 2025 (Vol. 90).

Author(s): Angela O’Sullivan; Lilith Mace.

Abstract: Three philosophical accounts of risk dominate the contemporary literature. On the probabilistic account, risk has to do with the probability of a disvaluable event obtaining; on the modal account, it has to do with the modal closeness of that event obtaining; on the normic account, it has to do with the normalcy of that event obtaining. The debate between these accounts has proceeded via counterexample-trading, with each account having some cases it explains better than others, and some cases that it cannot explain at all. In this article, we attempt to break the impasse between the three accounts of risk through a shift in methodology. We investigate the concept of risk via the method of conceptual reverse-engineering, whereby a theorist reconstructs the need that a concept serves for a group of agents in order to illuminate the shape of the concept: its intension and extension. We suggest that risk functions to meet our need to make decisions that reduce disvalue under conditions of uncertainty. Our project makes plausible that risk is a pluralist concept: meeting this need requires that risk takes different forms in different contexts. But our pluralism is principled: each of these different forms are part of one and the same concept, that has a ‘core-to-periphery’ structure, where the form the concept takes in typical cases (at its ‘core’) explains the form it takes in less typical cases (at its ‘periphery’). We then apply our findings to epistemic risk, to resolve an ambiguity in how ‘epistemic risk’ is standardly understood.

DOI: https://doi.org/10.1007/s11229-025-05044-0
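
For readers new to this debate, the three accounts the abstract contrasts are often glossed roughly as follows; the formalizations below are a paraphrase of the wider literature, not the authors' own definitions.

```latex
% Rough glosses; exact formulations vary across authors.
\begin{align*}
\textbf{Probabilistic:} &\quad E \text{ is risky iff } \Pr(E) > t
    \text{ for some contextual threshold } t.\\
\textbf{Modal:} &\quad E \text{ is risky iff } E \text{ obtains in some
    possible world sufficiently close to the actual world.}\\
\textbf{Normic:} &\quad E \text{ is risky iff } E\text{'s obtaining would
    require little or no departure from normal conditions.}
\end{align*}
```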