Similar Documents
Found 20 similar documents (search time: 673 ms).
1.
This paper discusses the ramifications of the problem of proving that the design of a symbol-manipulating processor conclusively does what the designer intended. More specifically, it explores such questions as: (1) What different interpretations can be given to the expression “the intent of the process”? (2) Does the process end, or should it? In either event, how do we prove it? (3) If the process does end, how do we prove that it does what was intended? This question may be meaningful even if the process does not end. (4) Is there a whole class of processes that stand or fall together? Can we adapt our proof of conclusiveness to cover the whole class? (5) Do the processes of the class yield the same or different results, and whichever it is, how do we prove it? The example of formal differentiation from the calculus is used to illustrate these problems, and question (4) is developed in detail to illustrate the mixture of mathematical, logical, linguistic, computer-science, procedural and even psychological insights involved. References are given for the methods of attack on the other questions.
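To make the termination question concrete, here is a minimal symbolic differentiator, a sketch in Python rather than anything taken from the paper; the tuple representation and function name are illustrative assumptions. The process provably ends because each recursive call receives a strictly smaller subexpression.

```python
# Minimal symbolic differentiation sketch (illustrative, not the paper's
# formalism). Expressions are nested tuples: ("+", e1, e2), ("*", e1, e2),
# a variable name (str), or a number.

def d(expr, var):
    """Differentiate expr with respect to var.

    Termination follows by structural induction: every recursive call
    receives a proper subexpression, so the recursion bottoms out at
    variables and constants.
    """
    if isinstance(expr, (int, float)):   # d/dx c = 0
        return 0
    if isinstance(expr, str):            # d/dx x = 1, d/dx y = 0
        return 1 if expr == var else 0
    op, a, b = expr
    if op == "+":                        # sum rule
        return ("+", d(a, var), d(b, var))
    if op == "*":                        # product rule
        return ("+", ("*", d(a, var), b), ("*", a, d(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x + 3) -> ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)
print(d(("+", ("*", "x", "x"), 3), "x"))
```

Note that the result is correct but unsimplified, which illustrates the paper's question (1): whether "the intent of the process" includes simplification must itself be pinned down before any proof of correctness can be attempted.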

2.
As healthcare organizations continue to be asked to do more with less, access to information is essential for sound evidence-based decision making. Business intelligence (BI) systems are designed to deliver decision-support information and have repeatedly been shown to provide value to organizations. Many healthcare organizations have yet to implement BI systems, and no existing research provides a healthcare-specific framework to guide implementation. To address this research gap, we employ a case study in a Canadian Health Authority to address three questions: (1) what are the most significant adverse impacts on the organization's decision processes and outcomes attributable to a lack of decision-support capabilities? (2) what are the root causes of these impacts, and what workarounds do they necessitate? and (3) in light of the issues identified, what are the key considerations for healthcare organizations in the early stages of BI implementation? Using the concept of co-agency as a guide, we identified significant decision-related adverse impacts and their root causes. We found strong management support, the right skill sets and an information-oriented culture to be key implementation considerations. Our major contribution is a framework for defining and prioritizing decision-support information needs in the context of healthcare-specific processes.

3.
To what extent should humans transfer, or abdicate, “responsibility” to computers? In this paper, I distinguish six different senses of ‘responsible’ and then consider in which of these senses computers can, and in which they cannot, be said to be “responsible” for “deciding” various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater “control” of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve increased reliability through complex devices of our own design; (ii) even when computers are more reliable than humans, certain tasks (e.g., selecting an appropriate gift for a friend, solving the daily crossword puzzle) are inappropriately performed by anyone (or anything) other than oneself. In critically evaluating these claims, I arrive at three main conclusions: (1) While we ought to correct for many of our shortcomings by availing ourselves of the computer's larger memory, faster processing speed and greater stamina, we are limited by our own finiteness and fallibility (rather than by whatever limitations may be inherent in silicon and metal) in our ability to transcend our own unreliability. Moreover, if we rely on programmed computers to such an extent that we lose touch with the human experience and insight that formed the basis for their programming design, our fallibility is magnified rather than mitigated. (2) Autonomous moral agents can reasonably defer to greater expertise, whether human or cybernetic. But they cannot reasonably relinquish “background-oversight” responsibility. They must be prepared, at least periodically, to review whether the “expertise” to which they defer is indeed functioning as he/she/it was authorized to do, and to take steps to revoke that authority, if necessary. (3) Though outcomes matter, it can also matter how they are brought about, and by whom. Thus, reflecting on how much of our lives should be directed and implemented by computer may be another way of testing any thoroughly end-state or consequentialist conception of the good and decent life. To live with meaning and purpose, we need to actively engage our own faculties and empathetically connect with, and resonate to, others. Thus there is some limit to how much of life can be appropriately lived by anyone (or anything) other than ourselves.

4.
李飞  李达军  刘茜 《科研管理》2019,40(11):285
Peer review has long been a hot topic in academia, yet the phenomenon of non-peer review (review conducted by people who are not in fact peers of the author) is widespread within peer review, and it has not received the attention from scholars that it deserves. Addressing this phenomenon, this paper studies and answers three questions: First, is non-peer review unavoidable? Second, what are the main problems it causes? Finally, how can one become a good non-peer reviewer?

5.
李飞  李达军  刘茜 《科研管理》2006,40(11):285-288
Peer review has long been a hot topic in academia, yet the phenomenon of non-peer review (review conducted by people who are not in fact peers of the author) is widespread within peer review, and it has not received the attention from scholars that it deserves. Addressing this phenomenon, this paper studies and answers three questions: First, is non-peer review unavoidable? Second, what are the main problems it causes? Finally, how can one become a good non-peer reviewer?

6.
Some ideas have dramatically more impact than others – they may overturn existing paradigms or launch new areas of scientific inquiry. Where do such high impact ideas come from? Are some search processes significantly more likely to lead to breakthrough idea generation than others? In this research, we compare “high impact” papers from the social sciences with random-but-matched articles published in the same journals in the same years. We find that search scope, search depth, and atypical connections between different research domains significantly increase a paper's impact, even when controlling for the experience and prior publishing success of the author(s).

7.
If you look carefully at a football, you will notice that its surface is composed of hexagons and pentagons. Hexagons lie side by side, while any pentagon is surrounded by five hexagons. How many corners and edges are there? Football players do not have to know that as long as they pass and shoot right; an architect or structural physicist, however, could readily give the answer.
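For reference, the counts the abstract hints at follow from Euler's polyhedron formula; a quick derivation (not part of the abstract itself), assuming the standard ball with 12 pentagons and 20 hexagons and three edges meeting at every corner:

\[
F = 12 + 20 = 32, \qquad
E = \tfrac{1}{2}(12 \cdot 5 + 20 \cdot 6) = 90, \qquad
V = \tfrac{1}{3}(12 \cdot 5 + 20 \cdot 6) = 60,
\]
\[
V - E + F = 60 - 90 + 32 = 2 \quad \text{(Euler's formula, as a consistency check)}.
\]

So the ball has 60 corners and 90 edges.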

8.
This paper examines the existing literature on new technology-based firms in an attempt to understand how such companies contribute to the vitality of an economy and to see which variables are critical in enabling them to do so. From recent research efforts, we have drawn together a set of hypotheses and organized them around four central questions: (1) What do new firms contribute to the economy? (2) What factors contribute to the success of such enterprises? (3) In what cultural contexts do new businesses succeed? and (4) Which government policies are effective in stimulating and supporting new companies? We found that existing studies suggest (1) that new technology-based firms contribute significantly to an economy in terms of exports, employment, taxes paid, research and development, and innovations; (2) that the founders of these new businesses tend to have a strong entrepreneurial heritage, a development- rather than research-oriented background, and a high need to achieve, and are young and highly educated; (3) that sector differences may be a more important influence on company location than are regional policies; and (4) that no one government program has proven itself to be more than marginally successful in stimulating industrial innovation.

9.
Can a player be held morally responsible for the choices that she makes within a videogame? Do the moral choices that the player makes reflect in any way on the player’s actual moral sensibilities? Many videogames offer players numerous choices within the game, including moral choices. But the scope of these choices is quite limited. I attempt to analyze these issues by drawing on philosophical debates about the nature of free will. Many philosophers worry that, if our actions are predetermined, then we cannot be held morally responsible for them. However, Harry Frankfurt’s compatibilist account of free will suggests that an agent can be held morally responsible for actions that she wills, even if the agent is not free to act otherwise. Using Frankfurt’s analysis, I suggest that videogames represent deterministic worlds in which players lack the ability to freely choose what they do, and yet players can be held morally responsible for some of their actions, specifically those actions that the player wants to do. Finally, I offer some speculative comments on how these considerations might impact our understanding of the player’s moral psychology as it relates to the ethics of imagined fictional events.

10.
In most communication networks, error probabilities 1 → 0 and 0 → 1 are equally likely to occur. However, in some optical networks, such as local and access networks, this is not the case. In these networks, the number of received photons never exceeds the number of transmitted ones. Hence, if the receiver operates correctly, only 1 → 0 errors can occur. Motivated by this fact, in this paper, we present a class of integer codes capable of correcting burst and random asymmetric (1 → 0) errors within a b-bit byte. Unlike classical codes, the proposed codes are defined over the ring of integers modulo 2^b − 1. As a result, they have the potential to be implemented in software without any hardware assist.
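As a flavor of how arithmetic over the ring of integers modulo 2^b − 1 can correct asymmetric errors entirely in software, here is a toy single-error-correcting scheme; it is a minimal sketch in the spirit of the abstract, not the authors' construction, and the encode/decode names and the restriction on data values are assumptions of the sketch.

```python
# Toy "integer code" sketch over Z_{2^b - 1} (NOT the paper's construction):
# one b-bit data byte plus one b-bit check symbol, correcting a single
# asymmetric 1 -> 0 bit flip in the data byte.

B = 8
M = (1 << B) - 1  # all arithmetic is done modulo 2^b - 1

def encode(d: int) -> tuple[int, int]:
    """Return (data, check) with check = d mod M.

    We restrict d to [0, M - 1] so d is uniquely determined mod M
    (a simplification for this sketch).
    """
    assert 0 <= d < M
    return d, d % M

def decode(d_recv: int, c_recv: int) -> int:
    """Correct a single 1 -> 0 bit flip in the data byte, if present."""
    syndrome = (c_recv - d_recv) % M
    if syndrome == 0:
        return d_recv                    # no error in the data byte
    if syndrome & (syndrome - 1) == 0:   # syndrome is a power of two:
        return d_recv + syndrome         # bit log2(syndrome) flipped 1 -> 0
    # A flip in the check symbol itself yields syndrome M - 2^j, which is
    # never a power of two; a fuller sketch would then trust the data byte.
    raise ValueError("uncorrectable or non-asymmetric error pattern")

# A 1 -> 0 error subtracts a power of two; the syndrome recovers it.
d, c = encode(0b1011_0110)
corrupted = d & ~(1 << 5)                # bit 5 flips 1 -> 0 on the channel
assert decode(corrupted, c) == d
```

The key property used here is that the powers of two 2^0, ..., 2^(b−1) are all distinct modulo 2^b − 1, so the syndrome pinpoints which bit dropped; the paper's actual codes additionally handle burst patterns, which this toy does not.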

11.

Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. This paper takes a qualitative social science approach to the question ‘how safe is safe enough?’ Drawing on 50 interviews with people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’

12.
In a series of three papers, we attempt to evaluate the past scientific performance of the three main particle accelerators at the Geneva-based European Organization for Nuclear Research (CERN) over the period since 1960, and to assess the future prospects for CERN and its users during the next ten to fifteen years. We concerned ourselves in the first paper (Paper I - Martin and Irvine [29]) with the position of the CERN accelerators in world high-energy physics relative to those at other large laboratories working in the field. We dealt primarily with the period from 1969 to 1978, and attempted to establish how the experimental output from the three principal CERN accelerators, taken as a whole, compared with that from other major facilities. In undertaking this comparative evaluation, we drew on the method of “converging partial indicators” used in previous studies of three Big Science specialties. In contrast, the second paper (Paper II - Irvine and Martin [24]) focused in detail on the scientific performance of each of the CERN accelerators taken individually. In particular, it asked, first, how the outputs from the CERN 28 GeV (giga or billion electron-volts) Proton Synchrotron compare with those from a very similar 33 GeV American accelerator at Brookhaven National Laboratory over the past two decades. Second, how great have been the experimental achievements of the Intersecting Storage Rings in world terms? And, third, how do the outputs from the CERN 400 GeV Super Proton Synchrotron and from a rival US machine at Fermi National Accelerator Laboratory compare? Attempts were then made to identify the main factors responsible for determining the relative scientific performance of each CERN machine. These factors are of relevance to the subject of this third paper (Paper III), which sets out to assess the future prospects for CERN and in particular for LEP, the large electron-positron collider scheduled for completion in the latter part of 1988. What are the construction requirements (financial and technical) associated with LEP, and how easily will they be met? How does the scientific potential of LEP compare with that of other major accelerators under construction or planned around the world? In the light of the previous record of the CERN accelerators, to what extent is this scientific potential likely to be realized? What spin-off is there likely to be from LEP to accelerator physics in general? Finally, how “flexible” is LEP - in other words, what is its potential for future development? The paper concludes with a discussion of the extent to which predictive techniques can be utilized in the formulation of scientific priorities, and of the problems in current science policy-making that such techniques might help address.

13.
In a series of three papers, we attempt to evaluate the past scientific performance of the three main particle accelerators at the Geneva-based European Organization for Nuclear Research (CERN) over the period since 1960, and to assess the future prospects for CERN and its users during the next ten to fifteen years. We concerned ourselves in a previous paper (Paper I - Martin and Irvine [51]) with the position of the CERN accelerators in world high-energy physics relative to those at other large laboratories working in the field. We dealt primarily with the period from 1969 to 1978, and attempted to establish how the experimental output from the three principal CERN accelerators, taken as a whole, compared with that from other major facilities. In undertaking this comparative evaluation, we drew on the method of “converging partial indicators” used in previous studies of three Big Science specialties. In contrast, this paper (Paper II) focuses in detail on the scientific performance of each of the CERN accelerators taken individually. In particular, it asks, first, how the outputs from the CERN 28 GeV (giga or billion electron-volts) Proton Synchrotron compare with those from a very similar 33 GeV American accelerator at Brookhaven National Laboratory over the past two decades. Second, how great have been the experimental achievements of the Intersecting Storage Rings in world terms? And, third, how do the outputs from the CERN 400 GeV Super Proton Synchrotron and from a rival American machine at Fermi National Accelerator Laboratory compare? Attempts are then made to identify the main factors responsible for determining the relative scientific performance of each CERN machine. These factors are of relevance to the subject of a third paper (Paper III - Martin and Irvine [52]), which sets out to assess the future prospects for CERN and in particular for LEP, the large electron-positron collider scheduled for completion in the latter part of the 1980s. What are the construction requirements (financial and technical) associated with LEP, and how easily will they be met? How does the scientific potential of LEP compare with that of other major accelerators under construction around the world? And, in the light of the previous record of the CERN accelerators, to what extent is this potential likely to be realized? The paper concludes with a discussion of the extent to which predictive techniques can be utilized in the formulation of scientific priorities, and of the problems in current science policy-making that such techniques might help address.

14.
Patent trolls (or sharks) are patent-holding individuals or (often small) firms who trap R&D-intensive manufacturers in patent-infringement situations in order to receive damage awards for the illegitimate use of their technology. While of great concern to management, their existence and impact for both corporate decision makers and policy makers remain to be fully analyzed from an academic standpoint. In this paper we show why patent sharks can operate profitably, why they are of growing concern, how manufacturers can forearm themselves against them, and which issues policy makers need to address. To do so, we map international indemnification rules onto the strategic rationales of small patent-holding firms and large manufacturers within a theoretical model. Our central finding is that the courts’ unrealistic consideration of the trade-offs faced by inadvertent infringers is a central condition for sharks to operate profitably.

15.
16.
Synthetic chemotactic micro/nanomotors are envisioned to actively ‘seek out’ targets by following specific chemicals, but they are mainly powered by bioincompatible fuels and only show pseudochemotaxis (or advanced chemokinesis) due to their weak self-reorientation capabilities. Here we demonstrate that synthetic ZnO-based Janus micromotors can be powered by the alternative biocompatible fuel of CO2, and further provide the first example of self-reorientation-induced biomimetic chemotaxis using them. The ZnO-based micromotors are highly sensitive to dissolved CO2 in water, which enables the corrosion of ZnO to continuously occur by providing H+ through hydration. Thus, they can autonomously move even in water exposed to air based on self-diffusiophoresis. Furthermore, they can sense the local CO2 gradient and perform positive chemotaxis by self-reorientations under the phoretic torque. Our discovery opens a gate to developing intelligent micro/nanomotors powered by, and sensitive to, biocompatible atmospheric or endogenous gaseous chemicals for biomedical and environmental applications.

17.
DiVernieri R 《Endeavour》2008,32(3):107-110
When you hear the phrase 'chemistry set', what is the first image that pops into your head? Is it one of a mad scientist in a dark laboratory full of glassware bubbling over? Is it your high school or college chemistry lab? For many, the phrase evokes a time of exploration and experimentation, a time when a child could be and do anything they could imagine. Although first and foremost a toy providing hours of entertainment, chemistry sets also allowed children to interact intimately with science. It is these interactions that led many to go on to become famous chemists and physicists. By studying these sets - their content, the imagery surrounding them and the people that used them - we can hopefully better understand how society viewed, and continues to view, science.

18.
Learning low-dimensional dense representations of the vocabulary of a corpus, known as neural embeddings, has gained much attention in the information retrieval community. While there have been several successful attempts at integrating embeddings within the ad hoc document retrieval task, no systematic study has been reported that explores the various aspects of neural embeddings and how they impact retrieval performance. In this paper, we perform a methodical study of how neural embeddings influence the ad hoc document retrieval task. More specifically, we systematically explore the following research questions: (i) do methods based solely on neural embeddings perform competitively with state-of-the-art retrieval methods, with and without interpolation? (ii) is there any statistically significant difference between the performance of retrieval models based on word embeddings and those based on knowledge graph entity embeddings? and (iii) is there a significant difference between using locally trained neural embeddings and using globally trained neural embeddings? We examine these three research questions across both hard and all queries. Our study finds that word embeddings do not show competitive performance against any of the baselines. In contrast, entity embeddings show competitive performance with the baselines and, when interpolated, outperform the best baselines for both hard and soft queries.
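The "interpolation" the abstract refers to is commonly a linear mixture of a lexical retrieval score with an embedding similarity. Below is a minimal sketch of that idea in Python; the weight, the function names, and the use of pre-computed query/document vectors are assumptions of this sketch, not the paper's actual setup.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def interpolated_score(lexical: float, q_emb: np.ndarray,
                       d_emb: np.ndarray, lam: float = 0.7) -> float:
    """Linearly interpolate a lexical score (e.g. BM25) with an embedding score.

    In practice both components are first normalized to comparable ranges
    (e.g. min-max over the retrieved list); omitted here for brevity.
    """
    return lam * lexical + (1.0 - lam) * cosine(q_emb, d_emb)

# Toy example: re-rank two documents by the mixed score.
q = np.array([0.2, 0.9, 0.1])
docs = {
    "d1": (0.80, np.array([0.1, 0.8, 0.3])),  # (normalized BM25, doc embedding)
    "d2": (0.95, np.array([0.9, 0.1, 0.2])),
}
ranking = sorted(docs, reverse=True,
                 key=lambda d: interpolated_score(docs[d][0], q, docs[d][1]))
```

Setting lam to 1.0 recovers the pure lexical baseline, which is what makes the "with and without interpolation" comparison in the abstract straightforward to run.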

19.
When software is written and then utilized in complex computer systems, problems often occur. Sometimes these problems cause a system to malfunction, and in some instances such malfunctions cause harm. Should any of the persons involved in creating the software be blamed and punished when a computer system failure leads to persons being harmed? In order to decide whether such blame and punishment are appropriate, we need to first consider whether the people are “morally responsible”. Should any of the people involved in creating the software be held morally responsible, as individuals, for the harm caused by a computer system failure? This article provides one view of moral responsibility and then discusses some barriers to holding people morally responsible. Next, it provides information about the Therac-25, a computer-controlled medical linear accelerator, and its computer-system failures that led to deaths and injuries. Finally, it investigates whether two key people involved in the Therac-25 case could reasonably be considered to have some degree of moral responsibility for the deaths and injuries. The conclusions about whether or not these people were morally responsible necessarily rest upon a certain amount of speculation about what they knew and what they did. These limitations, however, should not cause us to conclude that discussions of moral responsibility are fruitless. In some cases, determinations of moral responsibility may be made, and in others the investigation is still worthwhile, as the article demonstrates.

20.
What do Social Networking Sites (SNS) ‘do to us’: are they a damning threat or an emancipating force? Recent publications on the impact of “Web 2.0” proclaim sharply opposed evaluative positions. With the aim of finding a middle ground, this paper develops a pragmatist approach to SNS based on the work of Richard Rorty. The argument proceeds in three steps. First, we analyze SNS as conversational practices. Second, we outline, in the form of an imaginary conversation between Rorty and Heidegger, a positive and a negative ‘conversational’ view of SNS. Third, we deploy a reflection, again using Rortian notions, on that evaluation, starting from the concept of ‘self-reflectivity.’ Finally, the relations between these three steps are investigated in more detail. By way of the sketched technique, we can interrelate the two opposing sides of the recent debates, hope and threat, and judge SNS in all their ambiguity.
