Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of a group of preverbal infants involved in the robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.

2.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria for a robot's moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these elements: a moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; and moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite for resolving at least some of them.

3.
Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.

4.
Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.

5.
Information plays a major role in any moral action. ICT (Information and Communication Technologies) have revolutionized the life of information, from its production and management to its consumption, thus deeply affecting our moral lives. Amid the many issues they have raised, a very serious one, discussed in this paper, is labelled the tragedy of the Good Will. This is represented by the increasing pressure that ICT and their deluge of information are putting on any agent who would like to act morally, when informed about actual or potential evils, but who also lacks the resources to do much about them. In the paper, it is argued that the tragedy may be at least mitigated, if not solved, by seeking to re-establish some equilibrium, through ICT themselves, between what agents know about the world and what they can do to improve it.

6.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades, so it is necessary to ensure that their behaviour is adequate. By analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is the ability of a machine to perform activities that would require morality in humans. Capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, and artificial (synthetic) emotions, come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. Just as the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to view artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. The development process must take an evolutionary form, with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through its discussion and analysis of general requirements for the design of ethical robots.

7.
This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the new concept of autonomous power and use it to identify the types of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important to investigate, already today, the mechanisms that control human behavior in this respect. The results may be used when designing future military robots to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of the many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

8.
Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of its combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps the most interesting assertion is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This paper will focus on these claims by looking at what has been discovered about the capability of humans to behave ethically on the battlefield, and then comparing those findings with the claims made by robotics researchers that their machines are able to behave more ethically on the battlefield than human soldiers. Throughout the paper we will explore the philosophical critique of this claim and also look at how the robots of today are affecting our ability to fight wars in a just manner.

9.
This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.

10.
While social robots are the creation of human beings, it is not obvious what kinds of conversation people desire to have with computer-based devices. Progressive improvements in speech recognition, natural language parsing, and physical embodiment are making it technologically possible for social robots to engage with humans in essentially the full range of conversational modes that we use with one another. However, when we examine the variety of possible (human) linguistic functions, we discover reasons why people may not wish for total verisimilitude when interacting linguistically with robots. Informational and empathetic functions are likely to be more welcomed than those involving social control or critique.

11.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of “presence” and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people’s reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

12.
Industrial and domestic robotics provide fascinating and relevant insights into current and possible trajectories for the development of contemporary societies. While industrial robotics has had an established place since the 1960s, domestic robotics, wherein humans interact with social robots, is still an unsettled area. After reviewing data on the diffusion of social robots and on their use, the paper discusses the historical tradition from which social robots come. This discussion is followed by an analysis of the penetration of social robots into everyday life, and the importance of interdisciplinary research is highlighted.

13.
14.
Although humanoid service robots have attracted considerable research attention, it remains unclear how consumers respond to specific human characteristics of robots. Drawing on theories of social categorization and identification, we study the role of consumers’ perceived control as a psychological mechanism that explains how human-robot gender congruity alters consumers’ affective reactions (feelings of comfort in the service encounter and service brand attitudes). We also consider that such gender congruity effects may be contingent on the individual cultural value of masculinity. We demonstrate experimentally that human-robot gender congruity (vs. incongruity) elicits more positive affect, while masculinity moderates some of these effects. Moreover, perceived control mediates the effects of gender congruity on affective reactions only for consumers high in masculinity. We offer three major theoretical contributions: we 1) draw on social identity theory to shed light on how human-robot gender congruity affects consumer behavior in service encounters, 2) demonstrate the role of perceived control as a psychological process variable that explains these effects, and 3) provide insights into the role of the cultural value of masculinity as a factor that shapes human-robot gender congruity effects.

15.
周楠 《科教文汇》2011,(5):23-24
"信"字是在《论语》中出现的频率较高的实词之一,它作为一种伦理观念在孔子的整个道德体系中占据着十分重要的地位。孔子认为,"信"伦理观念是贯穿于整个社会生活之中的,与"仁"、"行"、"政"等有着密切的关系。而探讨关于"信"的伦理观念对于当今时代更加具有十分重要的意义,无论是个人自立、朋友之间还是经济活动乃至治理国家方面,它都将产生积极的影响,因此,我们应该重视它的现代教育意义。  相似文献   

16.
Values such as respect for autonomy, safety, enablement, independence, privacy and social connectedness should be reflected in the design of social robots. The same values should affect the process by which robots are introduced into the homes of older people to support independent living. These values may, however, be in tension. We explored what potential users thought about these values, and how the tensions between them could be resolved. With the help of partners in the ACCOMPANY project, 21 focus groups (123 participants) were convened in France, the Netherlands and the UK. These groups consisted of: (i) older people, (ii) informal carers and (iii) formal carers of older people. The participants were asked to discuss scenarios in which there is a conflict between older people and others over how a robot should be used, these conflicts reflecting tensions between values. Participants favoured compromise, persuasion and negotiation as a means of reaching agreement. Roles and related role-norms for the robot were thought relevant to resolving tensions, as were hypothetical agreements between users and robot-providers before the robot is introduced into the home. Participants’ understanding of each of the values—autonomy, safety, enablement, independence, privacy and social connectedness—is reported. Participants tended to agree that autonomy often has priority over the other values, with the exception in certain cases of safety. The second part of the paper discusses how the values could be incorporated into the design of social robots and operationalised in line with the views expressed by the participants.

17.
This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea as a way to discuss what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis-à-vis a system’s capabilities. We propose that verification—which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems—serves as a superior framework for designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.

18.
Trust is a key component of a nation's social capital. The cultivation of trust imposes certain moral obligations on individuals, but the binding force of moral principles is realized through legal rules. Taking the evolution of trust law as its perspective, this paper shows that "trust" is an inherent element of trust law, traces the roots of the trust institution's shift from pure trust in "persons" to trust in "institutions," and confirms that the trust institution promotes the formation of relations of social trust. Examining three dimensions of the crisis of social trust in contemporary China, it concludes that law is indeed an effective means of resolving that crisis.

19.
Floridi and Sanders’ seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.

20.
石荣辉 《科教文汇》2011,(33):21-22
Current-affairs education is an indispensable component of ideological and moral education in middle school. Doing it well is both a requirement of the middle-school ideological and moral curriculum standards and a need of students' own development. Integrating current-affairs education into ideological and moral lessons helps students understand society, China, and the world, broadening their horizons and enriching their knowledge; it strengthens their concern for society and their love of country; it helps them perceive the changes around them, correctly understand the Party's line, principles, and policies, and become more conscious supporters of the Communist Party of China; it deepens their grasp of textbook knowledge and connects it with social reality, fostering a scientific, truth-seeking attitude; and it cultivates and improves their ability to observe, analyze, and solve problems. Every teacher should therefore take it seriously.
