Similar Articles
20 similar articles found (search time: 15 ms)
1.
The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike's Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace's Minimum Message Length (MML). We show that AIC is inadequate for many statistical problems where MML performs well. Whereas MML is always defined, AIC can be undefined. Whereas MML is not known ever to be statistically inconsistent, AIC can be. Even when defined and consistent, AIC performs worse than MML on small sample sizes. MML is statistically invariant under 1-to-1 re-parametrisation, thus avoiding a common criticism of Bayesian approaches. We also show that MML provides answers to many of Forster's objections to Bayesianism. Hence an important part of the attack on Bayesianism fails.
  1. Introduction
  2. The Curve Fitting Problem
    2.1 Curves and families of curves
    2.2 Noise
    2.3 The method of Maximum Likelihood
    2.4 ML and over-fitting
  3. Akaike's Information Criterion (AIC)
  4. The Predictive Accuracy Framework
  5. The Minimum Message Length (MML) Principle
    5.1 The Strict MML estimator
    5.2 An example: The binomial distribution
    5.3 Properties of the SMML estimator
    5.3.1  Bayesianism
    5.3.2  Language invariance
    5.3.3  Generality
    5.3.4  Consistency and efficiency
    5.4 Similarity to false oracles
    5.5 Approximations to SMML
  6. Criticisms of AIC
    6.1 Problems with ML
    6.1.1  Small sample bias in a Gaussian distribution
    6.1.2  The von Mises circular and von Mises–Fisher spherical distributions
    6.1.3  The Neyman–Scott problem
    6.1.4  Neyman–Scott, predictive accuracy and minimum expected KL distance
    6.2 Other problems with AIC
    6.2.1  Univariate polynomial regression
    6.2.2  Autoregressive econometric time series
    6.2.3  Multivariate second-order polynomial model selection
    6.2.4  Gap or no gap: a clustering-like problem for AIC
    6.3 Conclusions from the comparison of MML and AIC
  7. Meeting Forster's objections to Bayesianism
    7.1 The sub-family problem
    7.2 The problem of approximation, or, which framework for statistics?
  8. Conclusion
  1. Details of the derivation of the Strict MML estimator
  2. MML, AIC and the Gap vs. No Gap Problem
    B.1 Expected size of the largest gap
    B.2 Performance of AIC on the gap vs. no gap problem
    B.3 Performance of MML in the gap vs. no gap problem
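The contrast in this abstract turns on Akaike's criterion, AIC = 2k − 2 ln(L_max), which trades goodness of fit against the number k of free parameters. As a minimal illustration of how AIC is computed and used for model selection (the data, seed, and pure-Python least-squares fits below are invented for the sketch; they are not taken from the paper):

```python
import math
import random

# Hypothetical data: a noisy straight line (seed and noise level chosen for the sketch).
random.seed(0)
n = 30
xs = [i / n for i in range(n)]
ys = [0.5 + 2.0 * x + random.gauss(0.0, 0.1) for x in xs]

def rss_constant(ys):
    """Residual sum of squares for the best constant model (the sample mean)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_linear(xs, ys):
    """Residual sum of squares for the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

def aic(rss, n, k):
    """AIC = 2k - 2 ln(L_max); Gaussian likelihood is maximised at sigma^2 = RSS / n."""
    log_l = -(n / 2) * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * k - 2 * log_l

aic_const = aic(rss_constant(ys), n, k=2)   # mean + noise variance
aic_line = aic(rss_linear(xs, ys), n, k=3)  # slope + intercept + noise variance
# The linear model fits far better, and its lower AIC reflects that
# even after the penalty for the extra parameter.
print(aic_const, aic_line)
```

On these data the linear model wins by a wide margin; the paper's claim is that MML handles such selection problems better than AIC at small sample sizes, not that the AIC arithmetic differs.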

2.
An assessment is offered of the recent debate on information in the philosophy of biology, and an analysis is provided of the notion of information as applied in scientific practice in molecular genetics. In particular, this paper deals with the dependence of basic generalizations of molecular biology, above all the ‘central dogma’, on the so-called ‘informational talk’ (Maynard Smith [2000a]). It is argued that talk of information in the ‘central dogma’ can be reduced to causal claims. In that respect, the primary aim of the paper is to consider a solution to the major difficulty of the causal interpretation of genetic information: how to distinguish the privileged causal role assigned to nucleic acids, DNA in particular, in the processes of replication and protein production. A close reading is proposed of Francis H. C. Crick's On Protein Synthesis (1958) and related works, to which we owe the first explicit definition of information within the scientific practice of molecular biology.
  1. Introduction
    1.1 The basic questions of the information debate
    1.2 The causal interpretation (CI) of biological information and Crick's ‘central dogma’
  2. Crick's definitions of genetic information
  3. The main requirement for (CI)
  4. Types of causation in molecular biology
    4.1 Structural causation in molecular biology
    4.2 Nucleic acids as correlative causal factors
  5. The ‘central dogma’ without the notion of information
  6. Concluding remarks

3.
Going back at least to Duhem, there is a tradition of thinking that crucial experiments are impossible in science. I analyse Duhem's arguments and show that they are based on the excessively strong assumption that only deductive reasoning is permissible in experimental science. This opens the possibility that some principle of inductive inference could provide a sufficient reason for preferring one among a group of hypotheses on the basis of an appropriately controlled experiment. To be sure, there are analogues to Duhem's problems that pertain to inductive inference. Using a famous experiment from the history of molecular biology as an example, I show that an experimentalist version of inference to the best explanation (IBE) does a better job in handling these problems than other accounts of scientific inference. Furthermore, I introduce a concept of experimental mechanism and show that it can guide inferences from data within an IBE-based framework for induction.
  1. Introduction
  2. Duhem on the Logic of Crucial Experiments
  3. ‘The Most Beautiful Experiment in Biology’
  4. Why Not Simple Elimination?
  5. Severe Testing
  6. An Experimentalist Version of IBE
    6.1 Physiological and experimental mechanisms
    6.2 Explaining the data
    6.3 IBE and the problem of untested auxiliaries
    6.4 IBE-turtles all the way down
  7. Van Fraassen's ‘Bad Lot’ Argument
  8. IBE and Bayesianism
  9. Conclusions

4.
What Are the New Implications of Chaos for Unpredictability?   (Total citations: 1; self-citations: 0; citations by others: 1)
From the beginning of chaos research until today, the unpredictability of chaos has been a central theme. It is widely believed and claimed by philosophers, mathematicians and physicists alike that chaos has a new implication for unpredictability, meaning that chaotic systems are unpredictable in a way that other deterministic systems are not. Hence, one might expect that the question ‘What are the new implications of chaos for unpredictability?’ has already been answered in a satisfactory way. However, this is not the case. I will critically evaluate the existing answers and argue that they do not fit the bill. Then I will approach this question by showing that chaos can be defined via mixing, which has never before been explicitly argued for. Based on this insight, I will propose that the sought-after new implication of chaos for unpredictability is the following: for predicting any event, all sufficiently past events are approximately probabilistically irrelevant.
  1. Introduction
  2. Dynamical Systems and Unpredictability
    2.1 Dynamical systems
    2.2 Natural invariant measures
    2.3 Unpredictability
  3. Chaos
    3.1 Defining chaos
    3.2 Defining chaos via mixing
  4. Criticism of Answers in the Literature
    4.1 Asymptotic unpredictability?
    4.2 Unpredictability due to rapid or exponential divergence?
    4.3 Macro-predictability and Micro-unpredictability?
  5. A General New Implication of Chaos for Unpredictability
    5.1 Approximate probabilistic irrelevance
    5.2 Sufficiently past events are approximately probabilistically irrelevant for predictions
  6. Conclusion
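The proposed implication rests on mixing: for a mixing system, the probability of starting in a region A and landing in a region B after n steps approaches μ(A)·μ(B), so where the orbit started becomes approximately probabilistically irrelevant. A small Monte Carlo sketch using the logistic map at r = 4, a standard mixing example (the regions, sample size, and step count below are my own choices for illustration, not the paper's):

```python
import math
import random

random.seed(1)

def mu(a, b):
    # Invariant measure of [a, b] under the logistic map at r = 4:
    # density 1 / (pi * sqrt(x * (1 - x))), CDF (2/pi) * arcsin(sqrt(x)).
    return (2 / math.pi) * (math.asin(math.sqrt(b)) - math.asin(math.sqrt(a)))

def sample_invariant():
    # Draw x from the invariant density via x = sin^2(pi * u / 2), u uniform.
    return math.sin(math.pi * random.random() / 2) ** 2

A, B = (0.0, 0.1), (0.5, 0.6)
n_steps, n_samples = 20, 100_000

joint = 0
for _ in range(n_samples):
    x = sample_invariant()
    started_in_A = A[0] <= x <= A[1]
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)  # one step of the logistic map
    if started_in_A and B[0] <= x <= B[1]:
        joint += 1

# Mixing: P(x_0 in A and x_n in B) approaches mu(A) * mu(B) for large n,
# i.e. the sufficiently distant past is approximately irrelevant.
print(joint / n_samples, mu(*A) * mu(*B))
```

The two printed numbers agree to within sampling error, which is the probabilistic-irrelevance behaviour the abstract describes.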

5.
Starting from a brief recapitulation of the contemporary debate on scientific realism, this paper argues for the following thesis: Assume a theory T has been empirically successful in a domain of application A, but was superseded later on by a superior theory T*, which was likewise successful in A but has an arbitrarily different theoretical superstructure. Then under natural conditions T contains certain theoretical expressions, which yielded T's empirical success, such that these T-expressions correspond (in A) to certain theoretical expressions of T*, and given T* is true, they refer indirectly to the entities denoted by these expressions of T*. The thesis is first motivated by a study of the phlogiston–oxygen example. Then the thesis is proved in the form of a logical theorem, and illustrated by further examples. The final sections explain how the correspondence theorem justifies scientific realism and work out the advantages of the suggested account.
  1. Introduction: Pessimistic Meta-induction vs. Structural Correspondence
  2. The Case of the Phlogiston Theory
  3. Steps Towards a Systematic Correspondence Theorem
  4. The Correspondence Theorem and Its Ontological Interpretation
  5. Further Historical Applications
  6. Discussion of the Correspondence Theorem: Objections and Replies
  7. Consequences for Scientific Realism and Comparison with Other Positions
    7.1 Comparison with constructive empiricism
    7.2 Major difference from standard scientific realism
    7.3 From minimal realism and correspondence to scientific realism
    7.4 Comparison with particular realistic positions

6.
This paper is a review of work on Newman's objection to epistemic structural realism (ESR). In Section 2, a brief statement of ESR is provided. In Section 3, Newman's objection and its recent variants are outlined. In Section 4, two responses that argue that the objection can be evaded by abandoning the Ramsey-sentence approach to ESR are considered. In Section 5, three responses that have been put forward specifically to rescue the Ramsey-sentence approach to ESR from the modern versions of the objection are discussed. Finally, in Section 6, three responses are considered that are neutral with respect to one's approach to ESR and all argue (in different ways) that the objection can be evaded by introducing the notion that some relations/structures are privileged over others. It is concluded that none of these suggestions is an adequate response to Newman's objection, which therefore remains a serious problem for ESRists.
  1. Introduction
  2. Epistemic Structural Realism
    2.1 Ramsey-sentences and ESR
    2.2 WESR and SESR
  3. The Objection
    3.1 Newman's version
    3.2 Demopoulos and Friedman's and Ketland's versions
  4. Replies that Abandon the Ramsey-Sentence Approach to ESR
    4.1 Redhead's reply
    4.2 French and Ladyman's reply
  5. Replies Designed to Rescue the Ramsey-Sentence Approach
    5.1 Zahar's reply
    5.2 Cruse's reply
    5.3 Melia and Saatsi's reply
  6. Replies that Argue that Some Structures/Relations are Privileged
    6.1 A Carnapian reply
    6.2 Votsis' reply
    6.3 The Merrill/Lewis/Psillos reply
  7. Summary

7.
This paper assesses Sarkar's ([2003]) deflationary account of genetic information. On Sarkar's account, genes carry information about proteins because protein synthesis exemplifies what Sarkar calls a ‘formal information system’. Furthermore, genes are informationally privileged over non-genetic factors of development because only genes enter into arbitrary relations to their products (in virtue of the alleged arbitrariness of the genetic code). I argue that the deflationary theory does not capture four essential features of the ordinary concept of genetic information: intentionality, exclusiveness, asymmetry, and causal relevance. It is therefore further removed from what is customarily meant by genetic information than Sarkar admits. Moreover, I argue that it is questionable whether the account succeeds in demonstrating that information is theoretically useful in molecular genetics.
  1. Introduction
  2. Sarkar's Information System
  3. The Pre-theoretic Features of Genetic Information
    3.1 Intentionality
    3.2 Exclusiveness
    3.3 Asymmetry
    3.4 Causal relevance
  4. Theoretical Usefulness
  5. Conclusion

8.
I argue in this article that there is a mistake in Searle's Chinese room argument that has not received sufficient attention. The mistake stems from Searle's use of the Church–Turing thesis. Searle assumes that the Church–Turing thesis licences the assumption that the Chinese room can run any program. I argue that it does not, and that this assumption is false. A number of possible objections are considered and rejected. My conclusion is that it is consistent with Searle's argument to hold onto the claim that understanding consists in the running of a program.
1 Searle's Argument
1.1 The Church–Turing thesis
2 Criticism of Searle's Argument
3 Objections and Replies
3.1 The virtual brain machine objection
3.2 The brain-based objection
3.3 The syntax/physics objection
3.4 The abstraction objection
3.5 The ‘same conclusion’ objection
3.6 The ‘unnecessary baggage’ objection
3.7 The Chinese gym objection
3.8 The syntax/semantics objection
3.9 Turing's definition of algorithm
3.9.1 Consequences
3.9.2 Criticism of the defence
4 Conclusion

9.
While there is no universal logic of induction, the probability calculus succeeds as a logic of induction in many contexts through its use of several notions concerning inductive inference. They include Addition, through which low probabilities represent disbelief as opposed to ignorance; and Bayes property, which commits the calculus to a ‘refute and rescale’ dynamics for incorporating new evidence. These notions are independent and it is urged that they be employed selectively according to the needs of the problem at hand. It is shown that neither is adapted to inductive inference concerning some indeterministic systems.
1 Introduction
2 Failure of demonstrations of universality
2.1 Working backwards
2.2 The surface logic
3 Framework
3.1 The properties
3.2 Boundaries
3.2.1 Universal comparability
3.2.2 Transitivity
3.2.3 Monotonicity
4 Addition
4.1 The property: disbelief versus ignorance
4.2 Boundaries
5 Bayes property
5.1 The property
5.2 Bayes' theorem
5.3 Boundaries
5.3.1 Dogmatism of the priors
5.3.2 Impossibility of prior ignorance
5.3.3 Accommodation of virtues
6 Real values
7 Sufficiency and independence
8 Illustrations
8.1 All properties retained
8.2 Bayes property only retained
8.3 Induction without additivity and Bayes property
9 Conclusion
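The ‘refute and rescale’ dynamics named in this abstract can be made concrete: conditionalization drives refuted hypotheses (those assigning the evidence zero likelihood) to probability zero and rescales the survivors so they again sum to one. A toy sketch with hypothetical priors and likelihoods (the numbers are invented, not from the paper):

```python
from fractions import Fraction

# Hypothetical prior over three mutually exclusive, exhaustive hypotheses.
prior = {"h1": Fraction(1, 2), "h2": Fraction(1, 4), "h3": Fraction(1, 4)}
# Hypothetical likelihoods of the observed evidence; h3 is outright refuted.
likelihood = {"h1": Fraction(9, 10), "h2": Fraction(3, 10), "h3": Fraction(0)}

def bayes_update(prior, likelihood):
    # 'Refute and rescale': hypotheses with zero likelihood drop out of
    # contention, and the surviving probabilities are rescaled so that
    # they again sum to one.
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

posterior = bayes_update(prior, likelihood)
print(posterior)  # h1 -> 6/7, h2 -> 1/7, h3 -> 0
```

Note that no probability mass is ever shifted back onto a refuted hypothesis; that rigidity is one of the features whose universal appropriateness the paper questions.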

10.
The traditional Bayesian qualitative account of evidential support (TB) takes assertions of the form ‘E evidentially supports H’ to affirm the existence of a two-place relation of evidential support between E and H. The analysans given for this relation is C(H,E) =def Pr(H|E) > Pr(H). Now it is well known that when a hypothesis H entails evidence E, not only is it the case that C(H,E), but it is also the case that C(H&X,E) for any arbitrary X. There is a widespread feeling that this is a problematic result for TB. Indeed, there are a number of cases in which many feel it is false to assert ‘E evidentially supports H&X’, despite H entailing E. This is known, by those who share that feeling, as the ‘tacking problem’ for Bayesian confirmation theory. After outlining a generalization of the problem, I argue that the Bayesian response has so far been unsatisfactory. I then argue the following: (i) There exists, either instead of, or in addition to, a two-place relation of confirmation, a three-place, ‘contrastive’ relation of confirmation, holding between an item of evidence E and two competing hypotheses H1 and H2. (ii) The correct analysans of the relation is a particular probabilistic inequality, abbreviated C(H1, H2, E). (iii) Those who take the putative counterexamples to TB discussed to indeed be counterexamples are interpreting the relevant utterances as implicitly contrastive, contrasting the relevant hypothesis H1 with a particular competitor H2. (iv) The probabilistic structure of these cases is such that C(H1, H2, E). This solves my generalization of the tacking problem. I then conclude with some thoughts about the relationship between the traditional Bayesian account of evidential support and my proposed account of the three-place relation of confirmation.
1 The ‘tacking problem’ and the traditional Bayesian response
2 Contrastive support
3 Concluding comments
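The analysans C(H,E) =def Pr(H|E) > Pr(H), and the tacking phenomenon it generates, can be checked on a toy probability space. In the sketch below the worlds and their probabilities are invented for illustration; the only constraint carried over from the abstract is that H entails E (every H-and-not-E world gets probability zero and is omitted):

```python
from fractions import Fraction

F = Fraction
# A toy probability space over worlds (H, X, E); the numbers are hypothetical.
# H entails E: no world with H true and E false appears.
worlds = {
    (True,  True,  True):  F(1, 10),
    (True,  False, True):  F(2, 10),
    (False, True,  True):  F(1, 20),
    (False, False, True):  F(3, 20),
    (False, True,  False): F(1, 4),
    (False, False, False): F(1, 4),
}

def pr(pred):
    """Probability that the predicate holds, summed over worlds."""
    return sum(p for w, p in worlds.items() if pred(*w))

def pr_given(pred, cond):
    """Conditional probability Pr(pred | cond)."""
    return pr(lambda h, x, e: pred(h, x, e) and cond(h, x, e)) / pr(cond)

E = lambda h, x, e: e
H = lambda h, x, e: h
HX = lambda h, x, e: h and x

# E raises the probability of H ...
print(pr_given(H, E), pr(H))    # 3/5 > 3/10
# ... and also of the tacked-on conjunction H & X, the troubling case.
print(pr_given(HX, E), pr(HX))  # 1/5 > 1/10
```

Because Pr(E|H) = 1 forces Pr(E|H&X) = 1 as well, any such space confirms the conjunction along with H; the paper's contrastive relation C(H1, H2, E) is offered precisely to block readings on which that result sounds false.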

11.
Many have found attractive views according to which the veracity of specific causal judgements is underwritten by general causal laws. This paper describes various variants of that view and explores complications that appear when one looks at a certain simple type of example from physics. To capture certain causal dependencies, physics is driven to look at equations which, I argue, are not causal laws. One place where physics is forced to look at such equations (and not the only place) is in its handling of Green's functions, which reveal point-wise causal dependencies. Thus, I claim that there is no simple relationship between causal dependence and causal laws of the sort often pictured. Rather, this paper explores the complexity of the relationship in a certain well-understood case.
1 Introduction
2 The Causal Covering-Law Thesis
3 The Laws of String Motion
4 Green's Functions and Causation
5 Green's Functions and Boundary Conditions
6 The Green's Function as a Violation of the Wave Equation
6.1 The Green's Function and other Senses of ‘Causal Law’: Temporal Propagation and Local Propagation
7 The Distributional Wave Equation
8 Why is not the Green's Function a ‘Causal Law’?
9 Conclusion
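For readers unfamiliar with the formalism at issue: the example concerns a driven string, whose displacement obeys the wave equation, and the Green's function that encodes point-wise causal dependence of the displacement on earlier forcing. The following is the standard textbook setup for the infinite string (a reminder of the mathematics, not a reconstruction of the paper's own derivation):

```latex
% Driven wave equation for the string displacement u(x,t):
\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = f(x,t)
% The (retarded) Green's function solves the same equation with a point source,
\frac{\partial^2 G}{\partial t^2} - c^2 \frac{\partial^2 G}{\partial x^2}
  = \delta(x - x')\,\delta(t - t'),
% and for the infinite string it is
G(x,t;x',t') = \frac{1}{2c}\,\theta\bigl(c\,(t - t') - |x - x'|\bigr),
% so the solution depends point-wise on past forcing only:
u(x,t) = \int dx' \int_{-\infty}^{t} dt'\; G(x,t;x',t')\, f(x',t').
```

The step function θ makes the point-wise causal dependence explicit: forcing at (x',t') influences u(x,t) only inside the backward light cone, which is the dependence structure the paper argues is not itself captured by a causal law.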

12.
Stochastic Einstein Locality Revisited   (Total citations: 1; self-citations: 0; citations by others: 1)
I discuss various formulations of stochastic Einstein locality (SEL), which is a version of the idea of relativistic causality, that is, the idea that influences propagate at most as fast as light. SEL is similar to Reichenbach's Principle of the Common Cause (PCC), and Bell's Local Causality. My main aim is to discuss formulations of SEL for a fixed background spacetime. I previously argued that SEL is violated by the outcome dependence shown by Bell correlations, both in quantum mechanics and in quantum field theory. Here I reassess those verdicts in the light of some recent literature which argues that outcome dependence does not violate the PCC. I argue that the verdicts about SEL still stand. Finally, I briefly discuss how to formulate relativistic causality if there is no fixed background spacetime.
1 Introduction
2 Formulating Stochastic Einstein Locality
2.1 Events and regions
2.2 The idea of SEL
2.3 Three formulations of SEL
2.3.1 The formulations
2.3.2 Comparisons
2.4 Implications between the formulations
2.4.1 Conditions for the equivalence of SELD1 and SELD2
2.4.2 Conditions for the equivalence of SELS and SELD2
3 Relativistic Causality in the Bell Experiment
3.1 The background
3.1.1 The Bell experiment reviewed
3.1.2 My previous position
3.2 A common common cause? The Budapest school
3.2.1 Resuscitating the PCC
3.2.2 Known proofs of a Bell inequality need a strong PCC
3.2.3 Two distinctions
3.2.4 Szabó's model
3.2.5 A common common cause is plausible
3.2.6 Bell inequalities from a weak PCC: the Bern school
3.3 SEL in the Bell experiment
3.3.1 PCC and SEL are connected by PPSI
3.3.2 The need for other judgments
3.3.3 Weak vs. strong SELD
4 SEL in Algebraic Quantum Field Theory
4.1 The story so far
4.2 Questions
4.2.1 Our formulations
4.2.2 The Budapest and Bern schools
5 SEL in Dynamical Spacetimes
5.1 SEL for metric structure?
5.2 SEL for causal sets?
5.2.1 The causal set approach
5.2.2 Labelled causal sets; general covariance
5.2.3 Deducing the dynamics
5.2.4 The fate of SEL

13.
In this paper I argue—against van Fraassen's constructive empiricism—that the practice of saving phenomena is much broader than usually thought, and includes unobservable phenomena as well as observable ones. My argument turns on the distinction between data and phenomena: I discuss how unobservable phenomena manifest themselves in data models and how theoretical models able to save them are chosen. I present a paradigmatic case study taken from the history of particle physics to illustrate my argument. The first aim of this paper is to draw attention to the experimental practice of saving unobservable phenomena, which philosophers have overlooked for too long. The second aim is to explore some far-reaching implications this practice may have for the debate on scientific realism and constructive empiricism.
1 Introduction
2 Unobservable Phenomena
2.1 Data and phenomena
2.2 What is a data model?
2.3 Data models and unobservable phenomena
3 Saving Unobservable Phenomena: An Exemplar
4 The October Revolution of 1974: From the J/ψ to Charmonium
4.1 A new unobservable phenomenon at 3.1 GeV
4.2 How the charmonium model saved the new unobservable phenomenon
4.2.1 The J/ψ as a baryon–antibaryon bound state
4.2.2 The J/ψ as the spin-1 meson of a model with three charmed quarks
4.2.3 The J/ψ as a charmonium state
5 Concluding Remarks

14.
A consensus exists among contemporary philosophers of biology about the history of their field. According to the received view, mainstream philosophy of science in the 1930s, 40s, and 50s focused on physics and general epistemology, neglecting analyses of the ‘special sciences’, including biology. The subdiscipline of philosophy of biology emerged (and could only have emerged) after the decline of logical positivism in the 1960s and 70s. In this article, I present bibliometric data from four major philosophy of science journals (Erkenntnis, Philosophy of Science, Synthese, and the British Journal for the Philosophy of Science), covering 1930–59, which challenge this view.
1 Introduction
2 Methods
3 Results
4 Conclusions

15.
Maddy and Mathematics: Naturalism or Not   (Total citations: 1; self-citations: 0; citations by others: 1)
Penelope Maddy advances a purportedly naturalistic account of mathematical methodology which might be taken to answer the question ‘What justifies axioms of set theory?’ I argue that her account fails both to adequately answer this question and to be naturalistic. Further, the way in which it fails to answer the question deprives it of an analog to one of the chief attractions of naturalism. Naturalism is attractive to naturalists and nonnaturalists alike because it explains the reliability of scientific practice. Maddy's account, on the other hand, appears to be unable to similarly explain the reliability of mathematical practice without violating one of its central tenets.
1 Introduction
2 Mathematical Naturalism
3 Desiderata and the attraction of naturalism
4 Assessment: Naturalism and names
4.1 Taking ‘naturalism’ seriously
4.2 Second philosophy (or what's in a name)
5 A way out?
6 Or out of the way?

16.
The evidence from randomized controlled trials (RCTs) is widely regarded as supplying the ‘gold standard’ in medicine—we may sometimes have to settle for other forms of evidence, but this is always epistemically second-best. But how well justified is the epistemic claim about the superiority of RCTs? This paper adds to my earlier (predominantly negative) analyses of the claims produced in favour of the idea that randomization plays a uniquely privileged epistemic role, by closely inspecting three related arguments from leading contributors to the burgeoning field of probabilistic causality—Papineau, Cartwright and Pearl. It concludes that none of these further arguments supplies any practical reason for thinking of randomization as having unique epistemic power.
1 Introduction
2 Why the issue is of great practical importance—the ECMO case
3 Papineau on the ‘virtues of randomization’
4 Cartwright on causality and the ‘ideal’ randomized experiment
5 Pearl on randomization, nets and causes
6 Conclusion

17.
The paper considers our ordinary mentalistic discourse in relation to what we should expect from any genuine science of the mind. A meta-scientific eliminativism is commended and distinguished from the more familiar eliminativism of Skinner and the Churchlands. Meta-scientific eliminativism views folk psychology qua folksy as unsuited to offer insight into the structure of cognition, although it might otherwise be indispensable for our social commerce and self-understanding. This position flows from a general thesis that scientific advance is marked by an eschewal of folk understanding. The latter half of the paper argues that, contrary to the received view, Chomsky's review of Skinner offers not just an argument against Skinner's eliminativism, but, more centrally, one in favour of the second eliminativism.
1 Introduction
2 Preliminaries: What Meta-scientific Eliminativism is Not
3 Meta-scientific Eliminativism
3.1 Folk psychology and cognitive science
4 Two Readings of Chomsky's Review of Skinner
5 Issues of Interpretation
5.1 A grammar as a theory
5.2 Cartesian linguistics
5.3 Common cause
6 Chomsky's Current View

18.
Many standard philosophical accounts of scientific practice fail to distinguish between modeling and other types of theory construction. This failure is unfortunate because there are important contrasts among the goals, procedures, and representations employed by modelers and other kinds of theorists. We can see some of these differences intuitively when we reflect on the methods of theorists such as Vito Volterra and Linus Pauling on the one hand, and Charles Darwin and Dimitri Mendeleev on the other. Much of Volterra's and Pauling's work involved modeling; much of Darwin's and Mendeleev's did not. In order to capture this distinction, I consider two examples of theory construction in detail: Volterra's treatment of post-WWI fishery dynamics and Mendeleev's construction of the periodic system. I argue that modeling can be distinguished from other forms of theorizing by the procedures modelers use to represent and to study real-world phenomena: indirect representation and analysis. This differentiation between modelers and non-modelers is one component of the larger project of understanding the practice of modeling, its distinctive features, and the strategies of abstraction and idealization it employs.
1 Introduction
2 The essential contrast
2.1 Modeling
2.2 Abstract direct representation
3 Scientific models
4 Distinguishing modeling from ADR
4.1 The first and second stages of modeling
4.2 Third stage of modeling
4.3 ADR
5 Who is not a modeler?
6 Conclusion: who is a modeler?

19.
In a recent issue of this journal, P. E. Vermaas ([2005]) claims to have demonstrated that standard quantum mechanics is technologically inadequate in that it violates the ‘technical functions condition’. We argue that this claim is false because it is based on a ‘narrow’ interpretation of this technical functions condition that Vermaas can only accept on pain of contradiction. We also argue that if, in order to avoid this contradiction, the technical functions condition is interpreted ‘widely’ rather than ‘narrowly’, then Vermaas' argument for his claim collapses. The conclusion is that Vermaas' claim that standard quantum mechanics is technologically inadequate evaporates.
1 Introduction
2 The Narrow Interpretation
3 The Wide Interpretation
4 The Teleportation Scheme
5 Conclusions

20.
What belongs to quantum theory is no more than what is needed for its derivation. Keeping to this maxim, we record a paradigmatic shift in the foundations of quantum mechanics, where the focus has recently moved from interpreting to reconstructing quantum theory. Several historic and contemporary reconstructions are analyzed, including the work of Hardy, Rovelli, and Clifton, Bub and Halvorson. We conclude by discussing the importance of a novel concept of intentionally incomplete reconstruction.
1 What is Wrong with Interpreting Quantum Mechanics
2 Reconstruction of Physical Theory
2.1 Schema
2.2 Selection of the first principles
2.3 Status of the first principles
3 Examples of Reconstruction
3.1 Early examples of reconstruction
3.2 Hardy's reconstruction
3.3 Rovelli's reconstruction
3.4 The CBH reconstruction
3.5 Intentionally incomplete reconstructions
4 Conclusion
