Disputed term/author/ism | Author | Entry | Reference
---|---|---|---|
Arbitrariness | Field | I 24 Identity/Identification/Field: in many areas there is the problem of the continuing arbitrariness of identifications. - In mathematics, however, it is stronger than with physical objects. I 181 Solution: intensity relations between pairs or triples, etc. of points. Advantage: this avoids attributing intensities to points and thus an arbitrary choice of a numerical scale for intensities. III 32 Addition/Multiplication: not possible in Hilbert's geometry. - (Only with an arbitrary zero and an arbitrary 1.) Solution: intervals instead of points. II 310 Non-Classical Degrees of Belief/Uncertainty/Field: E.g. that every "decision" about the power of the continuum is arbitrary is a good reason not to assume classical degrees of belief. - (Moderate non-classical logic: some instances of the law of excluded middle cannot be asserted.) III 31 Figure/Points/Field: no Platonist will identify real numbers with points on a physical line. - That would be too arbitrary ("which line?"). - What should be zero, and what should be 1? III 32 f Hilbert/Geometry/Axioms/Field: multiplication of intervals: not possible, because for that we would need an arbitrary "standard interval". Solution: comparing products of intervals. Generalization/Field: this is then possible for products of spacetime intervals with scalar intervals. ((s) E.g. temperature difference, pressure difference.) Field: therefore, spacetime points must not be regarded as real numbers. III 48 FieldVsTensor: the tensor is arbitrarily chosen. Solution/Field: simultaneity. III 65 Def Equally Divided Region/Equally Split/Evenly Divided/Equidistance/Field: (all distances within the region equal): R is a spacetime region all of whose points lie on a single line, such that for each point x of R lying strictly st-between (between in relation to spacetime) two points of R, there are points y and z of R such that a) exactly one point of R lies strictly st-between y and z, namely x, and b) xy P-Cong xz (Cong = congruent). ((s) This avoids any arbitrary (length) units - e.g. "fewer" points in the corresponding interval or "the same number" - but there is no such comparison between temperature and space units. Field: Mixed products, however, are definitely possible. Then: "the mixed product... is smaller than the mixed product..." Equidistance in each separate region: scalar/spatio-temporal.) III 79 Arbitrariness/Arbitrary/Scale Types/Scalar/Mass Density/Field: mass density is a very special scalar field which, due to its logarithmic structure, is "less arbitrary" than the scale for the gravitational potential. >Objectivity, >Logarithm. Logarithmic structures are less arbitrary. Mass density: needs more fundamental concepts than other scalar fields. Scalar field: E.g. height. >Field theory. |
Field I H. Field Realism, Mathematics and Modality Oxford New York 1989 Field II H. Field Truth and the Absence of Fact Oxford New York 2001 Field III H. Field Science without numbers Princeton New Jersey 1980 Field IV Hartry Field "Realism and Relativism", The Journal of Philosophy, 76 (1982), pp. 553-67 In Theories of Truth, Paul Horwich Aldershot 1994 |
Artificial Neural Networks | Norvig | Norvig I 728 Artificial Neural Networks/Norvig/Russell: Neural networks are composed of nodes or units (…) connected by directed links. A link from unit i to unit j serves to propagate the activation a_i from i to j. Each link also has a numeric weight w_i,j associated with it, which determines the strength and sign of the connection. Just as in linear regression models, each unit has a dummy input a_0 = 1 with an associated weight w_0,j. Norvig I 729 Perceptrons: The activation function g is typically either a hard threshold (…), in which case the unit is called a perceptron, or a logistic function (…), in which case the term sigmoid perceptron is sometimes used. Both of these nonlinear activation functions ensure the important property that the entire network of units can represent a nonlinear function (…). Forms of a network: a) A feed-forward network has connections only in one direction—that is, it forms a directed acyclic graph. Every node receives input from “upstream” nodes and delivers output to “downstream” nodes; there are no loops. A feed-forward network represents a function of its current input; thus, it has no internal state other than the weights themselves. b) A recurrent network, on the other hand, feeds its outputs back into its own inputs. This means that the activation levels of the network form a dynamical system that may reach a stable state or exhibit oscillations or even chaotic behavior. Layers: a) Feed-forward networks are usually arranged in layers, such that each unit receives input only from units in the immediately preceding layer. b) Multilayer networks have one or more layers of hidden units that are not connected to the outputs of the network. Training/Learning: For example, if we want to train a network to add two input bits, each a 0 or a 1, we will need one output for the sum bit and one for the carry bit. Also, when the learning problem involves classification into more than two classes—for example, when learning to categorize images of handwritten digits—it is common to use one output unit for each class. Norvig I 731 Any desired functionality can be obtained by connecting large numbers of units into (possibly recurrent) networks of arbitrary depth. The problem was that nobody knew how to train such networks. This turns out to be an easy problem if we think of a network the right way: as a function h_w(x) parameterized by the weights w. Norvig I 732 (…) we have the output expressed as a function of the inputs and the weights. (…) because the function represented by a network can be highly nonlinear—composed, as it is, of nested nonlinear soft threshold functions—we can see neural networks as a tool for doing nonlinear regression. Norvig I 736 Learning in neural networks: just as with >Bayesian networks, we also need to understand how to find the best network structure. If we choose a network that is too big, it will be able to memorize all the examples by forming a large lookup table, but will not necessarily generalize well to inputs that have not been seen before. Norvig I 737 Optimal brain damage: The optimal brain damage algorithm begins with a fully connected network and removes connections from it. After the network is trained for the first time, an information-theoretic approach identifies an optimal selection of connections that can be dropped. The network is then retrained, and if its performance has not decreased then the process is repeated. 
In addition to removing connections, it is also possible to remove units that are not contributing much to the result. Parametric models: A learning model that summarizes data with a set of parameters of fixed size (independent of the number of training examples) is called a parametric model. No matter how much data you throw at a parametric model, it won’t change its mind about how many parameters it needs. Nonparametric models: A nonparametric model is one that cannot be characterized by a bounded set of parameters. For example, suppose that each hypothesis we generate simply retains within itself all of the training examples and uses all of them to predict the next example. Such a hypothesis family would be nonparametric because the effective number of parameters is unbounded - it grows with the number of examples. This approach is called instance-based learning or memory-based learning. The simplest instance-based learning method is table lookup: take all the training examples, put them in a lookup table, and then when asked for h(x), see if x is in the table; (…). Norvig I 738 We can improve on table lookup with a slight variation: given a query x_q, find the k examples that are nearest to x_q. This is called k-nearest neighbors lookup. ((s) Cf. >Local/global/Philosophical theories.) Norvig I 744 Support vector machines/SVM: The support vector machine or SVM framework is currently the most popular approach for “off-the-shelf” supervised learning: if you don’t have any specialized prior knowledge about a domain, then the SVM is an excellent method to try first. Properties of SVMs: 1. SVMs construct a maximum margin separator - a decision boundary with the largest possible distance to example points. This helps them generalize well. 2. SVMs create a linear separating hyperplane, but they have the ability to embed the data into a higher-dimensional space, using the so-called kernel trick. 3. SVMs are a nonparametric method - they retain training examples and potentially need to store them all. On the other hand, in practice they often end up retaining only a small fraction of the number of examples - sometimes as few as a small constant times the number of dimensions. Norvig I 745 Instead of minimizing expected empirical loss on the training data, SVMs attempt to minimize expected generalization loss. We don’t know where the as-yet-unseen points may fall, but under the probabilistic assumption that they are drawn from the same distribution as the previously seen examples, there are some arguments from computational learning theory (…) suggesting that we minimize generalization loss by choosing the separator that is farthest away from the examples we have seen so far. Norvig I 748 Ensemble Learning: >Learning/AI Research. Norvig I 757 Linear regression is a widely used model. The optimal parameters of a linear regression model can be found by gradient descent search, or computed exactly. A linear classifier with a hard threshold—also known as a perceptron—can be trained by a simple weight update rule to fit data that are linearly separable (see the sketch following this entry). In other cases, the rule fails to converge. Norvig I 758 Logistic regression replaces the perceptron’s hard threshold with a soft threshold defined by a logistic function. Gradient descent works well even for noisy data that are not linearly separable. 
Norvig I 760 History: The term logistic function comes from Pierre-François Verhulst (1804–1849), a statistician who used the curve to model population growth with limited resources, a more realistic model than the unconstrained geometric growth proposed by Thomas Malthus. Verhulst called it the courbe logistique, because of its relation to the logarithmic curve. The term regression is due to Francis Galton, a nineteenth-century statistician, cousin of Charles Darwin, and initiator of the fields of meteorology, fingerprint analysis, and statistical correlation, who used it in the sense of regression to the mean. The term curse of dimensionality comes from Richard Bellman (1961)(1). Logistic regression can be solved with gradient descent, or with the Newton-Raphson method (Newton, 1671(2); Raphson, 1690(3)). A variant of the Newton method called L-BFGS is sometimes used for large-dimensional problems; the L stands for “limited memory,” meaning that it avoids creating the full matrices all at once, and instead creates parts of them on the fly. BFGS are authors’ initials (Byrd et al., 1995)(4). The ideas behind kernel machines come from Aizerman et al. (1964)(5) (who also introduced the kernel trick), but the full development of the theory is due to Vapnik and his colleagues (Boser et al., 1992)(6). SVMs were made practical with the introduction of the soft-margin classifier for handling noisy data in a paper that won the 2008 ACM Theory and Practice Award (Cortes and Vapnik, 1995)(7), and of the Sequential Minimal Optimization (SMO) algorithm for efficiently solving SVM problems using quadratic programming (Platt, 1999)(8). SVMs have proven to be very popular and effective for tasks such as text categorization (Joachims, 2001)(9), computational genomics (Cristianini and Hahn, 2007)(10), and natural language processing, such as the handwritten digit recognition of DeCoste and Schölkopf (2002)(11). As part of this process, many new kernels have been designed that work with strings, trees, and other non-numerical data types. A related technique that also uses the kernel trick to implicitly represent an exponential feature space is the voted perceptron (Freund and Schapire, 1999(12); Collins and Duffy, 2002(13)). Textbooks on SVMs include Cristianini and Shawe-Taylor (2000)(14) and Schölkopf and Smola (2002)(15). A friendlier exposition appears in the AI Magazine article by Cristianini and Schölkopf (2002)(16). Bengio and LeCun (2007)(17) show some of the limitations of SVMs and other local, nonparametric methods for learning functions that have a global structure but do not have local smoothness. Ensemble learning is an increasingly popular technique for improving the performance of learning algorithms. Bagging (Breiman, 1996)(18), the first effective method, combines hypotheses learned from multiple bootstrap data sets, each generated by subsampling the original data set. The boosting method described in this chapter originated with theoretical work by Schapire (1990)(19). The ADABOOST algorithm was developed by Freund and Schapire Norvig I 761 (1996)(20) and analyzed theoretically by Schapire (2003)(21). Friedman et al. (2000)(22) explain boosting from a statistician’s viewpoint. Online learning is covered in a survey by Blum (1996)(23) and a book by Cesa-Bianchi and Lugosi (2006)(24). Dredze et al. 
(2008)(25) introduce the idea of confidence-weighted online learning for classification: in addition to keeping a weight for each parameter, they also maintain a measure of confidence, so that a new example can have a large effect on features that were rarely seen before (and thus had low confidence) and a small effect on common features that have already been well-estimated. 1. Bellman, R. E. (1961). Adaptive Control Processes: A Guided Tour. Princeton University Press. 2. Newton, I. (1664-1671). Methodus fluxionum et serierum infinitarum. Unpublished notes 3. Raphson, J. (1690). Analysis aequationum universalis. Apud Abelem Swalle, London. 4. Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific and Statistical Computing, 16(5), 1190-1208. 5. Aizerman, M., Braverman, E., and Rozonoer, L. (1964). Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25, 821-837. 6. Boser, B., Guyon, I., and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In COLT-92. 7. Cortes, C. and Vapnik, V. N. (1995). Support vector networks. Machine Learning, 20, 273-297. 8. Platt, J. (1999). Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods: Support Vector Learning, pp. 185-208. MIT Press. 9. Joachims, T. (2001). A statistical learning model of text classification with support vector machines. In SIGIR-01, pp. 128-136. 10. Cristianini, N. and Hahn, M. (2007). Introduction to Computational Genomics: A Case Studies Approach. Cambridge University Press. 11. DeCoste, D. and Schölkopf, B. (2002). Training invariant support vector machines. Machine Learning, 46(1), 161–190. 12. Freund, Y. and Schapire, R. E. (1996). Experiments with a new boosting algorithm. In ICML-96. 13. Collins, M. and Duffy, K. (2002). New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In ACL-02. 14. Cristianini, N. and Shawe-Taylor, J. (2000). An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press. 15. Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels. MIT Press. 16. Cristianini, N. and Schölkopf, B. (2002). Support vector machines and kernel methods: The new generation of learning machines. AIMag, 23(3), 31–41. 17. Bengio, Y. and LeCun, Y. (2007). Scaling learning algorithms towards AI. In Bottou, L., Chapelle, O., DeCoste, D., and Weston, J. (Eds.), Large-Scale Kernel Machines. MIT Press. 18. Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123–140. 19. Schapire, R. E. (1990). The strength of weak learnability. Machine Learning, 5(2), 197–227. 20. Freund, Y. and Schapire, R. E. (1996). Experiments with a new boosting algorithm. In ICML-96. 21. Schapire, R. E. (2003). The boosting approach to machine learning: An overview. In Denison, D. D., Hansen, M. H., Holmes, C., Mallick, B., and Yu, B. (Eds.), Nonlinear Estimation and Classification. Springer. 22. Friedman, J., Hastie, T., and Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28(2), 337–374. 23. Blum, A. L. (1996). On-line algorithms in machine learning. In Proc.Workshop on On-Line Algorithms, Dagstuhl, pp. 306–325. 24. Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, learning, and Games. Cambridge University Press. 25. 
Dredze, M., Crammer, K., and Pereira, F. (2008). Confidence-weighted linear classification. In ICML-08, pp. 264–271. |
Norvig I Peter Norvig Stuart J. Russell Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010 |
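The perceptron weight update rule mentioned above (Norvig I 757) can be made concrete in a few lines. The following is a minimal sketch, not code from Russell and Norvig; the learning rate, the toy AND data set, and the function names are assumptions made purely for illustration.

```python
# Minimal perceptron sketch: a hard-threshold unit trained by the simple
# update rule w_j <- w_j + alpha * (y - h_w(x)) * x_j.
# Dataset (logical AND) and learning rate are illustrative choices.

def threshold(z):
    return 1 if z >= 0 else 0

def predict(weights, x):
    # x includes the dummy input a_0 = 1 as its first component
    return threshold(sum(w * xi for w, xi in zip(weights, x)))

def train_perceptron(examples, alpha=0.1, epochs=25):
    n = len(examples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for x, y in examples:
            error = y - predict(weights, x)
            weights = [w + alpha * error * xi for w, xi in zip(weights, x)]
    return weights

if __name__ == "__main__":
    # Each input is (a_0 = 1, x_1, x_2); the target is x_1 AND x_2 (linearly separable)
    data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
    w = train_perceptron(data)
    print(w, [predict(w, x) for x, _ in data])
```

Because the AND data are linearly separable, the rule converges; on non-separable data it would keep oscillating, as the entry notes.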
Determinism | Feynman | I 540 Determinism/Knowledge/Indeterminateness/Feynman: even if the world were completely classically determined (i.e. if QM did not apply), we could not predict the behavior of individual particles: the smallest initial error quickly grows into a large uncertainty. For any given precision, however accurate, one can specify a time long enough that our predictions are no longer valid beyond it. For example, with an accuracy of 1 in a billion, this is not a matter of millions of years; the time depends only logarithmically on the error. We lose all information after a very short time. >Initial conditions. It is therefore not fair to say that we should have realized from the freedom of the human mind that "quantum mechanics" would have meant redemption from a mechanistic universe. >Quantum mechanics. Uncertainty Principle/Indeterminacy/Feynman: in practical terms it already existed in classical physics. >Uncertainty relation. |
Feynman I Richard Feynman The Feynman Lectures on Physics. Vol. I, Mainly Mechanics, Radiation, and Heat, California Institute of Technology 1963 German Edition: Vorlesungen über Physik I München 2001 Feynman II R. Feynman The Character of Physical Law, Cambridge, MA/London 1967 German Edition: Vom Wesen physikalischer Gesetze München 1993 |
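A small numerical sketch of Feynman's point that the usable prediction time grows only logarithmically with the initial precision. The logistic map used here is an assumed stand-in for a chaotic classical system (it is not Feynman's example), and the tolerance and starting point are arbitrary illustrative choices.

```python
# Illustration (assumed example): in a chaotic system the time until two nearby
# trajectories diverge grows only like log(1/epsilon) in the initial error epsilon.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def divergence_time(eps, x0=0.3, tol=0.1, max_steps=10_000):
    a, b = x0, x0 + eps
    for t in range(max_steps):
        if abs(a - b) > tol:
            return t
        a, b = logistic(a), logistic(b)
    return max_steps

if __name__ == "__main__":
    for eps in (1e-3, 1e-6, 1e-9, 1e-12):
        # Shrinking the error a thousandfold only adds a fixed number of usable steps
        print(f"eps = {eps:.0e} -> prediction horizon ~ {divergence_time(eps)} steps")
```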
Information | Shannon | Brockman I 155 Information/Shannon/Kaiser: In Shannon’s now-famous formulation(1), the information content of a string of symbols was given by the logarithm of the number of possible symbols from which a given string was chosen. Shannon’s key insight was that the information of a message was just like the entropy of a gas: a measure of the system’s disorder. >Systems, >Entropy, >Noise. Brockman I 154 (…) mathematician Warren Weaver explained that in Shannon’s formulation, “the word information . . . is used in a special sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.”(2) Linguists and poets might be concerned about the “semantic” aspects of communication, Weaver continued, but not engineers like Shannon. Rather, “this word ‘information’ in communication theory relates not so much to what you do say, as to what you could say.”(2) >Communication theory. 1. Claude Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal (1948), Vol. 27/3. 2. Warren Weaver, “Recent Contributions to the Mathematical Theory of Communication,” in Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1949), 8. Kaiser, David. “‘Information’ for Wiener, for Shannon, and for Us” in: Brockman, John (ed.) 2019. Twenty-Five Ways of Looking at AI. New York: Penguin Press. |
Brockman I John Brockman Possible Minds: Twenty-Five Ways of Looking at AI New York 2019 |
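A small sketch of Shannon's measure as described above: a choice among N equally likely alternatives carries log2 N bits, and in general H = −Σ p·log2 p. The probability values in the example are invented for illustration.

```python
import math

def entropy(probabilities):
    # H = -sum(p * log2(p)); a property of what *could* be said, not of meaning
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

if __name__ == "__main__":
    # 8 equally likely symbols -> log2(8) = 3 bits per symbol
    print(entropy([1 / 8] * 8))
    # A highly predictable source carries less information (more "order", less disorder)
    print(entropy([0.97, 0.01, 0.01, 0.01]))
```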
Neo-Fisher Effect | Uribe | Uribe I 4 Def Fisher-Effect/Uribe: A large body of empirical and theoretical studies argue that a transitory positive disturbance in the nominal interest rate causes a transitory increase in the real interest rate, which in turn depresses aggregate demand and inflation (…) (see, for example, I 5 Christiano, Eichenbaum, and Evans, 2005)(1). Similarly, a property of virtually all modern models studied in monetary economics is that a transitory increase in the nominal interest rate has no effect on inflation in the long run. By contrast, if the increase in the nominal interest rate is permanent, sooner or later, inflation will have to increase by roughly the same magnitude, if the real interest rate, given by the difference between the nominal rate and expected inflation, is not determined by nominal factors in the long run (...). This one-to-one long-run relationship between nominal rates and inflation is known as the Fisher effect. Def Neo-Fisher Effect/Uribe: The neo-Fisher effect says that a permanent increase in the nominal interest rate causes an increase in inflation not only in the long run but also in the short run. I 6 The Fisher effect, however, does not provide a prediction of when inflation should be I 8 expected to catch up with a permanent increase in the nominal interest rate. It only states that it must eventually do so. Uribe I 8 Neo-Fisher Effect/Empirical Model/New-Keynesian Model/Inflation/Interest/Uribe: Empirical model: The empirical model aims to capture the dynamics of three macroeconomic indicators (…): the logarithm of real output per capita (…), the inflation rate (…), expressed in percent per year, and the nominal interest rate (…), expressed in percent per year. [Uribe] assume[s] that [the three indicators above] are driven by four exogenous shocks: a nonstationary (or permanent) monetary shock (…), a stationary (or transitory) monetary shock (…), a nonstationary nonmonetary shock (…) and a stationary nonmonetary shock (…). I 16 [Uribe] estimate[s] the empirical model on quarterly U.S. data spanning the period 1954:Q3 to 2018:Q2. I 18 The main result [from the empirical model] is that the adjustment of inflation to its higher long-run level takes place in the short run. In fact, inflation increases by 1 percent on impact and remains around that level thereafter. On the real side of the economy, the permanent increase in the nominal interest rate does not cause a contraction in aggregate activity. Indeed, output exhibits a transitory expansion. This effect could be the consequence of low real interest rates resulting from the swift reflation of the economy following the permanent interest-rate shock. Because of the faster response of inflation relative to that of the nominal interest rate, the real interest rate falls by almost 1 percent on impact and converges to its steady-state level from below, implying that the entire adjustment to a permanent interest-rate shock takes place in the context of low real interest rates. I 22 How important are nonstationary monetary shocks? The relevance of the neo-Fisher effect depends not only on whether it can be identified in actual data, (…) but also on whether permanent monetary shocks play a significant role in explaining short-run movements in the inflation rate. I 23 [T]he empirical model assigns a significant role to this type of monetary disturbance [the nonstationary monetary shock], especially in explaining movements in nominal variables. 
In comparison, the stationary monetary shock explains a relatively small fraction of movements in the three macroeconomic indicators included in the model. I 25 [To summarize] the estimated empirical model predicts that a permanent increase in the nominal interest rate causes an immediate increase in inflation and transitional dynamics characterized by low real interest rates, and no output loss. >Terminology/Uribe New-Keynesian Model: In this section the presence of a neo-Fisher effect in the context of an estimated standard optimizing model in the neo-Keynesian tradition [is investigated]. [The model] is driven by six shocks: permanent and transitory interest-rate shocks, permanent and transitory productivity shocks, a preference shock, and a labor-supply shock. I 37 [Q]ualitatively, the responses implied by the New-Keynesian model concur with those implied by the empirical model (…). An increase in the nominal interest rate that is understood to be permanent by private agents (…) causes an increase in inflation in the short run, without loss of aggregate activity. By contrast, an increase in the nominal interest rate that is interpreted I 39 to be transitory (…) causes a fall in inflation and a contraction in aggregate activity. [I]n response to a permanent increase in the nominal interest rate inflation not only begins to increase immediately, but does so at a rate faster than the nominal interest rate. As a result, the real interest rate falls. By contrast, a temporary increase in the nominal interest rate causes a fall in inflation and an increase in the real interest rate. A natural question is why inflation moves faster than the interest rate in the short run when the monetary shock is expected to be permanent. The answer has to do with the presence of nominal rigidities and with the way the central bank conducts monetary policy. In response to a permanent I 40 monetary shock that increases the nominal interest rate by one percent in the long run, the central bank raises the short-run policy rate quickly but gradually. At the same time, firms know that, by the Fisher effect, the price level will increase by one percent in the long run, and that they too will have to increase their own price in the same proportion in the long run, to avoid making losses. Since firms face quadratic costs of adjusting prices, they find it optimal to begin increasing the price immediately. Since all firms do the same, inflation itself begins to increase as soon as the shock is announced. 1. Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans, “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy,” Journal of Political Economy 113, 2005, 1-45. Martín Uribe (2019): The Neo-Fisher Effect: Econometric Evidence from Empirical and Optimizing Models. In: NBER Working Paper No. 25089. |
Uribe I Martin Uribe The Neo-Fisher Effect: Econometric Evidence from Empirical and Optimizing Models. NBER Working Paper No. 25089 2019 |
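The Fisher relation that the entry relies on can be written out schematically. The notation below (r for the real rate, i for the nominal rate, E_t π_{t+1} for expected inflation) is the usual textbook convention and is not quoted from Uribe's paper.

```latex
% Real interest rate as the difference between the nominal rate and expected inflation
r_t = i_t - E_t\,\pi_{t+1}

% Fisher effect (long run): a permanent change in i is eventually matched by inflation,
% leaving the real rate unaffected by nominal factors
\Delta i = \Delta \pi \;\;\Rightarrow\;\; \Delta r = 0 \quad \text{(in the long run)}

% Neo-Fisher effect: the same adjustment of inflation occurs already in the short run,
% so a permanent rise in i_t raises \pi_t on impact and pushes r_t down temporarily
```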
Objectivity | Field | I 272f Def Objectivity/Mathematics/Kreisel/Putnam/Field: objectivity is supposed to consist in our believing only the true axioms. Problem: the axioms also refer to the ontology. >Axioms, >Ontology. I 274 Objectivity does not have to be explained in terms of the truth of the axioms - this is not possible for the associated modal propositions. >Modalities, >Propositions. I 277 Objectivity/mathematics/set theory/Field: even if we accept "ε" as fixed, the Platonist (!) view does not have to assume that the truths are objectively determined. - For there are other totalities over which the quantifiers can range in a set theory. >Platonism, >Quantifiers, >Set theory. Putnam goes further: there is no reason to keep "ε" fixed. FieldVsPutnam: this confuses the view that the reference is fixed (e.g. causally) with the view that it is defined by a description theory that contains the word "cause". II 316 Objectivity/truth/Mathematics/Field: Thesis: even if there are no mathematical objects, why should it not be the case that there is exactly one value of n for which A_n - modally interpreted - is objectively true? II 316 Mathematical objectivity/Field: for it we do not need to accept the existence of mathematical objects if we presuppose the objectivity of logic. - But only those sentences of mathematics that can be proved from the axioms are objectively correct. >Provability, >Correctness. II 319 Mathematical concepts are not causally connected with their predicates. E.g. for each choice of a power of the continuum, we can find properties and relations for our set-theoretical concepts (here: vocabulary) that make this choice true and another choice wrong. Cf. >Truthmakers. II 320 The defense of the axioms is enough to make mathematics (without objects) objective, but only with the broad notion of consistency: that a system is consistent if not every sentence is a consequence of it. II 340 Objectivity/set theory/membership relation/Field: to determine the specific extension of "ε" and "set" we also need the physical applications - also for "finiteness". --- III 79 Arbitrariness/arbitrary/scale types/scalar field/mass density/Field: mass density is a very special scalar field which, because of its logarithmic structure, is less arbitrary than the scale for the gravitational potential - ((s) >Objectivity, >Logarithm.) Logarithmic structures are less arbitrary. Mass density: needs more basic concepts than other scalar fields. Scalar field: E.g. height. >Field theory. |
Field I H. Field Realism, Mathematics and Modality Oxford New York 1989 Field II H. Field Truth and the Absence of Fact Oxford New York 2001 Field III H. Field Science without numbers Princeton New Jersey 1980 Field IV Hartry Field "Realism and Relativism", The Journal of Philosophy, 76 (1982), pp. 553-67 In Theories of Truth, Paul Horwich Aldershot 1994 |
Order | Wiener | II 32 Order/Wiener: the more probable a schema type is, the less order it contains, because order is (...) a lack of randomness. >Chance, >Coincidence. The usual measure of the degree of order of a schema group selected from a larger group is the negative logarithm of the probability of the smaller group, if we take the probability of the larger group to be equal to one. >Probability, >Likelihood. The positive logarithm of the probability is the measure of disorder. >Entropy. |
WienerN I Norbert Wiener Cybernetics, Second Edition: or the Control and Communication in the Animal and the Machine Cambridge, MA 1965 WienerN II N. Wiener The Human Use of Human Beings (Cybernetics and Society), Boston 1952 German Edition: Mensch und Menschmaschine Frankfurt/M. 1952 |
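Wiener's measure of order - the negative logarithm of the probability of the selected group, with the larger group's probability set to one - can be stated as a one-line computation. A minimal sketch; the probability values are invented for illustration.

```python
import math

def order(p_subgroup):
    # Degree of order of a schema group selected from a larger group whose
    # probability is taken as 1: the negative logarithm of the subgroup's probability.
    return -math.log2(p_subgroup)

if __name__ == "__main__":
    # The less probable the selected pattern, the more order it carries
    for p in (0.5, 0.1, 0.001):
        print(f"P = {p}: order = {order(p):.2f} bits, disorder = {math.log2(p):.2f}")
```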
Similarity | Gould | I 43 Similarity/Evolution/Gould: geometry: triangles, parallelograms and hexagons are the only flat figures that can completely tile the plane. The logarithmic spiral is the only curve that does not change its shape as it grows. Gould: this is how similarities in independent developments can be explained by a small number of possible solutions. I 257 Similarity/Gould: similarity is empirically not mysterious, but with regard to its causes it is anything but clear: I 258 Definition homologous similarity: similarity due to common ancestors: two organisms may have the same feature because they inherited it from a common ancestor. (This is Darwin's word for "close relatives".) Example: Homology: the front limbs of humans, horses, guinea pigs and bats are inherited from a common ancestor. Definition analogous similarity: analogous similarity means that there is no common ancestor with the feature, but two organisms share a feature that is the result of separate but similar evolutionary changes in independent lines of development. It is the spectre of genealogists, because it confuses our naïve notion that what is similar must have similar causes. For example, the wings of birds, bats and butterflies: no common ancestor of any two of them had wings! I 259 We know in the broadest sense how homologies are determined, because analogies have their limits: they can produce striking external and functional similarities, but they cannot change thousands of complex and independent parts in the same way. At a certain level of complexity, similarities must be homologous. In addition, genetic changes often have far-reaching effects on the external appearance of adult organisms. Therefore, a similarity that looks too striking and too complex to occur more than once can actually be a simple and repeatable change. Important: we do not compare the ancestral organisms themselves, but only their descendants! How can we recognize their original structure? Gould IV 174 Similarity/Darwin: "Our classification encompasses more than mere similarity relationships; this 'more' is an ancestral relationship. It is the cause of order in nature."(1) >Evolution, >Explanation, >Darwinism. 1. Ch. Darwin. (1859): On the origin of species by means of natural selection. London: John Murray. |
Gould I Stephen Jay Gould The Panda’s Thumb. More Reflections in Natural History, New York 1980 German Edition: Der Daumen des Panda Frankfurt 2009 Gould II Stephen Jay Gould Hen’s Teeth and Horse’s Toes. Further Reflections in Natural History, New York 1983 German Edition: Wie das Zebra zu seinen Streifen kommt Frankfurt 1991 Gould III Stephen Jay Gould Full House. The Spread of Excellence from Plato to Darwin, New York 1996 German Edition: Illusion Fortschritt Frankfurt 2004 Gould IV Stephen Jay Gould The Flamingo’s Smile. Reflections in Natural History, New York 1985 German Edition: Das Lächeln des Flamingos Basel 1989 |
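The property Gould appeals to - that the logarithmic spiral does not change its shape as it grows - can be stated precisely. The formulation below is the standard mathematical statement, not a formula from Gould's text.

```latex
% Logarithmic spiral in polar coordinates
r(\theta) = a\,e^{b\theta}

% Enlarging by a factor k is the same as rotating by \ln(k)/b,
% so the grown shell is geometrically similar to the original:
k\,r(\theta) = a\,e^{b\theta + \ln k} = r\!\left(\theta + \tfrac{\ln k}{b}\right)
```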
Terminology | Uribe | Uribe I 4 Terminology/Uribe: Def Fisher-Effect/Uribe: A large body of empirical and theoretical studies argue that a transitory positive disturbance in the nominal interest rate causes a transitory increase in the real interest rate, which in turn depresses aggregate demand and inflation (…) (see, for example, I 5 Christiano, Eichenbaum, and Evans, 2005)(1). Similarly, a property of virtually all modern models studied in monetary economics is that a transitory increase in the nominal interest rate has no effect on inflation in the long run. By contrast, if the increase in the nominal interest rate is permanent, sooner or later, inflation will have to increase by roughly the same magnitude, if the real interest rate, given by the difference between the nominal rate and expected inflation, is not determined by nominal factors in the long run (…). This one-to-one long-run relationship between nominal rates and inflation is known as the Fisher effect. Def Neo-Fisher Effect/Uribe: The neo-Fisher effect says that a permanent increase in the nominal interest rate causes an increase in inflation not only in the long run but also in the short run. I 6 The Fisher effect, however, does not provide a prediction of when inflation should be I 8 expected to catch up with a permanent increase in the nominal interest rate. It only states that it must eventually do so. Empirical model: The empirical model aims to capture the dynamics of three macroeconomic indicators (…): 1. The logarithm of real output per capita: y_t 2. The inflation rate: π_t, expressed in percent per year 3. The nominal interest rate: i_t, expressed in percent per year. [Uribe] assume[s] that y_t, π_t, and i_t are driven by four exogenous shocks: a nonstationary (or permanent) monetary shock (X_t^m), a stationary (or transitory) monetary shock (z_t^m), a nonstationary nonmonetary shock (X_t^n) and a stationary nonmonetary shock (z_t^n). I 16 [Uribe] estimate[s] the empirical model on quarterly U.S. data spanning the period 1954:Q3 to 2018:Q2. The proxy for y_t is the logarithm of real GDP seasonally adjusted in chained dollars of 2012 minus the logarithm of the civilian non-institutional population 16 years old or older. The proxy for π_t is the growth rate of the implicit GDP deflator expressed in percent per year. In turn, the implicit GDP deflator is constructed as the ratio of GDP in current dollars and real GDP, both seasonally adjusted. The proxy for i_t is the monthly Federal Funds Effective rate converted to quarterly frequency by averaging and expressed in percent per year (see the sketch following this entry). >Neo-Fisher Effect/Uribe. 1. Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans, “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy,” Journal of Political Economy 113, 2005, 1-45. Martín Uribe (2019): The Neo-Fisher Effect: Econometric Evidence from Empirical and Optimizing Models. In: NBER Working Paper No. 25089. |
Uribe I Martin Uribe The Neo-Fisher Effect: Econometric Evidence from Empirical and Optimizing Models. NBER Working Paper No. 25089 2019 |
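The data construction described in the entry can be summarized in a short sketch. The series names and numbers are placeholders; only the transformations (log real GDP per capita, deflator growth in percent per year, quarterly averaging of the federal funds rate) follow the description above, and the exact annualization convention for the deflator growth is an assumption on my part.

```python
import math

def output_proxy(real_gdp, population):
    # y_t: log real GDP (chained dollars, seasonally adjusted) minus log civilian population 16+
    return math.log(real_gdp) - math.log(population)

def inflation_proxy(deflator_now, deflator_prev):
    # pi_t: growth rate of the implicit GDP deflator in percent per year
    # (compounded quarterly growth; the paper's exact annualization is not specified here)
    return ((deflator_now / deflator_prev) ** 4 - 1) * 100

def interest_proxy(monthly_fed_funds_rates):
    # i_t: monthly Federal Funds Effective rate averaged to quarterly frequency (percent per year)
    return sum(monthly_fed_funds_rates) / len(monthly_fed_funds_rates)

if __name__ == "__main__":
    # Placeholder numbers, purely for illustration
    print(output_proxy(real_gdp=20_000, population=260))
    print(inflation_proxy(deflator_now=101.0, deflator_prev=100.0))
    print(interest_proxy([5.25, 5.25, 5.00]))
```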
Disputed term/author/ism | Author Vs Author | Entry | Reference
---|---|---|---|
Harré, H.R. | Nozick Vs Harré, H.R. | II 121 Inegalitarian Theories/IGT/Inegalitarianism/Existence/Explanation/Nozick: IGT: they assume that one situation or a small number of states is privileged or natural and therefore requires no explanation, while other states or situations have to be explained as deviations from them. E.g. Newton considered rest or uniform motion the natural state, and everything else had to be explained by the assumption of forces. Aristotle: rest. Nozick: but that is not limited to theories of motion. (Footnote). IGT: distinguish two classes of states or situations: 1) those requiring an explanation 2) those that do not need an explanation, and do not allow one! IGT: are particularly suited to questions such as: "why does X exist and Y not?" That also means asking why there is a non-N state (not nothing) rather than an N state. IGT: leave two questions unanswered: 1) Why should N be the natural state, and not perhaps a different kind of state, N'? 2) Given that N is the natural state, why are there forces that are assumed to be F and to produce the deviations, and not other forces, perhaps F'? Natural State/Nozick: to assume something as the natural state also means attributing a specific content to it! But here one should be careful with a priori arguments in favor of certain contents. II 122 Explanation/R. Harré: Thesis: that something remains the same does not need to be explained: that is the most fundamental principle. (1970, p. 248) NozickVsHarré: But do we not need an explanation of why one thing is considered the same for the purposes of this principle, but another is not? The principle is trivialized if we say that whatever is assumed to need no explanation is thought to be constant with respect to a suitably chosen set of concepts. ((s) circular). IGT: the question "Why is there something and not rather nothing?" is asked against the backdrop of an assumed IGT. If there were nothing, the question would have to be asked just as well (even if there were nobody to ask it): "Why is there nothing instead of something?" Problem: then any causal factor that comes into question for the nothing is itself a deviation from nothing! Then there can be no explanation of why these forces F exist that does not itself introduce these Fs as explanatory factors (circular). II 123 Nothing/Nozick: now we might assume that there is a special force that produces nothingness, a "nothinging power". In the film "Yellow Submarine" there is a vacuum cleaner that absorbs everything and in the end also absorbs itself. Then there is a "pop" and a multicolored scenery emerges. According to this view, nothingness has produced something by destroying itself. Nozick: perhaps nothingness only destroys a little and still leaves room for a force for real nothing. Let us imagine a nothinging force that operates at an angle of 45°, and alternative stronger and weaker forces ...+... II 124 the nothinging force will eventually overtake itself and slow itself down, or this is somehow prevented... Problem: even if there were an original nothinging force, the question remains at which point it became effective and at what angle it operated! Somehow a 45° curve seems less random, but that is only because of our representation system: on logarithmic graph paper it looks completely random! |
No I R. Nozick Philosophical Explanations Oxford 1981 No II R., Nozick The Nature of Rationality 1994 |
Social Darwinism | Various Vs Social Darwinism | VIII 462 Kin Selection/Dawkins: a frequent mistake students make: to assume that animals would have to count how many relatives they are currently rescuing. VIII 187 But also Marshall Sahlins' mistake: SahlinsVsSociobiology: VsKin Selection: for this, animals would even have to have linguistic abilities in order to determine the "coefficient of relatedness r": r(ego, cousins) = 1/8. DawkinsVsSahlins: For example, a snail shell is a perfect logarithmic spiral, but where does the snail keep its logarithmic table? Group Selection/Wynne-Edwards: Thesis: individual animals selflessly reduce their own birth rate for the benefit of the group. DawkinsVs. VIII 190 Animals/Death/Birth Rate/Dawkins: free-living animals almost never die of old age, but of disease, hunger or predators. If they were to control their birth rate, there would be no starvation. 
|
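The relatedness coefficient r = 1/8 for first cousins, which Sahlins thought animals would have to calculate, follows from the standard path-counting rule. The calculation below is the textbook version of that rule, not a passage from Dawkins.

```latex
% Coefficient of relatedness by path counting: r = \sum_{\text{paths}} (1/2)^{L}
% First cousins: two common ancestors (the shared grandparents), each connected
% by a path of L = 4 parent-offspring links (ego - parent - grandparent - aunt/uncle - cousin)
r_{\text{first cousins}} = 2 \times \left(\tfrac{1}{2}\right)^{4} = \tfrac{2}{16} = \tfrac{1}{8}
```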