Since the concept of opposition was first introduced, it has attracted considerable research effort over the last decade. A variety of soft computing algorithms, such as optimization methods, reinforcement learning, artificial neural networks, and fuzzy systems, have already utilized the concept of OBL to improve their performance. It is not the editors' intention to recast these existing methods, but rather to elucidate that, while diverse, they all share the commonality of opposition in one form or another, either implicitly or explicitly.

Therefore, they have attempted to provide rough guidelines for understanding what makes concepts "oppositional". Implicit: adversarial search and reasoning (see Chapter 6). Explicit: understanding or compromising between opposing arguments, considering opposite reasonings (see Chapter 5).

Of course, this generalizes all searches, but the reader should keep important aspects such as constraint satisfaction, multiple objectives, etc., in mind. Implicit: competitive coevolution (see Chapter 9). In other words, given some initial hypothesis, learning allows for its modification as more data are observed. This contrasts with search and reasoning in that learning is not concerned with explaining why the data are observed, only with performing some action when they occur. Often learning can be seen as an optimization problem; however, here we distinguish them because they do not always imply one another. Implicit: model selection using opposing viewpoints of a problem. Explicit: considering two opposing actions or states during active learning. Basic definitions of type-I and type-II opposites, along with a categorization of oppositional algorithms, were provided as well. Generally, OBC provides a straightforward framework for the extension of many existing methodologies, as has been demonstrated for neural nets, evolutionary algorithms, reinforcement learning, and ant colonies.

The OBC extensions of existing machine learning schemes seem to primarily accelerate search, learning and optimization processes, a characteristic highly desirable for hyperdimensional complex problems. Opposition-based computing, however, encompasses multiple challenges as well. First and foremost, a solid and comprehensive formalism is still missing.


Besides, the definition of opposition may not be straightforward in some applications, so the need for opposition-mining algorithms should be properly addressed in future investigations. On the other hand, the cost of calculating opposite entities can exceed the time saved, so that the OBC-extended algorithm becomes slower than its parent algorithm.

Extensive research is still required to establish control mechanisms that regulate the frequency of opposition usage within the parent algorithms. Due to our imperfect understanding of the interplay between opposite entities, this work will most likely be a preliminary investigation. Hence, more comprehensive elaborations with a solid mathematical understanding of opposition remain subject to future research. The motivation of this chapter was not to provide a framework for oppositional concepts claiming to cover all possible forms of opposition (given the diversity of oppositeness, this seems extremely difficult, if not impossible), but rather to establish a general umbrella under which other, existing techniques can be classified and studied.

This should not be understood as merely putting a new label on existing methodologies and algorithms, but as consciously comprehending the oppositional nature of problems and exploiting a priori knowledge in light of the inherent antipodality and complementarity of entities, quantities, and abstractions. Mario Ventresca has designed and developed OBC versions of the backpropagation algorithm and simulated annealing, and has provided some formalism to establish the OBC definitions.

Shahryar Rahnamayan has introduced opposition-based differential evolution (ODE), with practical and theoretical contributions to some of the formalism in this work. The authors would like to thank Dr. Don McLeish for valuable discussions with respect to the relationship between antithetic variates and opposite quantities. We are also grateful to Maryam Shokri and Dr. Farhang Sahba for their contributions to the OBC research.



A pair of antithetic random numbers are variates generated with negative dependence or correlation. There are several reasons why one might wish to induce some form of negative dependence among a set of random numbers. If we are estimating an integral by Monte Carlo, and the inputs are negatively associated, then an input in an uninformative region of the space tends to be balanced by another observation in a more informative region. A similar argument applies if our objective is Monte Carlo maximization of a function: when the input values are negatively associated, inputs far from the location of the maximum tend to be balanced by others close to it.

In both cases, negative association outperforms purely random inputs for sufficiently smooth or monotone functions. In this article we discuss various concepts of negative association, antithetic and negatively associated random numbers, and low-discrepancy sequences, and the extent to which these improve performance in Monte Carlo integration and optimization. Antithetic variates are one of a large number of variance-reduction techniques common in Monte Carlo methods (for others, see [12]). We do not need the assumption of independence, only that each of the random variables Xi has probability density function f(x); it is therefore natural to try to find joint distributions of the variables Xi which improve the estimator.


One of the simplest approaches to this problem is through the use of antithetic random numbers. The solution is quite simple. Since we are constrained to leave the marginal distributions alone, we can only change the dependence between X1 and X2, and thus vary the term F(x1, x2) in (3). Our problem becomes the minimization of F(x1, x2), if possible for all values of x1, x2, subject to a constraint on the marginal distributions of X1 and X2. In this case, the minimizing joint cumulative distribution function is the Frechet lower bound max(F(x1) + F(x2) - 1, 0), attained by setting X1 = F^{-1}(U) and X2 = F^{-1}(1 - U). (This requires that F(x) be continuous and strictly increasing; the definition of the inverse transform that applies to a more general c.d.f. is slightly different.)
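As an illustration (a sketch of our own, not from the original text), the inverse transform applied to U and 1 - U produces an antithetic pair with a common marginal; here an Exp(1) marginal is assumed for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)

def antithetic_exponential_pair(rate=1.0, size=10_000, rng=rng):
    """Antithetic Exp(rate) variates via the inverse transform.

    X1 = F^{-1}(U) and X2 = F^{-1}(1 - U) share the same marginal
    distribution but are negatively correlated.
    """
    u = rng.uniform(size=size)
    x1 = -np.log(1.0 - u) / rate   # F^{-1}(u) for the exponential c.d.f.
    x2 = -np.log(u) / rate         # F^{-1}(1 - u): the antithetic partner
    return x1, x2

x1, x2 = antithetic_exponential_pair()
print(x1.mean(), x2.mean())        # both close to 1/rate = 1.0
print(np.corrcoef(x1, x2)[0, 1])   # negative (about -0.64 for Exp(1))
```

For the exponential marginal the antithetic correlation equals 1 - pi^2/6, roughly -0.645, which is the smallest correlation achievable for these marginals.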

This strategy for generating negative dependence among random variables is referred to as the use of antithetic (or opposite) random numbers. The concept of antithetic random numbers as a tool for reducing the variance of simulations appears in the early work of Hammersley (see [8] and [7]). The extent of the variance reduction achieved with antithetic random numbers is largely controlled by the degree of symmetry in the distribution of the estimator.

For a more general function, of course, the variance of the estimator is unlikely to be zero, but as long as the function h is monotonic, the use of antithetic random numbers is guaranteed to be at least as good, in terms of variance, as using independent random numbers: Var((h(U) + h(1 - U))/2) ≤ Var((h(U1) + h(U2))/2) for independent U1, U2.
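A quick numerical sketch of this guarantee (our own illustration; h(u) = e^u is an assumed monotone test function, with E[h(U)] = e - 1):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

h = np.exp  # monotone on [0, 1]; the exact answer is e - 1

# Independent pairs: average h over two independent uniforms.
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
indep = 0.5 * (h(u1) + h(u2))

# Antithetic pairs: average h over U and its opposite 1 - U.
u = rng.uniform(size=n)
anti = 0.5 * (h(u) + h(1.0 - u))

print(indep.mean(), anti.mean())  # both estimate e - 1, about 1.718
print(indep.var(), anti.var())    # antithetic variance is far smaller
```

Both estimators are unbiased; the antithetic one simply replaces the second independent draw with the opposite point, cutting the variance by more than an order of magnitude in this example.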

In general, random variables X1, ..., Xn can be constructed with mutual negative dependence. Notice that for exchangeable random variables Xi, all with marginal c.d.f. F, the common pairwise covariance is bounded below by -Var(Xi)/(n - 1), since the variance of their sum must be non-negative. This bound is not always achievable, but it is possible to get arbitrarily close to it for uniformly distributed random variables. The following iterative Latin hypercube sampling algorithm (see [2]) allows us to get arbitrarily close in any dimension n. Begin with U1(0), ..., Un(0) all independent U[0, 1]. The basic questions are: how should we select the points ui, and how much improvement can we expect over a crude independent sample? We begin with a general result concerning the random vectors U1, .... The potential improvement over independent Ui is considerable. To generate such a set of ui we will use the notation x mod 1 for the fractional part of the real number x; for an example, see Figure 3. There is nothing particularly unique about this placement of points. There are a variety of other potential algorithms for filling space evenly so that the sizes of holes are small, often referred to as low-discrepancy sequences under the general topic of quasi-Monte Carlo methods.
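The plain (non-iterative) Latin hypercube construction underlying this idea can be sketched as follows; this is our own minimal illustration, not the iterative algorithm of [2]:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0, 1)^d with exactly one point per stratum
    [k/n, (k+1)/n) in every coordinate."""
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)                         # shuffle the strata
        samples[:, j] = (perm + rng.uniform(size=n)) / n  # jitter within each
    return samples

rng = np.random.default_rng(1)
pts = latin_hypercube(8, 2, rng)
# Every column occupies each of the 8 strata exactly once:
print(np.sort((pts * 8).astype(int), axis=0).T)
```

Because each coordinate is a jittered permutation of the strata, the marginal distributions stay uniform while the points repel one another, inducing the negative dependence discussed above.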

See, for example, [14]. We will briefly discuss a special case of these methods, the Halton sequence, later. We might, however, also try using antithetic random numbers as a device to get close to an optimum. In fact, for a particular function it is possible that a pair of antithetic random numbers is worse than a pair of independent random numbers. However, if the function g is monotone it is a somewhat different story, because in this case the antithetic random numbers are more likely to be closer to the maximum, at one of the two ends of the interval.

In particular, for one-dimensional monotone functions g and continuous random variables Xi, the use of antithetic inputs provides a strict improvement. For strictly monotone g, P(max(g(U1), g(U2)) > g(U3)) = 3/4 when U2 = 1 - U1 is the antithetic partner of U1 and U3 is an independent uniform; with three independent uniforms this probability is only 2/3. Return to the context of Theorem 1.


Theorem 3. For strictly monotone g, P(max(g(U1), g(1 - U1)) > g(U2)) = 3/4. Proof. Assume without loss of generality that g is monotonically increasing; then max(g(U1), g(1 - U1)) = g(max(U1, 1 - U1)), and max(U1, 1 - U1) is uniform on [1/2, 1], so the probability that it exceeds an independent uniform U2 is its expected value, 3/4. The easiest way of distributing m^2 points at random in the unit square is with a shifted lattice. We will compare various other possibilities in two dimensions with this one. A more efficient design for filling space in two dimensions is a triangular tessellation. Suppose the points u1, u2, ... form the vertices of a triangulation of the unit square; by this we mean that each vertex ui (except possibly those on the boundary) is the corner of 6 triangles in the triangulation (see Figure 3).
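The 3/4 probability is easy to check by simulation; this is our own sketch, with g(x) = x^3 + x as an assumed strictly increasing test function:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

g = lambda x: x**3 + x       # any strictly increasing g gives the same answer

u = rng.uniform(size=n)      # the antithetic pair is (U, 1 - U)
v = rng.uniform(size=n)      # the independent competitor

p_anti = np.mean(np.maximum(g(u), g(1.0 - u)) > g(v))
print(p_anti)                # close to 3/4

# Baseline: the better of two independent draws beats a third
# independent draw with probability only 2/3.
w = rng.uniform(size=n)
p_indep = np.mean(np.maximum(g(u), g(w)) > g(v))
print(p_indep)               # close to 2/3
```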

Suppose the side of each triangle is of length y. We use this relationship to determine y from the number of points n. Again in this case it is not difficult to determine the distribution of the squared distance D2 from a random point in the unit square to the nearest vertex of a triangle. Assume the triangle has sides of length y.

We want to determine the distribution of the distance between a random point in the triangle and the closest vertex. The sides of the triangle are as in Figure 3. By symmetry we can restrict attention to a point in the smaller right-angle triangle ABC. Determining the c.d.f. of this distance is then straightforward. A simple example of a low-discrepancy sequence is the Halton sequence. We begin with a special case in one dimension, the Van der Corput sequence (see, for example, [12, Section 6]). Write n using its binary expansion, reverse the order of its digits, and place them after the binary point; the result is the number in [0, 1) whose binary expansion is the reversal of that of n. The intervals are thus recursively split in half, producing the sequence 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, .... The Halton sequence is the multivariate extension of the Van der Corput sequence.

In higher dimensions, say d, we choose d distinct primes b1, b2, ..., bd, one base per coordinate. Since the Halton sequence appears to fill space more uniformly than independent random numbers in the square, it should also reduce the value of D^2 and consequently increase the probability of improvement over crude Monte Carlo. Table 1 compares p_imp for the various methods of placing the ui (crude, antithetic, Halton pairs) for n = 4, 16, and 64. As we have seen, we can improve on the performance of these methods if the function to be optimized is known to be monotonic.
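A minimal sketch of the radical-inverse construction (our own illustration; the helper names `van_der_corput` and `halton` are ours):

```python
def van_der_corput(n, base=2):
    """Radical inverse of n: reflect the base-b digits of n about the point."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def halton(n_points, bases=(2, 3)):
    """First n_points of the Halton sequence, one distinct prime per axis."""
    return [[van_der_corput(i, b) for b in bases]
            for i in range(1, n_points + 1)]

print([van_der_corput(i) for i in range(1, 8)])
# → [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

The printed values are exactly the sequence 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8 described above; `halton` pairs base 2 with base 3 for the unit square.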

The methods described above are largely black-box methods that do not attempt to adapt to the shape of the function. However, there are alternatives, such as Markov chain Monte Carlo methods (see [6] and [15]), that sequentially adapt to the function's shape, drawing more observations in regions near the maximum, for example, and fewer where the function is relatively small.

We wish to maximize a non-negative function g(x) over a domain which may well be high-dimensional. The idea is essentially quite simple: we move around the space with probabilities that depend on the value of the function at our current location and at the point we are considering moving to. It is apparent from the form of (3). If we run this algorithm for a long period of time, it is possible to show that the limiting distribution of xt is proportional to g(x), so that the values of xt will tend to be clustered near the maximum of the function.

This algorithm is often referred to as the Metropolis-Hastings algorithm (see [15], [13], and [9]). For moderate sample sizes, the simpler placements described above are adequate. For higher dimensions, a Halton sequence or another low-discrepancy sequence is a reasonable compromise between ease of use and efficiency. If we wish to adapt our sequence to the function g to be optimized, there is a very rich literature on possible methods, including Markov chain Monte Carlo algorithms such as the Metropolis-Hastings algorithm (see [15]); in this case, at the expense of some additional coding, we can achieve much greater efficiencies.
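A bare-bones random-walk Metropolis sketch for maximization (our own illustration; the Gaussian bump target, step size, and function names are assumptions, not taken from the text):

```python
import numpy as np

def metropolis_maximize(g, x0, n_steps=5_000, step=0.5, rng=None):
    """Random-walk Metropolis targeting a density proportional to g.

    The chain spends most of its time where g is large, so the best
    point visited is a cheap approximate maximizer.
    """
    rng = rng or np.random.default_rng()
    x, gx = x0, g(x0)
    best_x, best_g = x, gx
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()   # symmetric proposal
        gy = g(y)
        # Accept with probability min(1, g(y)/g(x)).
        if gy > 0 and rng.uniform() < min(1.0, gy / gx):
            x, gx = y, gy
            if gx > best_g:
                best_x, best_g = x, gx
    return best_x, best_g

g = lambda x: np.exp(-(x - 2.0) ** 2)  # non-negative, maximized at x = 2
x_star, g_star = metropolis_maximize(g, x0=0.0, rng=np.random.default_rng(7))
print(x_star)  # close to 2
```

Because the acceptance ratio uses only g(y)/g(x), the target needs no normalizing constant, which is what makes the method so convenient in high dimensions.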

1. Alam, K.: Commun. Statist. A - Theory Methods 10(12)
2. Craiu, R.
3. Cuadras, C.: J. Multivariate Anal.
4. Premiere Partie
5. A 14(3), 53-77
6. Gilks, W.
7. Hammersley, J.: Methuen, London
8. Proc. Cambridge Philos. Soc.
9. Hastings, W.: Biometrika 57, 97-109 (1970)
10. Hoeffding, W.: Schriften Math. Inst. Univ. Berlin 5
11. Lehmann, E.: Wiley, New York
13. Metropolis, N., et al.: Journal of Chemical Physics 21, 1087-1092 (1953)
14. Applied Mathematics series. SIAM, Philadelphia
15. Robert, C.

From the middle of the twentieth century, contradictory logics have been growing in number, scope, and applications. These are logics which allow some propositions to be true and false at the same time within a given formal system.

From psychology to physics, one finds many examples of areas of the mind and of physical reality which exhibit an unmistakable degree of antinomicity. Quantum mechanics is a prime example. Now, it is commonly thought that antinomies depend essentially on negation. This is not so. In fact, there are many antinomies whose contradictory character is created by fusing opposite assertions into one.

Indeed, antinomicity should be considered an instance of the more general notion of opposition, and a logic of opposition is essential to deal with this general case. But what about terms, the nouns, adjectives, and verbs that form part of statements? Terms are neither true nor false; they have to be fed to a predicate in order to obtain a true or false sentence. Yet terms can be antinomic in a very important sense: we often define a term A by means of an opposite term B, only to find that in turn B has to be defined in terms of A.

I maintain that circularity should be looked at positively and put to good use. This chapter deals with (i) the relations that exist between antinomicity, opposition, and circularity; (ii) ways of dealing explicitly with the antinomic character of certain terms, a character which becomes obvious every time we think of them reflectively; and (iii) the answer to the question of what changes must be made in the usual constructions of logical models to enable us to expose antinomic terms explicitly: formalization must undergo substantial changes if we are to make systematically obvious the fact that, whether we know it or not, we regularly think in terms of opposites.

Nowadays there are many such logics developed from different starting points, all with a positive attitude toward contradictions, and a number of them have significant applications, most notably to quantum mechanics, a discipline whose contradictory conclusions keep growing in extraordinary and puzzling ways; notable examples are nonlocality and the Bose-Einstein condensate. Classical formal logics are really not adequate to deal with such conclusions. These words are often given somewhat distinct connotations with respect to one another, but for the purposes of this work such differences are not essential and will be disregarded.

Many examples exist of this kind of paradoxical statement, given in ordinary or technical languages. It is often believed that antinomies depend essentially on the use of negation. This is not the case. Antinomies can be obtained without negation just by putting together into a single conjunction two opposite statements: antinomies generated by negation are only a special case.


The logic of antinomies should be broadly understood as a special logic of opposition [1]. In this chapter I do not want to revisit the ideas of the work just referred to, nor to look into the opposition of sentences in particular; rather, I shall look into the opposition of terms in themselves, the terms that occur as subjects of a sentence.

I shall be especially interested in the kind of opposition that involves circularity, that is, the cases - much more common than one would think - in which opposite terms hinge semantically on one another to the point of making each term unintelligible without thinking of its opposite counterparts at the same time. However, there are situations and processes in which disassembling the complex only obfuscates the picture. There are properties that emerge when small parts are assembled into a larger whole and which were not present in any of the parts taken in isolation, properties that become lost when we insist on splitting the whole into its smallest possible components.

This unfortunate attitude is present also in the way we comprehend ideas, especially our most basic concepts, the so-called categories of thought. We like to keep categories simple so that our thinking is kept simple as well. But simple thoughts are often not adequate for the understanding of a complex reality. Clarity can be deceiving. Even in the realm of pure thought we cannot escape complexity if we really want to comprehend how our mind works in order to go effectively beyond platitudes. Charles Peirce remarked that on first approximation we do take ideas at a neutral state, simple, unadorned [18].

We all start at this uncomplicated first approximation. One is one and many is many, and we do not care to go beyond this clear-cut approach. But when the mind begins to function reflectively the picture changes. We need to become explicitly aware of this semantic situation and incorporate circularity as a regular way of building our repertoire of basic ideas. Nothing can be defined thoroughly without resorting to circularity at some point.

Categories, in particular, can only be defined in circular ways. This should be accepted as a normal way our mind functions. In the present considerations we want to focus especially on circular concepts formed by the semantic conjunction of an idea and its opposite. Both categories emerge from our naive perceptive experience, and we usually take them as neutral, simple ideas.

Matter-and-energy is another antinomic term. Energy is characterized by its effect on matter, and matter is known to be energy waiting to be released - the source of nuclear power, for example. Whole-and-parts constitutes one more example of a very fruitful opposition, indeed a very productive antinomy. Becoming clearly implies change, but change is never absolute - this would be tantamount to chaos; something always remains identifiably the same during even the most radically transforming process.

Real-and-potential, universal-and-particular, activity-and-passivity also add to the endless list of antinomic terms. But the complexity of thought does not stop at the first conjunction of opposite concepts. Let us look again at the one-and-many circle: it involves a second circle. Choosing-and-gathering is another circle hidden inside the meaning of one-and-many, a circle composed of two active gerunds, pointing at two opposite movements of the mind acting in conjunction.

The semantic situation we have just described has already been recognized by a number of authors (see [7] and [10]), although it has not been taken as a logical theme by itself, nor has it been analyzed, much less acknowledged as sometimes taking the form of a cascade of antinomic terms in successive inclusion. The antinomy is the force that makes up the semantic field [10], and these oppositions are ready to present themselves to the consciousness of the speaking subject. By the last expression we mean the condition of certain concepts, statements, or paragraphs obscurely containing other significant connotations than those set out in clear explicitness.

The implicit content cries out to be uncovered and may consist of related ideas, situations, or memories buried under the surface of our consciousness, sometimes important, occasionally insignificant. Marcel Proust has left several classical descriptions of the semantic phenomenon we are referring to and of the process of recovering those half-forgotten remembrances that suddenly clamor for our attention, prompted by a casual reference or event [19].

This is a common occurrence: often words and sentences are gravid with related meanings whose whole import escapes us momentarily but that we can usually uncover with a well-directed reflection. We want to focus now on the parallel treatment of antinomic terms and predicates. Predicates are subject to a similar kind of circularity. A situation or a memory may be bitter and sweet at the same time, which can clearly be expressed by an antinomic sentence that states such opposition by the propositional conjunction of the two sentences stating bitterness and sweetness separately.

The same applies to the relation of membership in set theory: the set of all sets which are not members of themselves is and is not a member of itself. We want now to introduce symbols to systematize this new situation. We already noted in passing that terms as well as predicates may have more than one opposite. No confusion can be derived from this multiple use of the same symbol, syntactically or semantically. Here we shall use the same symbol to indicate the conjunction of terms and of predicates.

Semantically, sentences become true or false when interpreted in a given structure, which then becomes a model of the sentence or of its negation. A sentence may be true in one structure and false in another. Constant terms are interpreted in classical model theory each by a specific individual in the domain of interpretation of the given structure - the universe of discourse. Variable terms range over that domain. Of course, different constant terms have different interpretations in different domains. For us, terms will not all be simple, or in other words atomic.

As for predicates, classically each is assigned a subset of the domain of interpretation if the predicate calls for only one term. This subset is precisely the collection of all individuals that satisfy the predicate. If the predicate is an n-ary relation with n greater than or equal to 2, the interpretation of such relation is, of course, the collection of all n-tuples of individuals that satisfy the relation.

Each is the sum of all the predicates, including all the relations, that the term satisfies by itself or occurring in pairs, triples, etc. In the parallel real world, concrete things can be said equally to be made up of all the properties and relations they exhibit. Relations, in particular, incorporate at the core of each actual entity the presence of other objects to which the entity is related, a presence that may transcend time and location.

To sum up: as we try to analyze a thing, what we do in effect is mentally unfold property after property and relation after relation, as though we were peeling an onion. We shall then accept as given that a term representing a concrete or abstract object is composed entirely of predicates (unary, binary, ternary, etc.).

In order to make clear the difference between P(t) and t(P) - expressions which superficially may sound synonymous - let us set up a sequence of logical types or levels. Type 0 is the level of terms and predicates by themselves; neither predicate formulas nor term formulas occur at this level. These formulas may contain P or t as variables; if not, or if P and t do not occur free, the formulas are called sentences.

The usual propositional connectives, as well as quantifiers over terms, belong to type 1, where the classical and the antinomic predicate logics can both be developed with their associated semantic interpretations, models, etc. We want now to place the term formulas t(P) in a different type. For this purpose, we will make use of the notion of negative types introduced by Hao Wang with an entirely different objective [21].

He used negative types only as a device to show the independence of the axiom of infinity in set theory, not to analyze terms. Here, we want to use negative types precisely to formalize the structure of terms and of their interpretations. Since thought begins and ends with individuals, and since logic and mathematics originate in abstractions performed on individuals, it is reasonable to make room for a more concrete way of considering terms by looking into their inside, that is, the predicates that make them.

Each predicate P1, P2, ... is interpreted in turn. There are examples in mathematics of such a kind of universe of discourse. The so-called Riemann surfaces used to represent functions of a complex variable are one such example [8]. In this type of representation an indefinite number of planes, or just regions, are superimposed successively, each over the next, to represent the inverse values of such functions in a single-valued manner.

We shall interpret each term here by a Riemann surface of sorts, that is, by the collection of folds that successively interpret the predicates that t satisfies. Each predicate P that t satisfies is in turn interpreted by exactly one fold of the onion; again, each fold is the set of individuals that satisfy P in type 1, which naturally includes t itself as an individual from type 1.

Negative types are then constructed on the basis of what is already built in the corresponding positive types. See example at the end of the next section. The focus is on predication, and the individuals that interpret the terms as well as the terms themselves are devoid of content, points in the universe of discourse.

Yet, it is the term that exhibits the predicates, not vice versa. In type 1, only predicate formulas can be true, false, true and false, or neither true nor false. At that level it does not make sense to speak of the truth of a term. Nature provides individuals, not genera; the latter are a product of the mind. These individuals come assembled in folded stages, leaf over leaf; we analyze them accordingly, leaf by leaf. Concrete terms are therefore true because they always have real, identifiable properties, and because they display the impact of all the other real terms to which they relate.

This makes true predication possible, and the sentence t(P) true. No real predication can be asserted of a unicorn. No real object represented by a term t can be false; this would mean that for every predicate P, t(P) is false. Thus, the concept of number started with thinking about the natural numbers, and was then successively extended to the integers, the rational numbers, the real numbers, the complex numbers, etc. Given that the truth of a sentence depends on the model in which the sentence is interpreted, it is not surprising that specific true sentences related to an expanded concept may not be true in the expanded models without effecting important reinterpretations of their meaning, if that is at all possible.

When two concepts originally separated by opposition merge by conjunction into one complex idea, each concept becomes an example of concept expansion, but in a different sense. This applies to all circles, not only antinomic ones. What distinguishes circles from other types of concept expansion is that the said expansion does not take place by broadening the scope of application of each of its components - horizontally, so to say - but by deepening its original meaning - vertically.

The expansion consists in an intrinsic enlargement of the meaning of the concepts involved, which adds new semantic levels and resonances to each of them, through renewed antinomic actions like choosing-and-gathering, identifying-and-distinguishing, etc. This bias obfuscates our understanding of the true nature of thought. Predicates do not float by themselves in a heaven of Ideas; they exist in an individual, or nowhere. We necessarily think with relatively vague concepts, concepts whose contours we can seldom circumscribe clearly. The integrity of the analysis of my concept is always in doubt, and a multiplicity of examples can only make it probable [14].

Mathematical concepts are, of course, the exception to the primacy of vagueness. But in general we make our way toward definiteness in a universe of thoughts which remains to the end incurably vague. The apparent definiteness of language deceives us into believing that we have captured reality in the flesh when we describe it with words. Language is finite and discrete; reality is the opposite. We approach reality with atomic thoughts, but there are no atoms in either world or mind.


The realizations just pointed out lead us to reflect that opposition itself must not be taken as always clear-cut, except in the most abstract thought. Between a term and its opposite there is a continuous spectrum of possible contrasts. There is in actuality a degree of opposition, a degree that ranges continuously from total opposition to no opposition at all.

Let us do this in two alternative ways. The predicate Opp is symmetric in the sense that if t1 Opp_a t2, then t2 Opp_b t1 as well, though the opposition of t2 to t1 may have a different degree b. Both approaches open the door to a fuzzy logic of opposition, which we shall not pursue here. In type 1, specific systems usually deal with a finite number of constant predicates and an infinite number of terms. In classical set theory, for example, we have only one two-place constant predicate, membership; in fuzzy set theory, we have as many membership predicates as membership has degrees.

In a parallel way, in type 1, if we allow for degrees of opposition, then for each term t we have as many opposite terms as there are degrees of opposition, in principle infinitely many. The same applies to predicates. We subconsciously tend to think that there is a supremacy of unity over multiplicity in thought, that unity is generally a desirable objective to reach. But complex numbers, for instance, do not form a unity; each is a single entity made up of an irreducible pair, a multiple object - a many taken as a many, not as a one. So circles of any kind are also irreducible multiplicities, including the conjunction of antinomic mental processes such as identifying-and-distinguishing, etc.

We know already that it makes a substantial difference whether we gather distinguishable items first and then choose from them, or identify items first and then gather them into a many - we end up with opposite objects of thought. We usually pay no attention to the order in which we carry out these movements of the mind, but they do branch creatively in different, often opposite directions, toward opposite ends.