Appendix A Composition and Decomposition of Patterns

It is difficult to argue that a theory is not a derivational theory in the strict sense, because the role(s) that derivation plays in a derivational theory must have counterparts in any theory whose proponents claim it to be non-derivational. The point of the demonstration is to show that it is reasonable to view syntax as a system of concurrent events, defined by reference to precedence alone. To this end, we define an exhaustive (discontinuity-resistant) segmentation of a string, based on its weak interpretation.

The exhaustive segmentation of a string S is a partially ordered set (V, <) whose top element is the weak interpretation of S. The minimal syntax for string S is the weakest substructure in the exhaustive segmentation of S, given some benchmark for determining which substructure is the weakest. For example, Bill is the anchor of the pattern Bill V (O) (≠ S V Bill, which encodes Bill as an object).
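To make the construction concrete, here is a minimal sketch in Python (the names and the tokenization are mine, not the author's): it enumerates all nonempty, possibly discontinuous subsequences of a tokenized string as the carrier set V, ordered by subsequence containment, with the whole string as the top element.

```python
from itertools import combinations

def exhaustive_segmentation(units):
    """Enumerate all nonempty subsequences of a tokenized string S.

    Because subsequences (not just contiguous substrings) are included,
    the segmentation is discontinuity-resistant. The result is the
    carrier set V of the poset (V, <); the ordering is containment,
    and the top element is the whole string (its weak interpretation).
    """
    n = len(units)
    V = set()
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            V.add(tuple(units[i] for i in idx))
    return V

def weaker_than(a, b):
    """a < b iff a is a proper (possibly discontinuous) subsequence of b."""
    it = iter(b)
    return a != b and all(u in it for u in a)

S = ["Bill", "underwent", "an", "operation"]
V = exhaustive_segmentation(S)
top = tuple(S)
# Every other member of V is strictly weaker than the top element:
assert all(weaker_than(v, top) for v in V if v != top)
```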

The notation is designed to make it easy to see how subpatterns are assembled into a pattern, on the one hand, and how a pattern is decomposed into subpatterns, on the other. It is possible (and perhaps quite reasonable) to reinterpret the concatenation operator in terms of precedence, by identifying “+” with “<”. To put this generally, it is useful to appeal to the following notation.
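One way to render this identification concretely (my own rendering; the author's original notation is not reproduced in this excerpt) is to rewrite every concatenation as a chain of precedence statements:

```latex
% Concatenation reinterpreted as precedence (illustrative only):
u_1 + u_2 + \cdots + u_n \;\equiv\; u_1 < u_2 < \cdots < u_n
% e.g., Bill + V + O  becomes  Bill < V < O
```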

It seems clear that such a task could not be efficiently computed without parallel computing devices such as neural networks.

From itemic to schematic encoding

Under this interpretation, the task in question is a function that "reduces" the binary relations in abstract matrices like (25) by checking all the binary relations one by one.
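Here is a serial sketch of such a "reducing" function, under my own assumptions about how the abstract matrix and the constraints are represented (the actual format of (25) is not reproduced in this excerpt):

```python
def reduce_relations(candidates, constraints):
    """Reduce a set of candidate binary relations one by one.

    candidates:  dict mapping (i, j) unit pairs to a set of possible
                 relation labels (an abstract matrix like (25)).
    constraints: predicates contributed by subpatterns; each maps a
                 triple (i, label, j) to True/False.

    A label survives only if every constraint tolerates it. This is a
    serial rendering of what a parallel (relaxation) device would do
    in a single settling process.
    """
    reduced = {}
    for (i, j), labels in candidates.items():
        kept = {r for r in labels if all(c(i, r, j) for c in constraints)}
        if kept:
            reduced[(i, j)] = kept
    return reduced
```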

More on Pattern Composition and Decomposition

  • Details of co-occurrence matrix
  • Pattern matching as “relaxation”
  • Syntax encoded in vertical and horizontal modes
  • Words as schemas encoding precedence-sensitive dependency
  • Diagramming co-occurrence matrix
  • How words glue with each other

The co-occurrence matrix is so called because it always takes the form of an n × n matrix, given that the basic patterns consist of n units. In a co-occurrence matrix, the i-th row encodes the i-th unit of the "base" form. I interpret pattern matching as relaxation in the sense of Arbib (1989), consistent with the interpretation that subpatterns express multiple constraints that need to be "relaxed."

In horizontal mode, each word schema is conceived of as a “declarative” statement of co-occurrence constraints (hence the name co-occurrence matrix). Technically, co-occurrence matrices such as (19) are optimizations of more abstract n × n matrices, obtained mechanically by diagonalization. Such a matrix encodes a set of pairwise relations r_{i,j} from the i-th unit to the j-th unit with relational label r.

The left-to-right arrow (⇒) encodes a demanding relation in which the i-th entity requires the j-th to be there under the relation label R. Connections in the upper half of the diagram correspond to r_{i,j} (i < j) in the upper-right triangle of M; connections in the lower half correspond to r_{i,j} (i > j) in the lower-left triangle of M.
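The following sketch shows one way to represent such a matrix and to read off its two triangles. The unit labels and relation labels are hypothetical stand-ins of my own, since the actual matrix (19) is not reproduced in this excerpt.

```python
def triangles(M):
    """Split an n x n co-occurrence matrix into its two triangles.

    M[i][j] holds the relational label r_{i,j} from the i-th unit to
    the j-th (None where no relation holds). The upper-right triangle
    (i < j) collects the left-to-right "demanding" relations (the
    arrows written as =>); the lower-left triangle (i > j) collects
    their right-to-left counterparts.
    """
    n = len(M)
    upper = {(i, j): M[i][j] for i in range(n) for j in range(n)
             if i < j and M[i][j] is not None}
    lower = {(i, j): M[i][j] for i in range(n) for j in range(n)
             if i > j and M[i][j] is not None}
    return upper, lower

# Hypothetical labels for the 4 units of "Bill underwent an operation";
# row i encodes the i-th unit of the base form.
units = ["Bill", "underwent", "an", "operation"]
M = [[None, "S",  None, None],   # Bill      => S of underwent
     [None, None, None, "O" ],   # underwent => O is operation
     [None, None, None, "D" ],   # an        => determines operation
     [None, None, None, None]]
upper, lower = triangles(M)
```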

Rather, they are mnemonics of co-occurrence constraints, which vary from word to word. The co-occurrence matrix itself cannot indicate what content they have. It is lexicography that should be responsible for this, and I argue that pattern matching offers a well-articulated candidate for describing the interface between lexicography and syntactic analysis.

The specification of syntactic structure is, in a crucial sense, merely a selection of "nodes" in the network. What is more controversial is that, according to [word grammar], the same is true of our knowledge of words, so the sub-network responsible for words is just part of the overall 'large set of associations'. In this connectionist system, "[ph]rases are," explains Hudson (1998: 2), "implicit in the dependencies but play no role in the grammar." This also provides a reason, at least conceptually, to dispense with an independent component for generating sentence markers such as (15a) = [A [B …].
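Hudson's point that phrases are implicit can be cashed out computationally: a phrase is recoverable on demand as a head plus its transitive dependents, without ever being stored as a node. A minimal sketch, with a hypothetical dependency table of my own:

```python
def phrase_of(head, deps):
    """Recover the phrase headed by `head` from bare dependencies.

    deps maps each head to its immediate dependents. No phrasal node
    is stored anywhere: the phrase is just the transitive closure of
    the dependency relation, computed on demand -- one way to read
    the claim that phrases are implicit in the dependencies and play
    no role in the grammar itself.
    """
    members = {head}
    stack = [head]
    while stack:
        for d in deps.get(stack.pop(), []):
            if d not in members:
                members.add(d)
                stack.append(d)
    return members

deps = {"underwent": ["Bill", "operation"], "operation": ["an"]}
print(phrase_of("operation", deps))  # {'operation', 'an'}
print(phrase_of("underwent", deps))  # the whole clause
```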

[Figures A.3 and A.4]

Role of Overlaps among Subpatterns

What are to be superposed?

Conditions for the emergence of surface patterns

Whatever sense this kind of reinterpretation makes, it seems circular to me. Note that in this and other examples below, pattern 0 need not be defined independently (e.g., by base rules) if the subpatterns are already defined. It should be noted that the minimum subpattern length for overlaps is two, and conversely, the maximum (in this case) is five, since no subpattern can be longer than the whole pattern.

Note also that (33), where the subpatterns do not overlap, and (37), where one and only one subpattern is equal to the whole, are special cases at opposite extremes. It is easy to see that the length (L) of each subpattern and the number (N) of subpatterns needed to compose the whole trade off against each other; in the extreme case of length-two subpatterns with single-unit overlaps, N + L exceeds the length of the whole by one. I argued earlier that a pattern emerges when subpatterns interact, even in the simplest of ways.
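The trade-off between N and L can be made concrete with a toy superposition procedure (my sketch; the units A–E stand in for the actual examples in (33)–(37), which are not reproduced in this excerpt):

```python
def superpose(subpatterns):
    """Compose a whole pattern from overlapping subpatterns.

    Each subpattern is a tuple of units. Starting from the first,
    repeatedly glue on any subpattern whose prefix overlaps the
    current suffix by at least one unit (the minimum overlap that
    makes composition non-trivial). Pattern 0 never has to be stated
    independently: it emerges from the interaction of its parts.
    """
    whole = list(subpatterns[0])
    rest = list(subpatterns[1:])
    while rest:
        for p in rest:
            k = len(p) - 1
            while k > 0 and whole[-k:] != list(p[:k]):
                k -= 1
            if k > 0:
                whole.extend(p[k:])
                rest.remove(p)
                break
        else:
            break  # no remaining subpattern overlaps; stop
    return whole

# Length-2 subpatterns (the minimum) composing a length-5 whole:
print(superpose([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]))
# ['A', 'B', 'C', 'D', 'E']  -- N + L = 4 + 2 = 6, one more than 5
```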

My claim is supported by what the examples in (36) succinctly show about how pattern 0 emerges from the interaction of subpatterns. Some, if not all, of the subpatterns are properly larger than the minimum size (that of lexical units), and some, if not all, are properly smaller than the maximum size (that of the whole pattern).

  • Efficiency-motivated pervasiveness of triplets
  • What is the lexicon, and where is it?
  • Scale Effects in Syntax
    • Recognition of phrasal units
    • Morphological statements scattered in syntax
    • Getting sequences of derivation out of analysis
  • Morphology as Integrated into Syntax (Rather than Separated from it)
    • Functional composition
    • Notion of scattered morphology
    • Pattern matching analysis in contrast with head-movement analysis
  • Subpatterns Are More than Subcategorization Frames
    • Argument 1
    • Need for well-constrained context-sensitivity
    • Argument 2
    • Note on the correlation between precedence and dominance
  • Concluding Remarks

Rather, it is better to say that triples of the form X-R-Y are just a special case of n-ary dependencies, but ones that are "optimal" with respect to neural equilibrium. It is quite interesting to note that the set of subpatterns in (40), which are bare units rather than substrings, is equivalent to the standard conception of the "lexicon", in which lexical units are listed without being linked to each other. Note first that it is better to say that the most basic analysis of Bill underwent an operation is (45) rather than the more complex (19).

The analysis given in (19) would be sufficient for ordinary purposes, but it is clearly too coarse for a morphological analysis. Syntactic analysis is assumed to be scale-sensitive, in the sense that it should be carried out relative to an appropriately determined unit size. The simultaneous event matrix in (52) asserts that an operation is O of undergo(es) if and only if it is O of under- rather than of go.
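A minimal sketch of this scale-sensitivity, with hypothetical segmentations of my own: the same string can be analyzed at word scale or at morph scale, and only at morph scale can the O-relation be stated as holding of the affix under- rather than of the root go.

```python
# Two scales of segmentation for the same string (my segmentation;
# tense is set aside, and the labels are illustrative only).
word_scale  = ["Bill", "underwent", "an operation"]
morph_scale = ["Bill", "under-", "go", "an operation"]

# At morph scale, the O-relation attaches to the affix, not the root,
# as the simultaneous event matrix in (52) requires:
relations = {("under-", "an operation"): "O"}
assert ("go", "an operation") not in relations
```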

It is thus possible to replace the analysis with the following, if there is a real need. Doing so may be conceptually necessary, but it is not at all necessary in practice, or even in fact. It is now clear, I think, that pattern-matching analysis succeeds in integrating morphological issues into syntax, rather than separating them from it.

It is therefore not surprising that affixes such as under- have their own 'argument structure', given that they are relational in nature. In a sense, it is not unreasonable to think of the notion of syntactic pattern as a conceptual revision of the notion of subcategorization frame. For clarity, it is useful to note that patterns such as Bill V (O), S undergo O, and S V an operation are schemas not only in the sense popular in cognitive linguistics, but also in the sense used in Arbib et al. (1987) and Arbib (1989).

But it is clearly possible, as Diehl (1981) rightly claims, to eliminate the base component entirely in favor of the lexical component, insofar as so-called selectional restrictions are to be included in the grammar. Indeed, as Diehl (1981) points out, it is not only possible but also empirically plausible to eliminate the base component in favor of the lexical component, insofar as the grammar must exclude sentences of the following type. Of course, it is possible to hold that the sentences in question are grammatical but simply unacceptable.

If the expressions are ungrammatical and therefore unacceptable, then it is impossible to distinguish expressions like (67), as well as those in (69), from the following ones. The question remains largely open, but I arbitrarily assume the answer is negative, simply because the alternative is unmotivated.

[Figures A.5 and A.6]
