Nevertheless, I would like to think that an account of computation and implementation in the spirit of the one I give in the article can play a number of useful explanatory roles. In the article, I first say what the implementation of a computation comes to in terms of causal organization: roughly, a physical system performs a computation when the formal structure of the computation is mirrored in the causal structure of the system. Abstract objects play a central role in the mathematical theory of computation, while concrete processes play a central role in computation "in nature". The really interesting question, and the focus of my article, is the bridge between the two.
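To make the gloss concrete, here is a minimal sketch of the mirroring condition (my own toy illustration, not the formal apparatus of the target article): under a mapping from physical states to formal states, the system's causal transitions must commute with the formal transition function.

```python
# Toy example: a two-state formal automaton and a four-state physical
# system. The system implements the automaton iff the state mapping
# commutes with the two transition functions.

formal_step = {"A": "B", "B": "A"}           # formal transition function
physical_step = {1: 3, 2: 4, 3: 1, 4: 2}     # causal dynamics of the system
mapping = {1: "A", 2: "A", 3: "B", 4: "B"}   # physical state -> formal state

implements = all(mapping[physical_step[p]] == formal_step[mapping[p]]
                 for p in physical_step)
print(implements)  # True: causal structure mirrors formal structure
```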
In any case, for the reasons given in the target article, I do not think the objection is much of a worry. As in the target article, what matters in practice is not so much whether a system computes as what it computes and what this computational structure explains. It is not clear that anything like this is part of standard practice in ascribing computations to systems in nature.
I think it is also clear that Egan's functional conception does not capture standard practice in computer science: here, input-output dependencies play a useful role in characterizing what we want systems to do, but in software design and implementation, algorithms are where much of the central action lies.
Are Combinatorial-State Automata an Adequate Formalism?
Be that as it may, functional programs are certainly important abstract objects in computer science, and any complete account of computation should accommodate their role. But it is not clear to me why this is a problem where implementation is concerned. So it is not clear that the CSA translation account gives incorrect results as an account of what it means to implement a Turing machine (TM).
So we have not yet uncovered any problems for the CSA translation account of TM implementation. In any case, CSAs serve mainly as a useful illustration of a computational model with an accompanying account of implementation. If the translation account fails, one can instead give a direct account of TM implementation. Such an account will not explicitly mention CSAs, but it will nevertheless be a causal account, and one to which the main points of my discussion will apply.
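For illustration, the translation idea can be sketched as follows (a toy rendering of my own, assuming a fixed finite tape): a TM configuration becomes a CSA state vector with one component per tape square, each component recording the square's symbol together with the head state if the head is located there.

```python
# Toy translation of a Turing machine configuration into a CSA-style
# state vector: one substate per tape square (fixed finite tape assumed).

def tm_to_csa(head_state, head_pos, tape):
    """Each component is (symbol, head state here or None)."""
    return tuple((symbol, head_state if i == head_pos else None)
                 for i, symbol in enumerate(tape))

# Internal state "q1", head on square 2, tape contents "0110":
print(tm_to_csa("q1", 2, "0110"))
# (('0', None), ('1', None), ('1', 'q1'), ('0', None))
```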
In general, my account of CSA implementation can be extended to other models of computation. However, if there are more serious problems, we can simply modify the CSA-style implementation account to handle ACT-R implementation directly. And crucially, the theory of abstract state machines (ASMs) includes an account of when a low-level ASM implements a high-level ASM.
We can then give an account of the conditions under which physical systems implement low-level models, much along the lines given in the target article. Combined with the ASM framework's account of when low-level models implement high-level computations, this yields a general account of when computations of all kinds are implemented. It helps us answer difficult questions (raised in the previous section) about how high-level computational structures, such as functional programs, relate to the account of computation and implementation given in the target article.
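To convey the two-level picture, here is a minimal refinement sketch (my own toy example, not drawn from the ASM literature): a high-level machine adds two numbers in one step, a low-level machine realizes that step by repeated increments, and implementation is checked at "commit" states.

```python
# Toy refinement: one high-level step is realized by a run of low-level
# steps that ends in the corresponding commit state.

def high_step(s):
    a, b = s
    return (a + b, 0)                        # high-level: add in one step

def low_step(s):
    a, b = s
    return (a + 1, b - 1) if b > 0 else s    # low-level: move one unit per step

def committed(s):
    return s[1] == 0                         # commit point: work finished

def implements(s, max_steps=10_000):
    """The first commit state reached by the low-level run from s must be
    the state that the high-level machine reaches in a single step."""
    target, cur = high_step(s), s
    for _ in range(max_steps):
        if committed(cur):
            return cur == target
        cur = low_step(cur)
    return False

print(implements((3, 4)))  # True: (3, 4) -> (4, 3) -> ... -> (7, 0)
```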
My account of implementation
I think the ASM framework is just what is needed to make the framework of the target article more general and powerful. First, I offer an approximate gloss: a physical system performs a computation when the causal structure of the system mirrors the formal structure of the computation. Scheutz points out that the role of temporal individuation can be handled atemporally, simply by adding a clock to the system to play the role of time.
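The clock move can be pictured as follows (a toy sketch of my own): time is folded into the state space by pairing each automaton state with a clock component, so that temporal structure is represented atemporally.

```python
# Toy "clocked" automaton: each state is paired with a clock component
# that advances on every transition, so time is represented in the state
# space rather than presupposed.

fsa_step = {"A": "B", "B": "A"}

def clocked_step(state):
    s, t = state
    return (fsa_step[s], t + 1)   # the clock plays the role of time

print(clocked_step(("A", 0)))     # ('B', 1)
```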
Scheutz can respond in turn by adding n clocks to the system, one for each physical component. Presumably he would handle these the same way I handle I/O FSAs: by adding an input memory to each of the n components (where all n are required to have the same state) and then continuing as above. This definition does not require that physical substates can be arbitrarily recombined: if some recombinations of physical substates are physically impossible, they are simply irrelevant to evaluating the truth of the counterfactuals.
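As a toy illustration of that last point (invented states, my own sketch): the transition conditionals need only be evaluated over the physically possible joint substates, so impossible recombinations drop out of the check.

```python
from itertools import product

# Two components; joint states are pairs of substates. Suppose combining
# r1 with d1 is physically impossible in this system.
substates = {"ram": ["r0", "r1"], "disk": ["d0", "d1"]}
physically_possible = {("r0", "d0"), ("r0", "d1"), ("r1", "d0")}

# Causal dynamics and state mapping, defined on the possible states only.
dynamics = {("r0", "d0"): ("r0", "d1"),
            ("r0", "d1"): ("r1", "d0"),
            ("r1", "d0"): ("r0", "d0")}
mapping = {("r0", "d0"): "A", ("r0", "d1"): "B", ("r1", "d0"): "C"}
formal_step = {"A": "B", "B": "C", "C": "A"}

# The impossible combination (r1, d1) never enters the evaluation.
ok = all(mapping[dynamics[s]] == formal_step[mapping[s]]
         for s in product(*substates.values())
         if s in physically_possible)
print(ok)  # True
```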
Perhaps we can implement a Turing machine on a PC in such a way that the contents of a tape square can be stored either in RAM or on the hard drive. However, it will certainly not be physically impossible to combine the RAM implementation of the first square with the hard-disk implementation of the second square. Instead, the construction simply assumes that the output is produced by some part of the system.
In effect, the output will be produced from a state S of the original system, independently of the states of the input dial and the memory. Moreover, the construction does not satisfy the relevant counterfactual conditionals, even setting aside the causal requirement. There will be possible states of the system that recombine the state S of the original system with arbitrary states of the input dial and memory.
All these states will produce the same output, when in fact many states of the dial and input memory would have to produce different outputs in order to implement the FSA. One can then map disjunctions of states of the dial and input memory to formal outputs in the familiar way.
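The familiar disjunctive move can be pictured as follows (a toy sketch with invented state labels): each formal output is associated with a disjunction, in effect a set, of physical states, any one of which counts as producing that output.

```python
# Toy disjunctive output mapping: each formal output corresponds to a
# set (disjunction) of physical states of the dial-plus-memory system.

output_mapping = {
    "O1": {"p1", "p4", "p7"},   # any of these physical states yields O1
    "O2": {"p2", "p5"},
    "O3": {"p3", "p6", "p8"},
}

def formal_output(physical_state):
    """Return the formal output whose disjunction contains the state."""
    for out, states in output_mapping.items():
        if physical_state in states:
            return out
    return None

print(formal_output("p5"))  # O2
```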
Computational sufficiency
Because of this, observers cannot simply see whether it is present or absent, so it is less obvious to us that cognition is absent in simulated systems than that flight is absent. I am not saying it is logically impossible, but I think it is much less likely than the alternative. One objection is about competing control of the same outputs, but in the case I have in mind the systems will have quite different outputs, so there is no problem here.
Another concerns supervenience, but it is a well-known point from the metaphysics literature that two distinct systems (the statue and the lump, for example) can supervene on the same piece of matter. Shagrir's third objection is that theorists will deny that there are two minds here, but I think that in cases like the one I have in mind they will have no such problem. So it is far from obvious that there will be such cases, in which the CSAs support two different minds.
On the objection from explanation, I agree that it is far from obvious that theorists would allow that there are two minds here. I think it is far from obvious that there are such computations, and if there are, it is far from clear why, if they would support a conscious mind on their own, they would not also support the same conscious mind in the context of a whole brain. It is also worth noting that cases of this kind pose the same kind of obstacle to the (local) microphysical supervenience thesis, which says that physically identical systems have the same mental states, as they do to the computational sufficiency thesis.
We might say that a mind-supporting computation c is derivatively implemented when it is implemented in virtue of another mind-supporting computation c′ being implemented, where c′ implements c. In the case above, the prefrontal cortex implements the relevant computation derivatively in the context of a whole brain, but implements it non-derivatively when taken in isolation. Before going further, it is worth addressing a related concern that is often raised about my account of implementation. Still, the objections continue to arise, so it is worth taking a moment to formulate a reply.
Computational Explanation
Second, Egan and Rescorla focus on forms of explanation in cognitive science that my model of computation does not capture well. Egan focuses on function-theoretic explanation: for example, Marr's explanation of edge detection in terms of computing the Laplacian of the Gaussian of the retinal array.
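For concreteness, the operation Marr appeals to can be sketched in a few lines (a generic illustration using SciPy, not anything specific to Marr's own implementation): smooth the image with a Gaussian, apply the Laplacian, and read edges off the zero-crossings of the response.

```python
# Laplacian-of-Gaussian sketch: a vertical luminance step produces a
# positive and a negative response lobe; the zero-crossing between them
# marks the edge. Generic illustration only.

import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[:, 32:] = 1.0                                # vertical luminance step

log = ndimage.gaussian_laplace(image, sigma=2.0)   # Laplacian of Gaussian

row = log[32]
print(int(np.argmax(row)), int(np.argmin(row)))    # lobes flank columns 31/32
```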
Rescorla focuses on representational explanation: for example, Bayesian models in perceptual psychology, cast in terms of confirming hypotheses about the scene before one. In any case, however we label them, I take representational and function-theoretic explanation, like social explanation, purposive explanation and other forms of explanation, to be higher-level forms of explanation that are, in my view, quite compatible with computational explanation. The claim is simply that insofar as neural properties are explanatorily relevant, it is in virtue of the role they play in determining a system's causal organization.
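The flavor of the Bayesian models Rescorla has in mind can be conveyed with a toy posterior computation (the numbers are invented, purely for illustration): hypotheses about the scene are confirmed by sensory evidence via Bayes' rule.

```python
# Toy Bayesian perceptual inference: update beliefs over two scene
# hypotheses given an observed shading pattern. Numbers are invented.

prior = {"convex": 0.7, "concave": 0.3}        # e.g. a light-from-above bias
likelihood = {"convex": 0.6, "concave": 0.9}   # P(observation | hypothesis)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # {'convex': 0.609..., 'concave': 0.391...}
```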
But this does not really make clear how important these forms of explanation are compared to representational explanation and the like. There are many forms of explanation in cognitive science, and computational explanation is, in my view, no more important than, for example, neurobiological or representational explanation. He suggests that we have a clear need for neural and representational explanations, but that the role of the intermediate level is unclear.
I also note that when we combine the framework of the target article with the framework of abstract state machines, as suggested earlier, we will be able to use it to capture various more abstract (and therefore more general) levels of explanation. This may at least do something to make sense of the talk of a "general framework" in the target article. Computational explanation (as I interpret it) is just one form of explanation in cognitive science among many, distinctive especially for its role in mechanistic explanation and for its generality.