As for Minsky & Papert, I’ll defer to your knowledge, as I’m shaky on the concept of linearity in neural networks.

Brian – I saw Randy’s Science paper and (probably like yourself) was very surprised.

I understand that they made the mistake of only considering single-layer networks when pouncing on perceptrons; if they had considered multi-layer networks, they would have seen that functions like XOR are computable. Linearity and analog systems notwithstanding, I can say with the hindsight of a huge generational gap that it just seems silly to me that they didn’t consider multi-layer networks.
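
To make the XOR point concrete, here is a minimal sketch of my own (hand-set weights, not anything from their book) showing a two-layer network of threshold units computing XOR:

# Two-layer threshold network computing XOR. A single-layer perceptron
# cannot represent XOR (it is not linearly separable), but one hidden
# layer suffices; the weights below are set by hand, not learned.

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND
    return step(h_or - h_and - 0.5)  # fires iff OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # prints 0, 1, 1, 0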

A consistent thread in your comment is that some differences are merely “implementational” or “architectural” details, and thus are actually unimportant or otherwise superficial. IMO, that attitude is scientifically dangerous (how can you know for sure?) and *very* premature (when we have an artificially intelligent digital computer, I’ll be convinced).

Just as the brain has only hardware (as you said, there is no separate software that is the mind running on top of it), the only thing that counts when programming a mind is the software. The high-level software need not be concerned with memory registers when representing knowledge, and “pointers” can be implemented on a system that uses only non-volatile memory.
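
To illustrate that last point with a hypothetical sketch of my own: pointer semantics only require a way to name storage locations, not any particular memory hardware. Here plain list indices play the role of pointers in a linked list:

# 'Pointers' implemented as list indices: a singly linked list with no
# hardware-level pointers at all. Each cell is (value, index of next);
# -1 plays the role of a null pointer.

cells = []

def cons(value, next_index):
    cells.append((value, next_index))
    return len(cells) - 1            # the new cell's 'address'

head = cons(1, -1)                   # build the list 3 -> 2 -> 1
head = cons(2, head)
head = cons(3, head)

i = head                             # follow the 'pointers'
while i != -1:
    value, i = cells[i]
    print(value)                     # prints 3, 2, 1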

I think the only real problem here is whether digital computers can simulate the continuous nature of the brain. If a discrete state machine is not hindered by this, then the brain’s architecture, with all the intricacies of neuronal activity, can be implemented to the fullest extent with no other problem (although we’d of course want to abstract away as much complexity as possible). However, if digital computers cannot simulate continuous structures with sufficient robustness, then I think AI would have to put more research into analog circuits. But I don’t think we have enough evidence yet to make the case either way.
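
For what it’s worth, the standard move here is discretization: continuous dynamics get approximated in small time steps, with the step size trading accuracy against cost. A minimal sketch, using a textbook leaky integrate-and-fire neuron (illustrative parameters, nothing brain-specific):

# Forward-Euler simulation of a continuous membrane equation,
#   dV/dt = (-(V - V_rest) + R*I) / tau,
# on a discrete machine. Shrinking dt tightens the approximation,
# at greater computational cost.

V_rest, V_thresh, V_reset = -65.0, -50.0, -65.0   # mV
tau, R, I = 10.0, 1.0, 20.0                       # ms, MOhm, nA
dt = 0.1                                          # ms per discrete step

V = V_rest
for n in range(1000):                             # simulate 100 ms
    dV = (-(V - V_rest) + R * I) / tau
    V += dV * dt                                  # discrete-time update
    if V >= V_thresh:                             # threshold crossing = spike
        print(f"spike at t = {n * dt:.1f} ms")
        V = V_reset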

So yes, brains and PCs have different architectures, but that doesn’t mean you necessarily cannot implement a mind on a computer.

We can know for sure because all modern-day digital computers are Turing-equivalent, meaning any program implemented on one can be implemented on another and be computationally equivalent, despite differences in system design.
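
That equivalence is usually cashed out by simulation: one machine runs a program that mimics another. As a toy illustration of my own (far short of a real universality proof), here is a Turing machine simulator, plus a sample machine that increments a binary number:

# A toy Turing machine simulator. 'rules' maps (state, symbol) to
# (symbol to write, head move, next state). That a digital computer can
# run this loop is the intuition behind Turing equivalence.

def run(tape, head, state, rules, halt="done"):
    cells = dict(enumerate(tape))     # sparse tape; blank cells read ' '
    while state != halt:
        symbol = cells.get(head, " ")
        cells[head], move, state = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# Sample machine: binary increment, head starting on the last bit.
increment = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1", -1, "done"),    # absorb the carry
    ("carry", " "): ("1", -1, "done"),    # grow the number leftward
}

print(run("1011", head=3, state="carry", rules=increment))  # prints 1100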

Thomas Kurt – I agree with both of your comments, but again my point here was to contrast brains with modern PCs (think of a standard Dell laptop). If I am guilty of a straw man fallacy, then this is doubly true of those who take issue with this post on the grounds that it denies the brain is a “computer” (i.e., an information-processing device). I never argued that the brain is not an information-processing device, but for some reason many think that’s what this post is about (e.g., Tyler, whose points I completely agree with – b/c he’s not criticizing my opinion, but rather one that I’ve never actually seen anyone endorse. But perhaps we could push shreeharsh in that direction?)

Paul: I’ve been following your posts on cognitive/epigenetic/developmental robotics for quite a while, and I am also very interested in the area. That said, while there is substantial reason to believe that embodiment is important, a lot of the arguments used to support this claim are far too philosophical for my taste (and indeed I am currently collecting experimental evidence against one of the strongest claims for the importance of embodiment in developmental psychology). You’ll notice the evidence I present in #10 actually pertains to immersion rather than embodiment (a logical fallacy I permitted myself ;). I believe embodiment is important, but I don’t think it’s actually been proven.

On the other hand, he himself admitted in a recent prosem that the FPGA metaphor for PFC “may or may not” be deeply informative. So I have some difficulty taking that paper’s perspective very far.