Supposedly highly intelligent “do-gooders” keep telling us that Artificial Intelligence (AI) is going to replace the human brain. It ain’t going to happen. I have written you before that George Gilder has by far the most intelligent brain alive today when it comes to technology. He told us what the internet was going to be and do before we even knew what it was… and he told us in advance about all the other great tech advances before they ever happened. And he personally knows and keeps in contact with all the other great tech minds. In this note that he sent me yesterday, he shows that while AI is useful, it cannot replace the human brain. Do read to his conclusion at the end.
The Genesis of Synaptics and the Future of Computing
Dear Ronald, January 12, 2022
The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.
– Albert Einstein
During the holidays we had the opportunity to sit down with Federico Faggin’s recently published book, Silicon. Long-time readers of this newsletter are undoubtedly familiar with Faggin’s name and his pioneering work in semiconductors.
Federico has had many impressive technological accomplishments during his career. At the top of the list are leading the team at Intel (INTC) that developed the first microprocessor and collaborating with Caltech physicist Carver Mead on neuromorphic chips at Synaptics (SYNA). These accomplishments were highlighted in my books, Microcosm and The Silicon Eye.
We published a Monthly Report on Synaptics last summer and added the stock to the Paradigm Portfolio. Faggin is no longer involved with the company, but his innovative spirit is alive and well there. The company is prospering under the leadership of CEO and tech veteran Michael Hurlston.
As Faggin recounts in his book, the technological vision he and Mead shared at Synaptics back in the mid-1980s had evolved to include general-purpose building blocks for making sensory systems based on neuromorphic integrated circuits (ICs). Bringing the vision to life entailed defining a family of chips for resolving generic pattern recognition problems based on learning rather than programming.
The key, said Faggin, was to address this class of problems with a small family of mostly analog chips. The general idea was to combine various numbers of four or five different types of chips to build a variety of pattern recognizers, just as is done today with memory chips for which the amount and organization depend on the complexity of the program and type of data needed.
Faggin points out that the operation of the entire system would be orchestrated by a general-purpose microprocessor or microcontroller. This goal, however, was easier said than done. They needed an overall architecture for neural networks that did not yet exist.
To develop the technology, the team at Synaptics first concentrated on solving several different pattern recognition problems for potential customers, while in parallel developing the basic VLSI technology for neural networks capable of continuous learning, along with imaging technology for vision systems.
One of the early custom projects at Synaptics was the design of a character-recognition chip for Verifone to optically read the magnetic-ink character set at the bottom of bank checks. This would achieve higher accuracy than was possible with magnetic reading, for which those characters had been explicitly designed. This chip was called the I-1000. Getting Verifone, then a world leader in payment systems, on board early was a coup for Synaptics.
The I-1000 was a highly sophisticated chip containing several pieces, including an optical imager, two neural networks, several analog-to-digital converters for the output data, and the control logic to interface with a conventional microcontroller. The combination of the Synaptics I-1000 with a properly programmed microcontroller realized the entire electronics of the check reader.
As Faggin recounts in his book, the development of the I-1000 chip taught the Synaptics team many useful lessons about the design of neural networks. It also led him into the study of the subject of consciousness and prompted him to ask the question of whether it was possible to make a conscious computer.
Faggin surmised that if consciousness arises from the brain, then a computer could be conscious as well, at least in principle. Taken by great curiosity, he began to ponder how he could make a conscious computer.
As he thought about it and reflected deeply on the characteristics of consciousness, he encountered a great obstacle: the complete lack of understanding scientists have about the nature of sensations and feelings. Consciousness, says Faggin matter-of-factly, is a fundamentally unsolved problem.
He observes that a machine can recognize a rose by its “emissions” through emulating natural processes, but it does not feel anything. Humans, by contrast, feel the aroma or scent as well as recognize the rose as the source of that feeling. In other words, whereas the name of the recognized object is just another symbol, the scent of the rose is not a symbol; it is something else. It is, says Faggin, a sentient experience that connects us with our emotions and knowledge.
A computer that identifies a rose by its aroma only mechanically captures the pattern of electrical signals produced by appropriate sensors of the rose’s aromatic molecules (the chemical symbols). The computer is not aware of the scent of the rose, even though it may respond in various ways to the rose symbol.
Thus, says Faggin, the computer blindly responds to a rose the way it has been programmed to, or in the way it has automatically learned. Crucially, the computer can neither be aware nor consciously know anything. Hence, the comprehension brought by consciousness is not accessible to a computer.
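Faggin’s point can be made concrete with a toy sketch. The sensor values, labels, and prototype patterns below are entirely hypothetical (this is not any real electronic-nose system); the sketch only illustrates that a recognizer maps a pattern of sensor readings to a label, and that the label is just another symbol:

```python
# Toy pattern recognizer: maps a pattern of (hypothetical) aromatic-sensor
# readings to the nearest learned prototype. Its output is only a symbol
# such as "rose" -- nothing in the pipeline experiences a scent.

def classify(reading, prototypes):
    """Return the label whose prototype pattern is closest to the reading."""
    def distance(a, b):
        # Squared Euclidean distance between two sensor patterns.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: distance(reading, prototypes[label]))

# Hypothetical prototype patterns, stand-ins for what a system would learn
# from training data rather than explicit programming.
PROTOTYPES = {
    "rose":  [0.9, 0.1, 0.4],
    "lemon": [0.2, 0.8, 0.3],
}

print(classify([0.85, 0.15, 0.5], PROTOTYPES))  # prints "rose"
```

However the recognizer is built, whether as a lookup like this or as a trained neural network, the result is the same kind of thing: a symbol selected by a mechanical rule, which is exactly the distinction Faggin draws.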
Herein, notes Faggin, lies the fundamental limitation of artificial intelligence (AI).
Faggin’s insights on the limitation of AI are kindred with those I expressed in my book, Gaming AI. As I noted, the best, most complex, and most subtle analog computer remains the human brain. AI poses no threat to it whatsoever.
I encourage you to pick up a copy of Faggin’s new book… and Gaming AI, too, if you haven’t already.