Thinking Machines – Science Fiction or Science Fact?
by Marjorie F. Baldwin
When Ben Wallace invited me to do one of these guest blogs, I choked. I mean, I wanted the opportunity, but what the heck was I going to write about exactly? Then I recalled the words of the venerable John W. Campbell: Science fiction should be equal parts science and fiction—but do notice which word comes first! That’s when I knew. I’d just take one of the areas of science I used in my technothriller books (The Phoenician Series) and talk about that. Funny thing is, Ben sent me an email just minutes after this suggesting the same exact thing. Great minds think alike!
One area of science I’ve speculated on throughout all of the books in the series is “memory mapping.” The opening scene of the first chapter of the first book starts with a reference to someone’s mind having been “Adjusted” (and I can tell you, it just gets more involved from there!). Not only do I explore mapping a human’s memory, and Adjusting it, but I delve into the age-old scifi dream of taking a human mind (every thought, every memory, every little detail about the person’s psyche that makes them unique) and putting that “mind” into another body, presumably a better body. I don’t use robot bodies, though, as they aren’t really better than a human, are they? Phoenician bodies, now, they’re better, but you’ll have to read the books to find out how and why.
I had a lot of areas to look at: the question of identity, of how thought works, of how memory is stored, and more. I chose to focus on memory mapping back in the 1970s when I was a teenager and first read about the work Alan Turing had done. I was a big fan of Asimov’s robot books, so I turned to non-fiction to tell me about “Thinking Machines,” at least inasmuch as a kid in the 1970s could learn about such things. Turing was among the first, I think, to suggest that human thought, essentially, is just a series of electrical impulses, so if we could “map” those impulses and “imprint” them onto a computer circuit (this was around 1950, okay? that was very scifi of him to suggest), we could reproduce human thought on a computer. The abstract computing devices from his famous 1936 paper are still called “Turing machines” today, though those are mathematical models of computation rather than physical supercomputers. He also proposed a test, the “imitation game,” for judging whether a machine’s conversation is indistinguishable from a human’s. The Turing test remains the standard shorthand for machine intelligence today.
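Since Turing machines came up, here’s a toy simulator, purely my own sketch for illustration (the function name, rule table, and the choice of a binary-increment machine are all invented, not anything from Turing’s paper). It shows the whole formal idea in a dozen lines: a tape, a read/write head, and a table of (state, symbol) transitions.

```python
# A toy Turing machine that increments a binary number on its tape.
# Rules map (state, symbol) -> (new_symbol, head_move, new_state).

def run_turing_machine(tape, rules, state="start", pos=0, blank="_"):
    tape = list(tape)
    while state != "halt":
        # Extend the tape with blanks if the head walks off either end.
        if pos < 0:
            tape.insert(0, blank)
            pos = 0
        if pos >= len(tape):
            tape.append(blank)
        symbol = tape[pos]
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Scan right to the end of the number, then add 1 with a leftward carry.
increment_rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # overflow: write a new leading 1
}

print(run_turing_machine("1011", increment_rules))  # 1011 + 1 = 1100
```

Nothing here “thinks,” of course, which is rather the point: the formal machine is breathtakingly simple, and everything a supercomputer does reduces to it.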
I actually interviewed with a company called Thinking Machines in the Kendall Square part of Cambridge, Massachusetts, back in the 1980s. I was still a secretary, hadn’t yet gone to college, but they headhunted me. To this day, I have no clue how they found me. I got a call at my day job at Teradyne, an Automatic Test Equipment (ATE) company dominating the semiconductor test industry. My boss was not amused when I told him (on three different occasions!) that I was interviewing at Thinking Machines. I’m nothing if not honest.
I was a little disappointed to discover their MIT-intellectual snobbery prevented them from hiring me. My boss was delighted, though I think that’s when he first realized I’d been writing these books on the Teradyne computers during my lunch hour. Hey, it was the 1980s. The “fancy” 8088 Intel processor machines were expensive! Thousands of dollars, at least (LOL). By the time I went to college ten years later to study engineering, Thinking Machines was out of business, but three of my series books were completed, and dozens of engineers at Teradyne who’d been reading them were sad to see me go. Karma’s a bitch, huh?
Thinking Machines, the concept Turing wrote about back in the 1950s, have never been fully realized in our daily lives. The closest we’ve come is to build parallel processors: multiple processor chips (like multiple desktop PCs) on one printed circuit board, daisy-chained together so the machine can do multiple tasks at once, in parallel.
But even when those processors pass data back and forth, they aren’t truly “thinking” the way we envision thought. We tend to imagine thought as taking multiple sensory inputs and coalescing them into one cohesive concept. A thought. A series of thoughts, actually. For instance, let’s say we enter a room and look for a place to sit down. We notice a vacant chair on the other side of the room. What’s happened? A lot! Automatically, our brains begin processing data the moment we enter the room. We incorporate the facts that there is a room, that something in it is called a “chair,” that the chair looks solid, that it is X number of feet away, that it is not currently in use, and that it is close or far (distance being a relative concept, we humans naturally relate it in our minds and notice whether another chair is closer, making it “better,” or farther, making it “worse”). The list goes on. We automatically fold huge amounts of data into a single, cohesive thought: There’s an empty chair across the room I could sit down in. And we do it in a split second.
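The chair example can be caricatured in code. This is purely my own toy sketch, nothing from the books: every feature name, the scoring rule, and the room data are invented for illustration. The point is just the shape of the computation, many perceptual inputs fused into one decision.

```python
# Toy "find a seat" decision: fuse several perceptual features per chair
# into one score, then pick the best candidate.

def score_chair(chair):
    # Unusable chairs (occupied, or not obviously solid) are ruled out.
    if not chair["vacant"] or not chair["looks_solid"]:
        return float("-inf")
    # Among usable chairs, closer is better, so penalize distance.
    return -chair["distance_ft"]

def pick_seat(chairs):
    best = max(chairs, key=score_chair)
    return best if score_chair(best) > float("-inf") else None

room = [
    {"name": "armchair", "vacant": True,  "looks_solid": True, "distance_ft": 18},
    {"name": "stool",    "vacant": False, "looks_solid": True, "distance_ft": 4},
    {"name": "folding",  "vacant": True,  "looks_solid": True, "distance_ft": 9},
]
print(pick_seat(room)["name"])  # folding (the closest usable chair)
```

The brain does this with millions of impulses at once and no explicit rule table, which is exactly why mapping it is so hard.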
How? Electrical impulses in our brains fly across cells in a regular but complex manner. Regular enough that someone like Turing (or myself) insisted the pattern could be mapped. Complex enough that no one’s been able to do it. Yet. When we do, I’m certain in my gut, someone’s going to figure out how to create a map of your memory, or remap it, as I call it, and then some nefarious evil-doer will find a way to Adjust your thinking. Adjustments are just modified electrical pathways, but that’s not a very nice thing to do to a person, is it? It’s just so easy!
Our brains leave little chemical trails whenever we have a thought, much as ants do in the physical world to find their way around. We call these chemical trails memory tracks. When we dredge up a memory, we retrace those tracks, and the more often we trace and retrace them, the more solidly the memory is emplaced. My proposition, with Adjustments, is that it would be possible to fake a memory track: force the chemical reaction that would form the trail, then nudge the brain’s natural pattern onto the false trail so the pathway gets reinforced. It’s not that far-fetched. Or that hard, except that we have no clue what one trail does versus another. In reality, we have no clue what any of the brain’s pathways do! We like to claim we know this piece of grey matter does this and that one does something else, but we don’t actually know for sure.
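Here’s the reinforcement idea as a toy model. To be loud about it: this is my own illustrative sketch, not a neuroscience model and not anything from the books; the class, method names, and numbers are all invented. Each recall strengthens a trail, and an “Adjustment” simply plants a false trail strong enough to win retrieval.

```python
# Toy model of memory tracks as weighted trails: recalling a memory
# reinforces its trail, and the strongest trail for a cue wins retrieval.

from collections import defaultdict

class MemoryTracks:
    def __init__(self):
        self.strength = defaultdict(float)  # (cue, memory) -> trail strength

    def experience(self, cue, memory):
        self.strength[(cue, memory)] += 1.0

    def recall(self, cue):
        # Retrieve the strongest trail for this cue, reinforcing it.
        trails = {m: s for (c, m), s in self.strength.items() if c == cue}
        if not trails:
            return None
        memory = max(trails, key=trails.get)
        self.strength[(cue, memory)] += 1.0  # retracing strengthens the track
        return memory

    def adjust(self, cue, false_memory, boost):
        # Nefariously plant a fake trail stronger than the real one.
        self.strength[(cue, false_memory)] += boost

mind = MemoryTracks()
for _ in range(3):
    mind.experience("that night", "quiet dinner at home")
print(mind.recall("that night"))  # quiet dinner at home
mind.adjust("that night", "walk on the beach", boost=10.0)
print(mind.recall("that night"))  # walk on the beach
```

Note the nasty feedback loop: once the fake trail wins a single recall, the recall itself reinforces it, which is exactly the mechanism the Adjustment exploits.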
Robots are way kewel to read about in scifi, but they aren’t nearly as glamorous (yet) in real life. In reality, we still have to turn to studying humans to figure out how “thought” works, and as it turns out, back in the 1950s a researcher named José Delgado did some very exciting experiments. Specifically, he tried to “map” brain activity with implanted electrodes (horrifyingly, he experimented on both humans and apes; hey, it was the 50s). Delgado “proved” (there’s still debate today about his proof, but implanted brain devices descended from his line of work are, in fact, used by epilepsy patients to help control seizures, so he wasn’t a total quack) that human thought really is just a series of impulses (many millions of parallel series, but impulses nonetheless). He didn’t successfully map anything, but the artificial neural networks computer scientists later explored (and have since miniaturized to astonishing scales) grew out of the same basic picture of the brain: electrical impulses whose behavior changes when the conscious being is subjected to known stimuli. Delgado managed to repeat his experimental results, so as far as I’m concerned, he proved his theory. He was just about 100 years before his time, maybe 200 years!
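That “thought is just impulses” picture is literally how the very first artificial neurons worked. The McCulloch-Pitts model from 1943 treats a neuron as an all-or-nothing device: it fires only when the weighted sum of its inputs crosses a threshold. A minimal sketch (the weights and threshold below are my own choices, picked to make the neuron behave as an AND gate):

```python
# A single McCulloch-Pitts-style artificial neuron: it fires (outputs 1)
# when the weighted sum of its inputs reaches a threshold, mimicking an
# all-or-nothing electrical impulse.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2, the neuron fires only when
# both input impulses arrive: a two-input AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1, 1], threshold=2))
```

Wire millions of these together with adjustable weights and you have a neural network; the gap between that and an actual map of a living brain is, of course, the whole problem.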
In Conditioned Response, I “create” a Thinking Machine. Predictably, I called it the “Conditioned Human Response” machine, Series E, or “CHR-E.” Or for us humans, Charlie. He’s going to “install” the memory map of selected data from the machine into the organic grey matter and then…think. Like Delgado suggested.
To learn more about how Charlie, or any other thinking machine, works in my far-future world, be sure to check out my chapters as well as some of the links I’ve provided below for further reading. Big thanks to Ben for letting me blather on and promote my scifi technothriller books in the process.
Marjorie F. Baldwin (or Friday) is the pen name under which a series of scifi technothriller books will be published, set in the far future and involving alien/human cultures, genetic engineering at multiple levels, artificial lifeforms of several kinds, and memory manipulation at its best and worst. The books are scheduled to begin release in December 2011 via Smashwords. You can read sample chapters by visiting the HarperCollins writers’ community web site, Authonomy, and get news about the series on The Phoenician Series Blog. Communicate directly with Friday by “Liking” The Phoenician Series on Facebook or following her on Twitter (@phoenicianbooks).
Suggested Further Reading
Inc. Magazine explains why Thinking Machines (the company) failed, despite the explosion of the market they created (and despite Marvin Minsky’s personal involvement!). The article also makes an eerie connection to genetic engineering (eerie in that I had the same exact thought before I even knew about the connection).
Scientific American has a fabulous article from September 2005 delving into Delgado’s experiments of the 1950s, around the era when lobotomies were “standard” treatment for those few horrifying years of mental health care. They even have pictures (only of the great apes he experimented on; none of his human subjects are pictured *whew*).
Scientific American also moderated a discussion (vetting respondents and publishing only replies from certified and credentialed researchers) on the question of neural network computers becoming “smart” – or as I’ve been putting it, evolving into “Thinking Machines.”
You’ll be correct if you suspect that some of the SciAm readership wondered if Skynet could or would become real one day. Little did they know it already exists! (See this Wiki article to read about the UK military satellite constellation amusingly dubbed “Skynet.” And yes, the Brits at Paradigm knew what they were doing when they named it. The wankers.)