Above is this week’s False Flag Weekly News with John Carter of Postcards from Barsoom. It opens with a conversation about the late Ted Kaczynski’s ideas. Below is an abridged, lightly-edited transcript of a longer conversation with America’s #1 Kaczynski expert, David Skrbina, recorded in 2018.
David Skrbina: Was the Unabomber Right All Along?
Kevin Barrett: I'm bringing on a favorite regular guest, Professor David Skrbina of the University of Michigan-Dearborn. He is an expert on the philosophy of technology, specifically the metaphysics of technology, and he's written a book with that title. He is also an expert on panpsychism, the philosophy that says that everything in the universe has a degree of consciousness. And he has edited Technological Slavery, the post-prison book by the Unabomber, Ted Kaczynski, which wasn't the title that Kaczynski wanted, but he was willing to live with it. Anyway, let's get to it. Oh, and I almost forgot about The Jesus Hoax: another fascinating conspiracy theory there. And of all his work, that's the only one that I don't really buy into. But the rest of it, I find hard to refute. So anyway, welcome, David Skrbina. How are you doing?
David Skrbina: Hey, great, thanks, Kevin. I guess liking three out of four is not too bad.
Kevin Barrett: You're doing pretty well. I dismiss an awful lot of what I hear as BS, and that's not what I do to most of what I hear from you. And even The Jesus Hoax is absolutely worth considering and reading and thinking about. So, where should we start? The most recent one that I looked at was Ted Kaczynski's book Technological Slavery. I think you have a point when you say in your introduction that whatever you may think of Kaczynski's deeds — and I'm not particularly a fan of his choice to go around murdering people — his writing is very much worth considering. And nobody in the media and the intelligentsia seems willing to admit that.
David Skrbina: That's very true. He's a very logical, intelligent guy. He's very well read. He knows what he's saying. He makes very clear and lucid points. I think anybody who's read the manifesto has an idea of how logically he can think. He has some really compelling ideas. They're not entirely original ideas, but that's okay. There's a pretty good history of critiques of the technological system. And Kaczynski is one of the latest ones and he's one of the harshest ones. But that's okay. He justifies his position. He takes a very strong line. And truthfully, there's a lot to be said for his argument.
Kevin Barrett: Right. And for people who haven't read it — though I would think a high percentage of my listeners probably have at some point read the so-called Unabomber Manifesto, which (just to refresh people's memories) was published in The Washington Post and New York Times back in the 90s. And it was actually his brother seeing the publication of his manifesto and saying, "Hey, I think that's my brother who wrote that," and then getting in touch with the FBI, that led to his being arrested, charged, and railroaded in a sham trial — that absolutely disgusting trial in which he was not given the chance to defend himself the way he wanted to. But anyway, the manifesto argues that we're not suited to the level of technology we have now, that technology is out of control and making our lives miserable, and that it's only going to get worse. And we need a revolution to break down and knock out this kind of technological enslavement that we face today. And I think it's a strong argument as far as pointing out that technology has created so many problems that one almost would wish that it would largely disappear. But as far as the notion that we can have some kind of revolution against it, that this is a practical thing — and if you then match that up with his bombing people — it starts to seem a little tenuous.
David Skrbina: Right. So you have a very unnatural system, right? It's unnatural for people. It's unnatural for the planet. And that's why it seems to be causing so much trouble. Nobody has any evolutionary history with an advanced technological system of the kind that we deal with on a regular basis. What we have is a kind of... Some people would say that it's like an autonomous system. It's like a self-growing system where it's unfortunately almost beyond our control, where things happen and we don't really plan them to happen that way, but they just happen nonetheless. It's like the thing has its own momentum. It's a very, very strange process, actually.
Kevin Barrett: Yes. And the downside often outweighs the upside. We're told: "Oh, new technology promises wonderful things." And over and over we find that the bad side outweighs the good.
David Skrbina: Exactly right. You maybe solved one problem, but you introduced three new ones. Or you were expecting one thing, but in the long run you got the opposite of what you were expecting. We see that happen a lot with technological solutions. So it's a very complex thing that we cannot predict. We cannot control it. And it's growing rapidly, beyond our ability to dictate what happens. So if you're confronted with that kind of situation, and the system poses a kind of existential risk to people, whether through toxins in the environment or through global warming or through super AI — there are several disaster scenarios that could happen. If you look at those nightmare scenarios, and you're dealing with a system whose process you cannot control, you're left with very few options. And one of those options is to try to bring the system down now, while we still have a kind of modicum of control. And I think that's the basic point of his argument.
Kevin Barrett: Indeed. And my problem with it is: Is he really trying to find a path to get from where we are now to a better place? Where would that better place be? Obviously very few people today would wish to go back to Paleolithic (old Stone Age) technology. There are a few out there, but they're mostly trendy paleos who like to drink their coconut-oil-flavored coffee and things like that. I'm one of them, to a certain extent. But I'm not sure I'm ready for the real Paleolithic. So we probably would want our path to lead us to something a little less extreme than that. What would that path be, exactly?
David Skrbina: And I guess that's kind of the point I've been arguing. I've tried to argue along a similar line, but taking a more rational approach, where you recognize the need to do this and take systematic, rational steps to deconstruct the system, to try to unwind this process in some kind of managed way, where it's not a chaotic thing and where it maybe takes a long time. People talk about a revolution against the system as if it's going to happen overnight or in a week or something. There's no reason that it has to happen that way. It could take decades or a century if we tackle it in a rational way. And if you can do that rationally, then you can sort of dial it back slowly over time, and people will slowly get used to things, or used to going without things. And then you can gauge your progress as you go. It's not like suddenly tomorrow, boom, you're back in the Stone Age. That does not have to be the case.
Kevin Barrett: And actually, one hopes it wouldn't. Because, of course, almost everybody would die if it happened overnight.
David Skrbina: Exactly right. That would be terribly catastrophic. So the rational way is to try to do it slowly over a long period of time, recognizing the need to do it and unwinding or deconstructing the system in a controlled way. That's the smart way to do it. The problem is, there's not a lot of evidence that we're capable of doing the smart thing. We just seem to keep plowing ahead no matter what. I guess my hope is it will take a minor disaster to kind of snap people into awareness that we need to do something, and then maybe we'll muster the strength to start to take action. Because if it's not a minor catastrophe, it's going to be a major catastrophe. And then lots of people are going to die and it's going to be really bad.
Kevin Barrett: Well, I hope PNAC isn't listening to you because they might go out there and stage a false flag minor catastrophe. Maybe there's some ethical argument for that, for all I know. They're followers of Strauss's interpretation of Plato and his noble lie. So the neocons think, "hey, we have a huge problem with Islam" (by "we" they actually mean Israel). "So let's put out this platonic noble lie that everybody has a huge problem with Islam. How will we do that? We'll stage an attack on America."
I would hate to see something like that happen in the technological sphere. But yeah, as far as a graduated approach to (rolling back technology), I think that makes sense. And I think a landmark would be the conscious decision to give up some particular technology. Because that's never happened in modern Western culture. The Chinese supposedly did to some extent consciously eschew the military use of gunpowder for a while. There are other examples like that. They consciously chose not to use gigantic warships to conquer and colonize and so on. But in modern Western history, if it's there, we do it. And that's a terrible precedent. And I wonder if breaking that in some big way could actually start things moving in the right direction.
There are technologies that are obviously candidates for absolute banning, such as germline genetic engineering. If that goes online, we're really screwed. Parents would have to pay a lot of money to some corporation to tweak the genes of their children. If you don't make your kid a high IQ Mozart level genius musician and seven foot tall basketball player and all this stuff, your kid is going to be last in the class. And so the rich are going to make their kids first in the class. Everybody else is going to cough up money to try to keep up. And all these kids are going to grow up saying, what am I, a Frankenstein monster? This is a scenario for absolute hell. So there's one technology that it should be possible to just ban. If I were a lawmaker, I would obviously ban that one. And then maybe we could start thinking about whether other technologies should be banned as well. And once that's thinkable, maybe we could start moving in the direction you're suggesting.
David Skrbina: Yeah, right. I agree that it would be nice to start with a specific, obvious case like that. That's certainly true. And it's not just genetic engineering. There are a lot of aspects of technology where it's a competitive scenario: If you try to dial back in one particular area, the defenders of the system will jump in and say, "Well, somebody else is going to do it. Our competitor or our enemy will take this technology and they will develop it. So we have to do it; we have no choice." And you see this argument over and over again when you look at the technology literature. And it's kind of ironic how people claim that we have some kind of control, but then they'll keep saying, "but we have no choice. We have to advance it. We have no choice." You see this over and over again. So yeah, there are lots and lots of horrendous scenarios out there. Military technologies are terrible. God knows what the military is working on now, killer nano drones or who knows what. And, you know, if that stuff gets out...
Kevin Barrett: What could possibly go wrong with that? Autonomously guided killer nano drones that make the decision about who to kill on their own? What could possibly go wrong?
David Skrbina: I know. You can only imagine what the military is working on. And you probably only hear 1% of what they're actually doing. So, yeah, it's really a tough situation. You'd like to take a few things and try to snap the public into awareness. It almost has to be a global thing. If you want to get rid of the competitive issue, you almost have to appeal to the — I hate to go to the UN, but some kind of international appeal to try to get people collectively to dial back on the most dangerous technologies. And if we can do that, as you say, then maybe we have a pathway to dialing back other things. And then maybe we have a hope of survival.
Kevin Barrett: Sorry, I just had a long pause there because I have this horrible pop-up window that jumps in front of the unmute button. What a moment for that to happen. And that's just one of so many illustrations of the loss of control over our technological gizmos. It seems that with each new generation they come up with more ways to deprive us of agency, even with something as seemingly benign as taking away the manual window controls in a car. Car windows all used to be roll-up: you had a little lever and you did it with your muscle power. Then came the automatic ones. And the next thing you know, the car is automatically locking itself when you get out, perhaps locking in your keys that are sitting there on the front seat. And I find myself going through life screaming, "I didn't want to do that, you stupid machine. Why are you making these decisions for me?" And it's getting worse all the time, even at that level. But of course, the military level that you're mentioning is the worst one. You're right that this is a real challenge to those folks out there who are terrified by the idea of a global government. And I am among them in many respects. If we had a global government, we would have nowhere to run. Right now I can at least go on Iran's Press TV and tell the truth. I used to be able to go on Russia Today and tell the truth, until the global syndicate managed to get to them. And once there's a global government in charge of the media, I probably will have to just shut up and, I don't know, start bombing people like Ted Kaczynski. Just kidding, Mr. NSA spy! But yeah, it is frustrating.
David Skrbina: Yeah, exactly. I mean, that's why the globalists of the world need this advanced technology, because it's tremendously to their advantage. So they don't want to talk about any dialing back or retrenching or relinquishment or any of these kind of things. They don't want to talk about that at all. Because it really gets to the root of their power over people and over the whole planet. You can't exercise that control without very advanced high-power technology at your disposal. So they're going to fight that process all the way, because that's the basis of their power. Radical guys like Kaczynski understand that. And they're going to say, "hey, you can't reason with these people because they're lunatics. So what you have to do is take the fight to them and actively start undermining the system (in some ill-defined way) because they won't listen to reason when it comes to this issue."
Kevin Barrett: And as far as that idea for revolution goes — and actually I haven't finished Technological Slavery, I'm maybe two-thirds through it — so I haven't fully finished the part in which he describes the revolution that he's calling for. But thus far it doesn't sound very realistic.
And I also think it's interesting that Ted Kaczynski is among the most individualistic people in a relatively individualistic culture. Western culture, especially the USA, where we're hanging on to our guns and championing the Second Amendment while the rest of the world rolls its eyeballs... We are rugged individualists here, and Kaczynski is the most rugged of the lot. And even if he managed to convince enough people in the US to follow this kind of approach, I do wonder whether China, for example, which has a very different history and culture, which is a lot more collectivist, would be equally open to these ideas.
David Skrbina: That's a good question. Right. I don't know enough about Chinese or Oriental attitudes towards technology to tell you whether they'd be better or worse. It's hard to say at this point. But Kaczynski has a plan for the revolution against the system. He's restricted, though, by the conditions of writing within prison. He sent me many letters over the years. Everything that he sends out is screened by the Bureau of Prisons. So he has to be very careful what he writes about.
Kevin Barrett: So if he has a great idea for a revolution, we'll never know about it.
David Skrbina: Unfortunately, that's true. If it's too specific — I guess I don't really know exactly what the rules are — but I do know that he is not allowed to talk about violence in any way, bombings, killing people, I mean, anything like that.
Kevin Barrett: He makes that point (in Technological Slavery).
David Skrbina: That was really a forbidden topic at that point. The reason was that if he tried to write something like that, the Bureau of Prisons would have used it as a rationale to cut off all communication with the outside world.
Kevin Barrett: The way I read those passages of Technological Slavery, it sounds like he's being self-consciously ironic. I was kind of surprised they let it get through. There's at least one point in the book where he says something like, "well, of course I couldn't advocate for violence because if I did, the Bureau of Prisons wouldn't let you read it. So, no, no, I'm certainly not advocating violence." But the tone was such that I doubted his sincerity.
David Skrbina: You're right. He has to word things carefully. But it's not really a question of sincerity. I think it's just how he has to word things to get the message out. But the other interesting point is the manifesto itself, which was not subject to any kind of oversight, because that was obviously prior to the prison situation. In the manifesto, he was free to write anything he wanted. And even there he did not say "go out and start sending mail bombs, start killing people." There's nothing in the manifesto about conducting a violent revolution. In fact, he says a couple of times in the manifesto that it may or may not be violent. So he's pretty wide open on different means, different options that people might take to undermine the system. And it could be something simple like purchasing choices, individual personal choices, consumer choices. And it could be passive resistance. It could be civil disobedience, all the way up to very active and direct action sorts of things. So you can imagine there's a whole range of things that people conceivably could do that would count as a kind of a revolt against the system. And some things are obvious. Some things he just leaves unsaid. And I think maybe that's the best way he could do it.
Kevin Barrett: Right. Well, it's such a huge issue and such a difficult issue for imagining solutions. But I think he's made an important contribution. And I think it was useful of you to edit his book.
...
Kevin Barrett: There's a book that I read a while back called Darwin Among the Machines by George Dyson, who argues that the technology we see overtaking us is doing that precisely because it is capable of mind, and it probably already has one, and has probably had one for quite some time. And as far as I can determine from my understanding of what Dyson is saying, he's essentially telling us that we are doomed to be replaced by our machines, which by Darwinian mechanisms are going to surpass and supplant us, sort of like what we did to the Neanderthals, only a whole lot more quickly and definitively.
David Skrbina: Well, right. I mean, that's a very interesting argument. It's an old argument. In fact, the title of Dyson's book comes from an essay by Samuel Butler, a British essayist who wrote a short piece of that title in 1863. He was just becoming aware of Darwinian evolution, and he immediately looked to the mechanical world, to the world of machines. And these were simple machines, mid-1800s-level technology. But even then, he could see that the machines were evolving faster than the biological systems, the biological organisms, of the time. So even Butler, back 150-plus years ago, could see that. So maybe there's an evolutionary process going on, and the machines are going a lot faster than we are. And he said it won't take long before they roll right by us. Butler was thinking, "Well, shoot, maybe they'll keep us around as pets and maybe it'll be sort of tolerable." But I think that was a very optimistic outlook, actually.
Kevin Barrett: Yeah. And more people are jumping on that bandwagon these days. Some of them are saying it's inevitable. James Lovelock, the British independent scientist who came up with the Gaia theory with Lynn Margulis, has in his most recent work shifted his alarmism from global warming to AI. For a while he was ten times worse than Al Gore, saying "we're going to broil the Earth; it may turn into the surface of Venus due to global warming." And then he stopped worrying about it, he says, because long before that happens, the artificial intelligence-nanotechnology-artificial life organisms we've created are going to kill us off. And they'll probably be able to inhabit a planet whose surface is like Venus just fine. So let's not worry about it. And of course, my reaction to that is: What?! Not worry about it?!
David Skrbina: No, you're right. I saw that. That was just a recent thing that Lovelock came out with. I'm not sure I buy his approach. He's getting a little up there in age. You've got to wonder about his ability to think clearly. But the threat from AI is definitely real. We've heard that in recent years from Stephen Hawking, from Bill Gates, from Elon Musk. A lot of these guys have inside knowledge of what's going on and they're worried. And it's probably even worse than that. So if those guys are worried I'm definitely worried. Yeah, we've got some big, big problems that we'll be facing in the not too distant future.
Kevin Barrett: And that's where panpsychism is not going to make us any more optimistic. Quite the opposite. Because there are folks who say that maybe there's some special kind of soul or consciousness that we living creatures have that these computers could never possibly have, and therefore they aren't really ever going to become our competitors. And the opposite view is the one that Dyson takes in Darwin Among the Machines, in which he says: Put this whole issue of consciousness aside. You don't need consciousness. These machines, whether or not they are quote-unquote conscious or intentional, are just going to keep doing what they do, and at some point they're going to get out of control. And does it really matter to us whether we call them conscious when they start sucking up all of the available energy in their environment, competing with the things that compete with them, like us, putting us out of business, and taking over more and more of the available energy sources of the universe? Who cares whether they're conscious or not? That's just what they're going to do. But the panpsychist argument is even more pessimistic, because then they are just as conscious as we are, which means that their potential for supplanting us is even greater than people like Dyson would suggest.
David Skrbina: Yes, that's right. If that's true, there really is a consciousness behind these systems, which means there is a kind of real intentionality and a kind of will and a belief system and a moral code which is very different from our sense of morals. So I think that poses a huge problem, if in fact that's true. And yeah, every way you look at it, it looks like a bad situation. So the sooner people realize that and the sooner we begin to think about taking action — that's our best hope for survival.
Kevin Barrett: Right. And there is the defeatist position out there. The optimistic defeatist position, as I recall, has been expressed by Rudy Rucker, who's a science fiction writer (and mathematician). I've enjoyed his books — Wetware was one of them — and he essentially embraces panpsychism, or some variety of it, and imagines artificially intelligent robot creatures or cyborgs or what have you that are absolutely the equals of humans in every meaningful way. And Rucker is then just kicking back and smoking a joint or whatever he was smoking — I think he was smoking something for sure — and saying, "Oh, that's all right, whatever. It's cool. It's all cool."
That's almost a bizarre Frankenstein's monster caricature of Islamic optimism, where we Muslims have this kind of incorrigible optimism because our prophet, peace upon him, was so victorious, even though he had such a tough situation. We have this "everything is always by Allah, so it's ultimately okay" attitude built in. Well, Rudy Rucker has this weird cyberpunk version of that, but somehow that kind of (optimistic) defeatism strikes me as inadequate because it seems to me that we really are facing something that's malevolent in some sense, something that maybe shouldn't be — that we've messed with creation in a way that we shouldn't have.
David Skrbina: That's right. I think that's certainly the case. It's a question of whether it's a consciously evil system or if it's just doing what it does. I think that's probably what Dyson was suggesting, that these things are just going to do what they're going to do and it's going to turn out to be horribly negative for humanity and probably the rest of the planet. So from our perspective, certainly, it's an evil outcome. And to do nothing or just to sort of sit back and say, well, that's the way it goes, or “it's God's will” or “that's destiny” — to me that's just suicide. If you've got a hope for survival, you've got to fight back and you've got to defend yourself. And that's kind of been the way evolution has gone: Organisms fight for their own well-being and in their own defense. And it may shift gears in the near future and we may be fighting tooth and nail for our own survival. And we probably should get ready for that.
Kevin Barrett: Wow. Get ready for the war against the machines.
David Skrbina: It sounds like a Terminator movie or something. I hate to say it, but that may be closer to reality than we think, or than we would like to believe.
Kevin Barrett: I don't know if you've seen that movie that somebody made warning us about killer robots. There's a campaign to ban killer robots. And a bunch of the leading lights of the cutting edge of the techno world have contributed. And they made a film, a scary film, showing what happens once the killer robots are unleashed — the flying, artificially intelligent, sentient killer robots. And it is a frightening commercial, mainly because it's portraying a scenario that they think is very feasible, very credible.
David Skrbina: It's amazing how rapidly these things are getting physically more agile. There are some YouTube videos that really shocked even me. They were listed under a heading like "Agile Robots." These are humanoid-type robots, and they're running, they're jumping, they're doing backflips. They're amazingly coordinated physically. You still have a picture in your mind of a lumbering, sluggish robot: you just knock him over and he can't get up. But these new robots are really incredible in how they can move and jump and run. You marry that kind of capability with a high level of intelligence and autonomy, and you could have serious trouble really quickly.
Kevin Barrett: And then remember that the Pentagon and other countries' militaries are undoubtedly very interested in this stuff. They love super soldiers. They're already interested in robots as killing machines, because human nature is not given to murdering other people except in self-defense or in extreme anger. To turn people into aggressive killers, you have to brainwash them using some form of (mind-control) technology. S.L.A. Marshall was the general who oversaw that process. There was a big study after World War Two about why 15% or fewer of the grunt soldiers would actually shoot to kill, while the other 85% were what he called de facto conscientious objectors who would fire into the air. They would step back and hand other guys the ammunition, but they would not try to kill anybody. "So how do we overcome this?" And so Marshall oversaw Pentagon programs that were able to brainwash normal American young people to become vicious, aggressive killers, essentially by appealing to their racism and xenophobia in basic training: dressing up realistic-looking enemies that you kill in your training, and screaming about the gooks if you're going to fight the Vietnamese or the Hajjis if you're going to fight the Muslims. So they totally dehumanized these young Americans in boot camp to turn them into killers. And they got it to the point that 50% would actually kill voluntarily in Korea, and 90% would in Vietnam. But of course, this wreaked havoc on people's psyches. And so the non-psychopathic majority of young American men coming back from these wars has been really messed up.
And so we see the military loves to try to take human beings who are not into aggressive killing and turn them into vicious killer robots. But if they can just make vicious killer robots without even bothering to bring the human into it, somehow I don't think that's going to be much better for humanity.
David Skrbina: Well, no, you're right. And I suspect that's the way they're going to go. I'm sure they've run into so many problems, as you say: psychological disorders. And trying to turn people into killing machines is very hard to do on any kind of large-scale basis. And the robot technology, the AI technology, is accelerating much faster than the ability to brainwash people into becoming psychopathic killers. So I'm guessing the military is just giving up on trying to convince people to do it, and they'll just have the robots, the artificial intelligence machines, do it. And that will be far easier. They're probably in the process of doing that right now, developing their autonomous drones and intelligent drone systems that can do the killing. And we just kind of sit back and watch the action. But unfortunately, the action is going to come to us sooner than we'd like, and it's going to be right here on our home territory. And it's not going to be fun, right? Right now it seems fun, with those drones over there in the Middle East, and it's kind of cool that we can watch them on our little video screens from Nevada, and nobody cares. But when those things are flying over Detroit or Madison, Wisconsin, and gunning down people, then it's going to be a whole new story, right?
Kevin Barrett: Yeah. And that's where there are such different scenarios. In the dystopian scenario, the rulers use this kind of technology to keep people enslaved. Unfortunately, of course, that's all too likely.
Or one could imagine a kind of a benevolent World War Three scenario. It sounds like a contradiction in terms, of course. But one could imagine that the strategists would say: Okay, we've got to take out the other side's ability to fight, right? Because that's the definition of what you do in a war: You're essentially trying to make sure the other side gives up and stops fighting. So we're going to have to go after their command and control. So our killer robots would first target their killer robots, and maybe we would also target the programmers of the killer robots, and the powerful humans who are in charge of the whole program. So we would just take out the top level of the other side, just completely take them out. But maybe they would also take our top level out. And then the rest of us down here at the bottom level would be liberated. Is that incorrigibly optimistic?
David Skrbina: Kind of a mutually assured destruction at the top?
Kevin Barrett: The suicide of all the elites: They fight a world war, and the elites, for strategic reasons, target each other and take each other out, leaving the rest of us to be the meek who shall inherit the earth.
David Skrbina: I guess that's one thing to hope for. But even if that happens, you've still got your robots running around without any leadership or control. And now they're calling the shots and there's nobody telling them what to do. And that could get out of hand.