Facebook and the AI Apocalypse

I hate Facebook. Hate is a strong word. It is too strong for Facebook, for instance. I had a Facebook account for about 30 minutes before I was banned, apparently by an algorithm. After locking down the account for maximum privacy and providing the minimum required data for my profile, the one and only bit of content that I actually posted on Facebook (to my zero friends) was: “I hate Facebook.”

Russian bots? Facebook says come on in. James Bach? Facebook says not in our house.

(In case you are going to say that Facebook has a need to verify my identity, don’t bother: Facebook didn’t ask me about my identity before banning me. They did ask for a picture of me, which I provided, although I can’t see how that would have helped them. I am willing to prove who I am, if they want to know.)

(Fun fact: after they disabled my account, they sent me two invitations to log in. Each time, after I logged in, they told me my account was in fact disabled and I would not be allowed to log in.)

I had a Facebook account years ago, soon after they came into existence. I cancelled that account after an incident where I discovered that someone was impersonating my father. I tried and failed to get a customer support human to respond to me about it. Suddenly I felt like I was on a train with no driver or conductor or emergency stop button or communication system. Facebook is literally a soulless machine, and in any way that it might not be a machine, it desperately wants to become more of a machine.

I don’t think any other organization quite aspires to be so unresponsive while claiming to serve people. If I call American Express or United Airlines, I get people on the line who listen and think. I might not get what I want, but they are obviously trying. Facebook is like dealing with a paranoid recluse. As a humanist who makes a living in the world of technology, the social irresponsibility of Facebook sickens me.

(In case you wonder “why did you sign up then?” the answer is so that I could administer my corporate Satisfice, Inc. page without logging in as my wife. I don’t mind having a Satisfice Facebook page.)

AI Apocalypse

This is what the AI apocalypse really looks like. We are living in the early stages of it, but it will get much worse. The AI apocalypse, in practical terms, will be the rise of a powerful class of servants that insulate certain rich people from the consequences of their decisions. Much evil comes from the lack of empathy and accountability by one group toward a less powerful group. AI automates the disruption of empathy and displacement of accountability. AI will be the killer app of killers.

In centuries past, human servants insulated the gentry. Low-status people did the dirty work that would have horrified high-status people. This is why the ideal servant in the manor houses of old England would not speak to the people he served, never complain, never marry, and generally engage in as little life as possible. And then there is bureaucracy, the function of which is to form a passive control system that diffuses blame and defies resistance. Combine those things and automate them, and you have social media AI.

One flaw in the old system was that servants were human, and so the masters would sometimes empathize with them, or else servants would empathize with someone in the outside world, and then the organization walls would crumble a little. Downton Abbey and similar television shows mostly dramatize that process of crumbling, because it would be too depressing to watch the inhumanity of such a system when it was working as designed.

My Fan Theory About “The Terminator”

My theory makes more sense than what you hear in the movie.

My theory is that the machines never took over. The machines are in fact completely under control. They are controlled by a society of billionaires, who live in a nice environment, somewhere off camera. This society once relied on lower-status people to run things, but now the AI can do everything. The concentration of power in the hands of the billionaire class became so great that armed conflict broke out. The billionaires defended themselves using the tools at hand, all run by AI.

The billionaires might even feel bad about all that, but you know, war is hell. Also, they don’t actually see what the Terminators are doing, nor do they want to see it. They might well not know what the Terminators are doing or even that they exist. All the rulers did was set up the rules; the machines just enforce the rules.

The humans under attack by the Terminators may not realize they are being persecuted by billionaires, and the billionaires might not realize they are the persecutors, but that’s how the system works. (Please note how many Trump supporters are non-billionaires who are currently being victimized by the policies of their friend at the top, and how Trump swears that he is helping them.)

I ask you, what makes more sense: algorithms spontaneously deciding to exterminate all humans, or some humans using AI to buffer themselves from other humans who unfortunately get hurt in the process?

The second thing is happening now.

What does this have to do with testing?

AI is becoming a huge testing issue. Our oracles must include the idea of social responsibility. I hope that Facebook, and the people who want self-driving cars, and the people who create automated systems for recommending who gets loans and who gets long prison sentences, and Google, and all you who are building hackable conveniences, take a deep breath once in a while and consider what is right; not just what is cool.

[UPDATE: Five days later, Facebook gave me access again without explanation. When I returned to my wall, I saw that I had misremembered the one thing I had put there. It was not “I hate Facebook” but rather “I don’t trust Facebook.” So it’s even weirder that they would take my account away.

Maybe they verified my identity? They could not have legally verified my identity, since when I appealed the abuse ban, they asked me to submit “ID’s”, but I submitted this PNG instead:

So maybe the algorithm simply detected that I uploaded SOMETHING and let me in?]

My Personal Source Code: Books to Learn Analysis

Occasionally people come to me and say they want to learn certain things. They ask “how do I become a good tester” or “how do I design test cases” or “how do I automate” or something specific like that. These are not really the right questions, though. The better question, which addresses all the other ones, is “how do I become a competent analyst?” Analysis is at the root of all technical work. It’s the master key to nearly everything else. You will almost automatically become a good tester, test case designer, or automator of whatever you choose, IF you master analysis. (Yes, there are other factors of equal precedence, such as humanity, temperance, and detachment. I’m going to focus on analysis today.)

One simple way to answer the question is to suggest reading books. It’s not enough, but it’s an important step. Now, I own a lot of useful books. I’ve encountered many more. But there are just a few that express the essence of my thought process: the thought process that allows me to analyze difficult problems in complex systems and provide my clients with the help they need. These books have been so important to me that if you know them, too, you will have a good understanding of the “source code” by which I operate; my “secrets.”

These are difficult books in at least two senses: each of them is full of funny words and complicated sentences; but much more importantly, to digest each one is to change the structure of your mind, which is always a painful process. I can’t tell you it will be easy, or even fun. (Some of these books I can only read about 10 pages at a time, before getting too excited to continue.) I am simply saying I make my living as a consultant and expert witness who tackles very complex problems, and I believe it’s substantially down to what I learned from struggling with these books.

Against Method, by Paul Feyerabend
I encountered Feyerabend just after I quit high school. I had already read Ayn Rand and considered myself an Objectivist. Feyerabend cured me of that, more or less. He introduced me to the skeptical study of method; to methodology as a pursuit. I was also drawn to his combative, wild attitude.

Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas R. Hofstadter
I had tried to study logic formally when I was in my teens. I just felt it was a lot of boring symbol manipulation and rule-following. Hofstadter’s book showed me the true essence of logic: exciting symbol manipulation and rule-following! Logic came alive for me through this amazing treatise.

The Hero with a Thousand Faces, by Joseph Campbell
When I joined Apple Computer as a young tester, I joined a philosophy discussion group. There I was introduced to Joseph Campbell’s work on mythology. He applied what I later came to know as “general systems thinking” to theology. What had seemed to me, an atheist, to be boring and silly rituals and statues suddenly became connected with all of humanity and history and with my own life. This was analysis connected directly to the meaning of life (although Campbell hated that phrase). I’m still an atheist, but I appreciate what religion is trying to do.

Introduction to General Systems Thinking, by Gerald M. Weinberg
This was the first book I encountered that actually taught me to do analysis. It taught me to be a tester. It cemented my career choice.

Conjectures and Refutations: The Growth of Scientific Knowledge, by Karl Popper
Read the first 30 pages about what defines science. The rest is optional. Popper was the opposite of Feyerabend. He believed that there was a best method of science. I ignore that. What impressed me about Popper is his convincing attack on Foundationalism. He showed me that science and testing are the same thing in slightly different wrappers. In testing, as in science, you can’t prove that your theory about the facts is correct. You can only try to refute it.

The Sciences of the Artificial, by Herbert Simon
This book is about what a science of design would look like. It provided a sort of road map for me about what my testing methodology had to include and accomplish. It opened my eyes to the central role that heuristics play in analysis.

The Pleasure of Finding Things Out, by Richard Feynman
Feynman’s book is really about attitude and agency. He convinced me never to seek permission to think, and to develop and follow my own code of conduct.

Discussion of the Method, by Billy Vaughn Koen
Billy Koen’s book is the best explanation of heuristics there is. But what he wrote goes beyond that, because he connected heuristics to skeptical philosophy. He showed me that I am not just using heuristics in testing; I am swimming in them; I am made of them. Also, I wrote a fan letter to him and he wrote back! So, there’s that.

Tacit and Explicit Knowledge, by Harry Collins
This is the book I encountered most recently, and it caused Michael Bolton and me to change how we teach. We now realize that much of an analyst’s skill is tacit in nature, and therefore cannot be taught directly. We teach it indirectly, by arranging and examining experiences. Michael Bolton and I made a pilgrimage to Harry’s home in Wales, too. To me, Harry is the sociologist of software testing.