Last night, I went to see Sam Harris, a controversial figure who has the best job on the planet – interviewing brilliant thinkers and sharing their incredible conversations with listeners on his podcast, Waking Up. A neuroscientist by training, Harris applies his experience studying the workings of the mind and the nature of consciousness to practical issues of existence. His podcast is #24 on the iTunes podcast chart this week (in Canada), so it would appear that the philosophical bent of his conversations appeals to some form of popular audience… Which raises the question… “huh? Philosophy appealing to a popular audience?”
Philosophy tends to get a bad rap – it has a connotation of being impenetrable, interminable and intolerable. But when I think of what a philosopher does, he or she really just tries to answer questions about life and existence, which is something that everyone actually does in one way or another. Furthermore, I dare say that it is not philosophy itself that is intrinsically complicated and perplexing; rather, it is the complicated and perplexing nature of existence that makes philosophy so frustrating.
That having been said, several rather game-changing and inescapable issues have arisen lately that have given everyone an opportunity to think and talk like philosophers. Here are some of my thoughts on those topics.
A tyrant president – Political philosophy
There are few events in recent history that have crystallized people’s thinking on a subject like the ascension of Donald Trump to the Presidency of the United States. The election popularized esoteric terms like “cognitive dissonance” and “confirmation bias” to help explain what happened, but as people took positions on the perceived front-runner (Clinton) and the disruptor (Trump), they had to make up their own minds based on the information that they found to be relevant, and that can be said to be a basis for philosophical activity. In that shallow sense, everyone is a philosopher and everyone is free to develop their own conclusions on the matter.
For more dedicated philosophers, however, one of the things that makes Trump such an interesting case is that defining aspects of his personality (ones that he himself attests to) align with those that describe the most unjust individual conceivable in Plato’s Republic, the first known work of Western political philosophy, from about 2,300 years ago. In the following argument with Socrates, Thrasymachus famously asserts that it should be obvious to anyone that the just are a bunch of losers and will always be the prey of the unjust, who are just smarter and winninger:
“… the unjust is lord over the truly simple and just: he is the stronger, and his subjects do what is for his interest, and minister to his happiness, which is very far from being their own. Consider further, most foolish Socrates, that the just is always a loser in comparison with the unjust. First of all, in private contracts: wherever the unjust is the partner of the just you will find that, when the partnership is dissolved, the unjust man has always more and the just less. Secondly, in their dealings with the State: when there is an income-tax, the just man will pay more and the unjust less on the same amount of income; and when there is anything to be received the one gains nothing and the other much. Observe also what happens when they take an office; there is the just man neglecting his affairs and perhaps suffering other losses, and getting nothing out of the public, because he is just; moreover he is hated by his friends and acquaintance for refusing to serve them in unlawful ways. But all this is reversed in the case of the unjust man.” (https://www.gutenberg.org/files/1497/1497-h/1497-h.htm)
This sounds an awful lot like a contemporary businessman who might be accused of withholding payment from suppliers or contractors, or one who prides himself on having the intelligence to avoid paying taxes. What’s worse is that Plato goes on to outline the way in which an unjust leader would likely emerge from a liberal democracy to manifest the next step of political evolution – tyranny. I like Andrew Sullivan, and I think his video for the BBC was a little over the top… but it distills the crux of the argument poignantly:
Like I say – the point is not to critique Trump or to praise him for so well exemplifying a(n unenviable) Platonic form, but to show how philosophy enters the debate over what kind of leader and moral agent Trump is, based on a conversation about moral character that has been going on for millennia. I feel like it’s one of the greatest cultural magic tricks in the history of mankind that an ancient philosopher, thousands of years ago, could develop the wisdom and forethought to anticipate the conditions that would give rise to Donald Trump. If that is true, then surely there must be something authentic about the discipline of philosophy… No?
Cars that drive themselves – Moral Relativism
Much has been made this year of the driverless car. I think that most people think about the implications this would have for a non-professional-driver lifestyle, and that it might be analogous to the introduction of the Walkman and the way it freed people from having to drag around their turntables to enjoy their music wherever they went. Some people might think about the obvious benefit of being able to “drive” home after a night out at the bar, or being able to drive a family to a vacation spot without having to stop the car to change a diaper. Yet others might think about the economic impact of relieving millions of taxi and transport truck drivers of their labours – clearly an ethical issue worth contemplating.
What I think is most interesting about autonomous cars is the thought that cars will need to make their own ethical choices once they are put into widescale circulation. It’s conceivable that an autonomous car could find itself confronted with a difficult choice: kill a single child who has run out in front of it, swerve and kill a number of patrons sitting at a café, or swerve the other direction into a pole, killing the occupants of the car itself. Arguments could be made along differing moral intuitions for any of these choices… so how would a standard be set? Moreover, if you had a choice at the dealership, which moral compass would you choose? I can’t imagine too many customers voluntarily selecting the option to sacrifice themselves, or their own families, who would most likely occupy the car.
The interesting ethical problem is that prior to the introduction of this technology, there was no reason for anyone to impose a decision on anyone else – freedom of choice and belief allow individuals to construct their own frameworks of ethical relevance, so long as they align with the laws of the land. Now, we are demanding that engineers and programmers encode a set of values into their autonomous vehicles in advance of the circumstances of these choices. Therefore, moral relativism and tolerance will be effectively negated, and the only choices available will be the ones that the programmer makes. Will these choices be driven by principles of justice and fairness, or will they more likely be driven by choices of commerce and demand? If the Honda Civic version of the autonomous car preferred killing the driver over bystanders, would you not opt up for the Honda Accord version that was tweaked to save the driver an extra 25% of the time? Would the Acura version offer a 75% save-the-driver preference? You can see the pattern here.
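To make the dealership thought experiment concrete, here is a toy sketch of how a trim-level “moral setting” might be encoded. Everything here – the trim names, the harm numbers, the weighting scheme – is invented for illustration and reflects no real manufacturer’s software:

```python
# Hypothetical sketch: a "save-the-driver" preference sold as a trim level.
# All names and numbers are invented for illustration only.

TRIM_DRIVER_WEIGHT = {
    "base": 1.0,      # occupants and bystanders weighted equally
    "plus": 1.25,     # occupant harm counts 25% more
    "premium": 1.75,  # occupant harm counts 75% more
}

def choose_action(options, trim):
    """Return the action minimizing weighted expected harm.

    options: dict mapping action name -> (occupant_harm, bystander_harm),
    each harm a probability-like score between 0 and 1.
    """
    w = TRIM_DRIVER_WEIGHT[trim]
    # A higher driver weight makes harm to occupants "cost" more, so the
    # car avoids actions that endanger its own passengers more often.
    return min(options, key=lambda a: w * options[a][0] + options[a][1])

scenario = {
    "brake_straight": (0.1, 0.9),  # likely harms the pedestrian
    "swerve_to_pole": (0.8, 0.0),  # likely harms the occupants
}

print(choose_action(scenario, "base"))     # -> swerve_to_pole
print(choose_action(scenario, "premium"))  # -> brake_straight
```

The same scenario yields opposite decisions depending purely on which trim you paid for – which is exactly the commercial slippery slope described above.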
The really sad part of this argument is that people everywhere are already having the conversation about this outcome and still it is a problem that lives in relative obscurity. As I started writing this blog post, this TED talk was published showing that I haven’t come up with any of this on my own, and that the people who did are far better dressed and have a much cooler accent than I do.
Androids or Roombas – Artificial Intelligence
Last night, not only was Sam Harris cooking the craniums of Torontonians, but the 2017 Emmy Awards also took place. This year was a big year for critically acclaimed philosophical shows… like Saturday Night Live. Or even Westworld, the challenging-to-watch but incredibly rewarding cowboy western, nominated for something like 22 Emmys, that reminds us just how disappointing Cowboys & Aliens really was.
Westworld was a show designed to pull on our moral intuitions and to test our commitment both to our own sense of humanity and to our relationship with technology. The basic conceit is that humans have succeeded in building sophisticated machines capable of passing the Turing Test, and these machines are employed in an immersive theme park where the guests are free to act with absolute impunity against the hosts, ostensibly to discover their true natures and earn the right to wear either white or black hats. The hosts, however, develop glitches that allow them to transcend their original programming (who could have seen that coming?) and they develop a type of android consciousness.
Let’s start with the perspective of the characters inhabiting the show. They live in a world where androids look, feel, talk and behave exactly like their flesh-and-blood counterparts. An empathic human encountering a Westworld host feels empathy at their misfortune and wants to help them, which is about as absurd as feeling bad for your PC when it encounters a blue screen of death – how many users’ first response in that situation is one of sympathy, as opposed to futilely smacking the monitor with an open hand? The fact that we anthropomorphize just about everything from stuffed animals to Roombas tends to suggest that humans have a hard time NOT ascribing suffering and empathy to just about everything. The notion that we could ever create anything that remotely resembles humans and somehow fail to anthropomorphize it seems hard to believe. Thus, it’s a chilling testament to the judgment that humans are ultimately callous pricks to portray them as the kind of people who would create a whole theme park dedicated to the terrorization of robot hosts.
Next, there’s the problem of whether robots need to be conscious and self-aware to become problematic for humans to control. Westworld quickly moves to explore the problem that if one were to create a machine so sophisticated as to develop independent thoughts, responses, and motivations, then it is easily conceivable that this machine could develop responses contrary to its creators’ designs. What it does with those decisions is the stuff of Terminator movies, but the interesting thing here is that regardless of whether or not the machine develops actual consciousness in the same sense that you or I have it, artificial intelligence presents the risk of moving out of alignment with the interests of humanity, and that is a fairly chilling thought.
Coming out of the show’s perspective a bit further, and speaking of “actual consciousness in the same sense that you or I have it” – did you know that it isn’t even what you thought it was? Apparently, in the past few years, neuroscientists, philosophers, and psychologists have solved that whole mind-body dualism problem of how physical matter influences the spiritual, soul-like matter of the mind that cannot be reduced to the physical… by reducing it to the physical. Yes, determinism of the mind is a thing now. There’s a scene in Westworld where an android absolutely believes that her free will is the cause of every thought she thinks, feeling she feels, and word she says – until an engineer shows her an iPad displaying the computational model that is determining her every output, and she understandably freaks out. The kicker is that that is how my brain works, and yours too… That message was baked into the HBO program that you thought was just about cowboys and saloon girls showing their boobs.
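The iPad scene can be caricatured in a few lines of code: a purely deterministic “agent” whose every response is a fixed function of its state and stimulus. This is a toy illustration of determinism, not a model of any actual neuroscience; the function and the list of responses are invented:

```python
# Toy caricature of the Westworld iPad scene: an "agent" whose every
# "freely chosen" response is a fixed function of state + stimulus.
import hashlib

RESPONSES = ["fight", "flee", "freeze", "philosophize"]

def respond(state, stimulus):
    # Hash the inputs to pick a response deterministically: run it a
    # thousand times with the same inputs and you get the same answer.
    digest = hashlib.sha256((state + "|" + stimulus).encode()).hexdigest()
    return RESPONSES[int(digest, 16) % len(RESPONSES)]

# The agent may feel like it is choosing; the function says otherwise.
print(respond("calm", "insult") == respond("calm", "insult"))  # True
```

Same inputs, same output, every time – which is the android’s (and, the argument goes, our) predicament in miniature.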
So this leads us to an ultimate philosophical question – what is our moral responsibility to one another if my responses are determined by my hardware and you could be a robot/zombie just as probabilistically determined as I am? Would an ultra-sophisticated iPhone M(ankind) be a moral agent that I would need to imbue with rights and moral status, while I remain perfectly justified in taking my old laser printer into a field and savagely shattering it with a baseball bat? Maybe Sam Harris will have a solution. I just have to work up the courage to ask him one day.