Archive

Ethics

For this second Reading the game article, I will be taking a look at Deus Ex: Human Revolution, part three of the role-playing game series that started back in 2000 and will see its fourth installment, Mankind Divided, later this year. Human Revolution has some particularly interesting story elements from a philosophical perspective. This article discusses one of these elements and its potential in (philosophy) education in detail, leaving room for additional articles on the many other elements of this game that have educational potential. Mind that this series is open to all educational applications; I merely focus on philosophy because it is my own field. I would very much like to see, for example, biological perspectives on this game, so if you can provide one, please don’t hesitate to get in touch.

Spoiler warning: this article describes major plot elements. Do not read any further if you plan to play this game in the future.


Augmentations
If you haven’t played the game and aren’t planning to: Deus Ex: Human Revolution is a so-called action role-playing game, set in Detroit in the year 2027. Sarif Industries, the company for which you, Adam Jensen, work as head of security (the official title for what is really a corporate spy/detective), is one of the worldwide forerunners in augmentations: technologically sophisticated additions to or replacements of human organs that make their owners run faster, jump higher and think quicker (among many other things). The game starts with Jensen waking up after life-saving surgery, now kept alive by a wide variety of augmentations and prosthetic limbs.



Master’s Thesis in Applied Ethics
Utrecht University
Supervisor: dr. Stephen Riley
Second examiner: dr. Frans Brom
Submitted 30 – 06 – 2014
Graded a 7.5.

View the document on Academia.edu (alternative link)

Abstract
In this thesis I investigate the moral content of the concept of privacy. ‘Moral content’ is interpreted broadly, as those elements that directly concern value or a moral obligation or right, as well as those that concern the preconditions for a moral framework. The concept of privacy is understood, wherever it cannot be understood in the abstract, as it is used (in all its variety) in contemporary liberal societies.

First, I adopt Daniel Solove’s view that the concept of privacy is better understood in terms of family resemblance than in terms of necessary and sufficient conditions, as well as his view that privacy’s value is instrumental rather than intrinsic. This also leads me to the view that the moral content of privacy does not include moral obligations or rights: the concept of ‘privacy’ has no fixed content, and consequently no normative content in the sense of explicit obligations or rights. The moral content of privacy, I argue, is to be found in the existing moral frameworks that presume it. I investigate two specific moral frameworks, those of the German philosopher Immanuel Kant and the British philosopher and politician John Stuart Mill. I choose these frameworks specifically because Kant and Mill have each, in their own way, been major influences not only on ethical theory, but also on the development of liberalism. Although their ideas of individual liberty, and especially its relation to the state, were in several respects controversial in their own times, they have been influential on, and are still strongly connected to, the liberal societies of today. I link their thought to the different existing types of privacy (as stipulated by Finn et al.). Discussing their works, I establish two theses about the moral content of the concept of privacy.

The Kantian thesis: insofar as privacy violations (as privacy violations) force a will, physically or psychologically, in its operation, and insofar as privacy violations (as privacy violations) performed by the state have as a consequence that the state is no longer a means to (or: an enabler of) freedom, Kant’s moral framework provides us with a moral reason to forbid these violations.

The Millian thesis: insofar as privacy violations (as privacy violations) harm rather than promote total utility, and insofar as privacy violations (as privacy violations) interfere with the sphere of liberties of a person who does no harm to others, Mill’s moral framework provides us with a moral reason to forbid these violations.

These theses are structured alike: they inquire first into the consequences of the privacy violation itself, second into whether the privacy violation is performed legitimately or not, and third establish whether the literature applies to it. For both frameworks, I argue, the most fruitful path is that of investigating the moral legitimation of state authority, and Kant’s and Mill’s positions can be seen as converging on at least one point, namely that they are able to show that certain specific privacy violations are wrong, not because the acts are wrong in themselves, but because of their context: the violator is the state operating outside of its moral authority.

In chapter four, I show that governments are actually violating all types of privacy (although some on a larger scale than others), and that there is thus a conflict between current practices and the moral frameworks discussed (regardless of the differing arguments underlying those positions). This suggests that certain justifications of privacy violations (such as the general appeal to the protection of public health and safety) are not sufficient and should be given more substance. Governments should put more effort into demonstrating why certain privacy violations are needed, and why they outweigh the interference with individual liberties. A rough sketch of such a proposal would be:

  1. Ensure that privacy violations happen overtly, i.e. ensure that citizens know or can know in general terms which kinds of privacy are violated and why, and do not have to find out afterwards that their government has been operating a massive espionage programme against its own citizens.
  2. Ensure that privacy violations for the protection of public health and safety are non-discriminatory, i.e. ensure that individuals or groups are targeted because there are strong reasons for seeing them as threats (to public health and safety, to the freedoms of others, etc.), not for any other reason.
  3. Ensure that if privacy is violated, it is done according to public laws.
  4. Ensure that privacy, if it is violated, is violated within the moral authority of the state.

Mind that this is only a rough sketch and, additionally, that it only concerns privacy violations by states, not by businesses. I hope my findings in this thesis can contribute to shedding light on the moral content of privacy violations by businesses in the future.

– the original assignment (Dutch version, see below) was graded 7.5/10 by Peter Sperber, with the approval of Marcus Düwell

If Immanuel Kant states that we always have to act according to maxims of which we can at the same time will that they become a universal law, how much does his conception differ from rule-utilitarianism? In this summarized form there seems to be no difference: both principles prescribe a certain way of acting without looking at the consequences of each act. Here I want to address this question and argue why there is a difference.

Central to Kant’s ethics is the Categorical Imperative, a universal principle that applies to all rational beings. “Act only according to that maxim[1] whereby you can, at the same time, will that it should become a universal law”, it states.[2] If I cannot will a maxim to be a universal law applying to all rational beings, it cannot be moral. It is, according to Kant, the only moral obligation, because all other conceivable imperatives serve a goal and are therefore hypothetical imperatives. As long as an act is not done from the principle of duty, it has no moral worth, even when it brings happiness to you and to others.

Utilitarianism (utilism for short) defends the so-called greatest happiness principle, which says that the right thing to do is the act that brings as much happiness as possible to as many people as possible.[3] This almost by definition requires the use of rules, because calculating the possible consequences of each individual act is theoretically and practically impossible. Utilism has two main conceptions of how to deal with these rules. Act-utilism says that these rules may be broken when doing so causes more happiness in a specific situation; rule-utilism states that one has to accept a set of rules that one is not allowed to break, even if the exception would bring about more happiness. The fact that a number of rule-utilitarians use sub-rules that indirectly make exceptions possible is left out of consideration here, because with these sub-rules the distinction between act- and rule-utilism blurs so much that answering the question would become impossible.

To the question of whether there is a difference between the conceptions of Kant and rule-utilism, I can certainly say “yes, there is.” In short, the two seem similar because both give strict rules for moral behavior, but in essence they differ strongly.

This difference lies in the fact that rule-utilism gives rules to block the possibility of bizarre exceptions. For an act-utilitarian, it would not be at all problematic to cheat on your wife if that produced more overall happiness. Richard Brandt, among others, gives this as an argument against act-utilism.[4] Although the categorical imperative does not allow exceptions in a similar way, rule-utilism gives rules with a goal in mind, whereas the categorical imperative is characterized by being a goal in itself. Because rule-utilism ultimately judges the morality of a set of rules (not the rules in themselves), it is a consequentialist (or teleological) ethical theory: a theory which states that the outcomes of actions determine their moral worth. Kant’s ethical theory morally judges actions on the basis of their motivations, and is therefore of a fundamentally different kind, a deontological one: an ethical theory based on rules, which looks at the motivated action instead of only its consequences. The rule-utilitarian thus assumes that morality is a means to general happiness, whereas Kant states that morality exists in itself and has no external purpose: it is the expression of the free will of the person. In short, rule-utilism says that happiness is the greatest good; Kant says the good will is.

Stephen Darwall formulates the underlying normative difference as follows: the rule-utilitarian believes that together we are all responsible for everyone’s happiness, whereas Kant states that together we are all responsible for the conditions necessary to provide everyone with what they need to live their (moral) lives.[5]

A last difference is the following: one formulation of the categorical imperative that Kant gives is that you must always treat humanity as an end, never merely as a means.[6] Utilism puts everything in the service of the greater happiness, whereby humanity automatically becomes a mere means.

So once again: yes, Kant’s view differs widely from utilism’s. Kant’s ethical theory is first of all a deontological one, not a teleological one. The set of moral rules of rule-utilism is given to promote a greater good, whereas the categorical imperative has morality as a goal in itself. Then there are further differences in, for example, how the two treat acts concerning humanity and in their conceptions of the greatest good. More than enough to separate the two.


[1] A subjective principle of action.

[2] Kant, Immanuel (1997) Fundering van de Metafysica van de Zeden. Amsterdam: Boom – p. 74

[3] Mill, John Stuart (1998) On liberty and other essays. Oxford: Oxford University Press – p. 457

[4] Brandt, Richard B. (1991) Philosophical Ethics: an Introduction to Moral Philosophy. New York: McGraw Hill – p. 152

[5] Darwall, Stephen (1997) Philosophical Ethics. Oxford: Westview Press – p. 168

[6] O’Neill, Onora in Singer, Peter (ed.) (1993) A Companion to Ethics. Oxford: Blackwell – pp. 178-179