
In Defence Of Moral Realism

The following is as much an effort to put my thoughts on the subject in order as it is an effort to persuade people to at least consider moral realism. I thank the very smart people who caused my mind to turn to this subject; I'd not given much thought to this particular bit of meta-ethics until recently. So without further ado...

Moral Realism is defined by Wikipedia (as of February 8, 2014) as:

"A non-nihilist form of cognitivism and is a meta-ethical view in the tradition of Platonism. In summary, it claims:

1. Ethical sentences express propositions.

2. Some such propositions are true.

3. Those propositions are made true by objective features of the world, independent of subjective opinion."

So how do we go about proving or disproving these claims?

To do this, I shall first establish some definitions.

What is Morality?

Some people argue that morality is simply a relative standard by which we judge things, as in End-Relational Theory. If true, this makes morality inherently relative, because different people can establish different standards and there is no basis for proving any particular standard more correct than another. I don't subscribe to this view.

Morality in my view is simply, and this is a loaded statement I know, "what is right".

What exactly do I mean by this? To say something is right is to imply that it is the correct fact, world state, or course of action leading to a world state, given all relevant information. For instance, we can say that "1 + 1 = 2" is right because it correctly represents a mathematical relationship. We cannot say that "1 + 1 = 2" is good, however. Goodness is a different property from rightness. Rightness simply says that, given all the facts, this is correct.

Rightness is not the same thing as rationality. Rationality has to do with finding the best way to achieve one's values and goals. It is quite possible, then, for rational activity to be immoral.

Rightness is simply the property of being true. If morality is this, it essentially makes claims 1 and 2 correct by definition.

Morality as Truth

Morality thus is not a subjective standard we apply because we desire it. Rather, morality is a set of prescriptions based on descriptions of reality. It is a set of normative truths that we can infer through a combination of perception, logic and reason. In that sense it is very much like mathematics, and I argue it exists in the same realm as mathematics. This essentially makes claim 3 correct by definition.

Thus, assuming that my definition of morality expresses something that actually exists, rather than just a hypothetical construct of my philosophy, the definition of moral realism is satisfied. Therefore, to prove moral realism, I need only show that this definition of morality is, -ahem- true.

What is moral?

So then, what does this definition of morality imply that makes it falsifiable? It implies that morality is something that is grounded in facts. It implies strongly that whatever is moral is not a matter of opinion, but of knowledge, and that the reason why people disagree about morality is that they lack perfect knowledge.

I don't pretend to have perfect knowledge. Thus, any attempt at finding out what morality implies is inherently limited by this lack of knowledge. Nevertheless, lack of knowledge has never been a reason not to attempt to reason with what knowledge we do have. Science is all about figuring out what we can know despite uncertainty.

So what is moral? Something that is moral is fact dependent. Strictly speaking, there are only a few facts that we know without question. We know that something exists, that existence is. We know that some part of what exists has subjective states, that experience is. We know that some subjective states feel different from others, that some are noxious, while others are pleasant. We know that because of the feeling of these states, we discriminate automatically between them, assigning some of them to be positive (or good), and others to be negative (or bad). This is not a preference, but a feature of sentience.

We can, perhaps at the risk of some confusion, refer to these positive and negative valences as absolute values because we have no choice in assigning value to them. It is an automatic, or deterministic, mechanical process. These absolute values differ fundamentally from other values that we can choose, and I think much of the confusion over values comes from not recognizing this.

Absolute values can motivate action and establish desires, but motivation is not by itself moral. The correctness of a desire depends on its consequences, whereas the correctness of a feeling depends only on how it feels. Feelings and desires are both facts, but feelings have valences, while desires are either satisfied or not. We do not say that desires are positive when satisfied and negative when not; in fact, the satisfaction of a desire often leads to its annihilation. It is therefore clear that desires exist as means to motivate the achievement of values or goals. They may be good, but not absolutely good. I use "absolute" instead of "intrinsic" because it may be possible to hold some outside goods, like a better world, as intrinsically valuable. But since assigning such value is a choice we make, I consider absolute value as potentially different from intrinsic value.

Given these facts, we can begin to state what is moral. An entity with perfect knowledge would be aware of these facts, and would know what good and bad feelings felt like. As it would know what every entity in this universe felt, it would be able to reason about the truth of these feelings, these absolute values. And the fundamental truth is simply that all entities automatically discriminate, or prefer feeling the good over the bad. There is a kind of correctness to feeling good, and incorrectness to feeling bad, that subjects are automatically motivated to act upon.

In a sense, this can be understood by looking at a goal-directed agent. When such an agent reaches its goal state, it is in the correct state. If it fails to do so, then it is in the incorrect state. Sentient beings have an intrinsic goal state, and it is called happiness. The desires, values, and actions of the agent can be described as correct only in the sense that they contribute to reaching the goal state. Sentient beings could conceivably develop other goal states, such as desired states of the world. But those states would not be about them. A world state could be "correct" to a sentient being, but that could just be a belief, rather than necessarily being a fact about the sentient being. Knowing the actual correct world state depends on perfect knowledge, and is therefore unknowable to the average sentient being. Though, this should not necessarily preclude sentient beings from trying to know as much as possible and trying to create what they think is the "correct" world state.

It can be stated then that the best state is the correct state that an entity -should- be in. That is to say, there is a prescriptive relationship between right and good, that the truth prescribes goodness as being fundamentally correct. Thus all good should be right, though not all right should be good, because it is not the case that all things that are true should be good (to say that 1 + 1 = 2 should be good is silly), but all things that are good should be true (as in, goodness should exist).

An entity with perfect knowledge, if motivated to do what is right, would therefore act to maximize the good for all sentient beings, not because it was feeling benevolent, but because it would be the correct course of action consistent with the truth of knowing what the correct world state, and correct state of all sentient beings, was.

In attempting to be moral, we attempt to achieve this correct world state, rather than just achieving the correct state for ourselves. We choose to take a universal perspective, even without perfect knowledge, and try to approximate what an entity with perfect knowledge would do.

The Problem with Values

Something more should be said about values. One common confusion in moral theory is the assumption that it must have something to do with all our values. This confusion, I think, stems from the belief that values determine morality, which is actually mistaken.

Non-absolute values are inherently subjective, and are based on our imperfect perceptual knowledge of the outside world. People whose knowledge of the outside world changes often change their values to suit the information they have. To found "morality" on these values is to make "morality" inherently subjective and error-prone. Non-absolute values are useful because the fulfillment of these values correlates strongly with positive states, but this is not always the case. Values can be described as good or bad in terms of what consequences holding those values entails. But non-absolute values cannot be described as "absolutely" good or bad or right or wrong.

I will, however, state something that will likely be controversial: the correct values are the ones that are most moral. Most people do not have values which are perfectly moral. Rather, they either think they do, or they don't care. Nevertheless, some values are closer to moral than others. For instance, I think Utilitarianism is close to moral, but it may not be perfectly moral. I don't pretend to know, because I lack perfect knowledge.

Nevertheless, I conjecture that there is a perfect morality because objective truth exists, even if we in our limited nature can only apprehend subjective truth directly, and must infer the qualities of objective truth indirectly.

The truth, then, is that I cannot prove that my definition of morality is true. And so I cannot actually prove moral realism. However, I can conjecture that my definition of morality is plausible. Thus moral realism -could- be true, and unless falsified, presents a legitimate intellectual position to take.

Morality as Computation

The interesting corollary to all of this is that if morality is truly like mathematics, then morality should be computable. Maximizing the good is, in effect, a computation that sees maximum goodness as the correct state of the universe. In which case we could calculate a kind of "moral error function" or "moral objective function", and morality can be seen as a kind of optimization problem. This is, of course, what the various shades of Utilitarianism have been saying all along.
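As a sketch of what such a "moral objective function" might look like, here is a minimal optimization over candidate actions. The action names and per-subject utility numbers are invented purely for illustration; nothing here claims to be a real moral calculus:

```python
# Minimal sketch of morality as an optimization problem: score each
# candidate action by the total well-being it produces across all
# affected subjects, and pick the action that maximizes that total.

def moral_objective(outcome):
    """Total well-being summed over every affected subject."""
    return sum(outcome.values())

# Hypothetical outcomes: per-subject happiness change for each action.
actions = {
    "share":  {"alice": 3, "bob": 2},
    "hoard":  {"alice": 5, "bob": -4},
    "ignore": {"alice": 0, "bob": 0},
}

best = max(actions, key=lambda a: moral_objective(actions[a]))
print(best)  # share
```

A real version of this would face the classic utilitarian measurement problems (how to quantify and compare well-being across subjects), which this toy deliberately ignores.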

Anyways, that's my attempt at a defence of moral realism. I apologize if it isn't the most rigorous proof.

I should perhaps make some clarifications to this moral theory.

Values and Correctness

I would like at this point to separate values into three categories:

Subjective value is the value we assign as subjects to experiences and things. While we can state facts like "this subject values money", it does not follow automatically that "money is valuable" in any other sense. This is what most people mean by values in general.

Abstract value is like the common mathematical values, such as "5", or "true". Its relevance is that it allows us to make universally true comparisons of things. For instance, "9 < 17". This statement is true everywhere regardless of what any subject thinks.

Objective value exists at the intersection of subjective and abstract value. That is to say, it refers to what subjects value, but abstracts it to be universal. Thus, a statement like "happiness > suffering" expresses an objective value because it is true in all cases across all subjects.
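The distinction can be made concrete with a small sketch. The subjects and valence scores below are entirely hypothetical; the point is only that an objective-value claim is one whose comparison holds for every subject, not just some:

```python
# Hypothetical valence scores each subject assigns to two experiences.
# On this theory, a claim like "happiness > suffering" counts as an
# objective value only if the comparison holds for every subject.

subjects = {
    "alice": {"happiness": 8, "suffering": -5},
    "bob":   {"happiness": 6, "suffering": -9},
    "carol": {"happiness": 9, "suffering": -2},
}

def objectively_greater(a, b, subjects):
    """True iff every subject ranks experience a above experience b."""
    return all(s[a] > s[b] for s in subjects.values())

print(objectively_greater("happiness", "suffering", subjects))  # True
```

If even one subject ranked suffering above happiness, the comparison would fail the universality test and express only a subjective value.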

Correctness is a mathematical property that I am perhaps abusing here by applying it to morality, but I feel it best captures what I mean by "rightness". The fundamental assumption underlying my usage of correctness is that "true" or "positive" is preferable to "false" or "negative", because "true" and "positive" are more objectively valuable. That is to say, any discriminatory system will choose to be more correct and maximize the objectively valuable, assuming it has the motivation to be right rather than wrong.

Morality as Subjective Value-Independent

Due to perhaps the lack of clarity in my earlier writing, my definition of morality may appear to be subjective value-dependent, but in fact a key feature of this theory is that morality is actually subjective value-independent.

What I mean by this is not that subjective values and morality are unrelated, but that the relationship is one-sided.

Most definitions of morality assume that subjective values determine what our morality should be. I argue that the theory of Morality As Correctness suggests the opposite: morality determines what subjective values we should hold (though not necessarily what we do hold). Morality As Correctness holds that what determines a moral statement as being correct is its relation to objective value.

The Central Thesis

The central argument of my theory, then, is that a state like happiness is good or positive not because we subjectively value it, but because it is an experience that is objectively more valuable, and therefore more correct, than suffering. That is to say, there is a mathematical relationship that says that happiness > suffering, and that therefore happiness should exist, while suffering should not. Good, in this case, is not a subjective value judgment, but a state of correctness that happens to benefit the subject.

Why should an all-knowing objective value maximizer only maximize these things and not other things, like the number of paperclips? Because while 100 paperclips > 10 paperclips is true as an abstract value, it doesn't follow that a paperclip itself carries any objective value. Paperclips are not universally valued by all subjects. Thus the statement 100 paperclips > 10 paperclips actually reads as: 100 × (0) > 10 × (0), which is false and therefore not motivating. What makes happiness true and objectively valuable is that all subjects experience it directly as positive.
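The paperclip arithmetic above can be spelled out as a toy calculation. The weighting scheme is my own illustrative assumption, not a standard formalism: quantity comparisons only motivate the maximizer when each unit carries nonzero objective value.

```python
# Toy illustration: quantity alone does not create objective value.
# An abstract comparison (100 > 10) only motivates an objective-value
# maximizer if each unit carries a nonzero objective-value weight.

def weighted_value(quantity, objective_weight):
    """Objective value carried by a quantity of some item."""
    return quantity * objective_weight

# Paperclips: not universally experienced as positive, so weight 0.
print(weighted_value(100, 0) > weighted_value(10, 0))  # False: 0 > 0
# Happiness: experienced by all subjects as positive, so weight > 0.
print(weighted_value(100, 1) > weighted_value(10, 1))  # True: 100 > 10
```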

This is true even if you got your wires crossed. As happiness describes a positive state, any attempt to reverse the wires and make happiness a negative state would be defeated, because the new "happiness" would actually feel bad and therefore no longer be happiness by definition. By its nature of being a directly experienced thing rather than an indirectly experienced thing, it has a universal description that allows it to be objective.

The nature of values is that there are many, many subjective values, an infinite number of abstract values, and very few objective values. So far I have identified happiness as an objective value. I leave open the possibility that other objective values might exist, and therefore also be worth maximizing, but they would have to satisfy the criterion of being experienced universally by all sentient beings as positive or good. If it can be proven that happiness is not an objective value, and that there are no objective values, then this theory of moral realism can be falsified.

Page last modified on February 04, 2018, at 08:20 PM