
Can a work of art be objectively good or bad?
It is a surprisingly hard question to answer.
Some people say that all art criticism is subjective. Nothing is objectively good or bad. It all comes down to subjective preference.
Others say that art can be objectively measured. Personal opinions can be removed from the discussion and a hard number can be assigned to works of art according to quality.
The debate between the objective and subjective positions on the internet can get quite heated.
People who judge art objectively are accused of limiting creative expression, denying people’s emotions, and arrogantly claiming their opinions are objective while denigrating the opinions of others as merely subjective.
People who judge art subjectively are accused of being consooomers who consume trash art uncritically, have no standards, and thus provide no incentive for artists to improve.
So who is right? Is anyone right?
Disagreement between the objective and subjective camps often stems from differences in definitions. I aim to clarify these definitions and show that objective and subjective criticism are not mutually exclusive.
Objective
Objective means “not influenced by personal feelings or opinions in considering and representing facts”.
An objective judgement is thus a dispassionate judgement, one made without the influence of personal feelings or biased viewpoints.
It is obviously impossible for anyone to be purely dispassionate. No one can entirely separate themselves from their emotions and biases. Therefore, it is impossible to be purely objective.
The best we can do is strive for objectivity. That is what being “objective” means in a practical sense.
Subjective
Subjective means “based on personal opinions and feelings rather than on facts”.
A subjective judgement, then, is founded on personal feelings, tastes, and biases.
It is impossible for anyone to be purely subjective because it is impossible for anyone to separate themselves from reality completely. To one degree or another, reality informs how we think and feel.
The best we can do is strive for subjectivity. This is what being “subjective” means in a practical sense.
“Objective” Does Not Mean “Correct”
A common misconception is that “objective” is a synonym for “fact,” “true,” “correct,” or “irrefutable”. But this is not the case. To be objective simply means to consider facts without subjective framing. You can consider something objectively and still be wrong.
For example, you may have incomplete data. This would limit the strength of your conclusion, even if you considered the data with perfect objectivity. You may come to a conclusion that is probably correct, but without all the facts, you could never declare it irrefutable.
The scientific method is an objective process, but scientific theories are not final truths. A theory, like any objective conclusion, can always be revised or overturned by new discoveries.
Correctness is an end; objectivity is a method.
Can We Really Trust Our Senses?
Some say it is impossible to be objective because we cannot trust our perception of reality.
What if our senses are lying to us? What if we are all insane? What if we’re living in a simulation? What if we aren’t even real?
While this philosophical discussion may interest some people, it is of no practical use in everyday life or in considering art.
To function as humans, we must presuppose the existence of objective reality and our ability to perceive it.
Can We Make Objective Statements About Art?
Art has objective qualities. Books, paintings, movies, music, sculptures, and other art are composed of things that exist and can be observed.
It can be objectively observed that a painting has a castle in it, or a movie has a chase scene, or a book is set in a fantasy world.
So yes, we can make objective statements about art.
Objective statements need not be superficial, either. You could talk in depth for hours about a video game, for example, using only objective statements.
You could talk about narrative events, plot structure, character development, the average frequency and length of combat sequences compared to other games, movement speed, weapon accuracy, recoil patterns, special abilities, map size, terrain features, collectables, progression systems, game modes, paid content, control and graphical options, bugs, playtime for the average casual player, and a thousand other things.
You could do all this without ever using subjective terms like “fun,” “satisfying,” “enjoyable,” or “boring”. You may not have all the facts, but you can still make objective observations about a work of art.
This doesn’t mean subjective views aren’t valuable. Some people prefer critics to voice their subjective views.
They don’t want to hear only the hard facts about a game—they want to know how the game made the critic feel. If a critic has similar tastes to them, they might take that critic’s opinion more seriously than another’s.
Nothing here is controversial, I hope. Everyone knows, or at least chooses to believe, that art has objective qualities.
The controversy starts when people start using qualitative judgements like “good” and “bad.” That’s when factions start forming and insults start flying.
This is also where the topic gets complicated.
Can Art be Objectively Good or Bad?
To answer this question, we must agree on our definitions of good and bad. How about this:
Good: “having the required qualities; of a high standard”.
Bad: “of poor quality or a low standard”.
Both definitions rely on the words “quality” and “standard”. So let’s define them:
Quality: “the standard of something as measured against other things of a similar kind”.
Standard: “something used as a measure, norm, or model in comparative evaluations”.
Bringing all these definitions together, we get a statement like this:
Good and bad are statements of quality. We measure the quality of things using standards. Standards provide measurements to determine how good or bad things are.
Whether works of art can be objectively good or bad thus depends on where qualitative standards come from.
Do qualitative standards come from subjective preference, or are they “discovered” as scientists discover the laws of the universe?
Where Do Standards Come From?
You can’t apply standards to something without knowing what it is. To know what something is, we need to define it. So where do definitions come from?
This leads us to a complicated discussion in the philosophy of language that I’d rather not get bogged down in.
Suffice it to say we create definitions. Definitions are thus subjective. The things definitions refer to, however, exist in objective reality. If definitions did not refer to objective reality, they would be meaningless.
The subjectivity of the definitions we assign to things that exist in objective reality forces us to forge intersubjective agreements about what our definitions mean.
We cannot be certain that any two people agree perfectly on what a definition means, but we can get close enough for practical purposes.
Do Standards Come From Definitions?
In other words, are standards inherent to the definitions of things? Do definitions necessitate one or more standards that cannot be subjectively ignored?
Consider the word “story.” The most common definition of “story” is “a series of interconnected events.”
This definition appears to identify a standard that something must satisfy to be considered a story: “a series of interconnected events.”
Therefore, definitions necessitate standards.
This is likely because definitions, subjective as they are, refer to objective reality, which is where they derive their meaning.
A series of interconnected events means something, even if our language cannot perfectly encapsulate its meaning.
Definitions Don’t Have to Be Perfect
A short word on this before we move on. Please ignore the fact that we are both sinking into a philosophical quagmire.
Definitions do not need to be airtight. They do not need to include or exclude everything in the universe according to their criteria.
No definition, after all, is absolutely airtight … probably. Maybe I’m wrong. An absolutely airtight definition is certainly rare, at any rate.
Not being airtight doesn’t make a definition useless. The definition of “story” isn’t useless, even though there are ambiguous things some would call stories and others would not. The definition still refers to real concepts, albeit with imperfect accuracy.
The more exact a definition is, the more concrete it is. It is easier to come to an intersubjective understanding over concrete words. The boundaries of abstract words are more ambiguous and thus open to more subjective interpretation. The only way to limit room for subjective interpretation is to make the words we use as concrete as possible.
The Difference Between Definitional and Qualitative Standards
So far, we have shown that definitions are subjectively constructed and yet refer to objective reality to varying degrees of concreteness.
It follows, then, that definitions logically necessitate standards we cannot subjectively ignore. A story must contain a series of interconnected events. Even if we disagree about ambiguous edge cases, we can understand the reality of the concepts represented by these words.
But this is a definitional standard: a minimum standard something must satisfy to be considered a story. A story must contain a minimum number of events to be considered a series and a minimum amount of logical consistency for these events to be considered interconnected. As long as something achieves these minimum standards, it is a story.
But is it a good story? Is it a bad story? We know where definitional standards come from. But where do qualitative standards come from?
Qualitative Standards
Someone might say, “A story is a series of interconnected events. For something to be interconnected, it must have a logical throughline. Plot holes break this logical throughline. Plot holes are therefore bad. Quality is therefore derived from objective reality.”
But my response would be: “A story only needs a minimum amount of interconnectivity to function as a story. Any greater amount of interconnectivity is unnecessary and therefore subjective.”
People can like stories that make little sense. Maybe they only like stories that make little sense. For them, a “good” story would be one that ONLY satisfies the minimum interconnectivity requirement and no further. Such a person may seem strange to us, but are they wrong?
I don’t think they are. Definitional standards and qualitative standards are different things. Definitional standards only describe boundaries of meaning for something to be considered something.
It seems to me that whether something is good or bad within these defined boundaries is entirely up to us individuals to decide.
Even if you disagree that stories which make only a minimum amount of sense can be good, you probably wouldn’t claim that stories with lots of plot holes fail, by definition, to be stories. You would still call them stories, just bad ones.
Everyone has different qualitative standards. Everyone orients these standards in different directions, from good to bad. Everyone puts the threshold for good quality at different places on their internal quality spectrum. Everyone conceptualises the threshold between good and bad as having a different degree of exactness or ambiguity. Everyone balances the importance of different standards differently.
Finding Objectivity Within Our Subjective Frameworks
Standards may be subjective, but this doesn’t mean that qualitative judgements can’t be objective. Once a subjective framework (a standard) is established, objective measurements can begin.
We could say that the more sense a movie makes, the better it is. A movie with lots of plot holes would, by this metric, be worse than a movie with few plot holes, assuming that is our only standard.
The internal consistency of a movie’s plot is neither entirely concrete nor entirely abstract. We can use reason to objectively identify when something doesn’t make sense. But when weighing the importance of plot holes, ambiguity forces us to draw, to some degree, on our own subjective interpretation.
It is easy to compare the significance of a cameraman appearing in the background for a few seconds versus a confirmed-dead character returning with no one acting surprised. But it is not so easy to compare two similarly impactful plot holes and objectively say which is more significant. In such cases, ambiguity forces us either to draw upon subjective interpretation or to dismiss the comparison and call them close enough that it makes no difference.
So we can objectively measure quality within the subjective frameworks we establish, though these measurements will never be totally objective. We can only strive for objectivity in our qualitative assessments, assuming we want our opinions to be shared or at least understood by others.
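To make this idea of “objective measurement within a subjective framework” concrete, here is a toy sketch in Python. It is purely illustrative, not something from the essay: the standards, weights, and threshold are all hypothetical names I’ve invented. The point is that the weights and the good/bad threshold are subjective inputs, while the scoring itself, once those inputs are fixed, is mechanical.

```python
# Toy model: quality as a weighted average of measurements against standards.
# The measurements are the "objective" part; the weights and threshold are
# the "subjective framework" each viewer brings.

def quality_score(measurements, weights):
    """Weighted average of per-standard measurements (each in 0..1)."""
    total_weight = sum(weights.values())
    return sum(measurements[s] * weights[s] for s in weights) / total_weight

# Two viewers measure the same film identically (the objective part)...
measurements = {"plot_coherence": 0.4, "visuals": 0.9}

# ...but weigh the standards differently (the subjective part).
viewer_a = {"plot_coherence": 3, "visuals": 1}   # story-first viewer
viewer_b = {"plot_coherence": 1, "visuals": 3}   # visuals-first viewer

score_a = quality_score(measurements, viewer_a)  # 0.525
score_b = quality_score(measurements, viewer_b)  # 0.775

# Where each viewer draws the good/bad line is also subjective.
THRESHOLD = 0.6
print("Viewer A:", "good" if score_a >= THRESHOLD else "bad")
print("Viewer B:", "good" if score_b >= THRESHOLD else "bad")
```

The same objective measurements yield opposite verdicts, which mirrors the essay’s point: the disagreement lives in the framework, not in the facts.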
Does Subjectivity Make Art Criticism Pointless?
Our standards for quality may be subjective, but that doesn’t make art criticism pointless.
We may all hold different subjective viewpoints, but nothing prevents us from agreeing upon intersubjective standards. Our understanding of these common standards may never be exactly the same, but we can come to a close enough understanding for practical discussion.
We do this all the time. When we critique the quality of art, we do so with the understanding that we hold similar intersubjective standards.
Most people, for example, agree that plot holes are bad. They agree narrative coherence is good. We may place the threshold between good and bad in slightly different places on the quality spectrum, but for the most part, where we place the threshold will be close enough for us to understand each other.
Thus, when someone says, “X book is bad,” we can usually infer their qualitative standards based on our knowledge of them, the book, and the context of the conversation.
Even when two people hold different qualitative standards, it is useful for both parties to understand what the other’s standards are.
You might not like romance stories, but by understanding the intersubjective standards of those who do like them, you could estimate the quality of a romance story according to the standards of those who enjoy them. Understanding alternative qualitative standards helps us empathise with people and the art they consume, even if we can never feel the same way.
Subjectivity does not make art criticism pointless. If anything, it makes it more fun. It makes us aware of how many ways art can be perceived rather than narrowing all art criticism down to a finite set of unchanging principles.
The Danger of “Good” and “Bad”
Whenever we use words like “good” and “bad”, we do so trusting that people share our intersubjective frameworks for quality, and that they know we recognise these frameworks as subjective.
To avoid confusion, it is best to avoid these words whenever possible. They are good shorthand words because they are abstract, but in the interests of clarity, we must try not to overuse abstract terminology.
It is best to limit objective criticism to statements of fact, such as I discuss in the early sections of this essay. Only use “good” and “bad” when you’re feeling lazy af and trust your audience to understand what you mean.
Summary
Let’s wrap everything up.
There is no such thing as objective quality in a universal sense. This doesn’t mean you can’t assess something’s quality or that something cannot be objectively good or bad. It means that something can only be good or bad within the framework of an intersubjective definition of quality.
An intersubjective definition of quality comprises one or more subjective standards. These standards provide objective measurements to a greater or lesser degree, depending on how abstract or concrete their metrics are. All of these standards ultimately derive from our subjective preferences.
Correct me if I’m wrong about any of this. I’m not a philosophy expert. This is just my best effort so far at understanding objectivity in art criticism. I definitely feel I am pushing the limits of what my tiny brain can achieve, so I’m well aware I may have gotten things wrong.