Monday, November 30, 2015

Star Star Star Star Star

Interesting post by Natalie Luhrs: Conflation of Review and Critique, or How to Annoy Me.

There's a lot in that post that I agree with, and a few things I'd probably want to respond to by embroidering them with qualifications in my own baffling idiolect, each of which would then be followed by an "or you could say" or an "or to put it another way, more-or-less," and then some more readily-recognizable and not-unreasonable and perhaps slightly-equivocal sentiment, albeit one which actually would not very well capture whatever weird hair-splitting involution I originally registered in my own baffling idiolect, but rather comprise a tacit bottom-lip-trembling invitation to forget I said anything, okay? Waaaaaah!

Or in other words, there are a few things in that post that I might disagree with!

But instead of attempting anything mildly and/or misleadingly coherent today, let me offer twelve quick sets of semi-rhetorical questions, mostly about stars:

(1) Look at this four-and-a-half star consensus on a scraped PoD "book" of Wikipedia etc. content about Neal Stephenson. "Tough read, but rewarding," says one reviewer who, like almost all the other reviewers, thinks they're reviewing something else. It's not even clear what: the entire Baroque Cycle collected in one volume? A particular Baroque Cycle novel? Cryptonomicon? Damn, maybe I'm wrong and they really mean those Wikipedia or whatever articles. Question. What's up with that?

(2) Would you ever rate something you hadn't read?

(3) If you do a lot of star-type rating, do you implicitly create certain sub-categories within the materials that you're rating? Or is everything compared to everything else? Do you imagine a reader going through all your ratings, or just encountering each rating in the context of that book? That is, do you ever try to communicate things with your ratings as a whole gestalt, or do you think of each encounter in isolation? Or something in-between?

For instance, do you tacitly judge all the books by a particular author against each other, making it your priority to put them in the right order, as carefully discriminated as possible? In other words, in what ways are your evaluations transitive? If you are confronted with the decision between either (1) signalling that Book A is better than Book B by Author X, or (2) signalling that Author X is better than Author Y, which do you tend to go for? And/or do you tend to group by genre, or by historical period? If you have answered that you compare everything to everything, does that hold even across different sites and platforms?

Do you ever sort of defer to local norms, clustered around a platform, or an author, or even a particular work? Or do you sort of create your own clusters of norms, perhaps according to genre or micro-genre, or perhaps according to something that is a bit harder to describe?

E.g. here's a test: how would you rate these ten works out of five stars, to best communicate what you think and feel about them: Hamlet by Shakespeare, The Merchant of Venice by Shakespeare, The Taming of the Shrew by Shakespeare, The Age of Innocence by Edith Wharton, The Dispossessed by Ursula K. Le Guin, Dune by Frank Herbert, Ancillary Justice by Ann Leckie, The Buried Giant by Kazuo Ishiguro, The Road by Cormac McCarthy, Seveneves by Neal Stephenson. You have to do the whole list. Can you rate them all so that no matter what pair you pick, the better work of the two has more stars?

(If there are some on the list you haven't read, you can swap them for another work by the same author, or use one or more of these spares: The Concept of the Political by Carl Schmitt, I Am Legend by Richard Matheson, Assassin's Apprentice by Robin Hobb, Kafka on the Shore by Haruki Murakami, Carter & Lovecraft by Jonathan L. Howard, Planetfall by Emma Newman, Aurora by Kim Stanley Robinson, Trouble on Triton by Samuel R. Delany, Stone by Adam Roberts, Cranford by Elizabeth Gaskell, Doctor Faustus by Christopher Marlowe).

UPDATE:

Consider this Goodreads user as a specimen. It does seem strange that somebody who was so consistently disappointed by Lightspeed or Clarkesworld or Nightmare would keep coming back, issue after issue. Also he is very valiant to keep struggling through the works of so many writers of color, and so many works by those he probably considers libtards, cucks and feminazis, when he simply can't find any that he likes. Which is to say, I don't think he has actually read most of the works he has given one-star ratings to.

It should go without saying that this dude hails from the Puppies realm of fandom, and he and I probably disagree about pretty much everything. He probably thinks there shouldn't be gun control; I don't mind whether there is or not.

But in the context of this post, it should also go without saying that I'm interested in his self-perception. How does this reviewer see what he is doing? Does this reviewer actually think of himself as doing something underhand, as trolling, cheating, gaming the system, meddling with the data? Is there a moment where he thinks, "the ends justify the means!"? Or does he consider his practice legitimate, an expression of the same norms that govern rating all over Goodreads, only given different opinions and information? Does he in fact believe it is enough to "know" with confidence that something is badly written, since he has sampled similar stuff in the past and finds it unreadable, and ascribes its publication to political correctness and nepotism? He probably sees himself -- in roughly Puppies lingo -- as a non-political lover of freedom and democracy standing up to the hegemony of the Social Justice Warriors. If he were called out over it, would he disappear and try to do it again more subtly, or would he be a little confused about the substance of the challenge? Would he try to defend his practice? If so, how?

(As it happens, I think he probably does see himself as a saboteur, doing something underhand, and the reason for that is that Barack Obama gets five stars, whereas Mitt Romney and Rand Paul get just one; this really looks a lot like an attempt at muddying the waters and achieving deniability. I can maybe see him feeling that Mitt and Rand just aren't alt-right enough for him, but all that love for Barack is a real tough sell. If that's the case, it's interesting that this politically motivated rater only has the courage of his convictions when it comes to cultural politics: he's very willing to betray his instincts in rating political memoirs and actual books about politics and cultural affairs, if it will slightly smooth the way for his opinions within the sphere of speculative fiction. Of course I'm speculating. Maybe he just loves those Obama books. I'm also interested, by the way, in his treatment of Stephen King, who gets the occasional two- or three-star rating peeping out from a long gleam of one-star ratings. Mamatas also musters the occasional two- or three-star rating. Is this more muddying of the waters, a performance of even-handedness? Or is it possible that he has read, for instance, all those Stephen King books, in fact loves Stephen King, apart from King's implicit and explicit political stance? Are we seeing a grudging trade-off, where an author is punished as a public figure, but the punishment is mitigated in the case of a few books the rater found totally outstanding? Is this Goodreads user vacillating between rating those two entangled things, the author and the author's works?).

(4) What if you know that a book is worth, for instance, four stars? Let's say you are very sure of it. And let's say that this book is currently sitting at an average of about two stars, after about twenty reviews have come in. Conventional wisdom is that you should now rate it four stars, its true worth to you. Right?

So what would it take for you to rate it five stars, to bring its public valuation closer to its true worth? Its just price, you could say? Would you ever be tempted?

Or perhaps this is already your practice? Archers take the prevailing winds into account; shouldn't reviewers? Doth not the responsibility of truth-telling partly imply paying attention to the context in which you speak, and ensuring that you set your meaning on a course such that, as far as you can tell from the currents and forces before you, it will eventually hit its true target as closely as possible? Does it feel slightly like I'm Lucifer? The Morning-Star Tempts the Star-Giver, etc.? What if you were absolutely resolute that it was worth two stars, but it was sitting on an average of one star, after a hundred reviews had come in? Would you give it two, three, four or five stars?

(5) To what extent, when you are rating a book (or, for instance, a hotel or a service or a person), do you feel you are following commonly accepted norms for generating ratings? To what extent are the norms you are following describable? Do you think it matters if there are different norms simultaneously in operation? If there are different norms in operation, in what ways do they overlap, and in what ways do they not? To put that another way, what sorts of books are more likely to reveal, and what sorts more likely to conceal, the difference between the norms in operation? To what extent do you think the norms you conform to are visible to others when you exemplify them? Do you ever virtue-signal with your ratings?


(6) Do the ethics of rating (OK that's probably too grand a term) actually change as more ratings are accumulated? Do you incline toward temperance / perhaps mild positivity if there are very few ratings, and then go for more extreme and potentially unforgiving ratings if there are lots of other ratings to absorb your contribution? Do you consider how many ratings a work is likely to gather over its lifetime? I.e. if you are likely to be one of only two or three people who ever rate it, or maybe the only person who ever rates it, does that change how you marshal your stars?

(7) What do you think of those people who bestow only one star on a book because it isn't about what they thought it was about?

Or those people who give it one star because it arrived late in the post, or because they thought it was too expensive and so they didn't buy it (for example, as in the sole Amazon review of this academic book, which is modestly priced for an academic book, although of course academic books are far more expensive than they ought to be), or in some other way found the book unsatisfying not in itself, but because it wasn't something they wanted it to be?

Or what do you think of those people who rate the book, rather than the marketplace seller, when they find that the quality of the copy does not match the description? I mean, how many stars would you give those people? No, that's not what I mean at all. I really mean, what's up with that? Can these practices be defended, or can parts of them be defended?

Similarly, what about people who go ahead and give a rating based on a single hated or loved feature? -- e.g. "I hate third-person present-tense novels. I go around giving them all one star." What about the people who give their ratings based on the same handful of somewhat flexible, extremely broadly applicable criteria, criteria that are brought to book after book: for instance, transparency of style, relatability of character, plausibility of motivation? What about the person, quoted below, who gave a book one star because their friend deeply disliked it? Is that practice defensible? (Compare liquid democracy, perhaps?) Less drastically, how much of a book do you think you need to have read before you are competent to rate it? Or alternatively, how many times do you think you have to read a book before you are really competent to rate it? Say, "in an ideal world"?

Similarly, when you buy Amazon reviews, how does that actually work? -- what parts of the process are automated? Or are there sweatshop workers writing those reviews, or what? Does anyone know?


(8) What are the truth conditions for the attribution of a certain level of stars? If the answer is "it's subjective" or "it depends how you feel," how do you know what you feel? What does a number feel like? How do you feel a number? Does a number of stars feel different from a number of hearts? How do you know you are really feeling the number, and not just mistaking some different feeling for the feeling of that number?

How is it possible for people to disagree on how many stars a book should have?

What would be necessary to make it impossible for people to disagree on how many stars a book should have? Is it possible to give a book the wrong number of stars? Is that what is implied by sites like Love Reading, Hate Books, or by authors sharing their #1starreviews on Twitter? (Answer: sometimes). Is the ascription of stars an aesthetic judgment (e.g. in a Kantian sense), and/or should it be? Would you ever go back and adjust all your star ratings to accommodate the sudden magnificent appearance of something that is greater than anything you have ever encountered, or is literally, by an order of magnitude, just the worst?


(9) Are stars scattered because books are scattered? In other words, do different star ratings tend to reflect sharply different styles of reading, and/or sharply different phenomenologies of reading, and/or the reading of different books? How real are these multiple different books ("book" in roughly the sense of "what is read") that supervene on the common book ("book" in roughly the sense of "the codex filled with words")? Are they at least real enough that we could for instance write literary scholarship about them -- that we could talk about "the one-star Aftermath" and "the four-star Aftermath" in the same way we talk about the First Quarto Hamlet and the Folio Hamlet? How do such different phenomenologies relate to the identity characteristics of intersectionalism and/or diversity discourse?


(10) There could be an algorithm which adjusts the average star rating to a targeted, weighted "average." Would you want such functionality? Would you want a targeted, weighted "average"? Obviously not. What if you could adjust the algorithm yourself, and there was an easy filter switch, to toggle between the weighted average and the unweighted average? Are you absolutely certain such an algorithm is not already in place? If it were, how would it first come to your attention? And what about if we weren't talking about books? What if we were talking about, say, Uber drivers? Would you want such functionality then?
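
(For concreteness, here's a minimal sketch of what that filter switch might toggle between. The ratings and weights below are invented for illustration; the weights stand in for whatever recipe question (12) might produce.)

```python
# Minimal sketch of the toggle in question: the same ratings, averaged
# with and without per-review weights. The weights here are invented
# for illustration -- in a real system they might come from any of the
# recipes in question (12) below.

def unweighted_average(stars):
    return sum(stars) / len(stars)

def weighted_average(stars, weights):
    return sum(s * w for s, w in zip(stars, weights)) / sum(weights)

stars   = [1, 1, 4, 5, 5]
weights = [0.2, 0.2, 1.0, 1.0, 0.6]  # hypothetical trust weights

print(unweighted_average(stars))         # 3.2
print(weighted_average(stars, weights))  # ~4.13
```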


(11) Imagine that reviews were not for authors in any sense, and also not for readers, in the sense that they were not primarily a filter for readers to decide where to focus money, attention, faith, and/or mental and emotional labor. Who or what else might reviews be for?


(12) What recipe would you use to create a useful weighted average? I.e. if you could see all the individual reviews, and could see the average star rating, but could also see another rating, "star tally tailored for me," or something?

The obvious recipe is to gradually slightly bump up the weight of reviews which people find "helpful." And vice versa: if lots of people find a review unhelpful, perhaps it shouldn't carry so much weight. (But do you think there is something a bit seedy about that suggestion? After all, people weren't asked whether they think the star rating should have greater or lesser weight. They were asked whether they found the review -- the text in particular, you gotta imagine -- helpful).
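
A minimal sketch of that obvious recipe, assuming a made-up smoothing formula (no site, so far as I know, does exactly this):

```python
# Sketch of the "obvious recipe": each rating's weight grows with
# helpful votes and shrinks with unhelpful ones. The +1/+2 smoothing
# and the formula itself are assumptions, not any site's real method.

def helpfulness_weight(helpful, unhelpful):
    # Laplace-smoothed fraction of voters who found the review helpful.
    return (helpful + 1) / (helpful + unhelpful + 2)

reviews = [
    (5, 40, 2),   # (stars, helpful votes, unhelpful votes)
    (1, 0, 25),
    (4, 10, 1),
]

num = sum(stars * helpfulness_weight(h, u) for stars, h, u in reviews)
den = sum(helpfulness_weight(h, u) for _, h, u in reviews)
print(num / den)  # ~4.45, vs. an unweighted mean of ~3.33
```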

What about a slight extra weighting according to word count -- a decent proxy for whether a reviewer put some consideration into their review?
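
Sketched, with an assumed logarithmic bump so that sheer verbosity pays diminishing returns:

```python
# The word-count bump, sketched: a gentle logarithmic weight so a
# 500-word review counts a bit more than a 5-word one, without letting
# length dominate. The log form and the 0.1 factor are assumptions.

import math

def length_weight(review_text):
    return 1.0 + 0.1 * math.log1p(len(review_text.split()))

print(length_weight("Great."))       # ~1.07
print(length_weight("word " * 500))  # ~1.62
```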

How about an innocuous scan for phrases like "couldn't decide between three and four stars and eventually went for three," which would re-score that rating as, say, 3.25 stars?
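
A toy version of that scan, with the phrase pattern and the quarter-star adjustment both pulled out of thin air:

```python
# If the review text admits it was a coin-flip between two adjacent
# star counts, nudge the rating a quarter-star toward the road not
# taken. Pattern and adjustment are illustrative assumptions.

import re

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
PATTERN = re.compile(
    r"between (one|two|three|four|five) and (one|two|three|four|five) stars",
    re.IGNORECASE,
)

def rescore(stars, text):
    m = PATTERN.search(text)
    if m:
        lo, hi = sorted(WORDS[w.lower()] for w in m.groups())
        if lo <= stars <= hi:
            return stars + 0.25 if stars == lo else stars - 0.25
    return float(stars)

print(rescore(3, "Couldn't decide between three and four stars "
                 "and eventually went for three."))  # 3.25
```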

What about weighting somebody's star-giving according to how freely they dispense their stars: if I only ever give one-star or two-star ratings, perhaps my two-star ratings should be weighted as a curmudgeon's highest praise?
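
One way to cash that out: re-express each rating relative to the rater's own history -- a z-score, roughly -- and map it back onto the five-star scale. The constants here are arbitrary assumptions:

```python
# Re-express a rating relative to the rater's own distribution, so a
# curmudgeon's two stars reads as high praise. The re-centring on 3
# and the one-star-per-standard-deviation spread are made up.

from statistics import mean, stdev

def normalized_rating(rating, history):
    mu = mean(history)
    sigma = stdev(history) or 1.0  # fall back if the history is flat
    z = (rating - mu) / sigma
    return min(5.0, max(1.0, 3.0 + z))

curmudgeon = [1, 1, 2, 1, 1, 2, 1]  # a career of one- and two-star ratings
print(normalized_rating(2, curmudgeon))  # ~4.46: high praise from this rater
```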

Are there any ways of building in tests that reduce the impact of reviewers who really have not read the book in question? Would you want them?

And what about increasing the weight of reviews by reviewers whose reviews you personally have "found helpful" in the past? Or even, one hop further out, increasing the weight of reviews by reviewers whose reviews were found helpful by the reviewers whose reviews you found helpful?
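
Sketched as a tiny trust graph, with the damping on the two-hop case entirely made up (this is PageRank-flavoured guesswork, not anybody's actual machinery):

```python
# One-hop and two-hop trust: full weight for reviewers I've found
# helpful directly, damped weight for reviewers my helpful reviewers
# have found helpful. Graph and damping factor are invented.

# trusted[a] = set of reviewers whose reviews `a` has found helpful
trusted = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
    "bob":   {"carol", "dave"},
}

def trust_weight(reviewer, damping=0.5):
    if reviewer in trusted.get("me", set()):
        return 1.0  # one hop: I trust them directly
    # two hops: damped credit from each trusted intermediary
    return sum(
        damping for friend in trusted.get("me", set())
        if reviewer in trusted.get(friend, set())
    )

print(trust_weight("alice"))  # 1.0
print(trust_weight("carol"))  # 1.0  (0.5 each from alice and bob)
print(trust_weight("erin"))   # 0.0
```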

What about going for a full-blown k-NN classification to determine how close a particular reviewer's "tastes" are to your "tastes," with taste being modeled by extracting features from a dataset including reviews you have written and reviews you have found helpful or not helpful? How would you set up the parameters in detail? What are the second-order possibilities and risks, in terms of a new incentive structure for reviewers, and for cultural production more generally?
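
Here's a hand-rolled sketch of the k-NN idea (nearest-neighbour regression, strictly, rather than classification), with reviewers represented as vectors of star ratings on shared books; the data, the features and k are all invented:

```python
# Find my k nearest "taste neighbours" by cosine similarity over
# shared star ratings, then predict my rating of a new book as their
# similarity-weighted average. A real system would extract richer
# features (review text, helpful-vote history); this is a toy.

import numpy as np

# Rows: reviewers. Columns: star ratings on the same five books.
reviewers = np.array([
    [5, 4, 1, 2, 5],
    [1, 2, 5, 5, 1],
    [4, 5, 2, 1, 4],
    [3, 3, 3, 3, 3],
], dtype=float)

me = np.array([5, 5, 1, 1, 4], dtype=float)

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

k = 2
sims = np.array([cosine_similarity(me, r) for r in reviewers])
nearest = np.argsort(sims)[-k:]  # indices of my k nearest taste neighbours

# Their ratings of some new book, weighted by similarity:
new_book_ratings = np.array([4, 1, 5, 3], dtype=float)
prediction = (sims[nearest] @ new_book_ratings[nearest]) / sims[nearest].sum()
print(nearest, prediction)  # ~4.5
```

The second-order risks start to bite exactly here: once reviewers know that textual features and helpful-vote histories feed the model, they have an incentive to write for the model rather than for readers.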

Elsewhere:
RevRank.
