This cartoon is by Barry Deutsch and Becky Hawkins.
There was some interesting discussion of this cartoon on Tumblr, regarding whether this cartoon stereotypes husbands.
Remember, if you like my political cartoons, please consider supporting my Patreon!
Transcript of the cartoon:
Panel 1
In the foreground, a middle-aged man types on his laptop. Behind him, a yelling child is calling to the man, while the child’s mother, holding an infant, shushes him. A caption shows us what the man is typing.
JUNIOR: Dad! Dad! DAD!
MOTHER: Junior, let your father work.
CAPTION: “The ‘wage gap’…”
Panel 2
Same scene. The boy has calmed down, and the mother is bringing him along by the shoulder as she exits. The mother looks exhausted, and the baby is pulling on her hair.
MOTHER: I’m going out – I have to meet with Junior’s teacher and do groceries and pick up your dry cleaning and…
CAPTION: “…mostly disappears….”
Panel 3
The mom has departed, but the man, still typing, turns his head to call out after her.
MAN: Oh, the nursing home left a message about my mother… Would you take care of that?
CAPTION: “…when you control for the fact…”
Panel 4
The man turns back to typing.
MAN (thought balloon): Hope she makes stew for dinner tonight.
CAPTION: “…that women work far fewer hours than men.”
Read the tumblr discussion. Maybe you could fix the valid criticisms brought up there with different wording… if he were typing something like “the wage gap does not account for unpaid labor done by women” and thinking something like “how could I do it without her” in the final panel.
A friend of mine has an interesting Master of Engineering project. He’s working on an algorithm to assess an author’s work for sexism, racism, etc. You take the body of work and assign two sets of values to every character: one describing the character in terms of race, gender, ethnicity, sexual orientation, political leanings, etc.; the other on a positive/negative scale in terms of altruism, ruthlessness, obliviousness, etc. Unfortunately, only the first part is automated, and it generates a list of characters you have to manually rate. But once that’s done, it looks for correlations.
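To give a feel for the structure, here is a minimal Python sketch of that two-pass design (the class and field names are made up for illustration; this is obviously not his actual code):

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of the two-pass design described above. Pass one
# (automated for prose) extracts characters and their descriptive
# attributes; pass two (manual) assigns each character a rating on a
# negative/positive scale; the final step looks for correlations
# between attributes and ratings. All names here are illustrative.

@dataclass
class Character:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"gender": "male"}
    rating: Optional[float] = None                  # assigned manually

def attribute_effect(characters, attribute, value):
    """Mean rating of characters with attribute == value, minus the
    mean rating of everyone else; a negative result means that group
    is portrayed worse on average."""
    rated = [c for c in characters if c.rating is not None]
    group = [c.rating for c in rated if c.attributes.get(attribute) == value]
    rest = [c.rating for c in rated if c.attributes.get(attribute) != value]
    if not group or not rest:
        return None  # nothing to compare without both groups
    return sum(group) / len(group) - sum(rest) / len(rest)

# Toy example: two rated characters, man rated worse than woman.
cast = [
    Character("A", {"gender": "male"}, rating=-1.0),
    Character("B", {"gender": "female"}, rating=0.5),
]
print(attribute_effect(cast, "gender", "male"))  # -1.5 in this toy cast
```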
He is mostly tuning it on old Socialist realism crap, and uses it to dissect what passes for fiction to the right wing crowd at Baen. (No offense meant to Baen, they publish most of my favorite new fiction, in addition to garbage that makes me nauseated after a few pages)
You would be surprised (or you may not be) at how Italians look in US publications at the end of the 19th century, or Engineers in Socialist realism, all the way up to the late 80s. He gets some pretty interesting, and expected, results from MRA sites, as well. But Barry, you have a claim to fame with his engine. No one, and I mean not even the Stalinists from the 50s, has managed such extreme deviations as your political cartoons (we fed 24 to the engine). You are really reaching for three standard deviations on “Men bad, women good”. You’re going pretty strong on race and curly hair, as well, but gender is where you really have no equals.
Now, I am not saying that this is a bad thing. Maybe a good purpose is served by your having men always in the wrong. After all, the world agrees with you. Men get longer sentences for the same crimes, no category is as discriminated against by the Boston Police as males, and just changing the names from male to female changes sympathy on online surveys by double digit percentages.
But anyway. I got mine, and I see no reasonable scenario in which I won’t keep it. I also have a daughter, and no sons. So, keep on holding your mirror to the world. Men need to see other depictions of themselves besides fat incompetent morons, and musclebound unfeeling mayhem dispensers.
Petar: I’m a little skeptical that your friend has actually created a computer program that is able to objectively determine the “race, gender, ethnicity, sexual orientation, political leanings” of not only prose works but cartoons in an “automated” fashion.
From what you describe, it seems like I’m literally the most anti-male creator in all of history, and I’m far more “men bad, women good” than any other creator ever (or any that your friend has yet measured, anyway). Which is weird, since it’s trivially easy to find cartoons I’ve done in which the “good guy” character is male, the “bad guy” character is female, and I recently did a big cartoon decrying anti-male sexism. (All three examples from the current front page of my site.)
If I’m really the worst, most anti-male creator in history, then I think men must not have much to complain about.
Of course it only works automatically on prose. For the cartoons, you have to feed it the data. If you do not believe that it can be done automatically for prose, you are quite behind the times. My own Master’s Thesis did it for killings, family relationships and contacts, back in 1997. I used the Bible as my first real test.
And yes, you are the outlier so far. Mostly because rabid animals like John Ringo and Tom Kratman go over their excretions and make sure that the genders and religions are balanced. You do not. (And as I said before, I’m not sure it is a bad thing per se. Most right wing SciFi authors are actually showing a Black vs White bias quite close to the mainstream, and I think that, in general, it is a good thing for society.)
And of course, I am sure that worse biases can be discovered if, for example, you look at the Japanese in WWII US cartoons.
As for your counterexamples, I’m shocked you could not come up with better ones. Not one helps your argument. I never said you hated males, or were a bad person. All I said is that in your work, when a woman and a man interact, the man is invariably worse, nearly three standard deviations away from what you’d expect by chance. The articles on fucking townhall.com don’t come close to showing that bias against Blacks…
In the first cartoon both characters are male. It cannot show bias.
In the second cartoon, both characters are bad, but the male character is much worse than the female one. The female one is within the bounds of the law, doing her job. The male one is betraying the public trust. If you think that a mole is only as bad as the agent who turns him, we’ll have to disagree.
The third cartoon, I had not seen before, but it would not make a good sample. I can’t even tell how many characters there are – some could be the same one in a different setting, and it is hard to assign them values. We tried to look for cartoons where one could not make the argument that the data is too tainted by subjectivity. For example, you have a habit of drawing characters so that it is hard to tell whether their dark skin is a result of race or of facing away from the light source. Thus, he will not even talk about the Black vs White bias, which is close to two standard deviations, because he does not trust the marking.
By the way, if he ends up using a lot of your work in his presentation, he will ask you for permission. He is not quite there yet.
—-
By the way, I really dislike this cartoon. I think I may dump on it this weekend, if it does not feel too personal.
Petar: So your friend does corpus linguistics studies of bias? Cool! What program are they in? Can you (or they) tell us more about how the research actually works?
Last I checked, corpus linguists shied away from declaring absolute numerical scales in favor of talking about different grammars but, hey, maybe that’s changed in the decade since I was reading about it. It’d be interesting to get caught up.
yrs–
–Ben
I wanted to highlight this bit, Petar. You’re saying that Amp’s cartoons are an outlier in terms of their male-to-female ratio of bad characters, compared to the other works your friend has examined. But this tells us absolutely nothing about the absolute scale: if all of the works your friend had previously examined had 90% female bad characters, and Amp had 50% female bad characters, he would still be this much of an outlier, even though his work was even-handed.
There are three other concerns, of course, assuming your friend’s software is working correctly. (Not a slam, I’ve had to send three different emails to a professional mailing list this week correcting previous results due to minor but important software bugs…) One of them is that Amp writes cartoons arguing against socially conservative viewpoints, which are genuinely held more often by men. Another is that–as we discussed in a previous thread–Amp’s lovely stylized characters are sometimes hard to determine the genders for. And, finally, this still requires human coding for the positive or negative portrayal; you’ve left out the bias of the coder as one of the contributors to this result. (I would expect any coder to have some kind of bias–you can measure systematic differences in data labeling even for physical data labeled by experts; something this subjective would certainly have contributions from the person doing the coding. I don’t necessarily mean political biases, just that people rate the same things differently in systematic ways.)
***
I love everything about the body language of the children, by the way. It’s amazingly dynamic for only having two still panels in which to display it.
Harlequin, I mentioned deviations from random expectation. Whether it is Scholars vs Peasants, Black vs White, or Men vs Women, the final results are comparable. If someone has an oeuvre in which Black characters are nearly always wrong when disagreeing with a character of a different race, you may suspect that the author holds certain views. If a book has the Theist characters nearly always right in their interactions with Atheists, you can make a guess about the author’s beliefs. “Nearly always”, “Sometimes”, “More often than not”… you can quantify those, and one way is to do so via deviation from random expectation (a toy calculation follows this comment). Why? Because this way you can more easily account for biases due to representation and dependent variables. (As a totally unrelated dig, here is a link to an article by people who failed to do so.) And no, it is not about how many characters are bad or good. It is about which of two differently marked characters is worse or better.
Furthermore, when the interacting characters are similarly marked, you cannot draw any conclusions, just as you cannot do so when two characters are similarly bad. For example, in Ampersand’s second cartoon, you cannot add anything to your data about gender or race, but you could use it for the ‘wealth’ and ‘social status’ categories, i.e. “Rich bad”, “Well dressed bad”, “Homeless good”.
And finally, Amp is not an outlier just in Male vs Female deviations. His Male vs Female deviations are outliers even compared to how intellectuals are portrayed in 50s Stalinist fiction, etc…
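To be concrete about the arithmetic, here is a toy version of the deviation calculation (a simplified sketch using the normal approximation to the binomial, with made-up numbers; the project’s actual statistics handle representation and dependent variables, which this does not):

```python
import math

def deviation_from_random(n_interactions, n_first_worse, p_null=0.5):
    """How many standard deviations the observed count of 'the first-
    marked character comes off worse' sits from a coin-flip null, using
    the normal approximation to the binomial. p_null = 0.5 assumes
    that, absent bias, either party in a mixed-marking interaction is
    equally likely to be the worse one."""
    expected = n_interactions * p_null
    sd = math.sqrt(n_interactions * p_null * (1 - p_null))
    return (n_first_worse - expected) / sd

# Made-up example: 20 mixed-gender interactions, man worse in 17.
print(deviation_from_random(20, 17))  # ~3.13 standard deviations
```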
Actually, no. He is in the MIT HASTS graduate program, and his undergraduate degree was in Psychology (he’s one of my wife’s). He wants to squeeze in an MEng in there, to be able to command bigger bucks if he decides to go for the gold instead of the good. Most of the CompSci and Linguistics tools he is using were developed by someone else. He is all about social sciences, and he wants to be able to judge bias automatically.
He is also trying very hard to convince himself that some bias is useful and beneficial to society, while other bias is very harmful. I do not need to be convinced, so I am helping him… well, that, and he is using one of the few programs of mine of which I’m proud.
Cartoons are impossible to automate (yet) and harder to mark (not only because of the ubiquitous humor), but he wants to use them to make some of his points more accessible.
You must understand that he would be considered a lot more of an ally by most people here than I would be. But the two of us can agree on a few subjects, and one of them is that showing Black people as economically successful, well educated and superior in rationality-based arguments is a beneficial bias, while showing males as universally ignorant, stupid, lazy, immoral, emotionally deaf, self-centered, cavalier towards the needs of their parents, partners and offspring, etc… is a harmful bias. Whether it has basis in reality has nothing to do with its effect on society, at least for my friend’s purposes.
By the way, as to Corpus linguistics, even my original data miner should not be associated with it. I was not trying to learn about language from corpora, I was using rules formulated by others (possibly Corpus linguists) to build relational databases (give me a break, it was the 90s) from corpora. I certainly went for the gold over the good.
“One of them is that Amp writes cartoons arguing against socially conservative viewpoints, which are genuinely held more often by men.”
Citation needed?
Petar & Pesho are the same person? Can you do me a favor and choose one name to use for future comments?
Same person, yes. I could not remember what username I have been using. It is the same name, really, just different forms. I’ll try to stick to Petar, and of course, the e-mail should not change at all.
And as soon as I posted it, I realized that the browser was caching it as Pesho. Next time.
Pesho:
Oh–this was not at all clear to me from your previous comments, sorry. So never mind my comment. (Also, this setup makes more sense coming from you–since I know you’re a regular reader–than the random drive-by comment it appeared to be!)
***
Ortvin Sarapuu:
Citation needed?
That’s quite specific to American culture of the last few decades, I should have said (as is true of Amp’s cartoons). Here’s the first non-locked article I could find addressing the differences; inasmuch as you can map social conservatism vs liberalism onto Republicans and Democrats in the modern U.S. (more than in the past, but definitely not perfectly), women are more likely to support the Democratic party.
“That’s quite specific to American culture of the last few decades”
Oh, NM then. I thought you were speaking about men and women as a whole.
OK, Petar. If you can point me to your “friend’s” data set or their statistical writeup, I will give $10 to the 501(c)3 of your choice. I don’t believe the study actually exists.
First of all, 10 dollars? Seriously? Why not a dime while you are at it?
How about we both put $10,000 in escrow, and then I convince an arbiter of our choice that the project exists, code of mine is being used in it, and it is at a reasonable degree of completion? Is that good enough for you? I can do that with just the stuff over which I have control myself. I do not own his work, just the marking algorithm. (Technically, I do not even own that, but it is my Thesis)
I’m not willing to put even money on the final product, whatever form it takes, going to DSpace rather than Google or Palantir (why am I giving Palantir as an example, instead of SAS’s ilk? Because you have to admire the chutzpah in using the name after what happened to most Middle Earth palantirs)
You may note that I said that he is working on a Masters of Engineering as a side project to a HASTS Doctorate, and that he has not even decided how he is going to make the results more palatable, because, as anyone who has ever read any fiction knows, there is always bias. The poor sod is in social sciences. You can’t just present a working tool without considering all the ways it cuts.
Petar: Are you Pesho under a new handle? This exchange is pretty confusing if not.
(nothing wrong with changing handles, just trying to sort out who I am talking to in this exchange.)
yrs–
–Ben
Yes, Pesho and Petar are two forms of the same name, and I am in the process of making sure all of my browsers have Petar as the Amptoons handle. I thought I said so earlier in this very thread.
I consider your counter-demand of $10,000 to be further evidence that you’re not on the level. Also, the idea that you’d put up money too is laughable; we both know that if the initial claims are false there’s no way you wouldn’t back out somehow, and if they’re not, you take no risk by putting up the money.
Petar and Jameson, I don’t think continuing with the “wager” line of discussion would be productive. (Although like Jameson, I’m a little skeptical.)
(Want to comment more, but my bus is coming! Later.)
I don’t find it at all unlikely that an MA student at MIT kit-bashed together an algorithm to “automatically detect bias.” I am very skeptical of the results of said algorithm, and if it was my algorithm I’d be guessing that there are some pretty large errors going on in my definitions of bias or procedure for assessing it, if it returned the result that Barry’s prose writing on Alas! is the most biased in human history.
At the very least I’d be checking it against some radical feminist “kill all men” blogs and more acerbic, angry blogs like “We Hunted the Mammoth” and “Not Sorry Feminism.”
yrs–
–Ben
(Going off into the weeds a bit on this comment, sorry, but it is in response to the thread if not the post–I debated putting this on the open thread and won’t mind moving there if you’d prefer, Amp.)
I’m with Ben on this–it seems like a decent side project for a grad student with some software skills. Writing such a program in a way that produces meaningful quasi-objective results is a different question, though. I’ve been through enough media discussions to know that, even given a well-defined scale like “altruism”, the same character data may be interpreted wildly differently by different people* in a way that is not equal for all characters (although giving them some criteria to define positive traits is better than just asking them their overall impressions). But then, of course, there are the questions of which traits define worse vs better and how much we should weight each one–which aren’t objective, either**. Actually, that might be an interesting point of investigation, if it hasn’t already been done, but it’s a bit circular with the stated end goal.
In any case, having a single person do the data encoding is unlikely to give useful results: you’d want a group of encoders with a good representation of points of view, run the algorithm for each one, and average the results (a toy sketch of what I mean follows the footnotes below). And even then, you might still have signatures from common cultural biases (I would expect women to score lower on, like, “criminality”, if that was one of the criteria and your coders were from the US, even given a text where that criterion should be the same for all the characters).
In other words, it’s not the bias-finding algorithm I think is wrong. I think there’s a problem with the choice and the contents of the inputs as you’ve described them.
*Oh, the fan wars about whether Snape would turn out good or evil…
**I have a long-standing disagreement with one of my best friends about J.K. Rowling’s world building. I claim it’s terrible because it would never work, practically speaking. She claims it’s great because it’s so engrossing and works well as a fictional setting. We agree entirely on the description and disagree entirely on the interpretation, just based on how we weight the criteria.
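Here’s the sort of averaging I have in mind, as a toy Python sketch (the coder names, ratings, and the disagreement threshold are all invented for illustration; this is not the actual project’s code):

```python
from statistics import mean, stdev

# Toy sketch of multi-coder averaging: each coder independently rates
# every character, the per-character score is the mean across coders,
# and a large spread across coders flags characters the coders read
# very differently. All names and numbers here are invented.

coder_ratings = {
    "coder_a": {"char_1": -1.0, "char_2": 0.5},
    "coder_b": {"char_1": -0.5, "char_2": 1.0},
    "coder_c": {"char_1": -1.0, "char_2": 0.0},
}

characters = {c for ratings in coder_ratings.values() for c in ratings}
for char in sorted(characters):
    scores = [r[char] for r in coder_ratings.values() if char in r]
    flag = ""
    if len(scores) > 1 and stdev(scores) > 0.4:  # arbitrary threshold
        flag = "  <- high coder disagreement"
    print(f"{char}: mean rating {mean(scores):+.2f}{flag}")
```

Even this simple version surfaces the systematic-labeling problem: if every coder shares a cultural bias, the average inherits it, so agreement between coders is not evidence of objectivity.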
At this point, I believe that there is something wrong with my English.
Someone quoted a paragraph in which I used “deviation” twice, and said that he did not notice I was talking about deviation. Someone is asking for the study, when I have specifically explained what stage the project is in. I have at least twice explained that only the collection of attributes for characters is automated, not the marking. It should be obvious that until the tool is usable, the marking will be done only by people who are involved in the project, and that yes, in many cases it will be subjective. I have repeatedly stated that we are using Amp’s cartoons not because they can be automatically grokked, nor because we want to indict him for crimes against humanity, but because two of the people involved love SOME of his cartoons, and because the author wants to be able to illustrate some points quickly, which would be hard to do with prose.
As for the bet, this is what escrow is for, so that people cannot back out after the rules are set. Of course, I will set conditions that I can fulfill – I was not accused of having an imperfect tool, but of lying, which is easy to disprove.
And finally, I never said Ampersand is the most biased author ever. I just said that his male vs female deviation was the highest we have ever measured, which may be due to poor marking, or to the choice of material (SHORT cartoons).
But here it is again, so that no one can be mistaken as to exactly what I claim:
After 28 of his cartoons were processed by three people each, with both attributes and judgements manually assigned, 25 of the cartoons were retained because all three people had agreed on the positive/negative markings. The attributes, on the other hand, were discussed and eventually agreed on, except for one more cartoon, which we discarded because we could not agree on the race of one character.
This left 24 cartoons, and they produced a number of values, for race, gender, apparent wealth, physical shape, etc… One of these values was extremely high, between two and three standard deviations, and it was the gender one.
None of the earlier tests (true, we have not looked at Nazis in Soviet WWII propaganda) had produced such a high deviation. While I would love to use it, it is not my call – for now, it looks as if Ampersand’s cartoons, IF they get used at all, will be used to illustrate a bias that is not necessarily a bad thing. Obviously, this is not the male/female one.
This said, I, personally, think that the way Ampersand portrays males as nearly always in the wrong, whenever they disagree with females is … not a good thing.
Note that the algorithm tries to weight the interactions of attributes, including how the presence of one attribute affects the treatment of another attribute. For a made-up example: “Slavic people are always wrong unless they are also Catholic”. By the way, that (and the way to combine markings from multiple people) is the really new thing in the project; the rest is mostly small improvements on existing tools.
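As a toy illustration of what such an interaction check looks like (entirely made-up data and field names; the real weighting is more elaborate than a cross-tabulation):

```python
from collections import defaultdict
from statistics import mean

# Cross-tabulating two attributes against the manual ratings: a pattern
# like "Slavic characters are rated worse unless they are also Catholic"
# shows up as one cell differing sharply from the rest of its row.
# All data below is invented for illustration.

characters = [
    ({"ethnicity": "Slavic", "religion": "Catholic"}, 0.6),
    ({"ethnicity": "Slavic", "religion": "Orthodox"}, -0.8),
    ({"ethnicity": "Slavic", "religion": "Orthodox"}, -0.5),
    ({"ethnicity": "Other",  "religion": "Catholic"}, 0.1),
    ({"ethnicity": "Other",  "religion": "Orthodox"}, 0.2),
]

cells = defaultdict(list)
for attrs, rating in characters:
    cells[(attrs["ethnicity"], attrs["religion"])].append(rating)

for (eth, rel), ratings in sorted(cells.items()):
    print(f"{eth} + {rel}: mean rating {mean(ratings):+.2f} (n={len(ratings)})")
```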
Now, if you still find what I am saying unlikely, there’s nothing I can say.
And I will not say much about this subject, unless someone says something that we find interesting and worth looking into (which has already happened, thanks!)
And I will still try to find the desire to explain why I think that this specific cartoon is really awful.
This was me. To quote my response:
It was the “from random expectation part” that I didn’t understand, because you didn’t mention it in your first comment. I thought you were comparing to internal measures of mean and standard deviation from your previously-compiled data set.
I agree there has been a lot of confusion in this thread, but I don’t think it’s from your English. You mentioned a project a friend had been working on. It’s obvious you’re familiar with many of the features of this project, since you’ve been involved; so when you brought it up, there were some things I think were so obvious to you because of your previous involvement that you didn’t think to explain them (for example, the fact that you were benchmarking to random data sets). Left to our own devices, a number of us interpolated your statements with what seemed most sensible to us to fill in the gaps–for example, my filling in that you had measured the mean based on previous data. This is a pretty common communication problem.
As to whether you claimed Amp was the most biased author ever, you did give a bunch of pretty extreme examples that you said he was worse than. Since you were using that result in support of a contention that Amp is heavily unfair to men, I think grilling you about exactly how this supporting evidence was derived is a fair thing to do. :)
Also, my correct pronoun is “she.”
After the latest explanation, I now believe the project is real (though as a statistician I still have concerns with the statistical techniques described).
I apologize for derailing the thread.
Where would you like the $10 donated?
Specifically about the statistics: there seems to be a fundamental issue of multiple comparisons, especially as including interactions squares the number of potential tests. I think that the frequentist approach described is particularly ill-suited to dealing with this issue (though it wouldn’t be a trivial one in any paradigm).
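A back-of-the-envelope illustration of the worry (this assumes roughly k² tests for k attributes plus their pairwise interactions, and treats the tests as independent, which is itself generous):

```python
# With many attribute and interaction tests at a per-test alpha of 0.05,
# the chance of at least one spurious "significant" deviation grows fast.
# Bonferroni correction (divide alpha by the number of tests) is the
# crudest fix, at the cost of statistical power.

alpha = 0.05
for k in (5, 10, 20):
    n_tests = k * k  # attributes plus pairwise interactions, roughly
    p_any_false = 1 - (1 - alpha) ** n_tests
    print(f"k={k:2d}: ~{n_tests:3d} tests, "
          f"P(>=1 false positive) = {p_any_false:.3f}, "
          f"Bonferroni per-test alpha = {alpha / n_tests:.5f}")
```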
“Cartoonist who sometimes does cartoons about men’s sexism toward women features a great deal of male characters being sexist toward female characters: News at 11.”