Well, the bias is measured within language, and the conclusion is that a bias in the expression of thoughts reflects a bias in the thoughts themselves. (Which seems reasonable.) So to me the title is not misleading; if anything is, it's the conclusion drawn about society.
No. Not really. They showed bias in the corpora they used for training their algorithm.
There are languages that are inherently gender-biased, like Hebrew, but English seems to me to be much better balanced. The question, of course, is how you define the language: is it the abstract grammar, or some random texts you use to train an algorithm on?
Oh, completely :). That's singing my song :). But I've become a little sensitive to the habit CS folk have of claiming to have discovered things that people already know about.
Not specific to this article, but a random thought: I wonder if machine learning / neural networks might actually be put to good use by giving us a way to measure or discover the most severe kinds of bias in society. E.g. might we be able to estimate in which contexts sexism or racism is greater in magnitude? Or notice biases that simply happen not to be a source of concern for society currently (not a cause or a movement with many supporters), but are nevertheless severe in impact? (Imagine discovering that there are large biases against atheists, or against obese people, or some category no one's seriously thought about…)
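For concreteness, here's a minimal sketch of how such a measurement could work, in the spirit of the WEAT-style association tests used in this line of research: score a target word by its mean cosine similarity to one group of attribute words minus its mean similarity to another. The toy vectors and word lists below are made-up placeholders, not real embeddings.

```python
import numpy as np

# Placeholder embeddings for illustration only -- in practice you'd
# load real vectors (e.g. word2vec or GloVe) trained on a large corpus.
emb = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.1, 0.9, 0.2]),
    "he":       np.array([1.0, 0.0, 0.1]),
    "man":      np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.0, 1.0, 0.1]),
    "woman":    np.array([0.1, 0.9, 0.0]),
}

def cos(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, group_a, group_b):
    # Mean similarity to group A minus mean similarity to group B.
    # Positive -> the word sits closer to A; negative -> closer to B.
    a = np.mean([cos(emb[word], emb[x]) for x in group_a])
    b = np.mean([cos(emb[word], emb[x]) for x in group_b])
    return a - b

male, female = ["he", "man"], ["she", "woman"]
for w in ("engineer", "nurse"):
    print(w, round(association(w, male, female), 3))
```

The appeal is that the same differential score works for any pair of groups you can name with word lists, which is what would let you scan for biases nobody is currently campaigning about.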