I guess he realizes they don't work.
Because people don't typically use Facebook at work? It's a recreational tool. The issue is that Facebook employees are the ones building that recreational tool, and that hampers productivity and professionalism.
> I guess he realizes they don't work.
Or he realizes that they work for a service that's typically consumed during one's free time, and not as a direct component of one's employment.
I applaud the move.
> “What we’ve heard from our employees is that they want the option to join debates on social and political issues, rather than see them unexpectedly in their work feed.” [said Facebook spokesman Joe Osborne]
Others have organized to skip work in an act of defiance, affecting colleagues who are not engaged in their political cause:
> a group of employees staged a virtual walkout in early June to protest Facebook’s decision to leave up a post from President Trump about social unrest
Not exactly a "gun to your head" type of forced, but nonetheless these are actions designed to foment a political clash with other employees to engage with certain political causes when they may not want to concern themselves with that at all. What makes this particularly problematic in the workplace is that many employees rely on work feeds and teamwork to earn a livelihood for themselves and their families, so they cannot realistically avoid it.
Why focus on political speech when it's really any non-work speech that's distracting and should be banned?
But we all know that's impossible to enforce, and we've developed unfortunate but necessary coping mechanisms, like noise-canceling headphones.
Or maybe, y'know, just give up on the open-office concept, which has always been a productivity destroyer. I imagine post-COVID office spaces will be much more conducive to quiet and productivity.
That sounds good to me. I've never had to talk about these kinds of things at work. Are there workplaces where this is unavoidable?
How are these two things causally connected in your mind?
As far as I can tell, this is the first comment in the thread to suggest we're talking about widespread mandatory "social justice" meetings that are political in nature and unrelated to the workplace's legal obligations. That's not what I got from @daok's comment or anything in between, and there are widespread mandatory diversity training programs in the U.S. that explain all the comments above, from my perspective.
I've never seen that (mandatory meetings unrelated to work) myself, across employment at 2 multinational corporations, several mid-sized companies, and a handful of startups. The mandatory diversity training programs in most companies are there to meet the legal obligations of discrimination law, whether they tell you that or not. Usually they tell you that.
I have a peer who had similar meetings at their firm and the VP of Diversity and Inclusion went as far as to say attendance at such meetings would be tracked on an individual and ongoing basis.
I feel that in general, my company has a very traditional and apolitical work culture, so my imagination runs rampant with what things must be like at FB, Google, & social service sector workplaces.
That's interesting, and I might be out of touch with this year's corporate response to the riots.
https://classunity.org/racism-and-responsibilization-in-white-fragility/
Sorry, but this is misleading. Most such companies are actively complying with antidiscrimination laws. These diversity discussions/initiatives at work are almost always unrelated to those laws. It's not a case of "Hey, we suddenly realized we're not complying with laws, so let's launch these initiatives." Most such discussions don't have any content about the law or legal aspects because they are not about laws prohibiting discrimination.
I've never been to a diversity training program that did not talk about the laws, and I've been to many. You are suggesting that companies are wasting their money for no reason, and consciously running training programs that are unnecessary? Why would "most" companies do that?
> Most such companies are actively complying with antidiscrimination laws.
Complying with US discrimination law means that companies are providing "reasonable accommodations" for people in historically discriminated categories, and for anyone who might file a complaint in the future. Educating the employees about acceptable behavior is a pretty obvious way to avoid getting sued.
> It's not a case of "Hey, we suddenly realized we're not complying with laws, so let's launch these initiatives."
The risk of running into both PR & legal trouble over diversity complaints has risen over time, due to increasing levels of education, increasing exposure to cases of discrimination and abuse, and generally increasing awareness of diversity issues.
The law doesn't require us to banish the phrase "master/slave" from our lexicon, yet it is part of our diversity initiative.
The law doesn't require us to banish the words "whitelist/blacklist" from our lexicon, yet it is part of our diversity initiative.
The law requires we do not discriminate based on a protected category (gender, race, etc). We have always been compliant. However, the law does not require us to hold special recruiting events for underrepresented folks when we are already compliant, yet that is a major part of my company's diversity initiative.[1]
There are pay disparity laws. We were fully compliant before they became the law. They are not part of the diversity initiative and usually not discussed.
The law doesn't prohibit us from saying "Merry Christmas!" but our diversity initiative addresses this.
The list goes on and on.
Note: I am not saying I'm against such initiatives. Merely noting that at least for my company (and jurisdiction), these initiatives are not about what the law requires. We have always had employee expectations training for all employees, and it already covered things required by law, long before the company dove into these initiatives.
It may be that your company's diversity initiative is about legal compliance, but for many companies, it is a lot more than that (by a lot, I mean the majority of the initiative is not required by law).
> The risk of running into both PR & legal trouble
There's a world of difference between PR trouble and legal trouble. Your comment originally was about the legal side, and so is my response. I do believe that most diversity initiatives are about PR, and not about the law, which was why I responded.
[1] These are invite-only, and so they're not open to all groups - just the ones we picked.
Except they are. Your word list is an exceedingly literal straw-man interpretation of a law that is purposefully vague, and doesn’t prescribe which words you can use. You can’t assume that specific policies of a company aren’t there for legal reasons just because the law doesn’t state the exact same policy in the exact same words.
The spirit and the letter of the law, as I already pointed out, requires that employers take “reasonable” actions toward making all employees feel welcome. Society and business have collectively decided that certain words can or do make some people feel unwelcome. Because of that fact, and because companies don’t want their management sued for neglect, we have interpretations of what it means to be compliant that are predictive and speculative. That really does not mean that the initiatives are not about the law, in fact exactly the opposite. Asking people not to use certain words counts as one of those “reasonable” actions, and gives the company a paper trail of attempting to be compliant.
Unfortunately, the only way to know is to test it in the courts. We differ as to where society currently stands on such matters, and it is certainly not clear that, were this to go to court, rulings would be made to penalize companies using the lexicon in the manner it has always been used. And of course, my company never said "We should stop using these words because we are interpreting the law this way (or we think society interprets the law this way)." The notion that there may be a lawsuit was not even mentioned with regards to these phrases.
Your argument is "law is intentionally vague, and society believes X". I agree on the former, and not the latter. Unfortunately, anyone can make any statement when it comes to vague laws, which is why having precedence in a court is important.
You've also not addressed the other examples in my comment. My company, which is compliant with the law on hiring-based discrimination, is not going to get sued if they decide not to hold special recruiting events for people of certain protected classes. Your earlier comment asked whether "I am getting it from HR and/or company lawyers", and in this particular case, the answer is definitely "Yes". The need to hold such events has been widely discussed in the company, and it has been made clear that this is a proactive initiative, and not required of us at all. Much of the criticism about this particular part of the initiative within the company is very much about "this is not required by law". There is no way HR is going to claim it is.
Furthermore, I just noticed in your earlier reply:
> You are suggesting that companies are wasting their money for no reason, and consciously running training programs that are unnecessary? Why would "most" companies do that?
I suggested no such thing. Companies do a lot of things that are not required by law. I didn't say all such things are a waste and are "for no reason".
No, my argument is predicated on my experience and discussions with company HR and lawyers at multiple companies. It sounds like your experience differs, and that's fine. My explanation for why the words you chose aren't specifically listed in the law is because the law is intentionally vague, and not prescriptive about which words you can use. The law is stating a goal, and companies are interpreting how to achieve that goal, because they have no other choice.
> Your argument is "law is intentionally vague, and society believes X". I agree on the former, and not the latter.
I feel like this is getting unnecessarily argumentative, and I'm guilty of escalating it. But I did not say, and didn't intend to mean, that all of society agrees. However, it's sort of a fact and not a debatable point that certain groups of people and businesses have decided that some words are sensitive. That's exactly why it's showing up in diversity programs.
> I didn't say all such things are a waste and are "for no reason".
Okay, I apologize for misinterpreting. You have said multiple times that it is "not about the law" and you haven't offered an alternative explanation. If the reason has nothing to do with the law, then what is it, and why are companies saying it has to do with the law? What is the goal behind the proactive initiatives?
> You've also not addressed the other examples in my comment. My company, which is compliant with the law on hiring-based discrimination, is not going to get sued if they decide not to hold special recruiting events for people of certain protected classes.
I tried to address this. Attitudes are changing over time. Being compliant yesterday doesn't necessarily mean you're compliant today, even if the law doesn't change wording. Growing awareness means that what's "reasonable" is a moving target. Also, we were talking about diversity training, and not affirmative action nor only hiring discrimination laws. Your company might get sued in the future if it doesn't take reasonable actions along the way to prevent people from feeling marginalized or ostracized, even though it believes it's in line with the law today. That has already happened at other companies, and it is one reason companies are trying to be proactive.
I'm confused by the question, given that you yourself gave the answer:
> The risk of running into both PR & legal trouble
You yourself stated a reason other than legal.
But even without PR, I'm surprised you're asking. Do you not believe they can be pushing these initiatives because they actually care about diversity? Or because they view it to be a competitive advantage over other companies? Or because they believe diversity will lead to better company performance? The last is one of the main stated goals in my company. I don't know if they themselves believe it, but it's clear that many, many people do.
> I tried to address this. Attitudes are changing over time. Being compliant yesterday doesn't necessarily mean you're compliant today, even if the law doesn't change wording.
I really, really do not see any group winning a court case against a company because they did not have special hiring events for people of their group. Perhaps in the future, but not any time soon. I do not for a second believe my company did this because they were concerned about the law.
If the company's normal recruitment practices are discriminatory towards a certain group, I can understand. That's not the case here. Moreover, even if it were, having such events would not protect them. You can't wipe out discrimination in one part of the company by compensating in another. If your job application page has stuff that discriminates against, say, African Americans, then having special recruiting events for them will not alter the fact that you are discriminating.
> Also, we were talking about diversity training, and not affirmative action nor only hiring discrimination laws.
The thread is about discussions of diversity in the workplace, and we're not limiting it to training.
Sure, I do think companies care about diversity, and are interested in the competitive advantages. But the only mandatory meetings on diversity I've ever had were about communicating policy that is attempting to adhere to the law, even if in a proactive sense. The goal of the law is to care about diversity and is founded on a belief that a diverse society has a competitive advantage, so I don't necessarily see a hard line between complying with the law and actually caring about diversity.
> I really, really do not see any group winning a court case against a company because they did not have special hiring events for people of their group.
I don't see that either, and it's not something I claimed. We weren't talking about affirmative action; you're moving the goalposts. We were talking about widespread mandatory diversity programs.
In addition to what the parent says about PR, there are some actual good reasons:
1. Some companies are starting to understand and believe that having a diverse team is a competitive advantage when it comes to designing and marketing products intended for a diverse audience.
2. Some companies are starting to believe that it's just the right thing to do to try to increase diversity in their ranks, to attempt to combat systemic sexism and racism that historically has kept certain groups on the sidelines for some roles.
Whether you agree with these things or not, companies are increasingly believing in them, and that's at least a part of why they go far beyond what the law requires when it comes to anti-discrimination. I find it unlikely that a company would lose a court case for not pushing to hire more diverse candidates, or not having implicit-bias training, or not removing terms like master/slave or whitelist/blacklist from their internal lexicon. It does not seem like companies are doing this because they are afraid of running afoul of the law.
I will just add that I agree that pushing to hire more diverse candidates isn't likely to cause a lawsuit for most companies. It might for a large company that is lopsided and clearly discriminating, but it'd take evidence, which is hard to get. That's really outside the scope of what I thought we were talking about, though, because pushing to hire for diversity isn't something that requires all employees to actively participate in the process.
The other two, avoiding implicit-bias training and not removing sensitive words, could cause problems in combination, and I know of companies where they have caused problems. If people actually use words that make multiple employees feel uncomfortable, and the company management has a record of complaints and no record of action to resolve the complaints, there is real liability there in today's world.
By and large I think there's probably a lot more agreement here under the surface than it looks like. My mistake might be failing to clarify that I'm not saying legal reasons are the only reasons. There are other reasons; I'm just saying the legal reasons are usually there, and are important. This is probably getting less true over time: legal reasons were what it took to get some companies to actually do something, and today growing awareness means that companies are more likely to think it's the right thing to do, more likely to agree with the law, and more willing to begin taking action without any specific legal concern. I guess maybe it's quite a good sign that people here are disagreeing with me, because it means things have been going in the right direction compared to my work experience over the last couple of decades.
They aren't. In fact, the discussions I've seen at companies in the bay area go far beyond the minimum requirements of the law. They aren't about preventing discrimination, they're about actively increasing diversity (in some areas, but not all).
https://classunity.org/racism-and-responsibilization-in-white-fragility/
It indeed would look bad if I said any such thing. Fortunately, I did not.
FWIW, that's a claim that doesn't quite match my experience at work, nor of talking to some of the people who implement these programs. Though to be clear, we were talking about widespread mandatory company-wide meetings, not any old discussion on inclusion that happens to occur while at work. We may need to get more specific about which programs we're talking about; they're certainly not identical everywhere. It also doesn't seem to add up when I read our current laws, which are still changing over time.
> In order to achieve this, companies are actively discriminating against non-minorities, which some may call positive discrimination.
The historical term for this is affirmative action (https://en.wikipedia.org/wiki/Affirmative_action), and the idea is to temporarily increase benefits for a disadvantaged group, not to intentionally decrease benefits for the advantaged group. Calling it discrimination, therefore, is a framing that isn't always true, and is somewhat political. It's not always true because things aren't always zero sum. If I choose to give someone a dollar, you don't lose a dollar.
If there really are a fixed number of jobs at a company that is 80/20 men, and the company decides 30% must go to women, then technically yes, that is a form of discrimination. But - just hypothetically - is it a negative discrimination if the reason that the company is 80/20 men in the first place is because it previously discriminated against women, and would have been 50/50 without several decades of history of unspoken discrimination?
It's important to also think about a few things. One, unlike social prejudices, affirmative action is not intended to be permanent. It's intended to help boost people who've been unfairly and systematically disadvantaged, while they're disadvantaged, and only until things even out. After that, the boost should go away by design. Two, some of those disadvantages in history have been really extreme, and the kind of discrimination you might imagine you feel when your company tries to hire more women isn't on the same order of magnitude as what women and black people as a whole have gone through.
> Regardless, it's discrimination nonetheless and it's very common yet somehow nobody cares.
Is all discrimination bad always? I'm very discriminating about my partners. I'm not sure that nobody cares, I think some people are in favor of seeing that gender and racial injustices actually go away, since not doing anything about it hasn't worked yet.
The thesis here is that current hiring practices are biased (often implicitly and unintentionally) against women and POC. Since removing implicit bias is exceedingly difficult, actively requiring hiring managers to hire more people from underrepresented groups is a way to put your thumb on the scale in order to equalize them.
I can understand how you'd see that as discrimination against white men, and if you squint at it in just the right way, it really seems like it is, but what it's really doing is attempting to reduce an unfair advantage that white men have. No, it's not perfect, and I'm sure occasionally a white male does legitimately get discriminated against. But that's a small price to pay to lift a ton of other people out of the status quo of discrimination they're usually stuck in.
The fact is that women simply represent a small percentage of the overall workforce in engineering. The only way you can get parity in representation is to get parity in the underlying workforce. The only way to do that is to encourage women to pursue a career in this industry, but that's not something you can change overnight and I doubt companies care enough to invest in something that may pay off in 20 years.
I'm all for doing things that aren't discriminatory and removing unconscious biases in interviews, job descriptions, and whatever, but that will not move the needle. It's a supply issue.
Discrimination is discrimination, no matter how you want to dress it up, and it's never OK.
I'll add that some of my best colleagues have been women. I'd much rather not work in a sausage fest, but I also don't want to work in a world where active discrimination is supported.
Let’s agree on this 100% and then ask the question: how do we get rid of discrimination? If we have some implicit social bias that is causing a measurable difference in outcome for women, how can we get rid of the bias? If we take it at face value that all discrimination is bad, then no discrimination at all is the ideal. I assume we both agree on that completely. In the meantime, before we’re able to fully eliminate all discrimination, which is better: negative discrimination against women resulting in the outcome of fewer women working and lower pay, or that plus an offsetting positive discrimination that boosts the outcome for women so that there are more in the workforce and the pay is closer to equal?
We can try to push outcomes to be closer to equitable, but the most important question there, I think, is: will the affirmative action actually help remove the original implicit bias against women?
> The fact is that women simply represent a small percentage of the overall workforce in engineering.
That has changed over time, and is different depending on where you live. It went up from 0 a century ago to an average of something like 35% in the 70s, and has declined since then to something like 20%. In some countries the balance is closer to 50%, and in a few places it’s over 50%: spots in India, for example. Isn’t that alone evidence indicating things have not settled, that we can’t rest on some notion that the workforce balance today represents the natural state of things? That we are obligated to ask why, and make sure men aren’t accidentally contributing to the discrepancy? (Especially given that in the past there is a documented history of that happening.)
> The only way to do that is to encourage women to pursue a career in this industry
What if the reason women are choosing not to pursue engineering is because there are still biases, and they know it? Then how would you encourage them?
> Discrimination is discrimination, no matter how you want to dress it up
What if the job you’re talking about being offered to a woman is subsidized and would not have been offered to a man either way? Is that still discrimination?
If you're writing accounting software for paper suppliers or something equally banal with few ethical implications, then sure, there's no need (and less reason) to have water cooler conversations about pro-genocide agitprop or whatever.
EDIT to add that of course not all departments at Facebook make the sort of decisions that have a marked social impact. More referring to the content policy teams, and the news feed algo teams, and so on.
The problem is distinguishing ethics from politics. These are very hard to disentangle, because ethical values are usually based on some political orientation. And I don't want Facebook to be making political decisions on my behalf, as a user. And I don't even want internal employee discussions to be derailed by political considerations.
So how do you distinguish ethics from politics? I don't think it's possible, unless the company defines its own ethical values, a priori, and only considers those when making decisions.
If you read the article, I think that this is precisely what Facebook's new policy is trying to do by putting a fence around "social issues."
Are you sure that isn't exactly backwards? In my very strong opinion, it should definitely be the other direction: one's morals or beliefs on ethics should inform their politics, not the other way around.
One way is based around a person's inner being, the other is molding their being and stances based on a sports team.
The company's products are in no way social media platforms.
However, I'm sure it was easier at my jobs, since they were at a retail company and an engineering firm.
Otherwise, you're basically saying corporations should participate in the political process but individuals should not. And that's exactly how we got the Earth into the increasingly shitty state it is currently in.
Is your claim that we really need more corporate control over politics in the US and less citizen participation?
The fact of the matter is that political conversations have a high risk of annoying/frustrating/alienating their participants. Having these conversations at work just makes employees less productive and is asking for controversy.
I dare say it might be time for some employees at Facebook to pause and think about how their work may have an impact on the world.
As Facebook, I'd rather pay for some yoga classes so that people don't have time to think about their actions.
I agree that there's no need to bring up politics in the break room (or worse, during active work) and risk alienating people. It's a bad idea, just like talking about or advocating for particular religious beliefs.
But if your company is being politically active in ways you find unethical, I don't think it's reasonable to expect people to just put their heads in the sand, ignore it, and get their work done. And not everyone has the luxury of quitting a job whenever they don't agree with the company's politics.
I'm lucky that I'm in a place financially and career-wise that I can just quit and find a new job if I disagree with my company's politics, but many others don't have that luxury. Their choices are either to talk about it and try to get their company to change, or feel awful keeping quiet.
I have no idea how you came to this conclusion.
That's like saying "if employees don't have total freedom of speech at work that means only corporations have freedom of speech".
It's a two-way street. You are providing your services to the company in exchange for compensation. It's an unequal relationship, to be sure, but the company needs employees to exist and survive.
> If you don't like it, you are free to move on.
If only life were that simple, and if jobs were so plentiful and easy to come by that people could be so picky. Sure, a lot of tech workers are in a great place financially such that they can quit in protest (and lose their health insurance, among other things), but most workers don't have that luxury.
People whose existences are deeply politicized (and they do indeed exist) are not often excited to have political conversations with people who say things like "'existence is political' thing is just a bullshit phrase."
I'm not trying to be a dick, but it took me a long time to realize that there were conversations I was not being made a part of because I was not receptive to, or dismissive of, those conversations.
This is a tactic that needs to be called out more often.
It's of course not universally true that's the case for everyone, but I think it's worth thinking about. If your attitude is dismissive of someone's lived experience, it's not likely that they're going to go out of their way to include you in conversations about it; on the contrary, I'd expect them to explicitly exclude you in order to protect themselves.
It is a bullshit phrase though. They don't want to bring it up because any skepticism is viewed as a direct attack on their ideology, and their ideology is the core of their existence/identity, so, calling out the illogic of their ideology is a political attack on their existence.
They want to TELL you, they don't want a discussion.
A big part of the problem was just that everyone was using Facebook for work all the time. So quite often, there would be some enormous thread arguing about whether X or Y was the right policy, was Trump violating the rules and should be kicked off Facebook, or was Facebook's anti-Trump policies violating freedom of speech, or was it racist for an employee to say they supported Trump during a meeting, etc etc.
And you use the same interface for important things like, announcing hey this database service team is launching a new API next week, could you provide feedback on it. Type X of hardware is being deprecated next quarter. So you really have to be checking Facebook-for-work consistently for professional reasons. You have to scroll past the political debates all the time.
Some features are work-specific, but it feels basically like you are using the Facebook interface, but just with all the content being from your coworkers about work stuff.
Notifications are exactly like facebook notifications, so you'll get an email that says "open workplace to see this". To use the tool effectively, you need to have it open at all times. It's a weird in-between state of slack and email, where it's somewhat async and somewhat sync. You'll get overwhelmed with notifications and will want to have "inbox zero", which is considerably more difficult than in email, where you can optimize your workflow. The sidebar will have a list of groups you're in with read/notification counts that make absolutely no sense.
It also has chat functionality that can't be turned off, and it's not group based. It's like being forced to use gchat for all communications.
It has a "organization tree" feature that requires employees to fill in their own reporting structure. This also can't be turned off.
If you want your employees to spend all day chatting in forums inside of facebook, workplace is the product for you.
> was it racist for an employee to say they supported Trump during a meeting
This is sort of what I was thinking about in my original comment. I would hate to have to discuss political affiliation, or make public judgements on other people/issues.
> And you use the same interface for important things like, announcing hey this database service team is launching a new API next week,
I miss so much in my news feed already, using it for work would make me nuts.
Let's say you're a delivery driver for a pizza parlor that is famous for the level of rat poison in its sauce. Do you continue to knowingly deliver the poison pizzas because you're not the one making the sauce? After all, who's going to deliver the poisonous pizzas?
Come on. This is right up there with, "I was just following orders."
Maybe you're right and developers at FB don't see that they have any responsibility or input towards what FB does with the tech they build, but I hope not. It would be horrible to be that detached from the outcome of your work.
You only get one-sided discussions because going against the grain will be career suicide.
Greg, a nobody who nobody likes, says X. X isn't a big deal and doesn't affect most people, and about 45% of people would agree X shouldn't be said, even if it isn't a big deal.
Mark, a senior director who everybody loves, says Y. Y is a big deal, but only 10% of people would agree it shouldn't be said.
Guess which will get called out?
Seriously, haven’t they done studies showing that most of these revolutionaries are actually not very smart people? How does risking your life for a cause have anything to do with being correct about an issue?
I would say FB did the right thing here by not supporting a platform that is actively politicizing itself.
For people who believe that everything is political, there are no projects with less moral ambiguity; it's just more or less openly visible.
Everything has some political issue around it, but Facebook has politics baked into it because it's using political issues as a means of making money. They sell advertising to politicians, when they know the ads are lies. Their platform is filled with fake accounts pushing genocidal agendas from dictators, and in many cases facebook is sweeping it under the rug.
The way their platform is built is set up to manipulate people, and that platform is being used at scale to do so in ways facebook knows are fucking up the world. Its very existence is political at this point.
I don't really take issue with that. Germany even has that codified, and we're very far from being free-speech absolutists. Media companies are compelled by law to air political ads by all political parties without checking them, judging them, or adding commentary. Short of being obviously illegal, there's nothing they can do, which led to our center-left state media being told by the supreme court to air the far-right (actually far right, with skinheads, boots and all the stops, not just anti-low-skill-immigration conservatives) NPD's spot.
> Their platform is filled with fake accounts pushing genocidal agendas from dictators, and in many cases facebook is sweeping it under the rug.
But not really. They exist, but the platform isn't "filled" with them. The vast majority of content on FB is not political.
I'm sure that FB would be quite okay with not having politics at all. Sure, people are on the platform, but they'd rather have engagement around cat pictures, celebrity news and similar things, because people shouting at others about their ideology aren't buying sneakers. They're not a political advertising company that relies on political ads as their primary funding.
Banning political speech is simply not an option, because some people sometimes want to argue about politics, and you're going to have to fight your users if you don't allow that. You never want to fight your users.
Food and water production has VERY big ethical issues. Palm oil, mass slaughter of animals, deforestation, Nestle taking away water from locals, CO2 emissions, etc.
So yes, there are problems in the food and water industry, but I don't really get what your point is? Should we just close our eyes, ears and mouths and say "fuck it, not my problem"?
Not at all what I was aiming at. The problem people have with FB isn't how they produce the product, but who uses it.
The problem with food and water in the equivalent scenario, would be in who consumes it. If you let everyone consume it "woah, that's a political choice". But it really isn't. It's the default, deviating from it is a political choice.
No, the problem is that the product they produce is _specifically designed_ to be used in this manner because conflict and argument increases "engagement" and for a large portion of the employee base their bonus depends on performing work that leads to this outcome.
It used to be that people and organizations making unethical purchases were the ones we considered, and held, responsible. For a long time we've had good, positive movements centered on informing the buyer. We added expiration dates, ingredient lists, nutritional value information, crashworthiness scores and reliability ratings, country-of-origin labels, even ethical-sourcing labels. Perhaps too much of a good thing caused information overload and resulting numbness? Somehow, between Prohibition, the "war on drugs", and supply-side moral regulations, we've lost the spirit of "well-informed free agents making decisions".
Most of the services (FB and the likes) we're discussing here are morally neutral by their nature, and it takes concerted efforts to make them non-neutral[1]. It is the particular use they are being put to that is moral or immoral. Let's not shift vast moral powers from the wide society to a narrow cadre, shall we? The economy is a neat distributed system. It's the popular democracy before democracy became popular. Let's not give it up.
--
[1] example of non-neutrality: the current trend of algorithmic manipulation
I don't think that's the case. Is it moral to exploit human psychology when developing addictive features that pull people into the site over and over? Is it moral to sell user information to advertisers so they can emotionally manipulate you into buying crap you don't need? Is it moral to design interactions that evoke outrage and disagreement in order to increase engagement? Is it moral to track user activity across the web, outside the company's site?
I don't think any of these things are moral. These practices might not be necessary for a site like FB (then again they might), but this is the model they all seem to choose. And that's what actually matters.
The gist was: a bare messaging+microblogging platform is, by its own nature, morally neutral[1]. Of course if the operator starts making editorial decisions - like algorithmic timelines, or propping up/pushing down content, or manipulating user mood - then the operator clearly is making moral judgements & decisions.
Funny how respecting user privacy does, at least partly, absolve the operator from a lot of risks related to making moral judgements on a mass scale in a hurry.
--
[1] with the only caveat that, if somebody believes facilitating communication to be evil or good, then it would be considered respectively evil or good.
It's the classic argument, "technology is neutral; how it's used determines the ethics". Well, yes, I agree with that, but here we have a company that's using it unethically, and has no desire or need to stop their bad behavior. And that bad behavior has been instrumental to their success. That's what matters.
I'm going to go out on a limb and predict that if a facebook employee wants to talk about poverty in underdeveloped countries on internal social media, then that's going to be ok, but if the discussion concerns people who were harmed because they followed bad medical advice that was spread by use of facebook, all of a sudden that's an unacceptable social issue at work.
Nah.
Every Facebook employee I've met, every one that I'm reading, they are sincere about what they're angry about.
It's a bad look, to assume some Facebook employee's opinions are being co-opted by... fucking Google? Apple? That's ridiculous.
I'm not even going to speculate why anyone questions some random Facebook employee's sincerity.
Instead I offer: Imagine if someone told you, every opinion you had, all the time, talked over you or told you to shut the fuck up and said, "Oh you're getting co-opted by Google, this is exactly what they want you to do, 'destroy our culture.'" And then, in the same breath, that guy defends, breathlessly, some idiot outraged over the removal of master/slave nomenclature, or some idiot trying to mansplain crackpot sex-difference theories to his female coworkers.
C'mon, you'd be mad as hell, it's so utterly ridiculous.
Renaming the master branch in GitHub repositories cost us significant money and almost caused downtime, so I would be fully understanding if someone else is feeling outraged about it.
I believe you need to work on your example of a bad person ;)
How about we go with this? And then, in the same breath, that gal defends selling people's secrets for pennies on the dollar, willingly accepting that there will likely be very real negative consequences for your users once the private data in your database invariably gets leaked onto the internet.
Oh wait, that wouldn't leave much left at Facebook, would it?
So let's just question the moral integrity of anyone working at Facebook. Seems reasonable, given what egregious privacy infringements their work enables.
I think "outrage" is an inappropriate response. We're talking about removing nomenclature that has been (and continues to be) used to oppress an entire segment of society. I think removing that is worth a little money and downtime, if it comes to that. People who are "outraged" that it cost them some time and work probably could stand to show some compassion for their fellow humans.
Usually, when open source projects introduce a breaking backwards-incompatible change, they will first deprecate things and then wait some months to give people time to update. After this nomenclature had been in use for 10+ years, I can't help but wonder why there was no time to take the user-friendly path in this instance.
So to the people who are fixing the mess, it certainly feels more like you got kicked because someone else wanted to show off his/her moral superiority.
There existed a reasonable way to change the nomenclature, but it wasn't taken.
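For what it's worth, the deprecation path is mechanically simple. Here's a minimal sketch (Python driving git; the "main" replacement name and the grace-period steps are my assumptions, not anything GitHub prescribed):

```python
import subprocess

def git(*args: str) -> None:
    """Run a git command, raising an error if it fails."""
    subprocess.run(["git", *args], check=True)

# Step 1: create the new branch alongside the old one and publish it.
git("branch", "main", "master")
git("push", "origin", "main")

# Step 2: flip the default branch to "main" in the hosting UI/API,
# announce a deprecation window, and leave "master" in place so that
# existing clones, CI jobs, and scripts keep working in the meantime.

# Step 3: only after the window closes, remove the old name:
# git("push", "origin", "--delete", "master")
```

The point is that both names can coexist for months, which is exactly the deprecate-then-remove pattern described above.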
And that's where the question of empathy and compassion comes in.
As a random white dude, the "pain" I face by dealing with problems around these name changes is completely minimal and trivial when compared to the emotional pain they cause people in certain groups that actually have a lived experience of oppression.
Regardless, I do agree that if there is crazy scrambling and short timelines to change these names, that's a problem in your org. There should not be reckless urgency to get this done; it should be done just as any other major change should be: with planning and risk assessments. Where I work, we are doing it slowly and with an eye toward not causing downtime. If your org is not doing that, then I agree that you have a valid complaint. But this complaint should be directed at the bad process, not at the work itself.
> I think "outrage" is an inappropriate response. We're talking about removing nomenclature that has been (and continues to be) used to oppress an entire segment of society.
In this specific case, outrage is appropriate because the nomenclature as used by git has nothing to do with "master/slave". It's a well-intentioned but misguided attempt at what you describe, unless you are making the preposterous claim that the word "master" should be purged from all contexts.
Of course, WRT this article and the discussion on it, if you try to point this out and discuss it at a company like Github (or mine) where the group making these decisions is convinced of their correctness you risk ostracization and career suicide. In fact, the statement you closed with
> People who are "outraged" that it cost them some time and work probably could stand to show some compassion for their fellow humans.
implies that you also are convinced of the correctness of this decision, and that anyone who objects to it is not compassionate (and by implication, not worthy of consideration). This is not a good approach to take if your goal is to educate.
I'll agree that git's use of "master" is not as egregious as "master/slave" in database terminology, but it's still not great.
There are two prevailing uses of the term "master". One refers to the quality of being exceptionally good at a particular skill. By and large, I don't think most people have a problem with uses of "master" where that's the intended meaning. But "master" in the sense of "leader" or "controlling" isn't great, even if (in the case of git's "master" naming) there isn't a corresponding "slave" role.
> if you try to point this out and discuss it at a company like Github (or mine) where the group making these decisions is convinced of their correctness you risk ostracization and career suicide
I agree that this is bad. These sorts of responses have a chilling effect on reasonable conversations and discussion. But in some ways I do understand why this happens; people who are directly affected by terminology like this are getting really tired of having the same conversations over and over about something that evokes significant emotional pain every time it's brought up. Again, it's not great, but I think it's understandable. And it's frankly hard to understand why using a word like "master" in technical terminology is somehow so important that it's even worth getting into repetitive discussion after discussion about it, especially when doing so causes some people pain. That's where the concerns about empathy and compassion come into play, because the people who constantly fight against this change do not seem to be even trying to look at this from someone else's point of view. (And I say this as someone who initially was resistant to these changes, but have since realized that I was wrong to do so.)
Many:
Education. Healthcare. Corrections, law, and law enforcement. Social work. Public utilities and subsidized housing. Etc.
I bet a simple keyword filter for names of politicians could catch 90%.
I wonder if people would pay for it?
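As a sketch of how trivial the first cut could be (the name list here is a tiny hypothetical placeholder; a real filter would need a maintained list and probably fuzzier matching):

```python
import re

# Hypothetical, hand-maintained name list; a real product would need a
# much larger, regularly updated source of politician names.
POLITICIAN_NAMES = ["Trump", "Biden", "Pelosi", "McConnell"]

PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, POLITICIAN_NAMES)) + r")\b",
    re.IGNORECASE,
)

def is_probably_political(post_text: str) -> bool:
    """Flag a post if it mentions any listed politician by name."""
    return PATTERN.search(post_text) is not None

assert is_probably_political("Can you believe what Trump said today?")
assert not is_probably_political("Look at this cat photo!")
```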
Eh, I don't really like that idea. For one, it only really addresses the problem of being exposed to content you find unenjoyable.
Honestly, sometimes I do wonder if consumer-level broadcast technology is the psychic equivalent of doing something like letting everyone fly planes without any training. It might be better to adopt communication technologies with a little more friction.
Or they could just not show posts to groups you're not in and from pages and public profiles you don't follow! Allowing something to interject into your newsfeed should be opt-in, but right now it isn't even opt-out, except for not logging on at all. It would also be cool if there were a way to selectively opt out of seeing shared posts from people on your friends list, e.g. I want to see things that Overly Political Relative posts themselves, but not things that they share from other places.
That being said, I deactivated my Facebook account a couple years ago, so I'm no longer a user whose opinion they should theoretically care about.
My timeline is strictly chronological, and shows only text and photos posted by my immediate friends. No groups, ads, publishers' bullshit, promoted things, no trending, no nothing. Just photos and plain text.
Giving people tools to make sub-lists of certain friends/groups/etc. in order to organize their experience better (on their terms, not at FB's whim) would be great, too.
I'm happy to be proven wrong though. Maybe this is the thread where people will make practical suggestions.
When it was discovered that tetraethyl lead was widespread in the environment and caused neurological damage, it was banned. Yes, that materially harmed several chemical companies whose livelihood was based on producing tetraethyl lead.
So what?
If your business model harms people, I don't care if stopping harming people eliminates your business. People matter. Businesses do not.
Are we supposed to just go, "Yeah, we know Facebook is harmful to millions, but won't someone think of the poor shareholders?" Then shrug and accept it?
There is no shortage of sites that will take Facebook's place.
What is the legislation you propose to prevent Facebook, or the millions of other existing or soon-to-be existing apps, from doing harm to people?
OK, consider gambling software. That is simply a kind of software that enables people to engage in behavior that turns out to be harmful for a large number of them. And, because of that fact, it is heavily regulated.
> What is the legislation you propose to prevent Facebook, or the millions of other existing or soon-to-be existing apps, from doing harm to people?
I don't know if we know what sort of regulations would help yet. But I do know that if we assume a priori that corporations cannot be forced to change their behavior because it might hurt the poor corporation, then we will never figure out the answer.
Removing tetraethyl lead was certainly doable. Removing every car from the road was not. One was a targeted change that improved the industry, while the latter was so impractical that they never considered it.
Here's a thought - you assume a priori that shutting down social media would be a net win. How did you come to that conclusion? Did you spare a thought for the people whose social lives revolve around spending time with friends online? You'd advocate for taking away these people's social networks because you're certain you know what's best for them?
I didn't actually say that. I think many social media sites are net positives, like this one here. I think Facebook specifically is a net negative.
> How did you come to that conclusion?
Performing experiments on users' emotional state without their consent: https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/
Cambridge Analytica: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
Facebook makes users feel worse about themselves: https://www.bbc.com/news/technology-23709009
You get the idea. None of this is new. Some communities are more toxic than others. Some businesses are less ethical than others. I believe Facebook is an unethical business led by an unethical man making a product that is more harmful than good for most people.
> Did you spare a thought for the people whose social lives revolve around spending time with friends online? You'd advocate for taking away these people's social networks because you're certain you know what's best for them?
I did not advocate that.
Should Google's algorithmic filtering of content also be prohibited?
Should Doubleclick/Google's surveillance practices (which almost certainly include some kind of "shadow" profile) be prohibited?
Also, people have been talking about shadow profiles for at least a decade now, and yet no disgruntled FB employee has revealed all. Why do you think that is?
I don't find it weird at all, I've noticed that often people get really upset about FB or GOOG doing something, while ignoring the other, so hence my questions.
I don't think that this is a coincidence, and I 100% disagree with the notion that this is because FB employees are well paid.
I honestly think that it's because shadow profiles essentially don't exist in any meaningful form (there are probably some logs for non-FB users, but I don't think they are aggregatable to a specific individual without an account, mostly because that would be super low value and really hard).
I don't agree that banning social networks would be productive or even possible, but this argument doesn't make sense. People had social lives before social networking. People had friends online before social networking. Social networking is not required for these things.
- Reduce majority of bots by making them unsustainable.
- Provide direct money to improve moderation and make platforms liable.
- Make journalism cater to individuals rather than ad networks.
- Remove toxicity because trolls won't pay after getting banned regularly. No need for other fingerprinting methods.
- Reduce the number of users and silo them automatically.
Free business models are anti-competitive and result in worse service for the users by making platforms accountable to advertisers (other companies) rather than consumers.
Force facebook to introduce minimum payment based on purchasing power. Outlaw free/freemium models in software or limit them to a time period (3-6 months).
This won't apply to non-profit services. And open source will be fine since it will only apply to services or for-profit companies.
So besides straight up changing the algorithms to promote non-divisive content, these are a couple of things I think could help:
- Limit the spread of information in general in favor of content created by the people you follow
- Un-personalize advertising
> Limit the spread of information in general in favor of content created by the people you follow
I don't think that's what people want from their social networks nowadays. FB, Twitter, YouTube, TikTok, Snapchat, etc all do not work this way anymore. Suggesting that Facebook revert their app to what it was 10 years ago is not a serious suggestion because there are many other apps that will fill that void. If it's not FB, another app will take its place and give people the outrage they're looking for.
> Un-personalize advertising
Advertising plays a very small part in this. Most of what you would call "disinformation" is spread through reposts, which are not affected by advertising.
Sure, there might be some hostile actors out there spending money on pushing propaganda to the masses. But from my experience, people actively seek this nonsense out; the algorithms just make it easier for them to find it.
In my eyes, the real problem is that most people aren't equipped with the right tools to identify bullshit. Simple things like an inability to gauge scale. e.g. "9,000,000 gallons of oil has been spilled from pipelines in the last 10 years" Is that a lot? I have no idea, but what I can do is compare that against other forms of oil transportation. Most people won't do that work though, they will go straight to outrage.
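The comparison itself is one line of arithmetic; the hard part is finding the denominator. A toy example (the throughput figure below is a made-up placeholder, not a real statistic):

```python
spilled_gallons = 9_000_000              # the headline number (10 years)
transported_gallons = 5_000_000_000_000  # hypothetical 10-year pipeline throughput

loss_rate = spilled_gallons / transported_gallons
print(f"Fraction of transported oil spilled: {loss_rate:.6%}")
# With these made-up numbers: 0.000180%. The headline alone tells you
# nothing until you do this division against a real denominator.
```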
They don't work this way because it makes shareholders the most money, not because it is the best experience for the user.
> Advertising plays a very small part in this. Most of what you would call "disinformation" is spread through reposts, which are not affected by advertising.
A completely false ad about a candidate of a different political party is much less likely to be called out or reported because it is only shown to a highly targeted group of people. This lack of accountability creates disinformation. These ads could not be run as a billboard advertisement or in a non-personalized ad space.
All of the counter arguments always come down to this: Facebook would make less money. And, yes, of course that is going to be the case because if any of these changes would make them more money they would have implemented them themselves. It requires a public corporation to accept that they are making the world a worse place, and to choose to make less money to stop doing that.
And it would also require them to make a product that people desire less, and risk losing to a competitor that gave people what they want. People want to cluster in silos, chase novelty, and spout off with 100% confidence about topics they know nothing about.
"Crack dealers don't profit from drug addiction, they profit from the pleasurable effects of consuming crack. The fact that very addictive drugs are pleasurable to consume tells me more about people in general, rather than crack dealer's business model."
Social media conglomerates manipulate how billions of people perceive the world around them, with disastrous effects. They should be held accountable for that.
Absolving them of all guilt, and blaming all the nefarious effects of social media on the consumers, accomplishes nothing.
Nothing good comes from denying the dangers of an addictive drug, or leaving its distributors free to misbehave without consequence. That is the current situation with social media companies and their enormous influence on our minds.
I don’t think we should ban social media. But not holding multi-billion dollar social media conglomerates accountable at all is lunacy.
For example by doing something when they are warned for years by multiple entities that FB is used as a tool to support genocide, like in Myanmar.
It seems that only when such things blow up publicly and the stench of bad publicity gets too strong do they send out Zuckerbot announcing his usual platitudes, to then get back to business as usual. And that's far from the only example where their product was used for oppression by authoritarian regimes.
This company could do a hell of a lot more to counter this. But they just don't give a shit, unless publicity gets too bad.
edit : word change
In Facebook, there are only "friends". So, if you don't like what they are always carrying on about, don't have them as a friend. Just like in real life.
Some people are indeed immune to covid, babies too, most probably. I've personally heard of numerous cases of persons not getting the virus at all while their spouse was in intensive care or worse.
Later edit: To add to my comment, what do you call sleeping in the same bed, eating from the same plate, and having direct physical contact with a person who gets the virus and ends up in the ICU or dead, while the other person tests negative for the virus?
Let's not forget that ever since February we've all known that this virus is particularly easy to transmit/get, so you cannot say "that person got really lucky, that's why he/she hasn't got it".
HN readers seem to have totally lost it w.r.t. COVID and misinformation. It's practically guaranteed that in any thread about misinformation/FB/Twitter/etc someone will state something about COVID that's true and then describe it as misinformation, or state something about it that's false and then decry the conspiracy theorists who don't believe it.
Your assertion that "babies are in fact immune" is demonstrably false: https://data.cdc.gov/NCHS/Provisional-COVID-19-Death-Counts-by-Sex-Age-and-S/9bhg-hcku shows 20 deaths for children under 1 year old. Presumably many more than that were infected but survived (unfortunately covid.cdc.gov is timing out for me right now and a quick search didn't give me infection rates for that age group).
Yes, the number is small compared to cases in older people. But "babies are in fact immune" is the same kind of misinformation you're railing against.
To put it in perspective, according to the UK govt's own analysis, it's very likely that all currently reported positive infections are false positives!
Why would they give up control of the world by doing something silly like that? Think about how much political influence Twitter has based solely on which tweets it shows the President and the corporate press. Consider how many untraced in-kind donations these companies can make by tweaking which news stories you see. The crazy thing is that while these things can be tweaked by humans, they're largely controlled by AI now, and no one person completely understands what's happening in any of these systems. We're in the early stages of AI controlling the global political future, and it will tend to create whatever kind of future generates the most clicks. It's kind of like the game Universal Paperclips, except with clicks/rage/ads.
I hope you take this as kindly as I intend it, but what you're proposing is a conspiracy theory. That's actually a useful attribute for a theory to have, because it gives you a handy heuristic for deciding whether the theory is true!
The likelihood of a conspiracy being true decreases as the number of people with knowledge of the theory and an incentive to report on it increases.
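To put a toy number on that heuristic: if each insider independently has some small chance of leaking per year, the probability a secret survives decays exponentially with headcount and time. A sketch (the leak rate is a made-up parameter, not an estimate):

    # Toy model: chance a secret survives n insiders for t years, assuming
    # each has an independent probability p of leaking in any given year.
    def p_secret_holds(n_people, p_leak_per_year, years):
        return (1 - p_leak_per_year) ** (n_people * years)

    print(p_secret_holds(200, 0.001, 4))  # ~0.45: even a 0.1% leak rate bites at scale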
To take an extreme example, if the moon landing was faked, tens of thousands of people have somehow held on to that secret. Tens of thousands of people who could gain overnight notoriety by telling their story, and hundreds would have the proof required to gain even more popularity. The fact that nobody has ever broken ranks is a strong sign that the moon landing was not faked.
"Twitter and Facebook are secretly tweaking which news stories Trump and the rest of us are seeing" isn't a conspiracy on nearly the same scale as a faked moon landing. It requires some pretty incredible things to be true though.
- Maybe every employee knows, and none of them have decided to say anything, despite the large incentives to reveal the secret and win their moment in the limelight.
- Maybe not every employee knows; just enough employees know to implement it and hide that implementation from the others. Maybe every employee on the Algorithmic News Feed team knows. I don't know how Twitter and Facebook are structured, and the team probably isn't called Algorithmic News Feed, but as one of their more important systems, Facebook and Twitter must each dedicate at least a hundred engineers to it. So, 200 people were quietly chosen for their ideological purity and ability to keep a secret from their peers. These 200 people write code in secret. Somehow they commit lies to the monorepo and apply private patches to the code before deploys. The SREs must also be in on it, because those private patches will still show up in traces and their bugs will show up as errors. All of this happens inside Facebook, a company notorious for employees who speak up and expect transparency. It also happens inside Twitter, a company with such lax controls that until just recently thousands of people could use the internal admin tool to take over any account.
I don't know, I guess it's possible? Maybe you have a better idea for how it could be happening, but it just doesn't seem very likely at all.
I’ve seen this kind of thought pattern a few times and frankly the way you are thinking doesn’t match reality.
I work on a 1000+ person enterprise software project.
Less than 5% of those 1000+ understand our customers' requirements and use cases in any real depth. This is despite trying for years to incentivise developers to gain a broader understanding of our business.
Within that core 5% most decisions are driven by the 3-5 people who care about the particular area.
So for a 1000+ person org, you would only need to corrupt 3-4 people to drive a hidden agenda.
This is for a project not trying to be secretive in any way.
To relate it back to Twitter: you would probably only need the right 3-4 people pushing hard for content moderators to be hired in San Francisco instead of Bangalore in order to push hard-left views.
Exactly - our product uses angular because two of our core engineers loved angular, helped people who were having trouble with angular, and hired people who also liked angular.
Not because angular was the best tech choice. We didn’t even do a proper evaluation.
And this is for a $100M+/year project...
(FB == Fish Bowl)
You're swearing off the internet entirely?
Less than 10 years ago, it would've been considered very rude to push your religious or political opinions onto others, and in a professional setting especially, it would've been considered highly unprofessional. But nowadays that line doesn't seem to exist anymore.
Pre-2012 Facebook was awesome. Now the feed is almost exclusively bullshit from people I don't know.
No one has to use Facebook.
Government's role is to make the ideologically agnostic machine of business align with our values. In the kind of competitive economy we have, it can only be this way. If we try to apply politics from within a business, we risk introducing instabilities and inefficiencies, making the business less competitive - an existential threat to the very values we incorporated into the business.
No, the existence of controversy over an issue is a question of empirical fact, not political opinion.
The ascription of significance to the existence of controversy may be a political opinion (and is certainly a value-based opinion), but not the question of whether controversy exists.
Those complaining of being deplatformed would probably agree strongly with your definition, however, so I will admit the definition of this word is itself controversial. Or maybe I shouldn't, because the prior sentence feels very political to me.
Any. Controversial is a continuous-valued, not binary, attribute.
How controversial is enough to justify a particular reaction? That's a political judgement, and in practice has as much to do with where you stand on the controversy as how much controversy there is.
How do you know whether people can't express anger at someone? That's not a rhetorical question. I recently found out that colleagues who pretended to have good relationships (because, as a cultural thing, we don't talk negatively about others) had long-term resentments against each other. And those resentments were influencing work beneath the surface in a negative way, until it blew up into dysfunction, which is how I realized.
If the countermeasure is reasonable we implement it.
We've found this keeps things from festering. Heck, one employee thought he was being underpaid. He was.
He put it in the issue-tracking system, and we now have a public skill system and remuneration scale.
The consensus against retrospective punishment is a lot weaker than people might expect, and who knows what new social crimes the future will bring.
The IRA and/or its successors or friends appear to have taken the same approach as Russian security services have with the rash of targeted murders in Europe, with a "this totally isn't our doing, but anyone slightly educated on the subject will recognize our hand, because we want them to be aware that it's us and we don't actually mind people knowing" wink wink nudge nudge threadbare veneer of disclaiming responsibility.
Normally, I wouldn't really care: the 2016 stuff everyone made a fuss about on social media was largely ineffective and at best served as a smokescreen to distract from their very successful actions outside social media--Buff Bernie is a lasting meme treasure and nothing more. This go 'round, however, they've apparently learned from their mistakes, and I'm seeing evidence that personal friends _are_ receiving and are influenced by their messaging.
I thankfully haven't really had to watch any family or friends succumb to the Fox News media poison, and thought my social circles largely insulated from that sort of problem, but I was apparently quite wrong--right about _what_ wouldn't influence people, but blind to the idea that other actors would follow the same model and create content that _would_ suck in their target audience.
https://twitter.com/evelyndouek is a good source of reporting about Facebook and other social media cos' continued lackluster attempts to stand up Potemkin independent review bodies, if you want more info on the space and can stomach more disheartening news.
The people in question are current students at my alma mater, where I studied, among other things, Russian language and the former Soviet Union. Some in the current class are likely studying the same, but most aren't, and even those that are, well, they're just starting to study it.
Again--no certainties there, but while I received a fairly decent US high school education, coverage of the cultural and political history of the former Soviet Union is limited by necessity--there's just not enough time to slot that in among everything else US high schoolers are expected to learn.
My gut feeling is that if I'm seeing them share this sort of content, that it's reaching them organically, not because they're finding it after a long time studying the whole of the space over a decade of hobby interest--that's where I'm coming from. The end viewpoints and values may have similarities, but they will be colored by many other factors, and those factors matter.
If that intuition is right, while we may share views in some sense, their view is quite possibly being shaped by actors whose intent is to shape it in a particular direction, who recognize that there are avenues to do so (the amplification/radicalization potential of internet content rabbit holes is well-documented at this point), and who aren't really interested in building a nuanced perspective grounded in mutual understanding of both FSU and American history.
Intuitively, based on their past actions, those actors want the opposite: to (skillfully, mind you) leverage their own nuanced understanding to craft a shallow, targeted narrative that's believable enough, with the primary goal of supporting their own agenda and political goals, not with the goal of building a strong basis of mutual understanding across borders. Is trying to reason about those aims hard, to the point of being nearly impossible to get right? Yes! Entirely! But I don't think the response warranted is "well, it's hard, we should all give up and just see what happens". We must try to instead do the best we can, both in our words and actions in a given moment and with an expectation that we won't be entirely on the mark always, but that we can and should try to watch for our mistakes and catch them as early as we can--that is how we improve and help one another.
So, to sum up, can I rule that out definitively? No. Can I make what I think is a reasonable assessment of what's going on based on the information available to me, my own background of knowledge, and recognition of what's changed in the world since I made a similar journey? Hopefully, albeit worryingly, yes. I therefore think it's important not to abdicate any notion of responsibility or to call it a day and agree to disagree on a lot of the nuance--doing so tacitly grants one sort of nuance authority, and the intent behind it may not be entirely benign--historically, it hasn't been, and an about-face seems unlikely at this time.
So the dastardly Russkies didn't intend that Trump be elected? Someone tell Rachel Maddow! This changes everything!
American coverage on their efforts was by and large terrible, at least from major outlets. Focused analyst coverage in the space has been a lot more nuanced, but nobody's reading that without an existing personal or professional interest.
The other half of that analyst coverage is that they rapidly became quite tired of Maddow and friends hammering on a very simple narrative that missed the point, but was very effective at achieving its actual goal, keeping consumers of major media on the left-of-center end of the American political spectrum engaged in their content and bringing in continued advertiser money. That tiredness is relegated to water cooler discussion on Twitter, however, so it's not going to shape major outlet coverage much.
That's about the extent of the claims that can actually be checked by the reader. Of the rest, I certainly agree with the warnings about poor security for voting machines and other election infrastructure, but that's been a commonplace on HN for a decade, and the most salient if by no means the most egregious example this cycle, the Iowa Primary, is totally dismissed. Also in other parts of the article we're assured without any sort of proof that no one hacked a voting machine in 2016. Can we be so sure? The narrative walks a narrow path. The Russians did bad things but not catastrophically terrible things (i.e. they prepared to discredit the election on social media but didn't change the results). Voting machines should be more secure but let's not even mention requirements for open code and hardware audits (about which I've been writing my legislators for many years). Federal efforts on election security since Trump took office have been paltry but everything before that was great. Did Goldilocks write this? Was she the confidential source who provided most of the information without attribution?
I'm glad that normal neoliberal Democrats will finally distance themselves from the Maddow noise, but I would have preferred actual progress by this date rather than just "yeah sorry we went loopy for 3.5 years". I'd also like some indication that the next president, whether he takes office in January or four years later, will do anything at all to make voting more secure and more accessible to citizens. As it is, I just expect more attacks on the First Amendment. News media firms won't complain; as you observe they're banking fat stacks with Trump to kick around. The concern that keeps me up at night is that they're cooking up a new Russia effigy with which to torment the public now that Covid-19 seems likely to remove Trump himself from public office.
[0] https://www.cbsnews.com/news/the-phishing-email-that-hacked-the-account-of-john-podesta/
Why does it sound good to anyone that Facebook employees should be prevented from discussing the ethical implications of the product they sell their labor to create? Facebook's complete lack of accountability - internal or governmental - has to date:
- incited a genocide [https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html]
- provided a bias for right-wing content in an American election year (and fired the employee who blew the whistle on it) [https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-fire-employee-conservative-right-wing-breitbart-charlie-kirk-dimaond-and-silk-a9659301.html]
- exacerbated a global pandemic, indirectly causing thousands of deaths, by not policing Covid misinformation [https://www.theguardian.com/technology/2020/aug/19/facebook-funnelling-readers-towards-covid-misinformation-study]
- is arguably a contributor to the global rise in authoritarianism [https://www.theguardian.com/commentisfree/2020/feb/24/facebook-authoritarian-platform-mark-zuckerberg-michael-bennet]
and that's really just the tip of the iceberg. If you buy into the notion that Mark Zuckerberg is a nice man in a hoodie trying to run a business that his employees are tearing down with some radical agenda then I'm sorry, but how naive are you? Facebook has a track record of ignoring the consequences of what happens on their platform in order to continue profiting. It's not a mistake, it's the point.
We should be cheering on tech workers challenging the ethics of the work they produce, not talking about how inconvenient it is for Facebook workers to start realizing how questionable the product they're building really is.
It's unfortunately very much in the interest of Facebook's leadership team to discourage it, however, as a clock-in, clock-out, see-and-hear-no-evil labor culture is good for the leaders' personal wealth. So ethics be damned, number go up.
There is a fundamental difference when you're talking about a stock-owning, educated, in-demand software engineer, even if they are "just" working on scaling Facebook's image service. They have the institutional power at the company that they could leverage to change the product's outcomes, if they so desired.
Institutional power doesn't have to be leveraged towards political ends, but if you profit directly from an institution choosing unethical behavior in pursuit of profits then you are also behaving unethically. It's completely reasonable to apply that standard to the best-paid of Facebook's employees, just as it is completely reasonable for those employees to petition against committing more unethical behavior.
Facebook themselves already admitted that they were used to further a genocide, so your dismissal is somewhat beside the point.
https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html
Nobody is stealing anything; as the rules are set right now, influencing public opinion through media channels is not seen as "stealing". If the powers that be were to physically alter the votes and the voting process, that would be another discussion, but almost everything presented in the media is fair game.
But this cannot be done, despite all attempts to quiet the cognitive dissonance. Every employee of an evil company is evil.
Every political message lobbied for by the employer is the employee's political statement. Any claims to the contrary reek of hypocrisy.
If you work at Facebook, your work directly or indirectly supports Facebook's political decisions. Facebook just doesn't want you to talk about it, because Mark and the executives make the decisions, and you're just supposed to follow orders. This is how it works at many other companies. But for a long time, Facebook was able to recruit people to work there by promising that they could 'change the world' and 'make a difference.'
Side note: One of Facebook's board members apparently enjoys the company of white supremacists. https://news.ycombinator.com/item?id=24444704 Will Facebook employees be allowed to talk about that? If you work at Facebook, how do you feel about that?
If your team won the Super Bowl or your nation took home a lot of gold at the Olympics, how does that affect you materially?
Advancing progressive politics necessarily requires upsetting the status quo, whereas if someone is a social regressive, simply adhering to the status quo suits their agenda just fine.
...so then some places want to avoid that image, so they do allow discussion of controversial topics with the constraint that people debate things civilly - so far so good - except those same places also want to avoid an image of partisanship - and so, in the spirit of freedom of speech, they'll allow discussion of any topic (again, provided it's done civilly).
...which leads to that place or community falling victim to the paradox of tolerance (as, by their own rules, those communities must allow the advocacy of genocide and unspeakable crimes provided it's advanced by an individual who conducts themselves with politeness, while their debate opponent might be a gay, black, disabled Ethiopian Jew who is rightfully concerned for their own life and future and who may utter a swear word a bit too loudly and suffer censure for doing so).
As far as I can tell, the most workable solution to that is for the bounds of the Overton Window to be explicitly declared by the bosses/mods/admins - who, in doing so, instantly open themselves up to accusations of partisanship, especially if extremists take advantage of people acting in good faith.
I believe in most places the current Overton Window permits discussion and advancement of communist utopian ideals but not far-right ethnonationalism - assuming those two are somehow equivalent - and if Facebook - or any other place - had a similar declared Overton Window policy then it can be said they're biased towards the left, which is great fodder for the pundits on America's most popular right-wing TV news channel.
Why is that "by its nature"? I don't think that's "by its nature" at all. Phone companies and ISPs facilitate communication between people, but that doesn't necessitate they take political stances on what communication to allow. Why should Facebook? If certain language should be restricted, then laws should be written restricting said language and Facebook should comply with those laws. Nothing about Facebook's nature forces them to go beyond that and act as de facto language legislators.
Facebook doesn't "facilitate" communication in the same way a computer "facilitates" communication. It facilitates communication in the same way that a forum or book club or group of people facilitates communication. Groups require some moderation to remain popular. Facebook is driven to moderate by the market.
Taking a stance to not control what communication is allowed is itself a very political stance. It just so happens that, in those cases I believe, it's also a legally mandated stance; but if it weren't legally mandated, it would absolutely be a political stance, whatever they ended up saying.
Where a private company decides to limit free speech (or not limit free speech) is, 100%, a political stance when the laws have not been written that make that decision for them.
Even if we maintain a law distinguishing companies that merely host other people's content from those that curate and publish content, which side of that line a given website and company choose to stand on could itself be seen as a political decision.
I'm forgetting the word for publisher vs ... whatever it is where they take no responsibility for what people post on the site; but I hope my point is clear.
A segment of social media company staff also don't like that reality and want their platforms to censor the political parties/discussions they don't like, and thus they toe the line and give unsatisfying non-answers at all-hands meetings and to the media.
Then in your model of the world, how would one not take a political stance?
If, in your model, everyone takes a political stance by definition, no matter what their intents or actions are, then it's a rather useless definition.
Is providing food in supermarkets without political background checks "a political choice"? If so, then everything, including picking your nose with your left or right hand, is a political choice and the term "political choice" becomes utterly meaningless.
You don’t help, but you don’t report them to the secret police either.
That is a choice.
A non-political stance would be one which has zero side effects on anybody other than yourself. As soon as the actions you make and the decisions you take have an effect on somebody who isn't you, it becomes political.
Actually _taking_ a non-political stance is an exercise left to the reader.
Not really, no. That is not usually what people mean, when they say "take a political stance".
We can extend this to other examples. Do you think that a grocery store should ban people from their stores, if the individual is wearing a pro Trump, or pro Biden Tshirt?
I think it would be pretty silly to condemn a grocery store for refusing to ban people from their stores if they were wearing a "Vote for Biden/Trump" shirt.
Most people would find it absolutely and completely ridiculous to ban people from stores for doing that.
Imagine someone walks into a grocery store naked.
Or wearing a t-shirt with a explicit image of a man and a woman having sex. Or two men having sex.
Or a t-shirt which says / shows something extremely inflammatory yet not illegal.
I could imagine various stores making various decisions in all of these cases, all of which would be the folks working in that store expressing their beliefs!
Humans are inherently social and thus inherently political (politics in the sense of politics as the negotiation and management of a community).
Or imagine someone walks into a grocery store and doesn’t wear a mask! Lol :):/:(.
Or wearing a "vote for Trump/Biden" t shirt.
In that situation basically everyone would agree that it would be ridiculous for the grocery store to ban people for wearing that t-shirt.
Most people would not call refusing to ban someone over such a shirt a "political" decision; if you do call that refusal political, basically everyone would disagree with you.
But, in general, businesses should have fairly wide latitude as to what t-shirt slogans they allow customers to wear. (Though I think we can imagine various slogans a business might deem objectionable.)
At the same time, businesses can reasonably grant employees only a fairly narrow latitude in what they wear, even barring an official dress code, particularly with respect to advocating for a specific candidate--and that may even get into matters of company campaigning.
But the point is that basically nobody would call it a "political" decision if a store allowed customers to wear "vote for Biden/Trump" shirts.
In China, for example, while I don't know for sure, I would bet you could not wear a t-shirt with the face of, say, a former communist party member who had opposed the current clique in power (i.e. the closest thing to an "opposition" politician that China has).
Or a democracy-activist t-shirt.
Point is, in the US, though we are lucky in that we often don't have to think about it, our political principles of free speech allow for a lot of behavior.
Stores are demonstrating their political belief in freedom of speech when a store manager doesn't kick up a fuss over someone walking in wearing a t-shirt for a politician the manager dislikes.
Of course there are probably also laws or the manager is savvy enough to know they could get the store sued, but, you get what I’m saying I think / hope :).
You don’t notice it until it’s not there.
And thus, what often appears to be not making a choice, really is making a choice, albeit the default choice :).
So then you agree with the vast majority of people that it is not a "political" decision to refuse to ban someone for wearing a "vote for Biden/Trump" shirt?
Cool. That is my point.
A lot of bars and clubs have dress codes. Many ban wearing clothes that could be perceived as "gang" colors. It would be a terrible business decision for a grocery store to do this, but I don't think it's wrong.
That's simply not true. One can verify the truth or falsehood of your statement by applying the knife of logic. Draw your statement to its logical conclusion in order to determine if it results in absurdity.
Let's do that.
A person who has had their brain surgically removed will (quite probably) never mention politics or attempt to control others' communications about politics. According to your statement, that person's actions are political. Sorry, but that's absurd.
2. You wrote:
> Your conflation of passive inaction with an active choice to refrain from certain action is absurd.
That is only meaningful as a circular definition. Choosing not to act differs from an inability to act. Why? How would you define the difference between choosing not to act and simply not acting because no choice was ever made?
I wouldn't, because not making a choice to act in a particular way is very different from making a choice to avoid as policy acting in that way, or, as was at issue upthread, “Taking a stance to not” act in a particular way. Taking a stance is (for this subject matter, at least) political. Inaction on its own is not.
Publishers are liable for everything posted on their websites. Platforms are not - as long as they make good faith efforts to take down or prevent posting of illegal content.
Both are allowed to engage in moderation, curation or "censorship". Engaging in such does not make a website a publisher.
Would you prefer it if social networks went straight to bans for rule infractions?
These are key to the usability of any social network. And they are inherently biased. Any such organization also has to take money, so ads are also key to their operations, and they have taken political stands on ads too.
The attempted comparison to utility companies is not compelling.
* arguably they do now by limiting "scam calls"
[1] https://www.getrevue.co/profile/themarkup/issues/probing-facebook-s-misinformation-machine-241739
Some of the filtering is based on what the user wants to see, some of it is based on some notion of how "good" a piece of content is (scored by likes and engagement numbers), some of it is advertisers paying to have their content make it through the filter, and some of it is Facebook deciding what should and shouldn't be seen (mostly driven by their desire to keep you on the platform). Every single thing you see on Facebook has made it through a huge filter that ultimately decides whether it's something you should see. And the inevitable outcome of building a gigantic what-information-do-you-get-to-see machine is that there are many, many parties trying to influence the machine.
Phone lines don't have that problem.
If Facebook limits the filtering to engagement, then it isn't the fault of Facebook that political content is engaging. That's just human nature. Disasters, outrage, politics, polarizing topics - these are all popular topics both online and off-line, and spread quickly as town gossip well before Facebook.
It is only when Facebook steps in and says that particular topics need to be exceptions to the filtering rules that apply to everything else that they make themselves into a political actor.
For instance, let's say that the news feed showed you content based purely on number of likes. If political posts get lots of likes that isn't Facebook's problem. If the same ranking rules apply to all posts (# of likes) then they would remain neutral. As soon as Facebook says "content from x person will have their ranking artificially changed to reduce/increase engagement with it" thereby making an exception to the rule that applies to everything else, they have now become a political actor.
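A minimal sketch of that distinction, with made-up names and numbers:

    posts = [
        {"author": "aunt_carol", "likes": 12},
        {"author": "candidate_x", "likes": 980},
    ]

    # "Neutral" rule: one criterion, applied uniformly to every post.
    ranked = sorted(posts, key=lambda p: p["likes"], reverse=True)

    # A per-author exception is an editorial (and potentially political) choice:
    multipliers = {"candidate_x": 0.01}  # artificially suppress this author's reach
    adjusted = sorted(posts,
                      key=lambda p: p["likes"] * multipliers.get(p["author"], 1.0),
                      reverse=True)

The first sort is the same rule for everyone; the second quietly reorders the feed for one author, which is exactly the line being drawn above.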
But... I think what we're seeing with political content is just a symptom of the real problem.
> Disasters, outrage, politics, polarizing topics - these are all popular topics both online and off-line, and spread quickly as town gossip well before Facebook.
This is true. But when information spreads through people's conversations with each other, there are limits to how fast it spreads. There's also a lot of room for dialogue and different perspectives. If I have some silly conspiracy theory that I want to spread around, it's going to be pretty hard to convince the people around me that 5G is going to activate microchips that were injected into my bloodstream. They will likely point out that basic laws of physics don't really allow for that. But if I know how to game a social media algorithm[0] to connect me with millions of people who are susceptible to that kind of thinking, I could convince a shockingly huge number of them to believe it[1]. Especially if the social media platform isolates those people from opposing opinions and connects them with people who think similarly.
I think social media is like removing the control rods from a reactor. Those basic human flaws are now being amplified and capitalized on at a scale we can barely even grasp. And it really doesn't matter if Facebook, Twitter, etc. are "at fault" or not. It's a fundamental problem with these services, and the problems will continue to get worse.
0. https://www.npr.org/2020/07/10/889037310/anatomy-of-a-covid-19-conspiracy-theory
1. https://www.cnn.com/2020/04/13/us/coronavirus-made-in-lab-poll-trnd/index.html
Does any site actually do this successfully? It seems to me that even sites that lean heavily towards algorithmic curation (including HN) still have an element of human veto.
This is like saying discrimination that gets baked into an ML model isn't the creators' fault imo.
If I build a bridge intending it to stay up and it happens to fall down 6 months later, I'm responsible for it. Facebook created an algorithm that divides people politically and that surfaces content that is provably fictional. So they should be held responsible for it regardless of their intent. They don't get to invoke "common carrier" status when they're writing software that makes decisions about what you do or don't see. What makes a telephone a "common carrier" is the fact that the telephone doesn't decide who you call.
It doesn't matter whether it's software or a human. What matters is that decisions are being made by Facebook about what you do or don't see.
Whether or not it is intentional is immaterial to the effect. The law doesn't care about your intent. I wouldn't intentionally dump toxic waste into a river but I'm liable for dumping whether I intended to or not. Mark Zuckerberg can't just throw up his hands and go "oops it's software I can't help it" when it's his company that made all of the decisions about how the software works.
The information is out there. There are reliable news sources. There are reliable databases and encyclopaedias and journalism. If people choose not to read them then that's on them.
Propaganda, misinformation and deception have always been human issue - mass media magnified them as it does everything else.
And I think we all have the right to critique social media, just like we can critique the news, books, movies, etc.
We don’t have to agree but the discussion is a valid one to have!
We get to help shape our society and world, after all :).
https://www.facebook.com/business/help/2593586717571940?id=673052479947730
https://www.fastcompany.com/90538655/facebook-is-quietly-pressuring-its-independent-fact-checkers-to-change-their-rulings
They can't purport to have checked facts, remove postings they believe are incorrect, and then quietly put pressure on the fact-checkers to arrive at a different "opinion" as to what is "factual".
The problem is, Facebook doesn't show people content that they want to see. They show people content that they will engage with. That's a very important distinction.
The HN algorithm/moderators actually explicitly do the opposite: if a thread gets too many comments too quickly, it's ranked downward. The assumption is that too many comments too quickly indicates a flamewar, and the HN moderators want to keep discussion civil. The approach Facebook takes is to "foster active discussion", which on the Internet typically means a flamewar. Nothing generates engagement like controversial political views. So that's what Facebook's algorithm/moderators show to their users.
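Neither site publishes its actual formula, so this is only a guess at the shape of the contrast, with made-up weights:

    # Hypothetical sketches only -- neither HN's nor Facebook's real scoring.
    def flamewar_averse_score(upvotes, comments, age_hours):
        # Fast-accumulating comments read as a flamewar signal and
        # penalize the story rather than boosting it.
        comment_velocity = comments / max(age_hours, 0.5)
        return (upvotes / (age_hours + 2) ** 1.8) / (1 + comment_velocity)

    def engagement_score(upvotes, comments, age_hours):
        # The opposite incentive: every reply, angry or not, boosts the post.
        return (upvotes + 3 * comments) / (age_hours + 2)

Same inputs, opposite incentives: the first demotes a thread as replies pile up, the second promotes it.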
Facebook absolutely is a social conditioning tool, it’s designed from the ground up to show people content that stirs their emotions enough to click “like” or the mad face icon or even leave a comment and wait around until someone replies back.
I think it is far worse to attempt to condition people by showing them things *they wouldn't otherwise engage with*. What is scarier: showing somebody something they want to engage with based on their past behaviour, or showing somebody something they wouldn't otherwise have seen if you hadn't gone out of your way to shove it down their throat?
People seem to want Facebook to make people more placid. Oh, you have extreme views? Here, let's condition that out of you by only showing you more moderate stuff. Oh, you think x is bad? Let's not show you anything to do with x so that you'll hopefully forget about it and not engage with that part of your brain any more.
Like I've already said, this alternative is far more Orwellian and far more of a tool for social control, than simply optimising for engagement.
I don't think that makes sense, and I don't think that's what anyone's advocating for.
If you friend someone, or follow a page, or whatever, you are explicitly saying "I want to hear what this person/group has to say". You aren't saying "I want FB to carefully curate what this person/group says in order to increase my engagement on FB". FB shouldn't promote, hide, or reorder anything coming from someone I've explicitly chosen to follow. It should just show me all of it and let me decide what I do and don't want to see.
This isn't correct. The law in most modern democracies, as far as I'm aware, is very concerned with intent.
This is why we generally define murder and manslaughter as distinct.
> Murder is the unlawful killing of another human without justification or valid excuse, especially the unlawful killing of another human with malice aforethought.
https://en.wikipedia.org/wiki/Murder
> Manslaughter is a common law legal term for homicide considered by law as less culpable than murder.
https://en.wikipedia.org/wiki/Manslaughter
Murder vs manslaughter is the extreme example, though you'll find courts are broadly quite concerned with intent.
2. Does it matter whether it's Facebook's "fault" or not? The issue is their power.
Imho, ideally they would acknowledge and accept responsibility for that power, and in the US at least there would also be some laws regulating them in this regard.
Being political is not an incidental facet of Facebook; it's a core intention.
And the inevitable outcome of building a what-calls-go-through machine is that there are many parties trying to influence the machine. Eg. faking caller ID, evading blocks with throwaway numbers, spamming no-response calls to figure out which numbers are valid to target, faking a robot voice to pretend to be a real person.
Practically every modern platform uses centralized systems to filter the noisy world down to something fit for purpose, and sometimes this intersects with political issues. That's no reason to expect a platform like Facebook to become even more political in their stance than the existing level of politicization that is almost impossible to avoid.
The ethics of being a/the major institution of mass communication in large parts of the world may not force FB to act as language legislators, but these ethics certainly should compel them to do so.
Relevant points:
- If FB's status as a mass comms source is threatened, then the company itself is threatened. This threat can come from a lack of trust in the platform and/or legislation that effectively legislates it out of existence (see below re free speech). This existential issue should compel them to factor language legislation into their corporate policies.
- Stockholders certainly care about FB’s status as a mass comms source even if no one else does.
- Stakeholders obviously care about this, too.
- Relying on governments to regulate mass communications is a Pandora’s box for FB since FB is an international platform.
- In the US, in order to facilitate and encourage free speech, mass comms laws are not particularly restrictive, but they are built on underlying assumptions about social-based regulation that generally hold up, yet seem to be completely broken with platforms like FB. If FB doesn't address this issue, then the laws that end up addressing it may legislate FB out of existence.
To close, whether playing the language legislator is part of FB’s nature, an emergent property, or something else, there are very real reasons that FB has policies on regulating language. Whether they do this well or not is a completely different issue, but putting the onus on government legislators to address the problem with formal laws seems, at best, overly dismissive.
Companies don't just have no reason to regulate language, they also have no serious authority to do so. The onus has always been and can only be on government legislators to address these issues in the most fundamental sense.
I'd like to see Facebook try to take on the Democrat/Republican eternal conflict in the US or the CCP in China on adopting a universal policy that they don't agree with, armed with powerful arguments like "the power of ethics compels me!" or "it's a Pandora's box because we're international!" or "stockholders care about our status!". Going above and beyond their most basic policy obligations has been a great way to attract the ire of political authorities who are now agitated over whether Facebook's policy is intentionally empowering or weakening their political enemies.
Because Facebook explicitly chooses how to construct the timeline it presents to its users, and because some of that timeline contains political content.
If Facebook were a dumb first-in-first-out aggregator, it wouldn't be political.
But it's not.
They were regulated as public utilities and common carriers. Facebook is not.
Just like any corporate decision, you have a small number of people who are relevant to making the decision, and they discuss among themselves. It isn't productive to have 10,000 people who are all angry if Facebook doesn't make the decision their way, and they each spend an hour complaining about it on Facebook, while claiming that they're doing work because they're discussing a corporate decision.
These issues are not part of your job description. You were hired to write Javascript, not to set corporate strategy. Sit in your seat, content yourself with the $500K/yr you're being paid by your betters, and refrain from sharing with everyone else your facile moralism.
I use Facebook to stay in touch with old friends. If they want to share the latest Tucker video, I'm not sure why FB should block them if I don't block them.
In situations involving literal terrorist propaganda and active calls for violence (which were the examples given), which are already illegal?
Yes. The courts are the proper place for determining how literal terrorism/imminent threats of violence should be handled.
I don't think this is controversial, to say that people who make imminent calls to violence, as has been already defined as being illegal by the court system, should be handled by the law.
Most everything else should be not blocked by the platform, though.
Honestly? Yes. At least that way there's impartiality, accountability, an appeals process, and enforcement.
If Facebook self-moderates you get all of the same downsides of "big government"[1] moderation and none of the benefits listed above.
[1] I assume your argument is predicated on "big government", as though an unregulated and for-profit company having a near-monopoly on key parts of modern society is somehow superior to any kind of state involvement in reining in excesses that the free market fails to address.
But on a publishing platform where a posted article can have millions of viewers with no connection to the author who would miss out on important context... that won't end well.
There's a difference between Facebook facilitating private communication between individuals and small groups with inherently limited information-spread (e.g. phone calls, emails, IM) and Facebook operating a publishing platform that allows for mass communication. The problems we're seeing today stem from that very same mass-communication publishing platform being used as a state-level propaganda tool to sway public opinion (e.g. Russia discouraging Dem-leaning voters in 2016) at one end, to Facebook knowingly allowing and facilitating extremist groups to operate on their platform and coordinate real-life terroristic assaults at the other end.
My problem with Facebook is that it acts as radicalization pipeline by channeling lies that are too crazy for mass market media into the minds of people who are susceptible to believe them.
For a recent example in America, Facebook was used to spread propaganda telling rural Republican residents of Oregon that Democrats were coming to light wildfires in their towns. Rural Oregonians responded by setting up their own vigilante checkpoints.
It goes without saying that the claim that Democrats were coming to set rural Oregon on fire was a lie, but that lie spread like wildfire through Facebook, and it was a lie that could have very easily resulted in fatal violence.
To make matters worse, many of the too-crazy-for-mass-media ideas spread on Facebook are the product of astroturf propaganda and are not organic. The problem is not people sharing videos of Tucker Carlson, or even individuals spewing racist diatribes; the problem is well-funded right wing propaganda groups using Facebook to distribute material that encourages people to hate--or even kill--their fellow citizens.
While I agree that judges should make the ultimate decision on what is acceptable speech, the judicial system just isn't fast enough to respond to the speed of Facebook-propagated propaganda. Society needs a better solution; we can't wait for a perfect one.
Facebook is pouring gasoline on the fires of social division. This is not just unethical but is extremely dangerous to social stability and it ought to be stopped.
My parents are very much of the "no politics at work" generation and I really question why that cultural strain has carried itself into 2020 since it only serves company board members/executives and categorizes rank and file employees as automaton code monkeys who should "shut up and type".
Armchair thought: in this odd period of history where, ostensibly, capitalism "won" as the political system of choice and "the end of history" was declared we have entered an alarming stage of hyper-capitalism mixed with growing discontent/civil unrest. More than ever there seems to be a breathless determination by upper-middle class professionals to not rock the boat in any way in the hopes that these mega-corporations will continue to prop up the stock market, pay out outrageous salaries, and keep the gravy train running. It's a kind of cognitive dissonance where we can see how much damage the big players in tech are wreaking on global society - there's ample evidence - but to recognize and face it would sully the deeply held ideal that tech is some kind of great, benevolent force in our society (more cynically: confronting it would also mean confronting that fact that we as tech workers have ethical responsibilities to society at large that we have at best ignored, at worst defied).
Practically, it's not. Yes, you can catch up on how your cousin's new baby is doing, but you can't disentangle that from the extremist propaganda, disinformation, and real harm that these platforms cause by leveraging human psychology against us. Taking the view that ethics and work are separate silos is hopelessly naive. Almost every profession requires constant awareness and ethics in order to be a benevolent force: doctors, lawyers, builders, scuba gear manufacturers, car designers all have a responsibility to their end user, and I can't see how tech is any different. I doubt people would react the same way if this were GM instead of Facebook and their employees were up in arms after learning the car they had been designing and building had a track record of blowing up and killing people.
On a more personal side: I honestly cannot stand when most people discuss politics in the Slack at work. The vast majority of comments are snarky, are unsupported (by data) opinions, or are caustically dismissive of opposing views. It's bad enough when people holding political views I disagree with engage in that behavior, but it's much worse when people I do otherwise agree with do. And it happens in just about equal measure, as far as I've experienced.
Work is already stressful enough without adding to it with political fights.
2. I think it's disingenuous to imply that Facebook workers - and bear in mind we're not talking about the janitorial staff here, but tech workers who command salaries at and above $100K p.a. - must work at Facebook lest they be destitute. The greatest advantage of being a tech worker is the range of high salary positions available to you. That aside, I return to my previous point about this not being an abstract, culture wars style debate, but specific critique of company actions. It's not politics, but internal politics. Every company has internal debates about the strategic and ethical direction of the company - why not this one?
3. I understand that politics can be exhausting, especially in the highly polarized environment we live in, but I don't think that's sufficient reason to forbid internal critique of any company. Moreover I think the stakes are higher than we are comfortable with - Facebook has already ADMITTED that they provoked the Burmese genocide 2 years ago [https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html].
To flip the question around: what makes YOU think that YOUR personal right to feeling relaxed at work is more important than an employee's right to ensure that they do not work on a product that can lead to mass murder? Moreover, is it really a political stance to demand that you are not complicit in unethical activity?
Probably gotta just do what Mark thinks is right and, should that not be clear, guess what he would think is right. And suffer the consequences yourself should you guess wrong.
Most of its employees' jobs have nothing to do with politics. They imagined such relevance themselves to create a greater sense of significance/satisfaction in the daily mundane job.
It's therefore hard to see how taking this offer would not be choosing to sell your ethics for money and success, given that you could likely land a well paid job anywhere.
No normal company is going to sign _any_ contract provided by a prospective full-time employee (except perhaps if you are a sought after celebrity being hired at a VP level or above), so it would just be a waste of time and money for someone to take your advice.
Even if the hiring manager personally wanted to, there is no process for doing this. They don't have lawyers standing by to review such contracts. It would probably be hard to even find out who would have the authority to sign such a contract.
Further, retaliating against whistle-blowers is already illegal, as is ordering employees to break laws, so I don't know what additional protection you imagine you would get from such a contract.
Agreed. To take my advice, you would need to be hired as a contractor/consultant. Normal companies do this all the time.
I looked at your blog; you seem like a talented, driven person. Why not apply those gifts to something meaningful? Why spend your limited working years building tools for this horrible company?
What's that Upton Sinclair quote? Ah, yes: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
I recognize that it might not seem fair to take this position, but understand that it's an easy one to take when I look at FB's negative effects not just on the world, but on the lives of actual people I know. It seems unlikely to me that a disinterested party could truly weigh FB's positives and negatives and think the balance is positive. But you are far from unbiased, and I hope you can at least realize that.
Facebook has done a lot of good, but IMO there is no question that it's done more harm.
And Zuckerberg is a crazy person. There are a lot of people I wouldn't want to ultimately report to, but Zuck is right next to Larry at this point.
Not a great way to argue against someone who disagrees with that point.
It's hard to square that with the algorithmic feed, likes, etc, which are making the world worse every single day in favor of engagement metrics. We've known for many years how destructive these are.
Facebook and Twitter could literally make the world a better place simply by disabling those kinds of features. Just remove them. It doesn't get easier than that to substantially improve the world, yet it's not being done.
I'm not sure about that. I agree that the world might be better, but I'm not sure they could just disable them. The next smaller competitor who won't will have more user engagement and grow. If something is a very effective advantage, I believe you can only remove it by coordinated action and enforce it on a global scale.
Modern weapons are terribly efficient at killing people. But if you're the only country that's removing them from your arsenal, you depend on the mercy of your neighbors.
If you don't want to make the world a worse place, you don't do it. Hiding behind such logic means you're really just virtue signalling.
And it's not like people don't like it. They "want" to be engaged, to feel anger and surprise etc, those systems work because they're catering to peoples' instincts and desires.
Incidentally, this is why Google+ failed -- it was a social network marketed to the kind of people that hate social networking :)
Wow, that's a very racist/sexist statement and you don't even leave a hint about why you think it's true. Worse, it reads like you expect it to be obvious. What about a person's gender or skin makes them "a bad demographic for social networking"?
Edit: Also presumptuous of you about the HN crowd. Where would you even get those statistics? HN doesn't collect that data.
Whatever internal discussions you're having, they're not working. I'd posit that they can't work, because FB's entire business model is predicated on user-hostile, polarizing behavior, whether anyone internally will admit it or not.
It frankly does not matter one bit what things are like internally when externally we can see the harm FB has caused, and there is zero evidence that harm is going to stop.
It sounds like, from reading your comment a couple times, you know what is right but are tempted to ignore that and take the cash.
I have worked for a lot of start-ups, including several that grew to become significant giants in their segment of the internet. Facebook has been the only one I worked at where there was a group of employees whose job was to create posters to hang up in all of the offices telling everyone else how important and worthwhile it was to work at FB -- looking back I think this level of internal propaganda should have been a warning sign.
Sounds like there's nothing left to abdicate.
As a screw in the Facebook machine, your significance is trivial. This is true regardless of your intention.
Get over the ethical drama, I would say. Big tech is about as ethical as banks. In other words, the companies don't care, and they probably aren't ethical either.
I used to loathe Facebook and like Google. These days both seem about the same. Facebook's policy of leaving people alone deeply resonates with me, even though I still dislike them intensely for what they did to WhatsApp.
And for what it is worth, Facebook unlike Google hasn't insulted me for a decade with the ads they show.
Usually you see someone say something like this when they're presented with truly awful options. Seeing it used to refer to a $400k comp package is a bit jarring.
And if you've made it through FB's hiring process and they've given you an attractive offer, I find it hard to believe you don't have other options that don't involve a big ethical quandary, or wouldn't if you interviewed around more.
That being said, it’s a good idea to understand why the pay is so high (and it’s not because they’re nice people who only want the best for their employees):
You will be expected to leave moral qualms at the door. This is an unwritten rule at many companies, but Facebook had to write it down. That says something.
You will be expected to work for it. Hard. The people I know at Facebook easily put in 1.5-2x the hours I do at a FAANG-ish company (late nights and weekends seem to be the norm), but get paid roughly 1.5-2x what I do. If that's a tradeoff you're willing to make, go for it. I however am making more money than I know what to do with, and thus value all the time I'm not working (hobbies, travel, side projects, etc) way more than the money I'd make from working during that time.
At the end of the day, you aren't going to singlehandedly destroy the fabric of society in your first year, so if you're fine making the above sacrifices for a year or two for some quick cash and then fucking off to pursue some real interests, go for it. But I sincerely warn you against sacrificing too much of your life (youth especially) and your morals for money -- it really isn't as valuable as it's cracked up to be.
The sooner this fucking election is over, the better. No more having to read about Marxism, Trump, Racists, Snowflakes and Trannies.
I downloaded nVidia Broadcast a while ago, it's really quite good.
When you work at Facebook, you should know what's going on, what the company is doing and causing, and try to help fix it.
It sounds like leadership is asking employees to put their heads in the sand - shouldn't a leader propose the opposite? What happened to "move fast and break things"?
It feels very old fashioned, but are we not getting a little burned out by a world where people openly nail gun their identity politics to the mast?
When I were a lad (way back in the nineties) I was taught it was rude to talk about politics, religion, or money. This applied to anywhere one was in polite company, not just at home, and definitely not at work.
That seems like a bigger issue. If I am an activist and I poison the enormous dataset that's being fed to an ML model, is anyone even going to notice?
For example, the work Google does on "de-biasing AI" is all about taking ML models and warping their understanding of the world to reflect ideological priorities.
My point is that the individual-level outputs (which you'd need to accomplish what the OP was talking about) are essentially impossible to tune so precisely, given our current understanding of the models.
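As a toy illustration of that point: even crude label poisoning barely registers in aggregate metrics, which is exactly why deliberately steering individual-level outputs is so hard. Everything below (the dataset, the model, the flip rates) is an assumption made up for the sketch, not anything resembling a production system:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy dataset and model; both are stand-ins chosen for simplicity.
    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def test_accuracy_after_flipping(flip_rate):
        """Train on data with a fraction of labels flipped, score on clean data."""
        rng = np.random.default_rng(0)
        y_bad = y_tr.copy()
        idx = rng.choice(len(y_tr), size=int(flip_rate * len(y_tr)), replace=False)
        y_bad[idx] = 1 - y_bad[idx]  # the "poisoning": flip a small share of labels
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
        return model.score(X_te, y_te)

    for rate in (0.0, 0.01, 0.05):
        print(f"flipped {rate:.0%} of labels -> test accuracy {test_accuracy_after_flipping(rate):.3f}")

The accuracy numbers move only slightly as the flip rate grows, so an aggregate dashboard would likely never flag it; conversely, the poisoner has no precise control over which individual predictions change.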
They have also suggested a "meme cache" - one of the memes shown is a Folgers coffee cup which says "Best part of waking up, Hillary lost to Trump".
Based on this classifier and hits to the meme cache, "trolls" would experience things like auto-logout and limited bandwidth.
Under "when to trigger this" they also suggest the period "Leading upto elections".
So on the one hand this document seems well-intentioned because there's some bad behavior in these groups like raiding, doxxing, racism, etc.
Rather than focusing on behaviour like doxxing and raids, the approach suggested seems to be directed at a specific group. Why? In the entire universe is it only this group that engages in this kind of behaviour?
It also applies a broad classification that hits anyone sharing the same memes or vocabulary with punitive action.
Also they associate the election with this, which seems especially puzzling.
“If you don't stick to your values when they're being tested, they're not values: they're hobbies.”
― Jon Stewart
(Many said something similar, but I just love Jon Stewart)
I see so much debate about what's right to do within FB, "how will people change the structure from the inside with this rule?", etc.
QUIT. Just quit. Seriously. Make it public why you quit. Quit en masse. FB is not a good company. Your talents are useful in many other places.
Yes, I'm privileged in saying this. No, I wouldn't feel comfortable quitting my job right now.
But if you believe enough that FB is an evil company--as many of us have known for 10+ years now--you should not work there.
If they are doing bad things, and they are not open to people fixing said bad things, stop helping them do bad things.
Why does quitting help?
Consider IBM. Its revenue is about 20% of its peak. It used to be seen as a monopoly power; now we barely think about it.
I think that IBM has made itself an unattractive place for employees, where it used to be seen as an extremely prestigious place to work. And I think poor-quality employees, along with average-to-mediocre management, have squandered an incredible, dominant company over the past 20 years.
Facebook will decline if all the most desirable employees just quit. It’s basically just math - those who know the most, interview best, and have the most accomplishments will be able to leave the fastest. Facebook will be left with the D team and they’ll get taken to the cleaners by competitors (as they already are with TikTok - and what’s the average age of the most engaged users of Facebook again?)
Anyway, the point is, you quitting hits a company in precisely the right place - their wallet. Employee turnover is tracked and costs companies money. Higher turnover does bring about changes.
And FB ceases to exist. A loud message will echo through Silicon Valley for years to come.
The only ethical decision for the engineers to make is to quit. Thus all employees there are mathematically unethical. They are writing the code that executes the immoral decisions.
What is the plausible internal plan of the agitators and activists inside FB and other firms? They have none, beyond systematically banning more and more users who violate ever more bizarre and ad-hoc purity rules. That's not a plan.
Moreover, quitting over this stuff isn't a one-way street.
Facebook is not an evil company. I wouldn't work there today, but that's exactly because of the vicious internal partisan politics that make these firms so unfriendly to anyone who isn't strongly on the left. For anyone who thinks corporate diversity programmes are sexist against men, that Brexit is a commendable move towards localism, that Trump sometimes might actually have a point, etc., Facebook is just not attractive today.
It sounds like Zuck may be getting a grip on his workforce and professionalising it. If so, for every activist quitter they'll suddenly find they're more appealing to 10 more normal employees who just don't want their workplace to be a political battleground. Moreover people get more conservative as they age, and they also get more experienced. So they may suddenly discover they have access to more experienced senior engineers who were previously, uh, content with their current job.
To elaborate: They don’t all pay well. Not even close. And none of them pay workers in other countries well, and they can outsource endlessly with no repercussions.
But they want to keep their home in the US (the world military empire) because it lets them do whatever they want. And they don’t pay migrants nearly the same because migrants are bound by their visas. But too much of that and the Americans will catch on.
Unionizing in this case is extremely unlikely. There is too much for the employees to lose. Instead, Facebook and the US Government will continue doing whatever they want.
That's fine, let them. "If I don't do it, someone else will" is a poor justification for anything.
It's a horribly dystopian opinion that disparages moral action and inhibits good people from doing a good thing: taking a stand against unethical behavior.
What fixes things is organized political action.
I did not vote for them and I would rather have the people I did vote for (and that I can stop voting for) solving those problems.
That is just me, though. I'm sure many other people would much rather have their problems solved by Facebook employees than by elected representatives. I mean, I for one also think that most people elect literally the worst people in the world, and would rather have them ruled over by unelected clerks at Facebook.
On one side, some people have an interest in not accepting that their financial success is arbitrary and illegitimate. On the opposite side, some people feel that they have been locked out of an arbitrary wealth transfer, and so they have a strong interest in not accepting that they're incompetent losers who deserve to be at the bottom of the food chain because they didn't time the market right (a highly speculative and irrational market, too!). Or maybe they didn't pass Facebook's whiteboard interview questions several years back (an arbitrary hiring process by many accounts)... So basically they missed out on a huge opportunity for some fickle, arbitrary reason.
I don't think blocking discourse is going to improve things. History has shown time and time again that preventing free speech stops people from finding compromises, and when compromise is off the table, the only remaining outlet for the worsening problems is violence.
If the elites keep suppressing speech, the result will be worse than WW2 and the elites will not stand a chance because it will be fought on their own turf... The elites won't even know who their enemy is. Their own friends and family members could be against them. They won't even realize it until it's too late.
The right thing to do is to find political solutions. I personally think that UBI (Universal Basic Income) would solve most problems. It wouldn't fix the wealth gap immediately, but it would fix the mechanism that's suspected of causing arbitrary (centralizing) wealth transfers, which would at least level the playing field.
UBI is a really good compromise. If the elites are so confident in their superior abilities, surely they have nothing to lose by leveling the playing field right?
BTW, I currently earn 100% passive income, so I'm actually saying this as someone on the winning side... I came so close to complete failure - leaping over the crevasse in the nick of time - that the system's fickleness and arbitrariness are crystal clear to me. I'm standing on the winning side of a very deep precipice, and I can see legions of talented people running straight into it.