Bonus clip: https://www.youtube.com/watch?v=DH76CZbqoqI
[1] On whether the line between dys- and u-topia depends upon the prevalence of man-portable SIGINT devices: https://news.ycombinator.com/item?id=24069572
Further playing "bot or not?" we have the Stasi (human) vs NSA (automated): https://news.ycombinator.com/item?id=24470017
For example, simply providing an alternative to a paywalled article is a recurring task people do here. It's easy to automate and doesn't raise eyebrows; if anything, it improves the impression of the profile if someone were to do a quick check. Another one is suggesting alternatives to products. It's easy: search through Product Hunt or other sites for candidates. Congratulating someone on their product launch/Show HN likewise doesn't require contextual understanding to the same degree.
Big tech, philosophical, news media, etc. threads are predictable. T5 and ELECTRA models from Google are good at filling in blanks (in contrast to GPT, which generates text in a forward, left-to-right fashion), so they can be used to make unique sentences that follow a pattern. The output is more meaningful, at the cost of less randomness.
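A minimal sketch of that fill-in-the-blank idea in Python, assuming the Hugging Face transformers library; the model name and template are purely illustrative, not a claim about what such a bot actually runs:

    # Fill-in-the-blank generation with a masked-language model.
    # Model choice and template are illustrative assumptions.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")
    template = "Big tech companies are [MASK] for the open web."

    # Each candidate is the template with [MASK] replaced by a likely token,
    # yielding several unique sentences that follow the same pattern.
    for candidate in fill(template, top_k=3):
        print(candidate["sequence"])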
Many posts on HN appear first on Lobsters, small subreddits, GitHub trending, and popular Twitter accounts. You could simply fetch the links at random intervals within a timezone and post unique links here.
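A sketch of that fetch-and-repost loop; the Lobsters endpoint and its JSON field names are assumptions to verify, and the actual submission step is left as a stub:

    # Hypothetical fetch-and-repost loop; endpoint and field names assumed.
    import random
    import time

    import requests

    while True:
        stories = requests.get("https://lobste.rs/hottest.json", timeout=10).json()
        story = random.choice(stories)
        print(story["title"], story["url"])  # a real bot would submit this instead
        # Sleep a random 1-4 hours so the posting cadence looks human.
        time.sleep(random.uniform(1, 4) * 3600)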
You can target a demographic that is least likely to suspect it's a bot. HN is siloed into many small parts despite having the same front page. You can predict which users are likely to post in certain threads and what their age demographic is, e.g. anything Emacs-related. The HN database is available on BigQuery.
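For instance, a sketch of such a query with the Python BigQuery client; the dataset, table, and column names follow the commonly published public HN dataset (bigquery-public-data.hacker_news.full) but should be verified in the console:

    # Find the users who comment most about Emacs in the public HN dataset.
    # Dataset, table, and column names are assumptions to verify.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT `by` AS user, COUNT(*) AS n_comments
        FROM `bigquery-public-data.hacker_news.full`
        WHERE type = 'comment' AND REGEXP_CONTAINS(text, r'(?i)emacs')
        GROUP BY user
        ORDER BY n_comments DESC
        LIMIT 20
    """
    for row in client.query(query).result():
        print(row.user, row.n_comments)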
You can train a response to suspicious comments calling the account a bot: "That hurts. I am not a native English speaker. Sorry if I offended you." Or: "Please check the guidelines..."
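A trivial sketch of that canned-deflection trick; the trigger check is naive and the replies are just the examples above:

    import random

    # Canned deflections, fired when a reply accuses the account of being a bot.
    DEFLECTIONS = [
        "That hurts. I am not a native English speaker. Sorry if I offended you.",
        "Please check the guidelines before calling other users bots.",
    ]

    def respond(comment_text):
        if "bot" in comment_text.lower():
            return random.choice(DEFLECTIONS)
        return None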
There are many techniques to make a sophisticated bot. ;)
https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
https://github.com/fuzhenxin/Style-Transfer-in-Text
https://ai.googleblog.com/2020/03/more-efficient-nlp-model-pre-training.html
https://console.cloud.google.com/marketplace/details/y-combinator/hacker-news
It wouldn't surprise me if a not-insignificant number of users here were bots.
I am more interested in the question: does the difference even matter, especially in text, as long as a bot user is the more useful user?
I don't know where to go to meaningfully engage with humans anymore. It's just smarter and smarter bots
That's pretty damning. Facebook execs knew that extremist groups were using their platform and Facebook's own tooling catalyzed their growth, and yet they did nothing about it.
They consciously and proactively blocked attempts to fix it.
I think you misinterpreted this?
Edit: I see the confusion now. emilsedgh, you, and I all agree. I thought emilsedgh was saying the opposite of what they wrote.
Seems like the relevance of that line really depends on the answers to both. I.e., if "extremist" is defined super narrowly, we may be talking about 64 people out of 100. If "extremist" is defined overly broadly, then maybe all the recommendations were for groups that a majority of the population would not find offensive.
Just saying the line by itself without context doesn't convey as much information as it first appears.
What else besides outright banning should they have done? (I think banning extremists wouldn't have impacted their revenue much, so they should have, but that's another debate.)
That's almost certainly not what they did. When you see someone ranting about the 5G turning the coronavirus communist or whatever, that person didn't generally come up with that idea themselves; they were exposed to it online, either via friends, or via this.
Their algorithm is likely pushing extremist nonsense on people it determines are vulnerable to believing it, which isn't the same as having an affinity for it. Obviously this isn't what they set out to do; they presumably set out to increase engagement, and if extremist nonsense happens to increase engagement, well...
Facebook have, perhaps accidentally, created a monster of perverse incentives. Not sure what the solution is, besides regulation (which would be extremely difficult).
The solution is only difficult if you start from the basis that Facebook must continue to exist. If they cannot run a profitable business that isn't harmful, that's no one's problem but theirs.
When you're being reckless on purpose, none of the damage you create is accidental.
Why does anyone think capitalists can actually practice morality? That's never happened in the hundreds of years of capitalism's history.
And capitalists can be quite moral personally. Throughout history, the rich and powerful have always had a positive image. But their enterprises have always required regulation.
I could see an employee giving him that data out of concern, but that's a fair point.
A recommendation engine is just an algorithm to maximize an objective function, that objective being matching users with content that they enjoy and engage with. The algorithm has no in-built notion of political extremism. It is almost assuredly the case that people with radical opinions prefer to consume media that matches their views. If Bob is a Three Percenter, it's highly unlikely he'd prefer to read the latest center-left think piece from The Atlantic.
Unless you're willing to ban recommendation engines entirely, the only possible alternative I can see is for Facebook to intentionally tip the scales: extremist political opinions would have to be explicitly penalized in the objective function.
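As a toy sketch of that distinction (all names hypothetical): with a zero penalty the objective sees only engagement and is blind to what the content says; a nonzero penalty is exactly the scale-tipping described above.

    # Toy recommender: rank items by a single objective function.
    def recommend(items, engagement, extremism, penalty=0.0, k=5):
        # With penalty == 0 the objective is pure engagement and has no
        # notion of extremism; penalty > 0 explicitly tips the scales.
        def score(item):
            return engagement[item] - penalty * extremism[item]
        return sorted(items, key=score, reverse=True)[:k]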
But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion. It means some humans at Facebook are intentionally deciding what people should and should not read, watch and listen to. Remember Facebook as an organization is not terribly representative of the country as a whole. Fewer than 5% of Facebook employees vote Republican, compared to 50% of the country. Virtually no one is over 50. Males are over-represented relative to females. Blacks and hispanics are heavily under-represented. And that doesn't even get into international markets, where the Facebook org is even less representative.
The cure sounds worse than the disease. I really think it's a bad idea to pressure Facebook into the game of explicitly picking political winners and losers. A social media platform powerful enough to give you everything you want is strong enough to destroy everything you value.
People's higher goals are often counter to their day-to-day instinctive behaviors. We should find ways to optimize those goals, rather than momentary metrics.
The same thing social networks did before.
If I subscribed to 1000 people, show me whatever the hell they wrote, all of it, in chronological order.
Don't show me what my friends wrote on other pages; if they think that's important or interesting, they will link or share it manually.
Limit shares/retweets. Limit group sizes. Surface more information as topics/tags/whatever so that users can do more sophisticated filtering themselves or collaboratively. I want to mute my uncle when he talks about politics, not all the time. Facebook already does more sophisticated analyses than just extracting topic information (I know because I work there and I can see the pipelines). Show those results to me so I can use them as I see fit. That's how you make things better. Chronological vs. algorithmic order? Pfft. In fact, I do want the most interesting things out of that set shown first. I just want more control over what's in the set.
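A minimal sketch of that kind of user-controlled feed (field names are hypothetical): follows and per-author topic mutes decide what is in the set, and ranking is applied only to that already-filtered set.

    # Hypothetical user-controlled feed: follows and per-author topic mutes
    # decide what is in the set; ordering is applied only at the end.
    def build_feed(posts, follows, muted_topics):
        visible = [
            p for p in posts
            if p["author"] in follows
            and not muted_topics.get(p["author"], set()) & set(p["topics"])
        ]
        # Chronological here, but any ranking could be applied to the
        # already user-filtered set.
        return sorted(visible, key=lambda p: p["time"], reverse=True)

    # Example: mute the uncle only when he talks politics.
    muted = {"uncle": {"politics"}}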
Sorting something to the 1000th page is censorship. Legally probably not, since it's still available, you just need to page down 1000 times, but IANAL and don't care.
I don't want algorithms to do any filtering. If someone shares crap every 10 minutes I can always unfollow. Still, I like your idea about manual filtering with tags.
What should Big Tobacco do? If your business is a net negative for the world... get out of business. This is not hard. Corporations are not precious endangered species that we have some moral obligation to keep alive.
> A recommendation engine is just an algorithm to maximize an objective function.
A cigarette is just dried leaves wrapped in paper. If the use and production of that device harms the world, stop using and producing it.
> But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion.
Facebook is already a non-neutral platform. Humans at Facebook chose to use an algorithm to decide recommendations and chose which datasets to use to train that algorithm.
Playing Russian roulette and pointing the gun at someone else before pulling the trigger does not absolve you of responsibility. Sure, the revolver randomly decided which chamber to stop at, but you chose to play Russian roulette with it.
With social media, anecdotal accusations abound of negative impacts on mental health or political polarization. Yet the most carefully conducted research shows no evidence that either[1][2] of these charges is true to any meaningful degree. Simply put, the academic evidence is not congruent with the journalistic outrage.
What's more likely is the panic over social media is mirroring previous generations' moral panic over new forms of media. When the literary novel first gained popularity, social guardians in the older generation worried that it would corrupt the youth.[3]
The same story played out with movies, rock music, video games, and porn among other things. The dynamic is propelled by old media having a vested interest in whipping up a frenzy against its new media competitors. In almost every case the concerns proved unfounded or overblown. I'd be pretty surprised if social media proved the exception, when we've always seen the same story again and again.
[1] https://twitter.com/DegenRolf/status/1217307200517033986 [2] https://twitter.com/degenrolf/status/986146855007539201 [3] https://www.economist.com/1843/2020/01/20/an-18th-century-moral-panic-sounds-surprisingly-familiar
It was certainly questioned for many decades before we got to that point. Meanwhile, millions died. And during that entire time Big Tobacco had no difficulty drumming up doctors and scientists willing to argue against the negative health consequences of smoking.
Rejection of science in favor of something you personally want to be true isn’t a new internet age development.
The clear result of this algorithm has been to happily send lies, misinformation, emotionally manipulative opinions, and other content at a scale and speed that was never achieved by a New York Times bestseller, MTV, or Rockstar Games.
All media has always exploited our cognitive biases and irrationality to its end; but to do it worldwide and simultaneously, 24 hours a day, 7 days a week, without rest or remorse, is pure stochastic terror.
Move fast and break things indeed.
If that's accurate, it's freaking me out when I think about Facebook's role in the Myanmar genocide: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
Tim Kendall has been an outspoken critic for a long time and was also recently a central figure in the movie "The Social Dilemma", which is about the same thing and will lead to more speaking engagements on the topic.
That doesn't dilute the message. If you think it does, what does a better arbiter of this aspect of reality look like? Who would that person be and what would their credentials be?
Honest question: were they outspoken when they were a director at FB, or when they were president at Pinterest? Or did it start two years ago when they became CEO of Moment, selling an app to cut down on screen time?
In my mind, an ideal arbiter isn't also selling a product to fix the problem they are raising awareness about.
This doesn't mean what they are saying isn't true, or that they didn't have a real change of heart, but it is certainly a conflict of interest.
Don’t get high on your own supply.
Users of drugs know this about their dealers, users of these dopamine producing platforms do not.
Anyone spreading the message, especially an authoritative source, directly to the representatives who can do something about it is the right start.
History has shown this is better. When mayors and representatives try to get their own population off of drugs, it hasn’t ended well for the state.
If a public servant can't effectively do it, then the only people left are those who would have a conflict of interest. That's not controversial.
I don't really follow the rest of your comment.
You say anyone speaking to the representatives is a good start, but representatives are ineffective. Also, why can only those trying to profit from or exploit an addict be of help?
I'm saying I don't care if there is some way their current predilection can be seen as disingenuous because they made a bunch of money or maybe have a new company that can make a bunch of money.
Those are the things I don't care about.
In this case, it doesn't much matter because they didn't say anything new or of substance. Facebook is designed to be "addictive". Any psychology undergrad could tell you this.
“It seems like these mea culpa admissions might be motivated...”
It seems like you’re not willing to state there is another agenda but you want to attack people speaking up anyway — because they were part of the problem or contributed to it, that anything they say now doesn’t matter.
I don’t think this is constructive.
(Do people with exclusively pure selfless motivations even exist? Even people who donate to charities anonymously are plausibly motivated at least in part by the warm tingly feeling they enjoy when giving charitably.)
BTB is quite inflammatory, but the host eloquently puts together a lot of really damning and shocking stories from inside Facebook's doors.
> My path in technology started at Facebook where I was the first Director of Monetization. [...] we sought to mine as much attention as humanly possible and turn it into historically unprecedented profits. We took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset.
> Tobacco companies [...] added sugar and menthol to cigarettes so you could hold the smoke in your lungs for longer periods. At Facebook, we added status updates, photo tagging, and likes, which made status and reputation primary and laid the groundwork for a teenage mental health crisis.
> Allowing for misinformation, conspiracy theories, and fake news to flourish were like Big Tobacco’s bronchodilators, which allowed the cigarette smoke to cover more surface area of the lungs.
> Tobacco companies added ammonia to cigarettes to increase the speed with which nicotine traveled to the brain. Extreme, incendiary content—think shocking images, graphic videos, and headlines that incite outrage—sowed tribalism and division. And this result has been unprecedented engagement -- and profits. Facebook’s ability to deliver this incendiary content to the right person, at the right time, in the exact right way... that is their ammonia.
> The algorithm maximizes your attention by hitting you repeatedly with content that triggers your strongest emotions — it aims to provoke, shock, and enrage. All the while, the technology is getting smarter and better at provoking a response from you. [...] This is not by accident. It’s an algorithmically optimized playbook to maximize user attention -- and profits.
> When it comes to misinformation, these companies hide behind the First Amendment and say they stand for free speech. At the same time, their algorithms continually choose whose voice is actually heard. In truth, it is not free speech they revere. Instead, Facebook and their cohorts worship at the altar of engagement and cast all other concerns aside, raising the voices of division, anger, hate and misinformation to drown out the voices of truth, justice, morality, and peace.
The biggest thing you could do to hurt the likes of FB, IG, and Twitter would be to brand them as lame and uncool. If people don't want to use them, it affects the bottom line. Gov't action isn't required for this, but the right campaign attacking the cool factor will motivate people away. (I'm currently wearing my positive-thinking cap.)
One person might say "We created all these statuses and features to be addictive" but it seems just as true to say "We created this stuff because people liked it and we are trying to make something people like."
"Addiction is a brain disorder characterized by compulsive engagement in rewarding stimuli despite adverse consequences." Wikipedia
https://www.psychiatrictimes.com/view/what-does-rat-park-teach-us-about-addiction
https://www.asam.org/Quality-Science/definition-of-addiction
Yes, TV shows can be made to be "addicting" but what is the potential harm? Someone sits around watching too much TV? That's not a very big drain on society at the end of the day. Sure it's not great, but the negative outcomes for the society as a whole don't seem to be too impactful.
Now look at gambling. It's certainly addictive because of various techniques used by casinos to get people hooked. It seems that much of society agrees that it also has some negative impact on society as a whole. It drags people into impossible debts which can have a variety of negative externalities... loan sharks, violence, evaporation of wealth, financial crimes, etc.
It seems clear to me that not only is social media addictive but it is also having a net-negative impact on society and that is why people are concerned. If the impact was just people are spending their evenings glued to the screen but not going out and causing societal issues then I don't think anyone would be too concerned.
Does this significantly negatively impact the lives of viewers or of those around them? Addiction doesn't just mean "want to have it". Addiction means "want to have it so bad it messes up other aspects of my life".
(For what it's worth, I do personally avoid cliff-hanger shows because I find the anxiety and frustration of being left hanging is rarely sufficiently well compensated by the quality of the show.)
It's also a good trick for going to bed on time and breaking the 'just one more episode...' problem!
But now that it has been one month since I last used it, and I noticed that all I did was to replace my Facebook time with Hacker News, I can't but wonder: Does the addiction problem lie with the user, or in the platform? Or is it, more generally, in the way the internet serves us content?
The moment when Netflix execs openly say their competition is sleep, yes.
Honestly, this is a super interesting question. I would say anything designed to succeed by hijacking human brain chemistry instead of providing superior or novel quality is probably worth regulating at some level.
From that standpoint, Breaking Bad would not have an issue - it's superior and novel. Shows that succeed in making a viewer binge with a combination of (effectively) mid-episode endings and autoplay, are somewhat hacky. You can't regulate cliffhanger endings, so autoplay should probably not be legal - Netflix already asks you if you want to continue watching, they should simply do so after every episode. Shows with good content like Breaking Bad would still be easy to binge (just press yes once an hour), and poor quality shows would have a harder time taking advantage of brain chemistry by requiring an affirmative act.
>"I would say anything designed to succeed by hijacking human brain chemistry instead of providing superior or novel quality is probably worth regulating at some level."
My point is that there is no real dichotomy, 'Breaking Bad' and menthol cigarettes are not so different; they each possess both qualities.
Manipulative advertising is an act of malice, particularly with addictive products.
You originally posted that:
>"Advertising is an act of malice, particularly with addictive products."
But changed it to:
>"Manipulative advertising is an act of malice, particularly with addictive products."
What do you see as the difference between "manipulative advertising" and regular "advertising", and how is either (or both) malicious? Advertising is basically telling people that you are offering them something, and trying to persuade them to buy/use it, and I am not sure how that is "characterized by unscrupulous control of a situation or person."
I agree adding a flavor can be superior and novel, but if you read what I originally wrote it was specifically worded to make the addictive quality the overriding concern. Menthol wasn't more addictive because of the flavor, it was addictive because it allowed the user to get more nicotine per hit.
That aside, if you consider the addictive quality to be the overriding, and believe that "Breaking Bad" possesses (some of) it, then why doesn't BB's addictiveness override its superiority and novelty?
Yes, because I wanted to narrow down my originally too-broad statement before picking on the generalization derailed the subthread (as sometimes happens on HN).
> What do you see as the difference between "manipulative advertising" and regular "advertising", and how is either (or both) malicious?
I'm glad you asked! I wrote an essay on this very topic the other day: http://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html.
With respect to your discussion of advertising, as someone who has used various forms of marketing to promote products, I think advertising is much less effective than you seem to believe. Second, you say that informing is okay but convincing is bad; the problem is that almost all 'informing' is an attempt to convince. Those points aside, I understand that you find certain advertising patterns unethical or distasteful, but I am not sure exactly how you draw the lines; your post seems to be a polemic rather than an ethical framework, so it expresses your feelings but doesn't explain your thinking to me.
And then nothing really happened in that woodshed other than some lousy warnings on a toxic product for the consumer and its surroundings.
The Master Settlement Agreement in 1998 [1] had no statistical impact on the reduction in smoking rates; the rate of decline in smokers is the same now as it was in 1965 [0].
The tobacco industry is more profitable than ever and they are diversifying into nicotine delivery vehicles like vapes, gum [2]. So the underlying goal - increase nicotine dependence across the global population and capture the nicotine consumption market is still going strong.
Much like the desire to be intoxicated, the desire to influence people will never go away. It's baked into our biology. Everyone in this thread interacting with each other is trying to influence everyone else. Facebook etc... is just doing successfully what Bernays dreamed of.
You can beat these platforms all you want, just like the tobacco industry was beaten. The problems will just surface elsewhere in a different form.
Attack the root issue: ban advertising. Oh, and do it in a way that allows for "free speech." The challenge of the century.
[0] https://www.lung.org/research/trends-in-lung-disease/tobacco-trends-brief/overall-tobacco-trends
[1] https://en.wikipedia.org/wiki/Tobacco_Master_Settlement_Agreement
[2] https://www.wsj.com/articles/u-s-tobacco-industry-rebounds-from-its-near-death-experience-1492968698
I say this not because I think we should just give up and not ban advertising but because I'm curious how it might be done effectively.
Hence, why I think it really is (one of) the hardest challenges of our century: How do you eliminate or severely restrict influence vectors?
Who/how determines what qualifies as good/bad influence or reality?
Should positive (however defined) influence be allowed/promoted?
Not sure this one is solvable as it would require a global optimization vector which we don't (and maybe can't) generate.
I don't know if that impacts your larger point with regards to nicotine addiction in general, but I think it's worth noting.
However, I'm not sure how that would be supported without assuming there is some base rate of people who would smoke no matter what, as though smoking specifically were a natural inclination, with everyone above the base rate falling on some log distribution of "ability to be convinced to stop smoking."
Suppose someone offered to mow your lawn for free. Great offer, so you take them up on it. Turns out they're also using the access you give them to mine gold you didn't know was in your backyard. Whether or not you were addicted to their mowing services is irrelevant, they're stealing from you.
The problem with Facebook is that they're taking your attention and monetizing it. There's no serious argument against requiring them to disclose their actions - particularly who is buying your attention. It doesn't make any difference if you're addicted or a mere user of their product, they're still using your attention without telling you. They simply know more about science.
This is what every news outlet tries to do; the only difference is that FB is better at it. It reminds me of the controversy about targeting ads toward protected categories (age, gender). This is something all media buyers do as well, based on location, event type, etc.; FB just has a better way.
I'm not saying it's right, or necessarily wrong, just that this seems to be more about them being good at something than about them operating in moral territory different from any other business.
Example: the government of Iran uses pizza ads targeted toward gay people to track down their identities. Still the same as other media?
And I agree Facebook is not the first company in the world to maximise attention with this kind of content. Go back to when political pamphlets started appearing in the 16th century, it was mostly salacious bullshit about well-known public figures being possessed by the devil or drinking the blood of orphans.
I am not even sure what the problem is anymore, let alone what the solution is...but this is not going to stop with Facebook, this is just a reflection of human nature (and yes, everyone has complained about this kind of "content", it ignores the fact that most humans enjoy consuming it).
(I think the most problematic part of Facebook is just that so many people get their news from there and, like every human that has ever existed, they have been unable to deal with that responsibility in an even-handed way... I don't know though. They are basically a dead platform anyway; it is mainly used by old people to keep up to date with their grandchildren afaik... I don't really know anyone who uses it, and I have never used it myself.)
This is terribly myopic; you don't have to like FB or want to use it to recognize its influence. Consider the possibility that you just haven't really wrapped your head around it yet. Also, I'm gonna guess you don't know a whole lot of older people, and may be falling into the cognitive trap of thinking your experience of social demographics is reflective of the population at large.
Your guess is incorrect (I love that you have considered all the things I don't know whilst jumping to random conclusions). When I said "I don't really know anyone who uses it", I meant I don't know anyone under the age of 35 who uses the platform with regularity (remember, I said that it was dead, not that no one used it... they have 3bn MAU, people use it, but my point is that people don't use FB in the way that is often assumed by politicians... who, btw, mainly see FB as a way to target voters... the political use of FB peaked with Obama).
Name three.
It's been a while since I saw one. Even BBC sometimes succumbs to clickbait, and the inverted pyramid is all but forgotten in the journalism world.
It's amazing to see people casually use these words as if they still have universally meaningful definitions. Not anymore. What one half of the country considers misinformation, the other half considers the truth. Not to mention that social media operates internationally.
You can't have a meaningful discussion without admitting this and doing something to escape the semantic trap of perfect ambiguity. In other words, you first need to establish some sort of information-processing principle that is unambiguously defined and that everyone (or at least the vast majority of people) agrees with.
I used to smoke, and I also have (very mild) asthma that was diagnosed prior to me starting to smoke. I always said that I could breathe better after a cigarette, and people would laugh at me. It never occurred to me that of the thousands of chemicals in a cigarette some of them might be geared specifically to "help" you take in more smoke, and by extension, more air after.
Incidentally, I've worked for a DoD contractor and the visibility was actually better in that environment. It was a smaller org (<1k employees), though.
"Ex-facebooker blasts facebook to promote new venture"
This time they use a cigarette analogy
Anyone who joined, or stayed at, Facebook in the last, say, 5 years 100% knew about it and was OK with it. They're probably laughing at everyone else for taking so long to figure it out!
Or The Devil's Advocate, with Al Pacino and Keanu Reeves.
I had my own experience, once, leaving a company due to ethical concerns: it took me a year and a half to finally follow through and quit. I had coworkers who felt the same who stayed on for years.
No doubt the thought that occurs to many FB employees when they read these articles is "I need to gather my resolve and get the hell out of here!"
We should hold the decision makers accountable.
While the title is a bit creepy, I can certainly see how it could take a while to start second-guessing the work when it's your first tech job. Making a platform more interesting, useful, and engaging would certainly be an interesting challenge, particularly at first.
First, I think the most-missed story about the 2016 election is the role that Groups played in Bernie Sanders' ascendance. The volume of meme content and direct voter contact that I received from Bernie volunteers and passive supporters from just a few major pro-Bernie groups alone, ones that I was not even part of, exceeds the volume that I have received from all other campaigns online to date.
Second, in the early days of Groups, FB decided I was a very far right-wing activist and recommended that I join a series of groups agitating for a US military coup. I still have screenshots of it. It eventually got better at guessing my tastes.
However, I think the safeties they place on this are going to contribute to regulatory capture. Facebook has already benefited from policies as is; changes that put a substantial cost on new media companies will just further aid Facebook's "clone, advertise, and usurp" behaviours.
Maybe they should have some kind of regulation specific to them.
But I fail to see how making your product as addictive as you can, without breaking laws, is terrible. I mean, no one is forced to create a FB/TW/IG profile, as far as I know.
I'm not defending social networks, or saying that a case against them should not be made; I'm just saying that I can't get behind the "your product is too addictive" argument.
Just my two cents. Maybe I'm missing something right now that will force me to change my mind later.
Many have suspected it for a long while but this testimony proves that Facebook profits from hate groups and the spread of misinformation. That’s not hyperbole, that’s now fact.
Perhaps the real acceleration is in the ballooning expansion of who we consider a "hate-group" -- which seems to have no fixed definition and is thrown around rather cavalierly.
Go on Twitter or Facebook, or 4chan, 8chan, Voat or wherever you can find these crazies, and try to engage them in rational debate, and convince them their ideas are bad and yours are better. Let us know how that turns out.
What is the end goal? To make it impossible for crazy people to be heard online? Wouldn't a better goal be to educate ourselves on how to ignore the crazies and focus on reliable sources?
Do you believe QAnon has gone from a 4chan meme to a political movement which has gained the support of the President and seats in Congress because no rational adult or contributing member of society has ever fallen prey to them?
Human beings are not rational animals; human beings are emotional animals. We're great apes hardwired for pareidolia and bigotry because it helped us survive the tall grasses of the savannah a hundred thousand years ago. The assumption you and others like you make, that given a free (as in unregulated) market of ideas, rationality and truth will always win out, is as naive as the belief that ethics and quality always win in free-market capitalism. Bad actors always dominate unless some external regulating force prevents them from doing so.
>What is the end goal? To make it impossible for crazy people to be heard online? Wouldn't a better goal be to educate ourselves on how to ignore the crazies and focus on reliable sources?
False dichotomy - we can do both. It is impossible to effectively educate ourselves or anyone else in an environment in which it is also impossible to distinguish good from bad information, or even attempt to do so, without fear of "censorship". We don't need to pretend Joe Rogan and Alex Jones are sources of truth on par with the BBC and Al Jazeera, or that evolution and the Book of Genesis are equally valid attempts to describe the natural world, or that QAnon represents a legitimate framework of political and social criticism, merely for the sake of allowing controversy, in the false belief that controversy is equivalent to freedom.
I never claimed any such personal faith or right, and that is a disingenuous reading of my comment. The chip on your shoulder is noted, albeit not compelling.
>QAnon isn't really much of a problem in the real world -- nobody is burning down cities and rioting because of it.
Hm... understate the threat of right-wing extremists, overstate the threat of left-wing activists.
Where have I seen that a million times before?
I just stated the facts. There are riots and violence in the streets of America right now. I'm not saying it's rampant, but it DOES exist. You can't point to the same to justify your fears about QAnon and their incoherent ramblings.
As for the chip on my shoulder, I've tried to keep this civil even though I do strongly disagree with your desire to suppress the speech of your political foes.
This is an interesting take. Usually I suspect people would say something more like "Making your product as addictive as possible is terrible, but definitely not illegal. And, it's difficult to design laws against something that is addictive and destructive."
I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible. Again, I'm not sure that regulation can solve this problem in a constructive way, (and would love to be proven wrong here) but I fail to see how this isn't bad.
No one is forced to become obese, however it's definitely bad to have a nation full of obese people.
Why? Honest question. For instance, you mentioned obesity. Should a restaurant that makes the most delicious and sugar-loaded food be forbidden to do so because its customers can't stop eating it and are getting obese?
IMO obesity is an individual problem. I'm all for helping obese people who want to change, don't get me wrong. I'm just saying that they got themselves into that situation. The restaurant should not be punished for their clients' lack of self-control. They should, however, be forced to let clients know exactly what they're eating; but after that, it's not their fault.
Now, I think they should do it, but because they want to. If anyone is to take action, I think the way to go is to reach the obese people and help them. Explain why they should not visit the restaurant anymore.
To your example: if McDonald's added cocaine to their fries, we would likely agree that that's wrong and we should stop that behavior, right?
If it’s more along the lines of addiction like “people love fast food” but aren’t actually physically addicted to it, then I think it’s fine that the business owners make it more delicious or “more addictive”. In that case I’d agree it’s likely on the consumer to make the call. (I’m going to gloss over the realities of the fast food industry preying on lower economic communities and pretend we’re operating in a vacuum where someone has equal agency/ability to go eat McD’s or eat a healthier alternative.)
As for your McDonald's argument: cocaine is illegal. I stated that as long as it was within the law, I saw no problem.
Food might not be the best comparison to use.
As for the cocaine part, that's immaterial to the thought experiment I proposed. I was just trying to delineate between true physical addiction and whatever makes people want to eat unhealthy food. Say it's something else that causes physical addiction but isn't illegal.
Agreed, that's why I think companies should be forced to clearly state them, but not forced to stop users from consuming.
Another avenue could be providing proper education to individuals regarding addiction to food, drugs, etc... But this is beyond my scope of understanding.
> As for the cocaine part, that's immaterial to the thought experiment I proposed. I was just trying to delineate between true physical addiction and whatever makes people want to eat unhealthy food. Say it's something else that causes physical addiction but isn't illegal.
My bad. I didn't get that. But I still think, assuming they clearly state the risk of physical addiction, they should be allowed to sell their fries.
Now, just to convey this one more time, it's a totally different situation if they use something illegal to make the fries addictive. They should be punished.
I tried to cover this in my post, but this is why I believe it's a bit of an impossible situation. I don't believe that in your example the restaurant should be forbidden from selling the addictive and unhealthy food. But that something should not be illegal does not make it good. The law and morality are not one and the same.
The usual way people talk about this sort of thing is to invoke free speech. I should not be legally prevented from insulting you, or saying rude things to you. But it's still an awful thing for me to do.
Regarding the problem being individual. I agree that's where the blame should rest, but the reality is that moral blame is often not really as useful as people want to believe. For example, with obesity, most people are making the 'wrong' decisions. Again, I'm not suggesting that government regulation should be invoked to try to fix this. But surely, it's not good a thing that so many people are unhealthy. And therein lies the problem. Who cares about blame? I don't care whose fault it is, but I would like to fix it. It's a near guarantee that the general public will not fix it. It's not even an American problem anymore: you're even seeing obesity in some parts of Africa. When most people have access to high calorie food most of the time, they will become overweight and obese. You can (maybe even should) assign blame to people for making the wrong decisions here. But that will do nothing to modify the problem.
And, as I said, I'm not necessarily arguing for regulation. But I would be curious if you think there is any solution here, or if you think there should be any solution here.
> And, as I said, I'm not necessarily arguing for regulation. But I would be curious if you think there is any solution here, or if you think there should be any solution here.
That's a great point. Off the top of my head I am inclined to say there should not be any solution, besides making sure companies act within the law. But that's above my paygrade. I'm only stress testing my opinion.
Yeah, I think we understand each other, and I appreciate your comments, too. I vary in how I feel about this general issue depending on the topic.
I think it's important to be clear about "addictive" because people use it in different ways. If by "addictive" you mean "really compelling" then, sure, it may not be intrinsically terrible. A product that, for example, makes it really compelling for users to improve their physical health or fight climate change is probably not terrible.
But the clinical definition of "addiction", which is why "addiction" has a strong negative connotation, is something so compelling that your need to use it causes significant disruption to your quality of life or that of those around you.
Read the testimony again. The argument here is not just that Facebook is super engaging. It's that Facebook use harms its users and the world at large and its level of engagement magnifies that.
But I think I see where you're coming from. They're getting people addicted to something harmful, did I understand you correctly?
That is part of it, yes.
Also, the mechanism of addiction itself often causes the harm. With chemical addiction, the same compounds that make the substance addictive also cause miserable withdrawal symptoms.
With social media, this is more nebulous, but I do think part of what makes systems like Facebook "engaging" is the anxiety they create when you aren't on them, and the low self-image that users try to assuage by posting flattering photos of their life.
Part of addiction (and advertising too, for that matter) is creating a need for your product in the mind of the user. They were probably happier before they had that need in the first place.
> Part of addiction (and advertising too, for that matter) is creating a need for your product in the mind of the user. They were probably happier before they had that need in the first place.
I cannot agree with this. Facebook cannot be responsible for people wanting to be on the site/app or for which photos of their life they choose to post. I thought we were discussing the methods by which they make people want to be on FB.
As for your last paragraph, I may be missing your point. Advertising is creating a need for your product, or tapping an existing need. People being happier before they had that need cannot be a reason to stop companies from trying to sell a product. If you bought something that made you feel worse, you would probably just stop using it. Now, if you can't stop using it because you're addicted, but the company didn't do anything illegal to make their product addictive and the risks are clear (not saying this is FB case), why should they be blamed?
If I totally missed your point, please feel free to enlighten me.
The ad (which you never requested enter your life) makes you feel worse. The product just gets you back to your baseline.
The primary purpose of making an addictive product is to remove peoples' agency by hijacking known deficiencies in our minds/bodies. It's a form of coercion, because your goal is to prevent people from being able to choose whether they use your product or not.
If they aim to remove agency, it's because you have it in the first place, meaning you can stop it from happening with proper information.
I understand that some people might not understand they are being targeted and should be clearly told what could happen to them. But the majority of people must know FB is addictive.
After that, I can't see how people still getting addicted is the company's fault.
But then, plenty people still work for Big Tobacco. Many do so voluntarily, not just because it's their only viable option. The trouble is it comes down, in large part, to ethics and morals. And we don't all share the same moral compass.
It got my father. Living in a rural area, cable/satellite TV became too expensive and low quality, so us kids paid for an internet connection for him. With only YouTube to inform him, he went from a generally relaxed redneck to talking about how "the black community is a lost cause" and "we need to glass (nuke) the Middle East and take their oil" in a very short time.
We got Netflix for him and he's calmed back down some. But, definitely not back to where he was before.
[1] https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html
That first title was what originally drew me to this story, and I find it to be more informative.
For what it's worth, I don't have that experience with Twitter. There, I seem to have enough control over who I follow and whose tweets Twitter shows me that my Twitter use is generally beneficial and healthy to me. Despite trying very hard to do so, I was never able to tune my Facebook feed to be healthy in that way.
So it is a situation where an organization with shitty incentives that doesn't have good-faith alignment with society at large is regulated by an organization with shitty incentives that doesn't have good-faith alignment with society at large.
The whole process is completely illegitimate and basically a TV show. I don't have a solution for this; I just know that this is not it.
We are already seeing huge increases in support for things like systems thinking, ecological worldview, decentralization, holism, etc. The future is pluralistic and that's okay.
Edit: A side note, Kendall's current venture is about "Break[ing] your screen and social media addiction". You're free to make any assumptions regarding that in connection to this hearing.
To be clear, that was sarcasm. But sarcasm aside, this is exactly the stance that several members of my family would take if I shared this and asserted that it's not "fake news" because it's on house.gov. The problem is that we're so far through the looking glass that legitimate attempts to pull back the curtain face a huge uphill battle because of the very system that they're trying to expose.