> Pure Intentions: Founders, like myself, genuinely want to connect people, share authentic moments, and build community. The early versions feel magical because they follow this original mission.
No. This idea that SC (or similar business cultures) wants to change the world for the better is a cliche. At a certain point, after the full history of these companies, you don't get a pure-intentions pass anymore.
And in particular: I don’t believe that Zuckerberg’s motivations were ever good.
> Different Funding Methods: What if social platforms were funded like utilities or public goods instead of venture-backed and advertisement driven growth machines? Subscription models, cooperatives, or public funding could prioritize user wellbeing over engagement metrics. Wikipedia thrives as a donation-funded cooperative. These models exist - they just don’t scale at venture-required rates.
This is good. You need to rethink the whole model. Even if that means that you don’t get to growth-hack people.
This is far better than the usual model:
1. Admit there’s a problem
2. It’s systemic
3. The fake solution: some feel-good manifesto about making X but "for humans". Nothing about the incentives has changed. The fundamental axiom is still there: we are entitled to make money off social media just like before, but now we have a lot of words and paragraphs about doing so goodly.
The same thing happened with the attempt to make neoliberalism palatable: "Stakeholder Capitalism". The idea is that we just continue with neoliberalism but have a manifesto about how all stakeholders are taken into account. Nothing about the system or the power centers changes, so you still get corporations and their boards of directors with as much power as before.
> Regulated Algorithms: We regulate tobacco companies because their products are addictive and harmful. Algorithmic transparency or giving users control could preserve the benefits while reducing the addictive design patterns. The EU’s Digital Services Act already requires algorithmic transparency from large platforms.
I’m not sure. This seems like a half-measure. Regulating something which is inherently harmful (according to the analogy) just creates more bloat.
If I’m addicted to nicotine, I’m already hooked. Banning advertising doesn’t help me; I already know what I want. You’re gonna make the packaging less sexy? Make me go into the Sin Section of the shop to get my fix? I’m gonna do that anyway.
What you need is an effective and mandatory opt-out option. Let me ban myself from buying these products. And provide me with alternatives (grocery shops already sell both tobacco and nicotine gum).
Give me an option to opt out. Don’t just build a whole regulatory beast that can still prey on me as long as a hundred checkboxes are ticked and it’s all very ethical and so on.
> Alternative Metrics: Instead of measuring daily active users and time-on-platform, what if platforms were evaluated on user wellbeing, relationship quality, or real-world connections facilitated? What if we measured social platforms like we measure hospitals?
This would have been naive, and just a loophole, if the premise were to let companies keep doing their thing. They would just rebrand with fake metrics: oh, you’ve sent X messages this month, that means you’re connecting, and according to some research people who IM more are happier, blah blah blah.
But this could have some merit given the previous utilities/public goods point.
But that’s not exactly true, is it?
Calling out alcohol and tobacco ignores all the vices that were made durably illegal all over the world: prostitution, blood sport, slavery, forced marriage, and so on—and yes, institutionalized slavery was a vice, an economic one rather than a habitual one, but every bit as behaviorally seductive for slavers as speculative investing, MLM, or subprime asset flipping are for some people today.
Sure, not all of those things are illegal everywhere, and reasonable people may disagree as to whether illegality is appropriate for some of them (e.g. prostitution). But in total they do indicate that vice regulation can “stick” better than it did for alcohol and tobacco.
Hell, we used to put cocaine in soda! Whether or not you believe that the current prohibition/penalty practices around that drug are good, I assume most folks agree that it’s better now that we can’t get addicted to it via products available at the supermarket. As addiction-engineered as current-generation hyper-processed foods are, things were once much worse, and that was pretty successfully addressed via regulatory prohibition.
Addictive games, though, don't show such easily detectable effects. So it's more like the discussion around gambling, casinos, etc., except the current forms of addiction-forming experiences are much more underhanded.
If everyone is engaged with addiction machines, nobody will use anything else.
Engineered addiction is mind control. It is abuse. Hacking the human brain is violence — a term that has been robbed of its impact through overuse for things that are not violence, but this is.
Engineering of addiction in any form should not be legal for the same reason that kidnapping someone and raping them or forcing them to do my labor is not legal.
Fix this problem — remove the mind control and violence — and a market niche opens up for honest business models. As it stands nobody can compete with these platforms because volition can’t compete with violence and honest commerce can’t compete with slavery through dopamine system hacking.
BTW if you work for these companies, quit. Ten to fifteen years ago ignorance was an excuse. I don’t think the original inventors of this nightmare knew quite what they were doing. Ignorance is no longer an excuse. If you are “optimizing engagement” in this context and in these ways you are a bad person.
Similarly, the suppression of wages and the removal of healthcare, food, employee protections (at-will employment), legally required vacation days and maternity leave, and any meaningful safety net for employees push the social contract for workers toward violent, nonconsensual extraction.
Maximizing extraction inevitably requires violence and cruelty.
Media takes a lot of storage and bandwidth, and you basically have unbounded costs if you want to meet user expectations for posting media.
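To make that concrete, here is a back-of-envelope sketch of how the per-user bill keeps growing as the media archive accumulates. Every number here is an assumption picked for illustration, not a real platform's figure:

```python
# Back-of-envelope sketch: per-user hosting cost for user-posted media.
# All constants below are assumptions for illustration only.

UPLOAD_GB_PER_MONTH = 2.0        # assumed: photos + short videos per user
REPLICATION_FACTOR = 3           # assumed: copies kept for durability
STORAGE_USD_PER_GB_MONTH = 0.02  # assumed: rough object-storage pricing
EGRESS_USD_PER_GB = 0.05         # assumed: rough bandwidth pricing
VIEWS_PER_UPLOADED_GB = 10       # assumed: each new GB gets downloaded 10x

def monthly_cost_per_user(months_on_platform: int) -> float:
    """Cost in month N of a user's tenure: storage is paid on the whole
    accumulated archive, bandwidth mostly on fresh uploads being viewed."""
    archive_gb = UPLOAD_GB_PER_MONTH * months_on_platform * REPLICATION_FACTOR
    storage = archive_gb * STORAGE_USD_PER_GB_MONTH
    bandwidth = UPLOAD_GB_PER_MONTH * VIEWS_PER_UPLOADED_GB * EGRESS_USD_PER_GB
    return storage + bandwidth

for years in (1, 3, 5):
    print(f"year {years}: ${monthly_cost_per_user(years * 12):.2f} per user per month")
```

Under these assumptions the storage term grows linearly with tenure while a flat subscription or donation stays fixed, which is why media-heavy platforms tend to reach for deletion policies, aggressive compression, or ads.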
The other challenge is that regulation is much easier when the product is, say, heroin. Algorithms are technically complex (hard for policymakers to grasp), flexible (they can be tweaked to work around guidelines), and operating in the digital world (harder to monitor or block).
Maybe a major factor here is social acceptance vs. stigma. In the future, will it be considered extremely weird and antisocial to be on your phone nonstop?
Valid question. However, I have a feeling that shaping the perception of such behaviors requires a stronger middle class, and my hope for that shrinks every day.