#contentmoderation


"OTI shares the goal of creating a safer internet for our youth, but KOSA continues to pose risks to free expression and privacy. The legislation augments the federal government’s power to limit access to information online and censor online speech. Specifically, the bill’s “duty of care” provision may incentivize platforms to over-moderate or otherwise suppress access to content that they think the FTC considers to contribute to a mental health disorder like anxiety or depression. This subjective category could cover an expansive range of content, including gun violence, LGBTQ communities, reproductive rights, racial justice, or particular political philosophies. Beyond incentivizing this kind of anticipatory self-censorship, KOSA would hand the FTC a legal tool to actively compel online platforms to shape speech, raising First Amendment and fundamental democratic concerns.

These concerns about chilling effects and enabling government-directed censorship apply to any administration. And they are not theoretical risks. On the contrary, these risks are now heightened, given this administration’s dramatic assault on the FTC’s independence, the effort to use the agency to advance an openly politicized agenda, and numerous efforts across the executive branch to expand surveillance and use federal agencies to punish disfavored speech."

newamerica.org/oti/press-relea

New America · KOSA Would Boost the Federal Government’s Powers to Shape Online Speech, Says OTI. OTI urges Members of Congress to recognize the threats posed by enacting KOSA.

This goofy comment showed up on a LinkedIn post I made criticizing some researchers who had deployed AI bots into a community without permission.

I read it, and it was odd. It seemed kind of like a summary of some of my points, spun around in a blender. I thought it might be AI, so I Googled the name and the first hit was some company calling this an "AI Agent" for "business development."

»Labour pains: #ContentModeration challenges in Mastodon growth«

> The article … investigates challenges experienced by #Mastodon instances post-#Musk, based on 8 interviews with admins and moderators of 7 instances and a representative of @iftas, an NPO that supports Mastodon content moderators

by @charlotte & @talestomaz, Alexander von Humboldt Institute for Internet and Society #HIIG

👉 doaj.org/article/c2016dd9b0174

doaj.org · Labour pains: Content moderation challenges in Mastodon growth – DOAJ. After Elon Musk took over Twitter in October 2022, the number of users on the alternative social media platform Mastodon rose dramatically. The sudden...

‘I didn’t eat or sleep’: a Meta moderator on his breakdown after seeing beheadings and child abuse

Solomon says the scale and depravity of what he was exposed to was far darker than he had ever imagined

Meta faces Ghana lawsuits over impact of extreme content on moderators

#meta #workersrights #workerExploitation #extremism #laborrights #africa #contentmoderation #ptsd #mentalhealth

theguardian.com/technology/202

The Guardian · ‘I didn’t eat or sleep’: a Meta moderator on his breakdown after seeing beheadings and child abuse. By Rachel Hall

"Meta is facing a second set of lawsuits in Africa over the psychological distress experienced by content moderators employed to take down disturbing social media content including depictions of murders, extreme violence and child sexual abuse.

Lawyers are gearing up for court action against a company contracted by Meta, which owns Facebook and Instagram, after meeting moderators at a facility in Ghana that is understood to employ about 150 people.

Moderators working for Majorel in Accra claim they have suffered from depression, anxiety, insomnia and substance abuse as a direct consequence of the work they do checking extreme content.

The allegedly gruelling conditions endured by workers in Ghana are revealed in a joint investigation by the Guardian and the Bureau of Investigative Journalism."

theguardian.com/technology/202

The Guardian · Meta faces Ghana lawsuits over impact of extreme content on moderators. By Rachel Hall

"When Mr. Musk purchased X in 2022, he promised to create a free speech haven and named himself a “free speech absolutist.”
Critics still feared that Mr. Musk would use his control of the app to pick and choose his favorites, amplifying voices he admired while suppressing people or topics he loathed.

The New York Times found three users on X who feuded with Mr. Musk in December only to see their reach on the social platform practically vanish overnight. The accounts are the starkest signs yet that Mr. Musk or others at the company have the power to punish critics and that they may be willing to use it, startling free speech advocates who hoped that the billionaire would be their champion.

Concerns about Mr. Musk’s influence have grown alongside his political ambitions as one of President Trump’s closest allies. He has also set his sights on boosting far-right politics across the world.

“This is working against the type of environment that he claimed he wanted to build,” said Ari Cohn, the lead counsel for technology policy at the Foundation for Individual Rights and Expression, a free speech advocacy group. “Don’t sit here and cloak yourself in the First Amendment and free speech, and then do things like that.”"

nytimes.com/interactive/2025/0

The New York Times · They Criticized Musk on X. Then Their Reach Collapsed. By Stuart A. Thompson

Mods wanted!!

Kolektiva.social has now been around for nearly five years. During that time, we have received lots of valuable feedback. It has helped, and continues to help, us better understand problems with our moderation and what needs to change. A clear takeaway from the issues we've come up against is that we need more help with content moderation.

Over the past several months, it's become more evident than ever that our movements require autonomous social media networks. To be blunt, if we want Kolektiva (and the Fediverse more broadly) to continue to grow in the face of cyberlibertarian co-optation, we need more people to help out. Developing the Fediverse as an alternative, autonomous social network involves more than just using its free, open-source, decentralized infrastructure as a simple substitute for surveillance-capitalist platforms. It also takes shared responsibility and thoughtful, human-centered moderation. As anarchists, we view content moderation through the lens of mutual aid: it is a form of collective care that enhances the quality of collective information flows and means of communication. Mutual aid is premised on working together to expand solidarity and build movements. It is about sharing time, attention, and responsibility. Stepping up to support with moderation means helping to maintain community standards and to keep our space grounded in the values we share.

Corporate social media platforms do not operate on the principle of mutual aid. They operate on the basis of profit, mining their users for data that they can process and sell to advertisers. Neither do the moderators of these social media platforms operate on the principle of mutual aid. They do these difficult and often brutal jobs because they are paid a wage out of the revenue brought in from advertisers. Kolektiva's moderation team consists of volunteers. If we want to do social media differently, it requires a shift in the service user/service provider mentality. It requires more people to step up, so that the burden of moderation is shared more equitably, and so that the moderation team is enriched by more perspectives.

If you join the Kolektiva moderation team, you’ll be part of a collective that spans several continents and brings different experiences and politics into conversation. Additionally, you'll build skills in navigating conflict and disagreement — skills that are valuable to our movements outside the Fediverse.

Of course, we know that not everyone can volunteer their time. We want to mention that there are plenty of ways to contribute: flag posts, report bugs and share direct feedback. We are grateful for everyone who has taken the time to do this and has discussed and engaged with us directly.

Since launching in 2020, Kolektiva has grown beyond what we ever expected. While our goal has never been to become massive, we value our place as a landing spot into the Fediverse for many — and a home base for some.

In addition to expanding our content moderation team, we have other plans in the works. These include starting a blog and developing educational materials to support people who want to create their own instances.

If you value Kolektiva, please consider joining the Kolektiva content moderation team!
Contact us at if you’re interested or have questions.

"Regulation that impedes the operation of US digital behemoths – anything short of blanket permission to do as they please – will apparently be treated as a hostile act and an affront to human liberty.

This is an imperial demand for market access cynically camouflaged in the language of universal rights. The equivalent trick is not available in other sectors of the economy. US farmers hate trade barriers that stop their products flooding European markets, but they don’t argue that their chlorine-washed chickens are being censored. (Not yet.)

That isn’t to say digital communications can be subject to toxicity tests just like agricultural exports. There is wide scope for reasonable disagreement on what counts as intolerable content, and how it should be controlled. The boundaries are not easily defined. But it is also beyond doubt that thresholds exist. There is no free-speech case for child sexual abuse images. The most liberal jurisdictions recognise that the state has a duty to proscribe some material even if there is a market for it.

The question of how online space should be policed is complex in principle and fiendishly difficult in practice, not least because the infrastructure we treat as a public arena is run by private commercial interests. Britain cannot let the terms of debate be dictated by a US administration that is locked in corrupting political intimacy with those interests.

It is impossible to separate the commercial and ideological strands of Trump’s relationship with Silicon Valley oligarchs. They used their power and wealth to boost his candidacy and they want payback from his incumbency. There is not much coherence to the doctrine. “Free” speech is the kind that amplifies the president’s personal prejudices. Correcting his lies with verifiable facts is censorship."

theguardian.com/commentisfree/

The Guardian · In Trumpland, ‘defending free speech’ means one thing: submission to the president. By Rafael Behr

"A sweeping crackdown on posts on Instagram and Facebook that are critical of Israel—or even vaguely supportive of Palestinians—was directly orchestrated by the government of Israel, according to internal Meta data obtained by Drop Site News. The data show that Meta has complied with 94% of takedown requests issued by Israel since October 7, 2023. Israel is the biggest originator of takedown requests globally by far, and Meta has followed suit—widening the net of posts it automatically removes, and creating what can be called the largest mass censorship operation in modern history.

Government requests for takedowns generally focus on posts made by citizens inside that government’s borders, Meta insiders said. What makes Israel’s campaign unique is its success in censoring speech in many countries outside of Israel. What’s more, Israel's censorship project will echo well into the future, insiders said, as the AI program Meta is currently training how to moderate content will base future decisions on the successful takedown of content critical of Israel’s genocide.

The data, compiled and provided to Drop Site News by whistleblowers, reveal the internal mechanics of Meta’s “Integrity Organization”—an organization within Meta dedicated to ensuring the safety and authenticity on its platforms. Takedown requests (TDRs) allow individuals, organizations, and government officials to request the removal of content that allegedly violates Meta’s policies. The documents indicate that the vast majority of Israel’s requests—95%—fall under Meta’s “terrorism” or “violence and incitement” categories. And Israel’s requests have overwhelmingly targeted users from Arab and Muslim-majority nations in a massive effort to silence criticism of Israel."

dropsitenews.com/p/leaked-data

Drop Site News · Leaked Data Reveals Massive Israeli Campaign to Remove Pro-Palestine Posts on Facebook and Instagram. By Waqas Ahmed

"Yes, Facebook lied to the press often, about a lot of things; yes, Internet.org (Facebook’s strategy to give “free internet” to people in the developing world) was a cynical ploy at getting new Facebook users; yes, Facebook knew that it couldn’t read posts in Burmese and didn’t care; yes, it slow-walked solutions to its moderation problems in Myanmar even after it knew about them; yes, Facebook bent its own rules all the time to stay unblocked in specific countries; yes, Facebook took down content at the behest of China then pretended it was an accident and lied about it; yes, Mark Zuckerberg and Sheryl Sandberg intervened on major content moderation decisions then implied that they did not. Basically, it confirmed my priors about Facebook, which is not a criticism because reporting on this company and getting anything beyond a canned statement or carefully rehearsed answer from them over and over for years and years and years has made me feel like I was going crazy. Careless People confirmed that I am not.

It has been years since Wynn-Williams left Facebook, but it is clear these are the same careless people running the company. When I wonder if the company knows that its platforms are being taken over by the worst AI slop you could possibly imagine, if it knows that it is directly paying people to flood these platforms with spam, if it knows it is full of deepfakes and AI generated content of celebrities and cartoon characters doing awful things, if it knows it is showing terrible things to kids. Of course it does. It just doesn’t care."

404media.co/careless-people-is

404 Media · 'Careless People' Is the Book About Facebook I've Wanted for a Decade. I've reported on Facebook for years and have always wondered: Does Facebook care what it is doing to society? Careless People makes clear it does not.

"Protecting #democracy from threats created by Internet #platforms is a laudable goal. But it is not worth the cost imposed by legislative attempts so far: empowering the government to control legal speech online. Lawmakers’ attempts to impose their own top-down speech rules are particularly unwarranted given the far more promising possibilities offered by #usercontrolled and #decentralized #contentmoderation systems."

techpolicy.press/regulated-dem

Tech Policy Press · Regulated Democracy and Regulated Speech | TechPolicy.Press. The First Amendment is meant to protect us from short-sightedness about state power, writes Daphne Keller.

Trust the Trusted Flagger?!
Who controls content online, and who controls the controllers?

On April 10 at 7 p.m., Dr. Jessica Flint and Chan-Jo Jun will discuss the role of #TrustedFlagger (trusted flaggers), content moderation, #Hatespeech and #Meinungsfreiheit (freedom of expression) in the digital space at the Digital Fight Club.
👉 Online and open to all: t1p.de/trusted

An event organized by @slpb, @hlz and @mkz, among others. With @Anwalt_Jun
