an appeal to the fediverse regarding anti-abuse 

Dear fediverse:

Fascism joining the fediverse is extremely bad, and we have to do something about it. But please, please, please: give me two weeks before you roll out any new solutions. Some of the solutions being proposed look like they will make the situation better, but they will actually make it much worse.

I am dropping nearly everything to write a demo and spec explaining how to do things right. Please give me two weeks. I've been preparing for this.

As a hint as to why the current solutions aren't going to work, I'll point you to what happened when Mastodon rolled out direct messages over OStatus: they *weren't really* private messages. An admirable attempt, but it needed a different approach.

I believe this could be like that, but 10x worse. I've been studying what will happen under different approaches and trying hard to figure out how to map a solution onto what we have.

@cwebber what are the solutions that you find bad?

@wilkie That's probably a really hard question for @cwebber to answer, since it'd involve talking about his plan that, as he said, isn't ready.

So I'll try to answer: One issue is that they would like to cryptographically ensure that a communication is legitimate, which means that the more your server is targeted by harassment, the more costly it becomes to keep operating. Another is that they don't preserve the same autonomy of moderation that users have now.
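To make the cost concern concrete, here is a minimal Python sketch (an editorial illustration, not anyone's actual implementation) of per-delivery signature verification, assuming RSA keys of the kind commonly used for fediverse HTTP signatures. The point is that the targeted server performs this work for every hostile delivery before it can even decide to drop it.

```python
# Editorial sketch only: per-delivery verification cost lands on the target server.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_delivery(sender_public_pem: bytes, signed_bytes: bytes, signature: bytes) -> bool:
    """Verify one inbound activity (assumes an RSA sender key).
    The crypto details matter less than the fact that this CPU work is done
    by the *receiving* server, once per delivery."""
    public_key = serialization.load_pem_public_key(sender_public_pem)
    try:
        public_key.verify(signature, signed_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# A harassment campaign of N deliveries forces N verifications (plus key
# fetches) on the targeted instance before any of them can be rejected.
```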

@emsenn @wilkie @cwebber Please elaborate on the autonomy of moderation point. What do you mean by that?

@Gargron If I understand it right - which I very well might not:

Current discussions of OCAP provide the tools to instance moderators, but don't provide similar tooling for users.

Right now, as I understand it, users can do most of the moderation actions moderators can, relative to their own profile: they can autonomously moderate their profile even if their instance doesn't do moderation.

(Again, I could be wrong in understanding how things are or could be.)
@wilkie @cwebber

@emsenn @wilkie @cwebber I don't think that's quite right. Now, I don't know which "some of the solutions" Chris is talking about, because I'm aware of what I'm proposing (authorized fetch, same mechanism as already used for inbox deliveries) and what kaniini is proposing (OCAP), and I am under the impression that Chris's solution is OCAP too. But neither of those would affect any existing self-moderation mechanisms.
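As a side note on the "authorized fetch" idea mentioned above: as I read it, object fetches must carry the same HTTP signature already required on inbox deliveries, so the serving instance can identify the requester and refuse blocked domains. The sketch below is an illustrative guess, not Mastodon's code; verify_http_signature is a placeholder for whatever signature plumbing an implementation already runs on deliveries.

```python
import re
from urllib.parse import urlparse

def key_id_from_signature_header(signature_header: str):
    # The Signature header looks roughly like:
    #   keyId="https://remote.example/users/bob#main-key",headers="...",signature="..."
    match = re.search(r'keyId="([^"]+)"', signature_header)
    return match.group(1) if match else None

def verify_http_signature(headers: dict, key_id: str) -> bool:
    # Placeholder: a real server would fetch the actor's public key for
    # key_id and check the signed headers, exactly as it already does for
    # signed inbox deliveries.
    return True

def serve_object(request_headers: dict, object_json: dict, blocked_domains: set):
    """Refuse to serve an object unless the fetch is signed by an actor
    from a non-blocked domain."""
    signature_header = request_headers.get("Signature")
    if signature_header is None:
        return 401, {"error": "fetches must be signed"}

    key_id = key_id_from_signature_header(signature_header)
    if key_id is None or not verify_http_signature(request_headers, key_id):
        return 401, {"error": "bad signature"}

    if urlparse(key_id).netloc in blocked_domains:
        return 403, {"error": "domain blocked"}

    return 200, object_json
```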

@Gargron I didn't think current moderation tools would be changed. As I understand it, kaniini's OCAP is focused on inter-instance moderation, not a user moderating their own profile with it. That's my concern: that I will be reliant on my instance admin to handle any abuse that needs to be addressed with OCAP. As I understand it, Chris' solution is OCAP as well, but more generalized, so it can be an inter-user tool just as easily. (Again, I could be wrong about all this.) @wilkie @cwebber
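A toy sketch of the distinction being discussed here, purely as one possible reading (not kaniini's or cwebber's actual design): if actions like replying or DMing require a previously granted capability, the autonomy question is whether grant/revoke is exposed only to instance moderators, or also to each user for their own profile. Every name below is made up for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    holder: str   # actor allowed to act, e.g. "https://remote.example/users/alice"
    target: str   # who or what it acts on, e.g. "https://home.example/users/emsenn"
    action: str   # e.g. "reply" or "dm"

class CapabilityStore:
    """Tracks which capabilities have been granted and not yet revoked."""

    def __init__(self):
        self._granted = set()

    def grant(self, cap: Capability) -> None:
        self._granted.add(cap)

    def revoke(self, cap: Capability) -> None:
        self._granted.discard(cap)

    def allows(self, cap: Capability) -> bool:
        return cap in self._granted

# The autonomy question: is grant()/revoke() exposed only to instance
# moderators (inter-instance moderation), or also to each user for their own
# profile (self-moderation), as in the more generalized reading above?
```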

@emsenn @gargron @wilkie The ways @kaniini and I have currently proposed using ocap stuff are a bit different, though they have overlap. I think it'll be clearer how they can converge once I put it out though.

Poor @cwebber announces he's dropping everything to work on a thing and by nature of doing so gets a bunch of folk beeping to ask about the thing he hasn't finished working on. Bless instant global communication. @Gargron @wilkie @kaniini
