More behaviour

This commit is contained in:
Éibhear Ó hAnluain 2019-09-18 13:56:52 +01:00
parent 15e47da4c1
commit abbcb1b4df


@@ -665,7 +665,7 @@
including having to shut down) and where a claimant suffers nothing
for abusive claims, the regime is guaranteed to be abused.
** CONSDONE Content Moderation
Much of the focus on legislative efforts to deal with harmful or
objectionable material on services that permit uploads from users is
@@ -690,22 +690,78 @@
liable for a German user seeing statements that are illegal in
Germany?
- Consider the genocide of Armenian people in Turkey in 1915. It is
illegal to claim it happened in Turkey. However, for a period
between 2012 and 2017 it was illegal in France to claim it didn't
happen. In most other countries, neither claim is illegal. What
can a service like facebook do when faced with 3 options, 2 of
which are mutually exclusive? Literally, should they be
criminally liable both if they do /and/ if they
don't[fn:dink:Prior to his assassination in Istanbul in 2007,
Hrant Dink, an ethnic Armenian Turkish journalist who campaigned
against Turkey's denial of the Armenian Genocide, had planned to
travel to France to deny it in order to highlight the
contradictions with laws that criminalise statements of fact.]?
Moderators have no more than a minute to determine whether a
statement complies with the law or not, and this includes figuring
out whether the posting meets the definitions of abusive or
harmful, and whether it is indeed intended to meet that
definition. For example, consider an abusive tweet. Should the
harmful, abusive tweet be removed? Who decides? What if the target
of the abusive tweet wants that tweet to be retained, for, say,
evidence? What if the tweet was an attempt at abuse, but the target
chose not to be affected by it? Should it stay up? Who decides?
What if the target doesn't care, but others who see the tweet, and
who aren't the target of the abuse, are offended by it? Should it
be taken down as abusive even though the target of the abuse
doesn't care, or objects to its removal? Who would be criminally
liable in these situations? What if the target of the abuse
substantially quotes the abusive tweets? Is the target now to be
considered an offender under a criminal liability regime when that
person may be doing nothing other than /highlighting/ abuse?
"Content moderation" is very hard, and is impossible at the scales
that services like twitter or facebook operate at. When context is
critical to deciding whether someone is engaged in harmful or
abusive behaviour, it would be fundamentally unfair to
make a service criminally liable just because it made the wrong
decision as it didn't have time to determine the full context, or
because it misinterpreted or misunderstood the context.
** CONSTODO User Behaviour
Many believe that the way to deal with abusive or harmful material
online is to punish the services that host the material. This is
reasonable if the material was placed onto the service by those who
own or manage the service. It is also reasonable if the material is
put there by users with the clear knowledge of the managers or
owners of the service, or by users following encouragement of the
managers or owners of the service.
However, these specific situations are rare in the world of normal
online services[fn:criminal:Services that are dedicated to hosting
criminal material such as "revenge porn" or child sexual
exploitation material know they are engaged in criminal activities
anyway, and take steps to avoid detection that are outside the
scope of this submission -- those guys will get no support from
me!].
Engaging in harmful and abusive communications is a matter of
behaviour and not a function of the technical medium through which
the communication is made. The idea that internet services are
responsible for abusive communications is as difficult to
understand as the idea that a table-saw manufacturer is responsible
for a carpenter not wearing safety glasses while using the saw to
cut timber.
Recent history has shown that the most effective ways to change
behaviour are not necessarily punitive. It's hard to see how
punishing an intermediary would stop people being nasty to each
other.
Any new regulations around
** CONSTODO Content moderation
** CONSTODO Investigation support
** CONSTODO Encrypted services
* CONSTODO Answers to consultation questions
The following are some answers to the questions posed in the call for
submissions.