More proof-reading

Éibhear Ó hAnluain 2019-09-19 19:51:39 +01:00
parent cd3e65d314
commit 6ed44e21c2


@@ -445,14 +445,14 @@
relevant protocols.
- Due to the maturity and age of these protocols, software needed
to use them is now abundant and trivially easy to get and install
and run on any general-purpose computer. Such software is also
very easy to develop for moderately-skilled software engineers.
- Neither the protocols that define the internet, nor the software
that implements it, regards any computer as superior or inferior
to any other. For this reason, there is no cost or capacity
barrier to running an internet service: if you have the software
and the internet connection, then you can expose an online
service, as the sketch below illustrates.
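To illustrate just how low that barrier is, the following is a
minimal sketch (the choice of Python and of port 8080 are arbitrary
assumptions for illustration, not part of this submission): a
handful of standard-library lines is enough to expose a basic web
service from any general-purpose computer.
#+BEGIN_SRC python
# Illustrative sketch only: a few lines of standard-library Python
# are enough to expose a basic web service from any general-purpose
# computer. The port (8080) is an arbitrary assumption.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    print("Serving the current directory on port 8080 ...")
    server.serve_forever()
#+END_SRC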
Clear examples from the past of how the accessibility of the
internet technologies has benefited the world include the
@@ -500,14 +500,14 @@
a photographer, Jon Oringer, who developed the service as a
means to make available 30,000 of his own photographs.
The ease with which internet technology can be accessed has been
instrumental in the explosion of services that connect people with
each other, and people with businesses.
It is critical to note that many of these technologies and services
started out with an individual or small group developing an idea
and showing it could work *prior* to receiving the large capital
investments that resulted in their current dominance.
All of the above technologies and services can be considered truly
disruptive. In their respective domains, their arrivals resulted in
@@ -534,7 +534,9 @@
or instance. For administrators of instances, federation means that
they can configure their instances according to their own
preferences, rather than having to abide by the rules or technical
implementation of someone else. For the ecosystem, federation means
that if one node goes down or is attacked, the others can continue
with a minimum of interruption.
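As a purely illustrative sketch of that resilience (the instance
names are invented, and this is not any real federation protocol
such as ActivityPub), each node simply delivers to whichever peers
it can reach and queues the rest for retry:
#+BEGIN_SRC python
# Hypothetical sketch -- not any real federation protocol; instance
# names are invented. Each node delivers a post to whichever peers
# are currently reachable and queues the rest for retry, so one
# failed or attacked node does not stop the wider network.
PEERS = ["social.example.org", "photos.example.net", "events.example.com"]
DOWN = {"photos.example.net"}  # simulate one node being unavailable

def deliver(post):
    for peer in PEERS:
        if peer in DOWN:
            print(f"{peer} unreachable -- queued for retry")
        else:
            print(f"delivered to {peer}: {post!r}")

deliver("Hello from a self-hosted instance")
#+END_SRC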
** CONSDONE Regulation of self-hosted services
@@ -543,14 +545,15 @@
regulations don't harm the desire of those who want to create
their own services.
A regulation that applies liability to a service-provider for
someone else's behaviour is a regulation that can be adhered to
only by organisations with large amounts of money to hand. For
example, if the regulation were to impose liability on me for a
posting made by someone else that appears on one of the services
that I run (and likely originally posted *somewhere* else -- these
are federated services after all), I would have to shut it down; I
am not able to put in place the necessary technical or legal
infrastructure that would mitigate my
liability[fn:copyrightDirective:This assumes that my services
aren't forced to shut down by the new EU Copyright Directive
anyway]. Given that my services are intended to provide a positive
@@ -572,24 +575,24 @@
regulations have the effect[fn:unintended:unintended, one hopes] of
harming such small operators, the result will not just be the loss
of these services, but also the loss of opportunity to make the Web
richer because artificial barriers to entry will be imposed by
those regulations. They will inhibit the development of ideas that
pop into the heads of individuals, who would realise them with
nothing more than a computer connected to the internet.
* CONSDONE Other considerations
While the main focus of this submission is to highlight the
potential risk to self-hosters from regulations that neglect to
consider the practice, I would like to take the opportunity to
briefly raise some additional concerns.
** CONSDONE Abuse of the systems
To date, all systems that seek to protect others from harmful or
other objectionable material (e.g. copyright infringement,
terrorism propaganda, etc.) have been easily amenable to abuse. For
example, in a recent court filing, Google claimed that 99.97% of
copyright infringement notices it received from a single party
in January 2017 were
bogus[fn:googleTakedown:https://www.techdirt.com/articles/20170223/06160336772/google-report-9995-percent-dmca-takedown-notices-are-bot-generated-bullshit-buckshot.shtml]:
@@ -609,19 +612,16 @@
index.
#+END_QUOTE
Under the US Digital Millennium Copyright Act, there is no downside
for a bad-faith actor seeking to take advantage of a system for
suppressing information[fn:downside:The law contains a provision
that claims of copyright ownership on the part of the claimant are
to be made under penalty of perjury. However, that provision is
very weak, and seems not to be a deterrent for a determined agent:
https://torrentfreak.com/warner-bros-our-false-dmca-takedowns-are-not-a-crime-131115].
The GDPR's /Right to be Forgotten/ is also subject to abuse. An
individual from Europe continues to force stories related to him to
be excluded from Google searches. However appropriate on the face of
it, the stories this individual is now getting suppressed relate to
his continued abuse of the /Right to be
@@ -637,87 +637,94 @@
need to.
In systems that facilitate censorship[fn:censorship:While seeking
to achieve a valuable and socially important goal, legislation of
this nature facilitates censorship: as a society, we should not be
so squeamish about admitting this.], it is important to do more
than merely assert that service providers should respect
fundamental rights to expression and information. In a regime where
sending an e-mail costs nearly nothing, where a service risks
serious penalties (up to and including having to shut down) and
where a claimant suffers nothing for abusive claims, the regime is
guaranteed to be abused.
** CONSDONE Content Moderation
Much of the focus of legislative efforts to deal with harmful or
objectionable material that appears on services that permit uploads
from users is on what the service providers do about it. Many argue
that they are not doing anything, or at least not enough.
However, this is an unfortunate mischaracterisation of the
situation. For example, facebook employs -- either directly or
through out-sourcing contracts -- many tens of thousands of
"moderators", whose job is to make a decision to remove offensive
material or not, to suppress someone's freedom of expression or
not, based on a set of if-then-else questions. These questions are
not easy, as the sketch following this list illustrates:
- It's illegal in Germany to say anything that can be construed as
glorifying the Holocaust. In the US it isn't. Facebook can
suppress such information from users it believes are in Germany,
but to do so for those in the US would be an illegal denial of
free expression, regardless of how objectionable the material
is. What is facebook to do with users in Germany who route their
internet connections through the US? Facebook has no knowledge of
this unusual routing, and to seek to learn about it could be a
violation of the user's right to privacy. Should facebook be
criminally liable for a German user seeing statements that are
illegal in Germany?
- Consider the genocide of Armenian people in Turkey in 1915. In
Turkey it is illegal to claim it happened. However, for a period
between 2012 and 2017 it was illegal in France to claim it didn't
happen. In most other countries, neither claim is illegal. What
can a service like facebook do when faced with three options, two
of which are mutually exclusive? Literally, they would be
criminally liable both if they do /and/ if they
don't[fn:dink:Prior to his assassination in Istanbul in 2007,
Hrant Dink, an ethnic Armenian Turkish journalist who campaigned
against Turkey's denial of the Armenian Genocide, had planned to
travel to France to deny it in order to highlight the
contradictions with laws that criminalise statements of fact.]?
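The following is a purely hypothetical sketch -- an invented rule
table, not any platform's real policy engine -- of how such
if-then-else questions contradict each other across jurisdictions:
the same statement can be required to come down for one reader and
be protected speech for another.
#+BEGIN_SRC python
# Hypothetical sketch only -- an invented rule table, not any real
# platform's policy engine. It illustrates how the same statement can
# be illegal for readers in one jurisdiction and lawful in another.
RULES = {
    # statement type -> jurisdictions where posting it is (or was) illegal
    "affirm_armenian_genocide": {"TR"},  # illegal to affirm in Turkey
    "deny_armenian_genocide": {"FR"},    # illegal to deny in France, 2012-2017
    "glorify_holocaust": {"DE"},         # illegal in Germany, lawful in the US
}

def must_remove(statement_type, reader_jurisdiction):
    """Do the rules oblige removal for a reader in this jurisdiction?"""
    return reader_jurisdiction in RULES.get(statement_type, set())

print(must_remove("affirm_armenian_genocide", "TR"))  # True
print(must_remove("affirm_armenian_genocide", "FR"))  # False
#+END_SRC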
Moderators have no more than a minute to determine whether a
statement complies with the law or not, and this includes figuring
out whether the posting meets the definitions of abusive or
harmful, and whether it is indeed intended to meet that
definition. For example, consider an abusive tweet. Should the
harmful, abusive tweet be removed? Who decides? What if the target
of the abusive tweet wants that tweet to be retained, for, say,
future evidence in a claim? What if the tweet was an attempt at
abuse, but the target chose not to be affected by it? Should it
stay up? Who decides? What if the target doesn't care, but others
who see the tweet and are not its target are offended by it?
Should it be taken down as abusive even though the target of the
abuse doesn't care, or objects to its removal? Who would be
criminally liable in these situations? What if the target of the
abuse substantially quotes the abusive tweets? Is the target now to
be considered an offender under a criminal liability regime when
that person may be doing nothing other than /highlighting/ abuse?
All of these scenarios are valid and play out every day. Content
moderators need to consider these and many more questions, but get
very little time to do so. The result: a public perception,
promoted by public figures, that these large services are doing
nothing about abuse.
"Content moderation" is very hard, and is impossible at the scales
that services like twitter or facebook operate in. When context is
critical in deciding whether to decide someone is engaged in
harmful or abusive behaviour, it would be fundamentally unfair to
make a service criminally liable just because it made the wrong
decision as it didn't have time to determine the full context, or
because it misinterpreted or misunderstood the context.
critical to decide that someone is engaged in harmful or abusive
behaviour, it would be fundamentally unfair to make a service
criminally liable just because it made the wrong decision as it
didn't have time to determine the full context, or because it
misinterpreted or misunderstood the context.
** CONSDONE User Behaviour
Many believe that the way to deal with abusive or harmful material
online is to punish the services that host the material. This is
reasonable if the material was placed onto the service by those who
operate the service. It is also reasonable if the material is put
there by users with the clear knowledge of the service operator, or
by users acting on encouragement from the operators of the service.
However, these specific situations are rare in the world of normal
online services[fn:criminal:Services that are dedicated to hosting
@@ -732,8 +739,7 @@
the communication is made. The idea that internet services are
responsible for abusive communications is as difficult to
understand as the idea that a table-saw manufacturer is responsible
for a carpenter not wearing safety glasses.
Recent history has shown that the most effective ways to change
behaviour are not necessarily punitive. It's hard to see how