#+latex_class: article
#+latex_class_options:
#+latex_header:
#+latex_header_extra:
#+description:
#+keywords:
#+subtitle:
#+latex_compiler: pdflatex
#+date: \today
#+TITLE: Submission to the Committee on Justice and Equality on /issues of online harassment, harmful communications and related offences/.
#+AUTHOR: Éibhear Ó hAnluain
#+EMAIL: eibhear.geo@gmail.com, 086 8565 666, http://www.gibiris.org/eo-blog/
#+OPTIONS: ^:{} toc:2 H:4 num:t author:t email:nil
#+TODO: CONSTODO CONSNOTES | CONSDONE CONSDONTDO
* Planning :noexport:
** Resources :noexport:
- [[https://www.oireachtas.ie/en/committees/submissions/20190808-committee-on-justice-and-equality-calls-for-submissions-on-online-harassment-and-harmful-communications/][Call for submissions]]
- [[https://data.oireachtas.ie/ie/oireachtas/committee/dail/32/joint_committee_on_justice_and_equality/other/2019/2019-08-08_possible-issues-for-address_en.pdf][List of possible issues]]
** Web page (captured [2019-08-24 Sat])
The Committee on Justice and Equality invites written submissions
from stakeholders and interested parties on the issues of online
harassment, harmful communications and related offences.
[[https://www.oireachtas.ie/en/committees/32/justice-and-equality/][Go to the Committee on Justice and Equality]]
A separate document can be obtained at the following [[https://data.oireachtas.ie/ie/oireachtas/committee/dail/32/joint_committee_on_justice_and_equality/other/2019/2019-08-08_possible-issues-for-address_en.pdf][link]] outlining
in detail the list of possible issues the Committee wishes to
address under this broad heading.
In summary, the Committee wishes to examine the nature and extent
of the problems of online cyber bullying, harassment, stalking,
revenge porn and other forms of harmful communications;
international best practice for addressing these problems; whether
self-regulation of harmful communications by social media companies
is the best approach; or whether new laws are necessary to cover
such activities, and what forms such laws should take.
The Committee will commence a series of public hearings on these
issues on 2 October 2019, with a view to publishing a report.
*** Closing date
The closing date for receipt of submissions is Friday, 20
September 2019.
*** How to send your submission
Please email an electronic document (PDF/MS Word or equivalent) to
[[mailto:onlineharassment@oireachtas.ie][onlineharassment@oireachtas.ie]].
Please do not send hard copies of your submission; hard copies
will not be accepted.
Please do not send your submission to individual Committee
members. The Clerk will ensure all members receive copies of all
submissions.
*** What to include in your submission
Your submission should comprise your submission document and a
separate covering letter. This allows the Committee to publish
your submission without your contact details.
**** In the covering letter, please include:
- your name, postal address, email address and contact telephone
number
- if the submission is on behalf of an organisation, your
position in the organisation
- a brief outline of why you are making the submission
**** In the submission document please include:
- a brief introduction, for example, explaining your area of
expertise
- any factual information that you have to offer from which the
Committee might be able to draw conclusions, or which could be
put to other parties for their reactions
- links to any publications you refer to; there is no need to
send such publications as attachments
- any recommendations to the Committee; be as specific as
possible and summarise your recommendations at the end of the
document
- if your document is more than 10 pages long, an executive
summary of the main points made in the submission
Please remember to be concise and to number your pages.
*** Important information
Submissions sent to any other email address may not be accepted.
Anonymous submissions cannot be accepted and will be rejected.
Petitions and form letters may not be accepted or published.
Submissions made to a Committee may be published as received,
either as part of a Committee report or separately, if the
Committee decides to do so.
*** Making a submission is a public process
The Committee is not obliged to accept your document once it has
been submitted, nor is it obliged to publish any or all of the
submission if it has been accepted. However, the operations of a
parliament are a public process, and you should be aware that any
submissions made to a Committee including your identity may be
published either as part of a Committee report, or separately, if
the Committee decides to do so.
*** Need more guidance?
If you would like more detailed guidance, please read the guidance
note Making Submissions and Presentations to Oireachtas Committees
below or contact the clerk to the Committee.
*** Clerk to the Committee
Damian Byrne
[[mailto:damian.byrne@oireachtas.ie][damian.byrne@oireachtas.ie]]
(01) 618 3899
Committee on Justice and Equality
Committee Secretariat,
Houses of the Oireachtas Service,
Kildare Street,
Dublin 2,
D02 XR20
** Possible issues document (captured [2019-08-24 Sat])
*** Online Harassment, harmful communications and related offences
*Possible issues for address*
**** Definition of communication in legislation
1. There are currently significant gaps in legislation with
regard to harassment and newer, more modern forms of
communication. Is there a need to expand the definition of
communications to include online and digital communications
tools such as WhatsApp, Facebook, Snapchat, etc. when
addressing crimes of bullying or harassment?
- Éibhear comment :: (/Address in introduction/) It is
necessary not to assume that the current services that
operate will be the primary services in 5 or 10 years'
time.
2. What lessons can be learned from models used in other
jurisdictions such as the UK, New Zealand, Australia and other
European countries where legislation is now in place to
address these issues? How do we establish an appropriate model
without compromising free speech?
- Éibhear comment :: (/Address in answer to specific
questions/) UK: duty of care is inappropriate. New
Zealand: allowing a committee to decide what is
objectionable, thus restricting not only those who want
to share objectionable material, but also those who want
to report on it.
3. How do we ensure that any legislation that is enacted is
flexible enough to keep up with changing and advancing
technologies, new apps and other online forums, including the
more familiar social media sites?
- Éibhear's comments :: (/Core concern/) Hmm. This is the meat
of the submission.
**** Harassment, stalking & other forms of online abuse
4. [@4] Online harassment can take the form of non-consensual
taking and distribution of intimate images or videos,
otherwise known as revenge porn, upskirting,
downblousing and other forms of sharing of imagery online
without consent. What approaches are taken to addressing these
issues in other jurisdictions?
- Éibhear's comment :: No answer for this
5. New offences are proposed to cover these issues in Deputy
Brendan Howlin's Private Members' Bill on this subject. Is the
creation of new offences necessary, or is existing legislation
sufficient? Should other forms of image-sharing issues - such
as exposure - also be addressed?
- Éibhear's comment :: No answer for this
6. What kind of oversight and regulation of online service
providers is possible/used in other jurisdictions? Currently,
online providers are self regulated. Is a proactive,
self-regulating approach from online companies to activities
such as revenge porn and other forms of harassment preferable
to the creation of more laws?
- Éibhear's comment :: Important to know the difference
between "self regulated" and pro-active
moderation. These services moderate according to their
own rules; there is no industry authority like the press
council or the advertising standards authority, which are
self-regulatory regimes.
7. Is any data provided by online service providers in relation
to the reporting or prevalence of activities such as
upskirting/revenge porn/cyberbullying and other online
behaviour that can be used to develop and draft future
legislation?
- Éibhear's comment :: No data. However, services should be
encouraged to issue reports on their moderation efforts.
8. To what extent are An Garda Síochána equipped and resourced to
deal with the issues arising from harmful online
communications such as these?
- Éibhear's comment :: No answer for this
9. Should cyberstalking be treated as a separate offence to
online harassment? What constitutes stalking-type behaviour
online? Is there a need to legislate specifically for this
activity?
- Éibhear's comment :: No answer for this
10. Based on the findings of other jurisdictions such as in the
UK, An Garda Síochána will require consistent training in
order to maintain an appropriate level of knowledge with
regard to indictable behaviours. Are resources available for
this?
- Éibhear's comment :: No answer for this
11. Fake accounts/troll accounts used to harass or target others
with abuse -- what measures can be taken in relation to these
without affecting freedom of expression?
- Éibhear's comment :: Care needs to be taken to
manage/prevent false identification of accounts as 'fake'
or 'troll'.
12. Do other jurisdictions have statutory measures to protect
victim identities in cases of online harassment being
released online post-hearings, etc.?
- Éibhear's comment :: No answer for this
**** Harmful online behaviour and young people
13. [@13] How do we most appropriately regulate social media
platforms to prevent cyberbullying and inappropriate sharing
of personal images?
- Éibhear's comment :: take details from earlier submission.
14. For young people who participate in such online behaviour as
consensual image sharing, how can it be ensured that they are
not inadvertently criminalised when legislation is enacted?
What safeguards can be put in place?
- Éibhear's comment :: No answer for this
15. Deputy Brendan Howlin's Private Members' Bill provides that
those under 17 should not be fined/imprisoned but put into
relevant education or supports. Would these supports be part
of the same educational supports offered to all young
people/schools or would they be a separate entity? Are
current supports being utilised? Are there sufficient
resources to provide for such a provision when enacted?
- Éibhear's comment :: No answer for this
** CONSTODO Eibhear's initial thoughts :noexport:
1. Focus on two core principles:
- Self-hosting -- individuals and groups hosting their own
services should not be neglected.
- Abuse -- services and systems should be protected from abuse
*** Facts
- Tweets per day: 500,000,000
+ Active accounts: 326,000,000
+ Reported accounts: 11,000,257 (July - December 2018)
* Abuse, child sexual exploitation, hateful conduct, private information, sensitive media, violent threats
* => 60,000 accounts reported/day
* => 0.02% of accounts reported
* CONSDONTDO The distinction between user behaviours and online services :noexport:
The internet is awash with online harassment and harmful
communications, and responsible governments and legislators have
been trying for decades to do something about it.
However, it's no less true in this sphere than in any other that
"doing something" is not necessarily enough to address the problem:
doing the /right thing/ is what's required.
In the first of his 6 Laws of
Technology[fn:6laws:https://en.wikipedia.org/wiki/Melvin_Kranzberg#Kranzberg's_laws_of_technology],
Dr. Melvin Kranzberg determined that "Technology is neither good nor
bad; nor is it neutral." The temptation for observers is to decide
that the extent of online harassment, abuse and harmful
communications is due to the existence of online services, and
that if only we could force the services to implement their
technologies in a particular manner, all the problems would be
solved.
For instance, the United States of America recently enacted a law
known as the "Stop Enabling Sex Traffickers Act", or
/FOSTA-SESTA/[fn:FOSTA-SESTA:https://en.wikipedia.org/wiki/Stop_Enabling_Sex_Traffickers_Act]. This
was a law to show that the U.S. Congress was doing something to stop
sex-trafficking. The law made it an offence for online services to
"knowingly [assist], [support], or [facilitate]" sex-trafficking,
and it removed from online services speech-related protections that
had previously been provided under another U.S. law known as
"Section 230" of the Communications Decency Act.
Accounts show, however, that doing *this* was not effective, and has
been counter-productive. As expected, a number of websites that had
been used to legally advertise sex services in the United States
either shut down that section of their service (e.g. Craigslist's
"Erotic Services"), or shut down completely[fn:SOSTAEffect:Lura
Chamberlain, FOSTA: A Hostile Law with a Human Cost, 87 Fordham
L. Rev. 2171 (2019). Available at:
https://ir.lawnet.fordham.edu/flr/vol87/iss5/13]. If the goal of the
law was to protect sex workers, and women in particular, it has had
the opposite effect:
- Independent sex workers now have no online means to promote their
services, forcing them to turn to pimps for this.
- There has been a notable increase in the number of sex workers who
have gone missing.
- Some sex-workers have died by suicide.
- Assault and rape of sex workers have increased, and many fear that
murders of sex workers are also
increasing[fn:craigslisthomicide:http://www.econlib.org/archives/2018/01/craigslist_redu.html].
- Sex workers have no means to learn about their potential clients
prior to the client knowing about them: where they could vet
people who made contact with them over these services before
identifying themselves, this is no longer possible, which
dramatically increases their risk.
- Ironically, one of the negative effects of /FOSTA-SESTA/ is that
it is now much harder for the police to investigate rapes,
assaults and murders of sex workers than before, because a
critical trail of evidence -- the online communications between
offenders and sex-workers -- can no longer be
laid[fn:FOSTAPolice:https://www.techdirt.com/articles/20180705/01033440176/more-police-admitting-that-fosta-sesta-has-made-it-much-more-difficult-to-catch-pimps-traffickers.shtml]. This
is not only because the websites are no longer there, but also
because, when they were (e.g. Backpage), they assisted the police
in investigating these crimes against sex workers; the advertising
was legal then, and now that it is not, the police won't get that
help from web sites when they need
it[fn:SESTAPolice:https://www.techdirt.com/articles/20180509/13450339810/police-realizing-that-sesta-fosta-made-their-jobs-harder-sex-traffickers-realizing-made-their-job-easier.shtml].
This was predicted by advocates for sex workers and for free
speech, but legislators failed to heed the warnings. In fact, when
considering this law, legislators were presented with statistics
that were false, and misrepresented the landscape prior to enacting
/FOSTA-SESTA/[fn:buzzfeed:https://www.buzzfeednews.com/article/jennyheineman/sex-trafficking-myths-sesta-fosta].
I highlight this law in particular because it is both recent
(early 2018) and relevant. However, it's not alone, and as we look at
pending legislation coming to us both domestically and from the EU,
it's hard not to see the same failures repeating:
- Pat Rabbitte's and Lorraine Higgins' bills, since withdrawn
- The EU Terrorism Content Directive...
- The new Copyright Directive...
* CONSDONE Introduction
My name is Éibhear Ó hAnluain and I have been working in software
engineering and IT systems design since 1994. I thank you for the
opportunity to submit this contribution to your analysis of /issues
of online harassment, harmful communications and related offences/.
In this submission I am seeking to highlight two core concerns:
- The nature of the online services from the perspective of small
operators
- The potential damage legislative measures can have on small
operators of online services
I will also address some additional concerns I believe are relevant
to this analysis.
* CONSDONE Self-hosting
** CONSDONE Self-hosting
For the purposes of this submission, /self-hosting/ is where an
individual or small group has opted to provide their own internet
services, either making use of computer capacity provided by a
hosting provider (for example, Blacknight.com, Amazon AWS) or
maintaining the underlying computer technology themselves.
The services that the self-hoster exposes, then, either are
developed specifically by the self-hoster or run software that the
self-hoster has installed.
The self-hoster also takes responsibility for the quality of the
service that they provide, including ensuring that it is kept
running and updates are applied appropriately, and so on.
This submission is primarily concerned with self-hosting as a
hobby and self-hosting engaged in by charity, non-governmental or
community organisations. Self-hosting for commercial purposes is
also a valid use-case, but regulation has a more direct implication
for the former use-cases, as the effect of poor regulation on
vulnerable people would be more direct, immediate and serious.
*** CONSTODO Real examples of self-hosting
I host a number of services:
- [[http://www.gibiris.org/eo-blog][/Éibhear/Gibiris/]] is my blog site.
- [[https://social.gibiris.org/][/Social Gibiris/]] is a micro-blogging service that is federated
with others using the /AtomPub/ technology. Thus, /Social
Gibiris/ is federated with many other instances of /GNU Social/,
/Mastodon/ and /Pleroma/. This network of federated services,
operated by individuals, groups and businesses, all connected
together as peers, facilitates connections and communication in a
way that is very little different from Twitter.
- [[https://git.gibiris.org/][/git.gibiris.org/]] is a source-code sharing site that I use to
make publicly available some of the software that I develop for
myself.
- [[https://news.gibiris.org/][/news.gibiris.org/]] is a news-aggregation service that allows me
to gather all the news sources of interest to me into one
location, which I can then access from wherever I am.
- [[https://cloud.gibiris.org/nextcloud][/cloud.gibiris.org/]] is a file-sharing platform that I use with
my family when we are collaborating on projects (e.g. school
projects, home improvement projects, etc.)
- [[https://matrix.gibiris.org/][/matrix.gibiris.org/]] is an instant-messaging system which I set
up for the purposes of communicating with my family and close
friends.
Most of these services are hosted on a computer within my home.
Three of these services provide information to the general public,
and the other three are accessible only to those who set up
accounts. Two of those services, /git.gibiris.org/ and /Social
Gibiris/, can process or post user-uploaded information.
*** CONSTODO Why self-host?
There is a myriad of reasons for choosing to host one's own
service. Some examples might be:
- Privacy -- until recently, many of the most popular services
were careless with, or outright abusive of, users' privacy.
- Tracking -- many organisations, particularly those whose
business models are based on advertising, facilitate the
tracking of internet users as they conduct their business or
personal activities across the internet.
- Autonomy -- being able to configure one's own service is often a
powerful experience.
- Community -- While some of the global services with household
names offer features to small businesses and community groups
(like football clubs or debating societies), often the lock-in
and exclusivity involved can make it hard to include everyone
who needs to be included. Hosting your own services allows you
to set the rules and codes of conduct appropriate for your
group's specific needs.
- Experimentation -- simply by playing with interesting software
projects, people often learn about the tools and systems they
use, and grow their knowledge of the technologies involved.
- Collaboration -- the software that implements self-hosted
services often comes under the terms of a Free or Open Source
Software copyright licence, which allows people to copy and
improve the software, and these improvements often find their
way back to the original project for others to benefit from.
- Protection -- Governments in countries where civil rights are
not regarded as highly as they are in Ireland very often delight
in the greater ease involved in surveilling their populations
when the records of all that activity are centralised in a
single service.
Very often, as with me, the reason to self-host is a combination
of more than one of these reasons.
** CONSDONE How accessible is self-hosting?
In a previous, similar submission[fn:dccae:Available [[http://www.gibiris.org/eo-blog/posts/2019/04/15_harmful-content-consultation.html][here]] and
[[https://www.dccae.gov.ie/en-ie/communications/consultations/Documents/86/submissions/Eibhear_O_HAnluain.pdf][here]].], I provided an outline of the challenges facing someone who
wants to set up their own services. Those challenges are few, and
they are small. In summary, the reasons for this are:
- The Internet is a mechanism for computers to find each other and
then to share information with each other. The mechanism is
defined in a set of publicly-available documents describing the
relevant protocols.
- Due to the maturity and age of these protocols, the software
needed to use them is now abundant and trivially easy to get,
install and run on any general-purpose computer. Such software is
also very easy to develop for moderately-skilled software
engineers.
- Neither the protocols that define the internet, nor the software
that implements it, regard any computer as superior or inferior
to any other computer. For this reason, there is no cost or
capacity barrier to running an internet service: if you have the
software and an internet connection, then you can expose an
online service.
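To illustrate just how low the barrier is, consider the following
minimal sketch (mine, and purely illustrative; the address, port and
served directory are arbitrary choices), which exposes a working web
service from any general-purpose computer using only the standard
library of the Python programming language:
#+BEGIN_SRC python
# Minimal sketch: expose the files in the current directory as a
# web service, using only Python's standard library. The address
# and port are arbitrary, illustrative choices.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
server.serve_forever()  # serve until interrupted
#+END_SRC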
Clear examples from the past of how the accessibility of the
internet technologies has benefited the world include the
following:
- The /Linux/ operating system kernel began life in 1991 as a
college project -- Linus Torvalds wanted to write a computer
operating system that was accessible to all. Linux-based
operating systems now form the basis of a significant proportion
of internet-connected computing devices
globally[fn:LinuxProportions:https://en.wikipedia.org/wiki/Usage_share_of_operating_systems]
(including 73% of smartphones and tablet computers, somewhere
between 36% and 66% of internet-facing server computers, and 100%
of supercomputers).
- The /Apache/ web server started development when a group of 8
software developers wanted to add functionality to one of the
original web server software packages, /NCSA httpd/. The Apache
web server now powers 43.6% of all web
sites[fn:apacheProportions:[[https://w3techs.com/technologies/overview/web_server/all][https://w3techs.com/technologies/overview/web_server/all]]. Incidentally,
the no. 2 on that web page, with nearly 42% share of websites is
/nginx/. It also started out as a project by an individual who
wanted to solve a particular problem.].
- The /Firefox/ web browser was initiated by three software
developers who wanted to make a light-weight browser based on the
Mozilla code-base. At the height of its popularity, /Firefox/ was
used in 34% of web-page requests, despite not coming installed by
default on any computer or mobile device. However, its real
impact is that it was instrumental in breaking the monopoly that
Microsoft's Internet Explorer had held since the late '90s,
resulting in a far richer and more secure web.
When we look at the main services that society is currently
struggling with, we need to consider the following historical
facts:
- Facebook started out as a crude service, developed in Mark
Zuckerberg's room in Harvard University, to allow users (men, of
course) to rate the women in the university in terms of
"hotness".
- Google started out as a search engine called
"Backrub". Development initially took place in a garage.
- eBay was originally an auction service tagged onto the personal
website of its founder, Pierre Omidyar.
- LinkedIn was initially developed in Reid Hoffman's apartment
in 2003.
- Shutterstock, a leading provider of stock images, was founded by
a photographer, Jon Oringer, who developed the service as a
means to make available 30,000 of his own photographs.
The ease with which internet technology can be accessed is
instrumental in the explosion of services that connect people, and
people with businesses.
It is critical to note that many of these technologies and services
started out with an individual or small group developing an idea
and showing it can work *prior* to receiving the large capital
investments that resulted in their current dominance.
All of the above technologies and services can be considered truly
disruptive. In their respective domains, their arrivals resulted in
dramatic improvements in internet technologies and services.
However, there are many alternatives to the systems that we are
familiar with, all developed by individuals, or small, enthusiastic
teams:
- /Twitter/ isn't the only micro-blogging service: there's also
/GNU Social/, /Pleroma/, /Mastodon/.
- An alternative to /Facebook/ is /diaspora*/
- /Nextcloud/ and /Owncloud/ are examples of alternatives to
/Dropbox/.
In the cases of all these alternatives, users can sign up for
accounts on "instances" operated by third-party providers, or users
can set up their own instances and operate the services themselves.
Many of these services can federate with others. Federation in this
context means that there can be multiple instances of a service,
communicating with each other over a defined protocol, sharing
updates and posts. For users, federation means that they can
interact with other users who aren't necessarily on the same node
or instance. For administrators of instances, federation means that
they can configure their instances according to their own
preferences, rather than having to abide by the rules or technical
implementation of someone else. For the ecosystem, federation means
that if one node goes down or is attacked, the others can continue
with a minimum of interruption.
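As a purely illustrative sketch of the mechanics, federated delivery
of a post amounts to little more than the following; the inbox
addresses and data structure here are hypothetical, and real
protocols (such as /ActivityPub/) add authentication, signatures,
retries and discovery:
#+BEGIN_SRC python
# Illustrative sketch of federated delivery. The URLs and structures
# are hypothetical; real protocols (e.g. ActivityPub) add
# authentication, signatures, retries and discovery.
import json
import urllib.request

def deliver(activity, inbox_url):
    # POST a JSON activity to a remote instance's inbox.
    request = urllib.request.Request(
        inbox_url,
        data=json.dumps(activity).encode("utf-8"),
        headers={"Content-Type": "application/activity+json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

post = {
    "type": "Note",
    "attributedTo": "https://social.gibiris.org/eibhear",
    "content": "Hello from a self-hosted instance!",
}

# Followers can live on entirely different instances, each operated
# under its own rules; if one is down, the rest carry on.
for inbox in [
    "https://instance-a.example/users/alice/inbox",
    "https://instance-b.example/users/bob/inbox",
]:
    deliver(post, inbox)
#+END_SRC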
** CONSDONE Regulation of self-hosted services
While it is attractive to create regulations to manage the large,
profit-making organisations, it is imperative that such regulations
do not harm those who want to create and run their own services.
A regulation that applies liability to a service-provider for
someone else's behaviour is a regulation that can be adhered to
only by organisations with large amounts of money to hand. For
example, if the regulation were to apply liability to me for a
posting made by someone else that appears on one of the services
that I run (and likely originally posted *somewhere* else -- these
are federated services, after all), I would have to shut that
service down; I am not able to put in place the necessary technical
or legal infrastructure that would mitigate my
liability[fn:copyrightDirective:This assumes that my services
aren't forced to shut down by the new EU Copyright Directive
anyway]. Given that my services are intended to provide a positive
benefit to me, my family members and my friends, and that I have no
desire to facilitate harmful behaviour on these services, a law
forcing me to shut these services down benefits no one.
Similarly, a regulation that demands responses from services on the
assumption that the service will be staffed at all times requires
individuals who are self-hosting their services to be available at
all times (i.e. to be able to respond regardless of whether they
are asleep, overseas on a family holiday, too ill to respond,
etc.).
This submission comes from this perspective: that small operators
should not be unduly harmed by regulations; the likelihood of this
harm coming to pass is greater when such small operators are not
even considered during the development of the regulations. If
regulations have the effect[fn:unintended:unintended, one hopes] of
harming such small operators, the result will not just be the loss
of these services, but also the loss of opportunity to make the Web
richer because artificial barriers to entry will be imposed by
those regulations. They will inhibit the development of ideas that
pop into the heads of individuals, who would realise them with
nothing more than a computer connected to the internet.
* CONSDONE Other considerations
While the main focus of this submission is to highlight the
potential risk to self-hosters from regulations that neglect to
consider the practice, I would like to take the opportunity to
briefly raise some additional concerns.
** CONSDONE Abuse of the systems
To date, all systems that seek to protect others from harmful or
other objectionable material (e.g. copyright infringement,
terrorism propaganda, etc.) have been easily open to abuse. For
example, in a recent court filing, Google claimed that 99.97% of
the copyright infringement notices it received from a single party
in January 2017 were
bogus[fn:googleTakedown:https://www.techdirt.com/articles/20170223/06160336772/google-report-9995-percent-dmca-takedown-notices-are-bot-generated-bullshit-buckshot.shtml]:
#+BEGIN_QUOTE
A significant portion of the recent increases in DMCA submission
volumes for Google Search stem from notices that appear to be
duplicative, unnecessary, or mistaken. As we explained at the San
Francisco Roundtable, a substantial number of takedown requests
submitted to Google are for URLs that have never been in our search
index, and therefore could never have appeared in our search
results. For example, in January 2017, the most prolific submitter
submitted notices that Google honored for 16,457,433 URLs. But on
further inspection, 16,450,129 (99.97%) of those URLs were not in
our search index in the first place. Nor is this problem limited to
one submitter: in total, 99.95% of all URLs processed from our
Trusted Copyright Removal Program in January 2017 were not in our
index.
#+END_QUOTE
With the US' Digital Millennium Copyright Act, there is no downside
for a bad-faith actor seeking to take advantage of a system for
suppressing information[fn:downside:The law contains a provision
that claims of copyright ownership on the part of the claimant are
to be made under penalty of perjury. However, that provision is
very weak, and seems not to be a deterrent for a determined agent:
https://torrentfreak.com/warner-bros-our-false-dmca-takedowns-are-not-a-crime-131115].
The GDPR's /Right to be Forgotten/ is also subject to abuse. An
individual from Europe continues to force stories related to him to
be excluded from Google searches. However appropriate that may seem
on the face of it, the stories this individual is now getting
suppressed relate to his continued abuse of the /Right to be
Forgotten/[fn:RTBF:https://www.techdirt.com/articles/20190320/09481541833]. That
the "right" can be abused in this way is counter to the public
interest, as it can now be used like a "Super Injunction".
While the GDPR provides an exemption for search engines
"... exercising the right of freedom of expression and
information", when presented with /Right to be Forgotten/ demands
they have to choose between serious sanctions if they don't filter
the results when they should have, and no sanctions if they
suppress the results when they didn't need to.
In systems that facilitate censorship[fn:censorship:While seeking
to achieve a valuable and socially important goal, legislation of
this nature facilitates censorship: as a society, we should not be
so squeamish about admitting this.], it is important to do more
than merely assert that service providers should have regard to the
fundamental rights of expression and information. In a regime where
sending an e-mail costs nearly nothing, where a service risks
serious penalties (up to and including having to shut down) and
where a claimant suffers nothing for abusive claims, the regime is
guaranteed to be abused.
** CONSDONE Content Moderation
Much of the focus of legislative efforts to deal with harmful or
objectionable material that appears on services permitting uploads
from users is on what the service providers do about it. Many argue
that they are not doing anything, or at least not enough.
However, this is an unfortunate mischaracterisation of the
situation. For example, facebook employs -- either directly or
through out-sourcing contracts -- many tens of thousands of
"moderators", whose job is to decide whether to remove offensive
material or not, and whether to suppress someone's freedom of
expression or not, based on a set of if-then-else questions. These
questions are not easy:
- It's illegal in Germany to say anything that can be construed as
glorifying the Holocaust. In the US it isn't. Facebook can
suppress such information from users it believes are in Germany,
but to do so for those in the US would be an illegal denial of
free expression, regardless of how objectionable the material
is. What is facebook to do with users in Germany who route their
internet connections through the UK? Facebook has no knowledge of
this unusual routing, and to seek to learn about it could be a
violation of the user's right to privacy. Should facebook be
criminally liable for a German user seeing statements that are
illegal in Germany?
- Consider the genocide of Armenian people in Turkey in 1915. In
Turkey it is illegal to claim it happened. However, for a period
between 2012 and 2017 it was illegal in France to claim it didn't
happen. In most other countries, neither claim is illegal. What
can a service like facebook do when faced with 3 options, 2 of
which are mutually exclusive? Literally, they would be
criminally liable both if they do /and/ if they
don't[fn:dink:Prior to his assassination in Istanbul in 2007,
Hrant Dink, an ethnic Armenian Turkish journalist who campaigned
against Turkey's denial of the Armenian Genocide, had planned to
travel to France to deny it in order to highlight the
contradictions with laws that criminalise statements of fact.]?
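To make the contradiction concrete, consider the following sketch.
It is purely illustrative (the function and tables are my own); the
legal positions are those described above, for the 2012-2017
period:
#+BEGIN_SRC python
# Hypothetical sketch of the moderation dilemma described above.
# The legal positions are as stated in the text (for the 2012-2017
# period); the names and structures are illustrative only.

# Is the claim "the genocide did not happen" illegal here?
DENIAL_ILLEGAL = {"France": True, "Turkey": False}
# Is the claim "the genocide happened" illegal here?
AFFIRMATION_ILLEGAL = {"France": False, "Turkey": True}

def must_remove(post_denies_genocide, jurisdiction):
    """Return True if the post is illegal where it is being read."""
    rules = DENIAL_ILLEGAL if post_denies_genocide else AFFIRMATION_ILLEGAL
    return rules.get(jurisdiction, False)

# One post is readable in both countries at once: whatever single,
# global keep-or-remove decision the service makes, it breaks the
# law in one of the two jurisdictions.
for denies in (True, False):
    decisions = {place: must_remove(denies, place)
                 for place in ("France", "Turkey")}
    print("post denies genocide:", denies, "->", decisions)
#+END_SRC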
Moderators have no more than a minute to determine whether a
statement complies with the law or not, and this includes figuring
out whether the posting meets the definitions of abusive or
harmful, and whether it is indeed intended to meet that
definition. For example, consider an abusive tweet. Should the
harmful, abusive tweet be removed? Who decides? What if the target
of the abusive tweet wants that tweet to be retained for, say,
future evidence in a claim? What if the tweet was an attempt at
abuse, but the target chose not to be affected by it? Should it
stay up? Who decides? What if the target doesn't care, but others
who see the tweet, and are not the target of the abuse, are
offended by it? Should it be taken down as abusive even though the
target of the abuse doesn't care, or objects to its removal? Who
would be criminally liable in these situations? What if the target
of the abuse substantially quotes the abusive tweets? Is the target
now to be considered an offender under a criminal liability regime
when that person may be doing nothing other than /highlighting/
abuse?
All of these scenarios are valid and play out every day. Content
moderators need to consider these and many more questions, but get
very little time to do so. The result: a public perception,
promoted by public figures, that these large services are doing
nothing about abuse.
"Content moderation" is very hard, and is impossible at the scales
that services like twitter or facebook operate in. When context is
2019-09-19 18:51:39 +00:00
critical to decide that someone is engaged in harmful or abusive
behaviour, it would be fundamentally unfair to make a service
criminally liable just because it made the wrong decision as it
didn't have time to determine the full context, or because it
misinterpreted or misunderstood the context.
** CONSDONE User Behaviour
Many believe that the way to deal with abusive or harmful material
online is to punish the services that host the material. This is
reasonable if the material was placed onto the service by those who
operate the service. It is also reasonable if the material is put
there by users with the clear knowledge of the service operator, or
by users acting at the encouragement of the operators of the
service.
However, these specific situations are rare in the world of normal
online services[fn:criminal:Services that are dedicated to hosting
criminal material such as "revenge porn" or child sexual
exploitation material know they are engaged in criminal activities
anyway, and take steps to avoid detection that are outside the
scope of this submission -- those guys will get no support from
me!].
Engaging in harmful and abusive communications is a matter of
behaviour and not a function of the technical medium through which
the communication is made. The idea that internet services are
responsible for abusive communications is as difficult to
understand as the idea that a table-saw manufacturer is responsible
for a carpenter not wearing safety glasses.
Recent history has shown that the most effective ways to change
behaviour are not necessarily punitive. It's hard to see how
punishing an intermediary would stop people being nasty to each
other.
Any new regulations aimed at controlling abusive or harmful
behaviours online must start with changing users' behaviours. If
there is no attempt to change behaviour, then abusive people will
simply work around the controls and continue to abuse.
** CONSDONE Investigation support
In response to the live-streaming of the horrific murder of more
than 50 people in New Zealand earlier this year, that country has
proscribed the video recorded by the white supremacist terrorist as
"objectionable", making it a criminal offence to share
it[fn:banNotice:https://www.classificationoffice.govt.nz/news/latest-news/christchurch-attacks-press-releases/#christchurch-attack-video-footage-and-document-has-been-banned-in-nz-what-this-means-for-you].
While one can understand the thinking that sharing the material
could only be done by people who support the atrocity, this is not
necessarily true. Other reasons to share the video or portions of
it might include:
- to appeal for help in finding someone caught up in the massacre
- legitimate news reporting of such an event.
- to help investigate the shooting and its
circumstances[fn:ForArch:Forensic Architecture,
https://forensic-architecture.org/, is a research group that
investigates alleged abuses of human rights using image and video
records of events. To criminalise the sharing of such imagery and
videos with no regard as to the purpose for the sharing plays
directly into the hands of those who disregard victims' civil
rights.]
- training for law enforcement or terrorism- or disaster-response
personnel.
However, if the law says that no form of sharing is permitted, then
none of the entirely legitimate purposes would be possible, and the
world would be that bit less safe as a result.
There is a similar consideration for abusive material posted
online. If a communication is deemed to be an offence, care needs
to be taken to ensure that the "removal" of such a communication
(or a set of such communications) is not the equivalent of the
destruction of evidence. This is particularly true given that it is
now very easy for anyone to forge screen-shots of online postings.
** CONSDONE Encrypted services
Some believe that if end-to-end encryption services that prevent
security services from accessing material were banned or
controlled, there would be less abusive behaviour online. This is
not true, nor is it a good public policy.
Encryption is just mathematics, and it does not know whether its
use is for ill or for good. However, when you consider the extent
to which encryption is being used -- every website that uses
=https= as part of its address encrypts the traffic between itself
and its users, and that is nearly every website around the world --
the good uses vastly outnumber the bad uses. If people are forced
to use an encryption system that has been modified to make it easy
for security services to gain access to the messages, it means that
all the good, innocent uses of encryption are at risk. Recent news
that Russian spies managed to infiltrate the
FBI[fn:Oath:https://news.yahoo.com/exclusive-russia-carried-out-a-stunning-breach-of-fbi-communications-system-escalating-the-spy-game-on-us-soil-090024212.html
(Please note that to access this story the user has to agree to
many hundreds of forms of tracking, or spend up to an hour
examining those forms and disabling each one individually. It is
recommended that this story be accessed using "Incognito" or
"Private Browsing" mode in order to be protected against
tracking).] highlights how unreliable are the assurances from
security services that they can keep secrets, such as the keys to
all encryption, safe from harm.
All it takes is one determined intruder, and all the good uses of
encryption are put at risk in order to save money and effort on
investigating illegal activities.
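To illustrate how freely available this mathematics is, the
following minimal sketch uses a freely-installable library for the
Python programming language (an illustration of availability, not a
security recommendation):
#+BEGIN_SRC python
# Minimal sketch: strong encryption from a freely available library
# (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # whoever holds this key can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"a private message")
print(cipher.decrypt(token))  # b'a private message'

# There is no shortcut without the key: weakening this for security
# services also weakens every good, innocent use.
#+END_SRC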
I have written a number of articles on this matter providing more
details:
- [[http://www.gibiris.org/eo-blog/posts/2015/03/12_the-value-of-encryption.html][The value of encryption]]
- [[http://www.gibiris.org/eo-blog/posts/2015/03/18_how-can-encryption-be-regulated.html][How can encryption be regulated]]
- [[http://www.gibiris.org/eo-blog/posts/2018/08/21_you-cant-stop-people-from-using-encryption.html][You just can't stop people from using encryption, so stop trying]]
- [[http://www.gibiris.org/eo-blog/posts/2018/08/22_stop-people-from-using-encryption-postscript.html][Post-script on why you should stop trying to stop people from
using encryption]]
- [[http://www.gibiris.org/eo-blog/posts/2018/09/04_some-questions-5-eyes-countries-what-can-they-do.html][Some questions for the "5 Eyes" countries on what they think they
can do]]
* CONSDONE Answers to consultation questions
The following are some answers to the questions posed in the call
for submissions.
** CONSDONE Definition of communication in legislation
- Question 1 :: There are currently significant gaps in legislation
with regard to harassment and newer, more modern
forms of communication. Is there a need to expand
the definition of communications to include
online and digital communications tools such as
WhatsApp, Facebook, Snapchat, etc. when addressing
crimes of bullying or harassment?
+ Answer :: Yes. However, it is important to consider the following:
* Not all such tools are as large, or as well-resourced in
human and financial terms, as the specific services referred
to. Legislation that assumes such communication can take place
only through services as large and wealthy as these stands a
very good chance of restricting or limiting competition in
these services' domains by imposing regulatory barriers to
entry. I expand on this in the "Self-hosting" section of this
submission.
* Legislation should focus not on the tool, but on the
behaviour. In the main, therefore, it's the behaviour of
those performing the bullying or abuse that should be
targeted and not the "tool" used as the communications
medium. I expand on this in the "User behaviour" section of
this submission.
- Question 2 :: What lessons can be learned from models used in
other jurisdictions such as the UK, New Zealand,
Australia and other European countries where
legislation is now in place to address these
issues? How do we establish an appropriate model
without compromising free speech?
+ Answer :: The incentives need to be present to ensure that the
balance is managed correctly. Any legislation, such
as
/FOSTA-SESTA/[fn:FOSTA-SESTA:https://en.wikipedia.org/wiki/Stop_Enabling_Sex_Traffickers_Act]
in the US, that seeks merely to punish web sites,
will do more harm than good[fn:SOSTAEffect:Lura
Chamberlain, FOSTA: A Hostile Law with a Human Cost,
87 Fordham L. Rev. 2171 (2019). Available at:
https://ir.lawnet.fordham.edu/flr/vol87/iss5/13]. The
incentive for US-based web site operators in this
case is either *never* to host information for or by
sex workers for fear of falling foul of the law, or
to cease operations altogether. The result has been a
human rights disaster, as sex workers, particularly
women, are now at greater risk than before due to the
failure of the law to consider the effect of a
straight ban.
The recently-passed EU Copyright Directive
mandates the filtering of user uploads based on prior
notice that such uploads *may* be infringing
copyright, subject to severe penalties, but requires
mere respect for users' freedom of speech with no
penalties attaching to failing to do so. The
incentive for the service operators here is to err on
the side of suppressing material regardless of
anyone's freedom of expression, as the consequences
of not doing so could be catastrophic for the service
operator.
The proposal in the UK to apply a duty of care to
service operators is also destined for failure, as a
duty of care is a physical-world concept that has no
suitable analogy in the context of internet services.
Ironically, the likely best regulatory approach is
one that online services currently operate under in
the US and to a large degree in Europe: intermediary
liability protection. All these services maintain
terms and conditions ("Community Rules", "Code of
Conduct", etc.) and confirmed violations of these
result in sanctions on the users. However, where
services aren't aware of violations, they are
protected on the grounds that the behaviour that is
objectionable is not that of the service operator,
but is of the user. In short, punish the user, not
the service provider, unless -- of course -- the
service provider is complicit.
- Question 3 :: How do we ensure that any legislation that is
enacted is flexible enough to keep up with changing
and advancing technologies, new apps and other
online forums, including the more familiar social
media sites?
+ Answer :: This is this submission's core concern. For
legislation to focus on the technology and not on the
behaviour, or on the service operator and not on the real
offender, runs a real risk of damaging the human rights of
totally innocent parties, as well as stifling innovation and
consolidating the market positions of the major operators.
** CONSDONE Harassment, stalking & other forms of online abuse
- Question 4 :: Online harassment can take the form of
non-consensual taking and distribution of intimate
images or videos, otherwise known as revenge
porn, upskirting, downblousing and other forms
of sharing of imagery online without consent. What
approaches are taken to addressing these issues in
other jurisdictions?
+ Answer :: This submission is not offering any answer to this
question.
- Question 5 :: New offences are proposed to cover these issues in
Deputy Brendan Howlin's Private Members' Bill on
this subject. Is the creation of new offences
necessary, or is existing legislation sufficient?
Should other forms of image-sharing issues - such
as exposure - also be addressed?
+ Answer :: This submission is not offering any answer to this
question.
- Question 6 :: What kind of oversight and regulation of online
service providers is possible/used in other
jurisdictions? Currently, online providers are self
regulated. Is a proactive, self-regulating approach
from online companies to activities such as revenge
porn and other forms of harassment preferable to
the creation of more laws?
+ Answer :: If a measure of self-regulation to address these
concerns is acceptable, then it would be necessary, for
public-perception reasons, to be clear on what that
means. /Self-regulation/ could mean that each service operator
manages matters of harassment and harmful communications
according to their own rules and processes; this is currently
how the large service providers we're most familiar with
operate. However, /self-regulation/ may also refer to
regulation by a non-governmental, industry-funded body,
following the model of the press council or the advertising
standards authority, where rules and processes are agreed among
the operators as a set of standards, and where decisions on
compliance with these are made by this body.
Aside from making this comment on the term, what is
more important is getting the competing rights
correctly balanced, rather than the model of
regulation that asserts that balance.
- Question 7 :: Is any data provided by online service providers in
relation to the reporting or prevalence of
activities such as upskirting/revenge
porn/cyberbullying and other online behaviour that
can be used to develop and draft future
legislation?
+ Answer :: Each of the major sites prepares what are called
"Transparency Reports". However, many of these
reports are constrained by rules laid out by (in
particular) the so-called "Intelligence Community" of
the United States. Thus these reports are not as
transparent as they could be.
It should be a requirement for such services to issue
a periodic report detailing the following statistics:
* The number of reported postings, broken down by nature of the
complaint
* Number of reports that were appealed to the service, broken
down by the nature of the complaint and the basis of appeal
* Number of appeals upheld, broken down by reason for appeal
* Number of appeals rejected, broken down by reason for
rejection.
* Number of complaints/appeals that were appealed further to
the regulator or courts system.
- Question 8 :: To what extent are An Garda Síochána equipped and
resourced to deal with the issues arising from
harmful online communications such as these?
+ Answer :: This submission is not offering any answer to this
question.
- Question 9 :: Should cyberstalking be treated as a separate
offence to online harassment? What constitutes
stalking-type behaviour online? Is there a need to
legislate specifically for this activity?
+ Answer :: This submission is not offering any answer to this
question.
- Question 10 :: Based on the findings of other jurisdictions such
as in the UK, An Garda Síochána will require
consistent training in order to maintain an
appropriate level of knowledge with regard to
indictable behaviours. Are resources available for
this?
+ Answer :: This submission is not offering any answer to this
question.
- Question 11 :: Fake accounts/troll accounts used to harass or
target others with abuse -- what measures can be
taken in relation to these without affecting
freedom of expression?
+ Answer :: The assumption that an account that isn't clearly
associated with a personal identity is "fake" needs
to be challenged. It is the /behaviour/ of the
account that needs to be considered. This is true of
accounts that are associated with identifiable
individuals as well as of pseudonymous
accounts[fn:trolls:A well-known Irish public figure
who offers commentary on many aspects of society
frequently posts messages on Twitter designed to
elicit angry responses. I describe this person as "A
master of the false equivalence". This is the classic
online trolling behaviour. Similarly, on the 18th
September 2019, a prominent UK journalist tweeted
personal details of a father who publicly challenged
UK Prime Minister Boris Johnson regarding the state
of the NHS. This was construed by many as a
deliberate trolling to inflict a measure of
unofficial retribution on the man.].
It should not be assumed that pseudonymous accounts
are created in order for the users to escape legal
consequences for criminal communications. There are
many reasons for maintaining a pseudonymous presence
online, some of which I have personally encountered:
- To protect against a physically abusive family
member
- To protect against an employer that monitors online
activities
- To engage online in a manner that deals with prejudices
(e.g. many respond to women differently than to men, to
people of a different religion or skin colour than to those
of the same religion or skin colour, etc.)
- To protect against action from their own governments whose
laws are less respectful of civil rights than we would think
Ireland's are.
It should not be assumed that a pseudonymous account has been
created for reasons of abuse or harmful communication. In fact,
there's good reason to assume that the significant majority of
pseudonymous accounts operate for completely innocent
reasons[fn:realnames:facebook excepted. However, facebook's
real-name policy is itself wrong, and does a great deal of
damage to people who have good reasons for their names not to
be associated with their online presences.].
- Question 12 :: Do other jurisdictions have statutory measures to
protect victim identities in cases of online
harassment being released online post-hearings,
etc?
+ Answer :: This submission is not offering any answer to this
question.
** CONSDONE Harmful online behaviour and young people
- Question 13 :: How do we most appropriately regulate social media
platforms to prevent cyberbullying and
inappropriate sharing of personal images?
+ Answer :: I refer you to the details of this submission.
- Question 14 :: For young people who participate in such online
behaviour as consensual image sharing, how can it
be ensured that they are not inadvertently
criminalised when legislation is enacted? What
safeguards can be put in place?
+ Answer :: This submission is not offering any answer to this
question.
- Question 15 :: Deputy Brendan Howlin's Private Members' Bill
provides that those under 17 should not be
fined/imprisoned but put into relevant education
or supports. Would these supports be part of the
same educational supports offered to all young
people/schools or would they be a separate entity?
Are current supports being utilised? Are there
sufficient resources to provide for such a
provision when enacted?
+ Answer :: This submission is not offering any answer to this
question.
* CONSDONTDO Answers to consultation questions :noexport:
** CONSTODO Strand 1 -- National Legislative Proposal
*** CONSTODO Question 1 -- Systems
- The legislation should state in an unequivocal manner that it is
not the role of web services to adjudicate on whether specific
user-uploaded pieces (text, videos, sound recordings, etc.) can
be considered harmful under the legislation. The law should make
it clear that where there is a controversy on this matter, the
courts will make such rulings.
- As regards a system, this submission would support a
  notice-counternotice-and-appeal approach. Such an approach
  affords the service operator and the accused party an
  opportunity to address the complaint before the complained-of
  material is taken offline. The following should be incorporated
  (a brief illustrative sketch of this flow appears at the end of
  this answer):
1) A notice to a service operator that a user-uploaded piece is
harmful should contain the following information:
- That the notice is being raised under this legislation
(citing section, if relevant).
- That the person raising the notice is the harmed party, or
that the person raising the notice is doing so on behalf,
and at the request, of the harmed party. Where the harmed
party doesn't want to be identified, the notice could be
raised on their behalf by someone else. However, totally
anonymous notifications under this legislation should not
be permitted, as it would not be possible to determine the
good-faith nature of the notice.
- The specific (narrowly tailored) definition of "harmful
content" in the legislation that is being reported.
2) A notice to the user who uploaded the complained-of material
regarding the complaint. This will allow the user to remove
the material, or to challenge the complaint. An opportunity
to challenge a complaint is necessary to forestall invalid
complaints that seek to have information removed that would
not be considered harmful under the legislation.
3) Adequate time periods for both the complainant and the
posting user to respond.
4) Where responses aren't forthcoming...
- ... if the posting user doesn't respond to the initial
complaint, the posting is to be taken down
- ... if the complaining user doesn't respond to the posting
user's response, the posting is left up.
5) Within a reasonable and defined period of time, the service
provider will assess the initial complaint, the
counter-notice, and the complainant's response to the
counter-notice, and will decide whether to take the material
     down or to leave it up, /citing clear reasons for the
decision./
6) Where either party is not happy with the decision, they can
appeal to the regulator, and if the regulator contradicts the
service operator's decision, the service operator must abide
     by the regulator's ruling. In considering the appeal, the
     regulator must be required to weigh the rights of both
     parties.
- Responsibilities and obligations of the service provider *must*
  relate to the size of the service. For example, it is not
  reasonable to require a response within a fixed period from a
  service that would not have anyone available within that
  period. Self-hosters or small, single-location operations would
  not be able to respond within an hour if a complaint is made at
  4am!
- This system should not apply to complaints that a posting
  violates the service's terms and conditions. If the complaint
  isn't explicitly made under this legislation, it should not fall
  within the regulator's remit. *Under no circumstances should
  merely violating a service's terms and conditions (or "community
  standards") be considered an offence under this legislation.*
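To make the proposed workflow easier to evaluate, the sketch below
expresses steps 1-6 as a simple decision procedure. It is a minimal
illustration only: the class names, fields, and the default of
leaving material up pending a reasoned decision are assumptions made
for this sketch, not features of any existing Bill or system.

#+begin_src python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Illustrative sketch only: all names and defaults are hypothetical,
# paraphrasing the notice-counternotice-and-appeal steps above.

class Outcome(Enum):
    TAKEN_DOWN = auto()
    LEFT_UP = auto()

@dataclass
class Notice:
    cites_legislation: bool       # step 1: raised under the Act itself
    complainant_identified: bool  # harmed party, or a named agent
    harm_definition: str          # the specific statutory definition

def process_notice(notice: Notice,
                   counter_notice: Optional[str],
                   complainant_reply: Optional[str]) -> Outcome:
    """Steps 1-5: validate the notice, then apply the defaults
    where one of the parties does not respond in time."""
    # Complaints not made explicitly under the legislation (e.g. mere
    # breaches of terms and conditions) fall outside this system.
    if not (notice.cites_legislation and notice.complainant_identified):
        return Outcome.LEFT_UP
    # Step 4a: the posting user never responded -- take it down.
    if counter_notice is None:
        return Outcome.TAKEN_DOWN
    # Step 4b: the complainant never responded to the
    # counter-notice -- leave it up.
    if complainant_reply is None:
        return Outcome.LEFT_UP
    # Step 5: both sides responded; the operator must now make a
    # reasoned decision, citing clear reasons. Pending that decision
    # the material stays up, reflecting the presumption that postings
    # do not offend against the law. Step 6 (appeal to the regulator)
    # sits outside this function.
    return Outcome.LEFT_UP
#+end_src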
*** CONSTODO Question 2 -- Statutory tests
The service operator should be protected from liability under the
rules if the service can show all of the following (expressed as a
simple checklist in the sketch after this list):
- That the initial complaint was responded to appropriately and
within a reasonable amount of time.
- That an appeal was responded to within a reasonable amount of
time.
- That the poster and complainant were each offered an opportunity
  to respond.
- That the responses, and any appeals, were given due
consideration.
- That the final decision (whether to keep the post up or pull it
down) was well-reasoned, and considered the context in which the
post was made.
- That, where appeals have been made to the regulator, the service
responds to any order from the regulator in a reasonable manner
and within a reasonable amount of time.
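Read together, these tests are conjunctive: failing any one of them
should cost the operator its protection. A minimal sketch of that
reading follows, assuming hypothetical field names that merely
paraphrase the tests above:

#+begin_src python
from dataclasses import dataclass

# Hypothetical field names paraphrasing the statutory tests above;
# none of these identifiers come from any actual legislation.

@dataclass
class ComplaintHandling:
    notice_answered_in_reasonable_time: bool
    appeal_answered_in_reasonable_time: bool
    both_parties_offered_response: bool
    responses_duly_considered: bool
    decision_reasoned_and_in_context: bool
    regulator_orders_followed: bool

def operator_protected(handling: ComplaintHandling) -> bool:
    """Liability protection survives only if every test is met."""
    return all((
        handling.notice_answered_in_reasonable_time,
        handling.appeal_answered_in_reasonable_time,
        handling.both_parties_offered_response,
        handling.responses_duly_considered,
        handling.decision_reasoned_and_in_context,
        handling.regulator_orders_followed,
    ))
#+end_src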
*** CONSTODO Question 3 -- Which platforms to be considered in scope
This submission is concerned to ensure that the legislation does
not assume that all affected platforms will be large, for-profit
organisations with scores, hundreds or thousands of staff acting
as moderators of user uploads.
The legislation should also not assume that platforms that want to
deal with user uploads *should* be of a particular nature or size.
To make either assumption would be to chill lawful interactions
between internet-connected parties, and would further entrench the
larger players on the internet.
*** CONSTODO Question 4 -- Definitions
- Please see my introductory comments on this matter.
- Definitions of "harmful content" must aim to be as narrow as
possible, in order to avoid the potential of the legislation
being used to target political speech.
- In respect of serious cyberbullying, such behaviour should be
  considered harmful content under the legislation not just when
  it targets a child, but also when the target is an adult, where
  the complaint states that the target is being harmed or fears
  harm should the complained-of behaviour continue.
+ In the event that the target of the cyberbullying is a public
figure, there should be an additional burden on the
complainant to state that the behaviour represents real intent
to cause harm, and is more than people with opposing political
or social views "shooting their mouths off".
** CONSTODO Strand 2 -- Video Sharing Platform Services
*** CONSTODO Question 5 -- What are video-sharing services
This submission is not providing an answer to this question.
*** CONSTODO Question 6 -- Relationship between Regulator and VSPS
This submission is not providing an answer to this question.
*** CONSTODO Question 7 -- Review by Regulator
The regulator should require the following reports to be published
by online services regarding complaints made under this
legislation (a sketch of the aggregation follows this list):
- Number of complaints, broken down by nature of complaint.
- Number of complaints that were appealed to the service, broken
  down by nature of complaint and basis of appeal.
- Number of appeals upheld, broken down by reason for appeal.
- Number of appeals rejected, broken down by reason for rejection.
- Number of complaints/appeals that were appealed further to the
regulator.
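To illustrate that such reporting need not be onerous even for a
small service, the sketch below aggregates the listed counts from a
set of complaint records. The record shape and category names are
invented for this illustration; the legislation and the regulator
would define the real breakdowns.

#+begin_src python
from collections import Counter
from dataclasses import dataclass
from typing import List, Optional

# Invented record shape: the field names only illustrate the report
# breakdowns listed above.

@dataclass
class Complaint:
    nature: str                          # e.g. "serious cyberbullying"
    appeal_basis: Optional[str] = None   # set if appealed to the service
    appeal_upheld: Optional[bool] = None
    appeal_reason: Optional[str] = None  # reason for upholding/rejecting
    escalated_to_regulator: bool = False

def transparency_report(complaints: List[Complaint]) -> dict:
    """Aggregate the counts a service would publish under this
    proposal."""
    appealed = [c for c in complaints if c.appeal_basis is not None]
    return {
        "complaints_by_nature": Counter(c.nature for c in complaints),
        "appeals_by_nature_and_basis": Counter(
            (c.nature, c.appeal_basis) for c in appealed),
        "appeals_upheld_by_reason": Counter(
            c.appeal_reason for c in appealed if c.appeal_upheld),
        "appeals_rejected_by_reason": Counter(
            c.appeal_reason for c in appealed
            if c.appeal_upheld is False),
        "complaints_escalated_to_regulator": sum(
            1 for c in complaints if c.escalated_to_regulator),
    }
#+end_src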
** CONSTODO Strands 3 & 4 -- Audiovisual Media Services
*** CONSTODO Question 8 -- "Content" rules for television broadcasting and on-demand services
This submission is not providing an answer to this question.
*** CONSTODO Question 9 -- Funding
RTÉ and its subsidiary services should continue to be funded by
the government, either through the licence fee, general taxation
or a mixture of both. RTÉ's editorial independence should be
re-iterated in this law (and strengthened, if required,
specifically to assure independence from the editorial demands of
advertisers). It should be anticipated that RTÉ will eventually
broadcast only over the internet, and that it will be both a
live-streaming service (e.g. providing programming in a manner
similar to its current broadcast schedule), *and* an on-demand
service.
Funding of services other than RTÉ should only be considered for
services operated by non-profit organisations such as trusts or
charities, and such funding should also come with an assurance of
editorial independence for the recipients.
** CONSTODO Strands 1 & 2 -- European & International Context
*** CONSTODO Question 10 -- Freedoms
- Core to the consideration of the legislation is that everyone
  posting to services is presumed to be innocent of an offence,
  and their postings should also be presumed *not* to offend
  against the law.
- Accusations of harm *must* be tested to determine if they are
being made to suppress legal speech. This is particularly true
where the person making the allegation is a public figure, or is
representing a public figure.
- Where a service applies -- or is required to apply -- sanctions
on users who repeatedly post harmful information, similar
sanctions should also be applied to users who repeatedly make
*false* accusations under the law.
*** CONSTODO Question 11 -- Limited liability
Any regulatory system that makes service providers liable for what
their *users* say on those services will result in one or a
combination of the following effects:
1) Services will stop permitting users to make postings.
2) Where the value of a service is wholly, or in part, that it
allows its users to post to it, the service may have to shut
down.
3) Services will be sued or prosecuted for the actions of their
   users *regardless* of the effort and good faith they put into
   "moderating" what is posted on their services -- a concept that
   is borderline ludicrous in the off-line world. This would be
   analogous to a car manufacturer being held liable for the
   consequences of car occupants not wearing their seat-belts.
There must be clarity in the regulations that a service is
protected as long as it acts in good faith to deal with postings
made by its users that are determined to have been
illegal. This reflects Ireland's obligations under various trade
agreements to grant safe-harbour protections to internet services.
The regulation must also protect platforms and their users against
bad-faith accusations of harm, particularly from public
figures. If it is easier to use an accusation of "harmful content"
than to claim libel, public figures will use that facility to
suppress information they would like not to be known.
** CONSTODO Strands 1-4 -- Regulatory Structures
*** CONSTODO Question 12 -- Regulatory structure
This submission is not providing an answer to this question.
*** CONSTODO Question 13 -- Funding of regulatory structure
This submission is not providing an answer to this question.
** CONSTODO Strands 1 & 2 -- Sanctions/Powers
*** CONSTODO Question 14 -- Functions and powers
This submission is not providing an answer to this question.
*** CONSTODO Question 15 -- Sanctions
The following should be taken into account when considering
sanctions on platforms:
- The nature of the operation
+ Large, global, profit-based private organisations providing
services to the general population. (examples include YouTube,
Facebook, Twitter).
+ Smaller, local, profit-based private organisations providing
services to the general population, focused on the region
(examples might include boards.ie, everymum.ie, etc.)
+ Small, non-profit forums set up by locally-based and -focused
organisations such as soccer clubs, or school parents'
associations[fn:useFacebook:There is often the temptation to
advise these organisations to use larger platforms like
Facebook or Google. Some organisations may not want to avail
of those services, and the reasons for this are not
relevant. What's important is that deciding not to use these
platforms is valid, and these decisions should be protected
and encouraged, not inhibited.]
+ Individuals, hosting their own platforms.
- The good-faith efforts of the operation to respond to
accusations of harm.
- The capacity of the service to respond -- smaller operations
  can't afford 24-hour monitoring to respond to such accusations,
  and the law should not require it. Such services should also be
  protected against bad-faith actors seeking to interfere with
  their operations by overwhelming them with false accusations of
  harm that each need to be dealt with.
- Who the accuser is -- public figures should be prevented from
using accusations of "harmful content" to remove information
that is merely critical of them or their behaviour.
*** CONSTODO Question 16 -- Thresholds
This submission is not providing an answer to this question.