Title: YouTube's continued commitment to conflation
Post by: kat on August 08, 2017, 11:41:49 PM


In "[a]n update on our commitment to fight terror content online (https://youtube.googleblog.com/2017/08/an-update-on-our-commitment-to-fight.html)", YouTube has posted another blog clarifying the companies policies towards Demonetization (demonetisation) (https://www.katsbits.com/smforum/index.php?topic=925.0) and Controversial Content (https://www.katsbits.com/smforum/index.php?topic=934.0). Given the title, "fight[ing] terror[ism] online", the context of the discussion is clear, to developer policies and procedures that better "identify and remove violent extremism and terrorism-related content" that often centres around "hate speech and violent extremism", or "hate speech, radicalization, and terrorism ... [as]... used to radicalize and recruit extremists", all of which are clear violations of YouTube's Terms of Service (https://www.youtube.com/t/terms) (cf. 7.5, 7.6, 7.9) and their Community Guidelines (https://www.youtube.com/yt/policyandsafety/en-GB/communityguidelines.html) (cf. "Hateful content (https://support.google.com/youtube/answer/2801939)", "Threats (https://support.google.com/youtube/answer/2801927)", "Violent or graphic content (https://support.google.com/youtube/answer/2802008)", "Harmful or dangerous content (https://support.google.com/youtube/answer/2801964)")[^].

At face value the policies as they stand essentially appear to align with current legislation (https://www.katsbits.com/smforum/index.php?topic=846.0) governing threatening (https://www.katsbits.com/smforum/index.php?topic=899.0) or hateful behaviour or conduct (https://www.katsbits.com/smforum/index.php?topic=885.0), or anything ostensibly in support of or promoting terrorism[1]. In other words, Users ought not to be posting content otherwise considered objectively material to a criminal offence, and anyone doing so can have their content removed and/or their account suspended, without warning ("notice"), on both counts (cf. ToS 7.8 (https://www.youtube.com/t/terms))[2].

Under normal circumstances this would be more than adequate: where it appears an individual is actually threatening another person, group or establishment with harm, and not just saying "mean words on the Internet (https://www.google.co.uk/search?site=&source=hp&q=%22mean+words+on+the+internet%22&oq=%22mean+words+on+the+internet%22)" or making poorly worded or ill-advised jokes or comments, or is posting material intended to be used to cause harm, bomb-making tutorials for example, the appropriate action can be taken, and the authorities involved where necessary, to investigate whether there is genuine and real cause for concern. Should nothing come of such investigations, content and accounts could then be reinstated (at YouTube's discretion of course[2]).

But this isn't about being reasonable (or wanting a "reasonable discussion") from the consumer's point of view (video uploaders and those watching), especially given the amount of flak YouTube has received of late from a number of European (http://www.independent.co.uk/news/world/europe/germany-fake-news-fine-facebook-twitter-youtube-social-networks-50-million-euros-illegal-posts-a7629306.html) Governments (https://www.theguardian.com/technology/2016/dec/17/german-officials-say-facebook-is-doing-too-little-to-stop-hate-speech) and the European Parliament (https://www.katsbits.com/smforum/index.php?topic=933.0) in particular, who allege the corporation isn't doing enough to suppress certain types of speech, or isn't removing 'offending content' fast enough, especially material critical of government activities.

With this in mind, YouTube is in fact acting on what they're being told to do rather than risk losing access to those markets. Although one or two regions might not seem too big a deal given YouTube/Google's monopolistic bravado, the concern for them would be the 'domino effect' this might cause: once one goes they all might go, as soon as Governments and Regimes realise they can use local law to shutter the service, something that's useful to any Government wanting to stop an opposition rising - the policies of today would have shuttered yesterday's protests; imagine the likelihood of the Arab Spring without the Internet.

In that context YouTube has to respond by developing and implementing hastily thought-out, ill-conceived policies ostensibly policed not by YouTube itself but by vested third parties - their way of paying lip service to impartiality; if harmful content remains, it would be because the "experts" didn't properly advise them[3].

This naturally leads to political bias: when the so-called "experts" are self-appointed political advocates[4] and not content analysts, their assessments are based entirely on their own predilections and hunger to be taken seriously, or at least to have the issues they tout taken seriously. Only then does it become possible, acceptable even, to target 'conservative' viewpoints on any given platform, when service providers and the experts moored to them have 'progressive' leanings[5]. Given the ability to develop or advise on policies that remove harmful content, their doing so can only be wholly partisan towards their own politics rather than the development of more useful universal 'rule sets' that benefit, and apply to, everyone equally.
Quote
"... if you want to test a wo/man's character, give [them] power".


Footnotes:
[^] Through legal Counsel, content can be removed from YouTube where it's found to be defamatory - "Defamation Complaint (https://www.youtube.com/reportingtool/defamation)" - or where personal information has been exposed, through the "Privacy Complaint Process (https://support.google.com/youtube/answer/142443?hl=en-GB)".

[1] The Courts' test for 'hate' or 'threats' typically requires an impartial person, given all the facts, reasonably concluding there to be a genuine fear or concern for an individual's safety or well-being. See for example the Patriot Act (https://www.gpo.gov/fdsys/pkg/BILLS-107hr3162enr/pdf/BILLS-107hr3162enr.pdf) in the USA, or the UK's Anti-terrorism, Crime and Security Act 2001 (http://www.legislation.gov.uk/ukpga/2001/24/schedule/5).

[2] It's important to note that YouTube is under no legally binding obligation to actually do anything about content that violates their content policies; doing so, should they decide to, is discretionary - "YouTube reserves the right (but shall have no obligation) to...".

[3] Whilst YouTube/Google can be held accountable for their service (they can be sued or fined (http://www.bbc.co.uk/news/technology-40444354?ocid=socialflow_twitter)), the same cannot be said of the content "experts" advising them, who are wholly unaccountable to anyone but perhaps their members (their executive boards, not their subscribers and donors). In other words, YouTube receives all the flak for poorly implemented decisions instead of the 'expert' panel advising on the policies used.

[4] YouTube's content "experts" are "select contributing member[s] of YouTube's Trusted Flagger program (https://www.adl.org/news/press-releases/adl-applauds-google-and-youtube-in-expanding-initiative-to-fight-online-hate)", part of the YouTube Contributor (https://support.google.com/youtube/answer/7124236) programme ("YouTube Heroes" and "Trusted Flagger"), more likely selected for their agreeable politics and rhetoric as much as for their expertise, bypassing the normal application and selection process.

[5] It's difficult to ascertain actual versus perceived instances of bias, i.e., whether the alleged attacks on conservatives are real or not. A number of studies, research papers and investigations do indicate bias in other areas (e.g. "Lackademia: Why do academics lean left? (https://www.adamsmith.org/research/lackademia-why-do-academics-lean-left)", "The Institutionalization of Ideology in Sociology (https://heterodoxacademy.org/2017/01/12/the-institutionalization-of-ideology-in-sociology/)", "Social media for large studies of behavior (http://science.sciencemag.org/content/346/6213/1063.summary)", "The Political Environment on Social Media (http://www.pewinternet.org/2016/10/25/the-political-environment-on-social-media/)", "Politics on Social Networking Sites (http://www.pewinternet.org/2012/09/04/politics-on-social-networking-sites/)", "Twitter Reaction to Events Often at Odds with Overall Public Opinion (http://www.pewresearch.org/2013/03/04/twitter-reaction-to-events-often-at-odds-with-overall-public-opinion/)"), but without the numbers, bias can only be implied or suggested.