
Bethesda CreationClub or paid mods, again

June 12, 2017, 08:03:29 PM by kat

[image courtesy CreationClub]

Bethesda introduced CreationClub at this year's E3 and has taken steps to reiterate that the new initiative is NOT "paid mods" (#notpaidmods).
Is Creation Club paid mods?
No. Mods will remain a free and open system where anyone can create and share what they’d like.
So just what is CreationClub then?
Creation Club is a collection of all-new content for both Fallout 4 and Skyrim. It features new items, abilities, and gameplay created by Bethesda Game Studios and outside development partners including the best community creators. Creation Club content is fully curated and compatible with the main game and official add-ons.
It appears then that CreationClub is not just a new iteration of Paid Mods, at least not directly. And although paid content didn't work out quite so well the last time it was tried via Steam, it looks like Bethesda nonetheless learned a few things from the experience, namely that gamers;
  • do want new content, and more frequently.
  • are more than happy to pay for it (subject to the next point).
  • don't want to pay for previously free content (being especially leery of 'leeches'[1]).
  • want guarantees that co-dependent mods work properly.
  • want assurance that copyright/authorship won't be an issue[2].
The only way to resolve a lot of these questions is for publishers to 'curate' or 'manage' User Generated Content to some degree. This is what Bethesda appears to have done with CreationClub; in limiting participation to a select few proven developers (individuals or otherwise)[3], Bethesda is able to provide (premium/paid) content at an accelerated rate that works (subject to the usual caveats about games and content ordinarily working), absent ownership issues, whilst simultaneously leaving freely available community mods untouched.
Most of the Creation Club content is created internally, some with external partners who have worked on our games, and some by external Creators. All the content is approved, curated, and taken through the full internal dev cycle; including localization, polishing, and testing. This also guarantees that all content works together.

Further Reading
- Paid Mods and why they don't work

[1] a big concern over paid mods was the potential it carried for the more unscrupulous to 'leech' from the community by simply reselling, without change, mods and content they had no hand in authoring, and the difficulties this subsequently presents from a remedial perspective (how to hold 'anon' individuals to account for misappropriation).

[2] from a creator's point of view, anytime publishing content involves money it becomes subject to abuse. Aside from concerns over content leeching, Copyright and Authorship can be difficult to prove, to such an extent that production can be stymied by having to constantly deal with disputes over Copyright or ownership.

[3] participants in the program are likely treated as though they are commercial content developers, likely required to provide identifying information so they can be held to account should there be issues, especially so given the fact that money is involved.

YouTube Clarifying Demonetization & Controversial Content

June 03, 2017, 02:11:36 AM by kat
YouTube demonetization isn't censorship

Following on from the previous post on the topic of YouTube using demonetization as a form of 'soft' censorship, it appears, at least from a public-facing perspective, that Google and YouTube are indeed demonetizing controversial topics simply as a means to "restore advertiser confidence" - notwithstanding that this implied 'lack' of confidence is being used by certain corporations and advocacy groups as leverage against controversial topics and their political opposition (advertisers have always had the ability to 'block' certain content against which their adverts might appear). But that's by-the-by, as the real meat of the matter is the clarification on what Google/YouTube consider "controversial" content;

Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual’s or group’s race, ethnicity, or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.

Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.

Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.

It's clear from this that YouTube/Google are placing more of their eggs into the YouTube TV and "family-friendly" content basket (as is their 'right'), a move obligating them to tone down or obfuscate troubling material so they can comfortably court the big networks (Disney, et al). In other words, they are shifting away from the politics of being the "platform for all voices" to one that's 'safe', more unified and neutered in outlook, at least from the outside.

For creators with contrarian or controversial politics, points of view, or axes to grind, YouTube has made it quite clear that whilst such content is still welcome, it won't be promoted or easily monetised. It's up to Creators to decide what to do with this in mind, as YouTube's loss of revenue, which prompted this change, isn't going anywhere.

Further Reading
- EU Commission & Restricting YouTube for the Public Good
- YouTube (Google), demonetization and censorship
- Illegal Hate Speech, the EU and Tech
- Improving Content ID for creators
- Twitters Trust & Safety Council and "free expression"
- Free Speech & Expectations of Privacy on Social Media

EU Commission & Restricting YouTube for the Public Good

May 27, 2017, 12:35:14 AM by kat

Under the newly established (legislatively proposed) regulatory group, the "European Regulators Group for Audiovisual Media Services (ERGA)"[1], the European Commission is to grant itself oversight over 'audio-visual' services like YouTube, or those capable of providing similar/comparable services (social media sites like Facebook, Twitter et al), such that content can be restricted "in the public interest" in a way that would otherwise be contrary to the protections previously afforded them through the EU's equivalent of US 'safe harbor' laws. In the European Commission doing this, YouTube & Co. are essentially released from the liabilities that might otherwise apply to the provision of service were content not restricted; the European Commission is essentially saying "you can have your precious 'freeze peach' but you will be liable for the content served (you American/English pig-dog)!".

In other words, YouTube et al can have their 'freedom of speech' to provide content to consumers as they see fit (subject to User ToS compliance), but they will be wholly liable for content deemed 'offensive' and 'objectionable'. Or... they can allow the EU Commission regulatory authority over content/their respective services and as a result (essentially) be granted a liability exemption. In either case Service Providers are still obligated to address complaints, especially where they concern "hate speech"[2].

Video-sharing platform services provide audiovisual content which is increasingly accessed by the general public and in particular by young people. This also applies to social media services that have become an important medium to share information, entertain and educate, including by providing access to programmes and user-generated videos ... Furthermore they also have a considerable impact in that they facilitate the possibility for users to shape and influence the opinions of other users. Therefore, in order to protect minors from harmful content and all citizens from incitement to hatred, violence and terrorism, it is reasonable to require that these services should be covered by this Directive (emphasis added). [pg.5]

Additional Reading
- Illegal Hate Speech, the EU and Tech ("Code of Conduct on Countering Illegal Hate Speech Online")
- Freedom of speech ends where threats abound ("Violence against Women & Girls: on gender equality and empowering women in the digital age")
- Consultation on Interim Revised CPS Guidelines on Prosecuting Social Media Cases
- Draft Investigatory Powers Bill
- More Police interested in harassment as hate
- Harassment of women now a "hate crime"
- UK Government pushed to consider "sexism" rating for games
- Twitters Trust & Safety Council and "free expression"
- Convention on the Elimination of All Forms of Discrimination against Women

[1] The European Commission ostensibly self-authors the establishment of regulatory bodies that are, as in this instance, typically only accountable to the Commission itself (they are not specifically accountable to EU member Countries although the Commission is, in principle, supposed to be representative).

[2] The Directive proposal appears to be the consequence of an earlier ERGA report on "Protection of Minors in the Audiovisual Media Services: Trends & Practices" - "The report focuses on the tools currently being used by the audiovisual media service providers to help parents to protect children from content that may be unsuitable or potentially harmful to their development or overall well-being. By outlining the types of measures with concrete examples from the representative sample of the audiovisual media providers active in various EU member states, it is laying the foundations for further ERGA activities with the aim of fostering cooperation among stakeholders to protect children in the audiovisual media environment".

Code of Conflict or the complicated Ethics of VR/AR/MR

May 06, 2017, 01:40:44 AM by kat

Subtitle: "the toxic infiltration of politicised "Code of Conduct" documents in open source communities - one 'covenant' to rule them all".
The following should not be construed as legal or otherwise formal advice. Where appropriate consult a suitably qualified contract, business or legal representative for assistance.
Keeping Users 'safe' in VR isn't a new issue; the games, interactive and social media industries have long wrestled with the problem with varying degrees of success, which, at the end of the day, ostensibly hinges on Terms of Service violations rather than perceived behavioural transgressions. In other words, when an individual is reported for harassing or abusing another, they are not reprimanded for the actual abuse or harassment, but instead for the terms of service violation this constitutes[1].

It's important to understand the distinction here: developers are bound by service agreements unless there are broader violations of societal law involving the service itself, trafficking credit-card information for example. In this sense internet harassment or abuse are not 'crimes' from a service provider's point of view (despite media rhetoric on the matter); as such, this makes perceived abuses and harassment of the individual the domain of the individual - it is they who are obliged to prosecute in the absolutist legal sense[2], subpoenaing the provider for records[3] where necessary. Essentially, outside service agreement violations, service providers cannot offer, provide or imply remedial punishments for subjectively perceived criminal behaviour; they can only record incidents and reprimand Users based on what's defined by the User Agreements.

Where potentially 'criminal' or 'offensive' activities do occur, the service provider's obligation is typically to the security of the service rather than the User, at least to the degree the provider ensures 'offensive' (in the criminal sense of an 'offense' having occurred), illegal or criminal activities don't affect Users as a direct consequence of service provision. Even then, if the service somehow facilitates the theft of personal data for example, they are generally indemnified, another binding term typically included in their respective service agreements. In totality this means it's important Users read them.

With all this in mind, the current pathological anxiety about who to blame for bad in-game, in-VR experiences has advocates, activists, acolytes and supporters fronting politicised Code of Conduct[4] policies as a solution to VR's "ethics" problems and the general toxicity of gaming and the internet, missing the point entirely; Codes of Conduct are not politicised manifestos or the domain of thinly disguised progressive politics, they are functional, binding documents backed by the force of (contract) law[5].

For business this means these generic documents are not worth the trouble they represent, more so when their respective authors are rarely if ever held to account or found responsible for fallout from such ill-conceived, poorly defined, politically driven policies[6]. The use of boilerplate Terms of Service agreements, User Agreements, Codes of Conduct, or other generic, third-party 'rule' or 'policy' documents should be avoided because there are just too many statutory risks and unintended legal consequences to not being in full ownership of a given service and any accompanying legal documents[7].

Further Reading
- 50% of women are misogynists.
- Consultation on Interim Revised CPS Guidelines on Prosecuting Social Media Cases.
- Harassment of women now a "hate crime".
- More Police interested in harassment as hate.
- Violence against males in games doesn't count... another study that 'proves' it.

[1] there's a reason why "by clicking "submit" you indicate agreement to our Terms of Service" exists.

[2] barring the obvious, it's up to the individual to determine whether another's behaviour is actionable. Once that has been determined, and law enforcement is involved, pursuit is up to the individual. This is notwithstanding the fact that there are strict evidentiary tests in place to determine whether the accused is 'shit-posting' versus being genuinely harassing. In the UK for example the Crown Prosecution Service has this to say; "[a] communication sent has to be more than simply offensive to be contrary to the criminal law. Just because the content expressed in the communication is in bad taste, controversial or unpopular, and may cause offence to individuals or a specific community, this is not in itself sufficient reason to engage the criminal law".

[3] this is why it's crucial the first port of call is to report any incidents of harassment or abuse so there is a record, irrespective of whether anything is done about the incident - these records form the basis upon which a criminal case can be built. Unfortunately too many advocacy and activist groups advise victims not to waste their time doing this, wholly missing the point of why victims should make or file reports. Anyone advocating this should be resoundingly ignored.

[4] the most popular Code of Conduct used in FoSS and OSS communities is the "Contributor Covenant". It is a political document whose author(s) appear to have little or no grounding in Law, Business or respective Contracting.

[5] the code of conduct documents at the heart of this discussion are not community aids; their point is exactly political leverage, to gather "allies", "advocates" and "activists", collateral and agents willing to spread the politics the documents espouse and/or endorse. They are not intended to be used in any formal binding sense, which is why their authors won't allow themselves to be held accountable for the fallout from their use (cf. 6 below).

[6] on the (thankfully) rare occasions something does happen, the Code of Conduct authors either disappear, become unreachable, or in rare instances issue pithy Twitter tweets or Facebook messages absolving themselves of any wrong-doing, insisting the project or business wasn't forced to adopt the Code of Conduct - glossing over the social-ostracising and shaming tactics typically employed by acolytes and supporters in the press and across social media to (almost universally passive-aggressively) coerce compliance. In other words they never, or only very rarely, admit fault. This puts all the onus and legal consequences of a problem squarely in the hands of the project or business that employed someone else's conduct policies whilst having little inkling as to the author's intent behind the language used.

[7] for business, the danger of using third-party user agreements that have not been specifically drawn up by legal counsel to match the service provided, be they the types of Codes discussed above or not, is ostensibly two-fold: 1) the implied transfer of liability (real or not), and 2) the lack of ownership it presents over whatever service is being provided. Both have potentially actionable consequences for business owners.

MarkMonitor, AWS and site scanning abuse

April 22, 2017, 10:36:26 PM by kat

[image courtesy Amazon]

The last time MarkMonitor was mentioned here on KatsBits was back in 2011, when their aggressive BOT was discovered to be consuming a disproportionate amount of bandwidth to scour the entire server KatsBits ran from. Scrapers, snoopers and other types of BOT that intentionally ignore robots.txt whilst mooching around a website aren't normally a problem because they are often indexing content for custom-built search engine products (the fact they do this is for another conversation). What's special about MarkMonitor's BOT however, is its offensive (meaning "preemptive", "active") aggressiveness; it simply does not care how much bandwidth is consumed as it moves through a target website like a bull in a china shop, to the extent that bandwidth averages can be significantly different after their BOT has paid a visit - especially troubling for image-heavy websites.
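For context, robots.txt is an entirely voluntary protocol: a well-behaved crawler checks it before fetching a URL, while an abusive one simply skips the check. A minimal sketch of what that check looks like, using Python's standard robotparser module (the robots.txt rules and bot name here are hypothetical examples, not MarkMonitor's actual configuration):

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt asking all crawlers to stay out of /images/
ROBOTS_TXT = """\
User-agent: *
Disallow: /images/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant crawler performs this check before every fetch;
# a bot that "intentionally ignores robots.txt" never calls it.
print(rp.can_fetch("ExampleBot", "/index.html"))        # True
print(rp.can_fetch("ExampleBot", "/images/photo.png"))  # False
```

The point being that nothing enforces this check; compliance is purely a courtesy on the crawler's part, which is why server-side blocking is the only real recourse.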
Long story short, MarkMonitor are a "global leader in brand protection". Big brands task them to paparazzi their way around the internet looking for brand infringement ("paparazzi" because like that particular beast, they intentionally ignore common protocols to do what they do). They're not specifically looking for Copyright violations so much as broader 'brand' abuse they can take action against.

Back then MarkMonitor used to serve their brand tracking/investigation BOT from their own IP address, making it relatively straightforward to block its bandwidth abuses. Now however, MarkMonitor uses Amazon Web Services as a third-party content distribution system to offset their own bandwidth use and, more importantly, obfuscate their presence in the scanning and network abuse the bot is engaged in. The nefarious nature of this latter point cannot be stressed enough, regardless of how it might be argued (justified).

What this now means for webmasters versus perhaps five or so years ago, is that abuse logs simply reference IP addresses associated with AWS server instances instead of MarkMonitor's own domain name/IP (e.g., markmonitor.com/209.200.xxx.xxx). In other words, at face value it's slightly more difficult to trace the abuse back to the abuser, a fact that, for them, reduces their liabilities.

What's more, whilst these abuse instances can be reported to Amazon using their EC2/AWS abuse reporting system (or by directly mailing ec2-abuse@amazon.com), there is little assistance for those caught in Sauron's MarkMonitor's glare (their network abuse has been an ongoing problem for KatsBits for the better part of 10 years). Even then, if abuse is found to have occurred, Amazon simply reiterates privacy policy prohibitions preventing the revelation of pertinent information about the abuser and what they were/are doing. Fortunately they don't need to, as there are plenty of other ways to find this out. But that's by-the-by.

To get an idea of the extent of the abuse perpetrated by MarkMonitor, below is a list of the most recent instances of AWS abuse traced back to MarkMonitor, a few from a list of hundreds reported to Amazon this month (caveat: the nature of AWS means that whilst the addresses listed below currently resolve to MarkMonitor, they may be dynamically reassigned to another entity at some point in the future - when in doubt perform a "reverse lookup" to see what's at the end of the rainbow before then reporting the suspicious activity to Amazon so a record exists);
  • ec2-34-209-69-182.us-west-2.compute.amazonaws.com
  • ec2-34-209-175-241.us-west-2.compute.amazonaws.com
  • ec2-54-70-139-144.us-west-2.compute.amazonaws.com
  • ec2-52-39-89-248.us-west-2.compute.amazonaws.com
  • ec2-34-209-98-91.us-west-2.compute.amazonaws.com
  • ec2-54-148-122-132.us-west-2.compute.amazonaws.com
  • ec2-52-35-141-43.us-west-2.compute.amazonaws.com
  • ec2-54-68-155-195.us-west-2.compute.amazonaws.com
  • ec2-54-154-207-210.eu-west-1.compute.amazonaws.com
  • ec2-52-27-158-160.us-west-2.compute.amazonaws.com
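As an aside on performing that reverse lookup: EC2 public hostnames conventionally embed the instance's public IPv4 address (ec2-A-B-C-D.region.compute.amazonaws.com corresponds to A.B.C.D), so the address can usually be read straight off a log entry before confirming with a proper DNS lookup (e.g., dig -x or host). A small offline sketch of that extraction (the function name is illustrative, not a standard API):

```python
import re
from typing import Optional

def ec2_hostname_to_ip(hostname: str) -> Optional[str]:
    """Extract the IPv4 address conventionally embedded in an AWS EC2
    public hostname (ec2-A-B-C-D.<region>.compute.amazonaws.com -> A.B.C.D).
    Returns None if the hostname doesn't follow that pattern."""
    m = re.match(r"ec2-(\d{1,3})-(\d{1,3})-(\d{1,3})-(\d{1,3})\.", hostname)
    return ".".join(m.groups()) if m else None

# One of the hostnames from the list above:
print(ec2_hostname_to_ip("ec2-34-209-69-182.us-west-2.compute.amazonaws.com"))
# 34.209.69.182
```

Because instances come and go, the extracted address should still be verified with a live reverse lookup before filing a report, as the caveat above notes.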
Discovering all this is one thing. Knowing what to do with it is another. At the very least some pointed and pertinent questions need to be asked of MarkMonitor:
- Why do they ignore robots.txt (beyond "bad people can block our bots")?
- Why are they so aggressive in pursuit of protecting managed brands?
- Why do they persist when no evidence of brand infringement is discovered?
- Why do they not have an ABUSE policy in place?
- Why do they obfuscate their scraper/scanner/bot?
- and more...