
What Matters?

CHAPTER THREE

Parable of Value 2: Digital Disruption / There’s an App for That!

Disruption is a loaded word. But it is also an empty signifier, hollowed out by misuse and overuse, consumed and regurgitated by corporations hungry for the next slick management term. As we go to press, happenings in this field are fluid. It is too soon to tell how the #metoo movement and the 2018 Facebook/Cambridge Analytica data-sharing scandal (and the algorithmic reality underpinning it) will affect our lives long-term. Digital disruption, however, means something more particular. Wikipedia (a senior member of the Digital Disruptors’ Club) says that in the field of business the term refers to ‘an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market leading firms, products and alliances’.44 The term was introduced by Clayton Christensen in a 1995 Harvard Business Review article and popularised in his 1997 book The Innovator’s Dilemma. But it has mutated in usage, as terms tend to do. In the NPR program ‘All Things Considered’, Kevin Roose points out: ‘these days [disruption]’s used to sort of mean cool … [and] anything that’s sort of vaguely new or interesting’.45 The word ‘digital’ needs some investigation too. It is just as ubiquitous, if seemingly less controversial.

What does ‘digital disruption’ promise and/or threaten for arts and culture? There are a variety of opinions on this. In the 2016 Brian Johns lecture, Julianne Schultz, one of Australia’s leading public intellectuals, called for action to protect Australian culture from a suffocating globalisation driven by digital production, consumption and dissemination mechanisms. For Schultz, in the age of the FAANG (Facebook, Apple, Amazon, Netflix and Google), ‘we’re all global citizens, which threatens to make national cultural institutions both more vulnerable, but also more important than ever’.46 For most people, digital disruption refers to the idea that the digital provides new ways of doing things, including cultural things, that upend traditional ways of creating and participating, of making and sharing. Culture was always going to be a key site for this upheaval. Media scholars the world over are busy conceptualising what Netflix means, or Spotify, or Google Books. The shift from consuming single episodes to binge watching, for example, changes the experience of television drama. But digital disruption also changes our understanding of value. Value, it is claimed, is now to be found not so much in the content as in the curation, in the infrastructure that allows for discovering, queuing, sharing and favouriting (an interesting neologism, that one). It’s in the convenience, in the way the service fits into our way of life. It’s in the platform.

Platform versus Stuff

Let’s take a quick tour around the major digital disruptors. Netflix originally grew out of a video and DVD postal service. Its catch-phrase was ‘no late fines, ever’ – revolutionary for people for whom getting to a video store was a problem. This is a classic fable of modern entrepreneurship and marketing: address a real but relatively minor issue, and wipe out an industry because your service is more convenient to use. A lot of disruption for a little improvement. Now Netflix creates its own original content, using its algorithmic knowledge of viewing habits to direct production budgets. It has sold its services to Australians – once hailed as the biggest illegal downloaders in the world – off the back of the argument that we will be happy to pay for movies and television shows if quick and convenient access is provided to allow equal(ish) participation with US and UK audiences. Netflix uses a collaborative filtering method to generate recommendations, including a star system that asks users to vote on programs in its catalogue. It compares users’ viewing histories to predict the percentage likelihood that a user will enjoy a particular title, offering recommendations filtered for recency and other less visible factors, such as which titles it is actively promoting. David Beer calls algorithms the ‘decision-making parts of code’, and in that sense they clearly have an inherent power to manipulate.47
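To make the mechanism concrete, here is a minimal sketch of user-based collaborative filtering of the kind described above – illustrative only, since Netflix’s actual system is proprietary and vastly more elaborate. All users, titles and ratings below are invented:

```python
# Toy user-based collaborative filter: predict how much a user will
# like an unseen title from the ratings of similar users.
from math import sqrt

ratings = {  # invented viewing histories
    "alice": {"The Crown": 5, "Dark": 4, "Mindhunter": 1},
    "bob":   {"The Crown": 4, "Dark": 5, "Narcos": 2},
    "carol": {"Dark": 1, "Narcos": 5, "Mindhunter": 5},
}

def cosine(u, v):
    """Cosine similarity over the titles two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = sqrt(sum(u[t] ** 2 for t in shared))
    norm_v = sqrt(sum(v[t] ** 2 for t in shared))
    return dot / (norm_u * norm_v)

def predict(user, title):
    """Similarity-weighted average of other users' ratings for `title`."""
    weighted, total = 0.0, 0.0
    for other, their in ratings.items():
        if other == user or title not in their:
            continue
        w = cosine(ratings[user], their)
        weighted += w * their[title]
        total += w
    return weighted / total if total else None

# Likelihood-style score: how might Alice rate 'Narcos'?
print(predict("alice", "Narcos"))
```

Because Alice’s history most resembles Bob’s, his lukewarm rating of ‘Narcos’ dominates the prediction – which is the whole, and the whole problem, of the approach.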

Spotify, the music streaming service, grew out of a response to file-sharing practices, capitalising on the early failure of the less-than-legal Napster, established in 1999 as a peer-to-peer file-sharing platform. Where Napster had no connection with the artists whose work it distributed, Spotify paid royalties to its musicians, albeit insultingly low ones. Music lovers rejoiced to find a convenient and responsible way to listen to old favourites and discover new ones. For many of its users, it is Spotify’s recommendation engine that makes the subscription fee attractive. Again, this engine employs algorithms that note what you are adding to your playlists, what you are listening to and, crucially, what you are skipping over, to shape a suite of 30 new songs, a customised mixtape ‘for your listening pleasure’, once a week.

There are many different versions of recommendation engines, employing different approaches to the ‘value-add’ role of curation or discovery. Think about Amazon’s prompt: ‘people who bought this also bought …’. Sometimes it’s useful, sometimes it’s hilariously dumb. It’s a crude system relying on the punt that similar-seeming customers will have similar interests. When Spotify’s metadata style guide was leaked in 2015,48 it revealed the usual technical advice: how to deal with different or non-standard spellings of a name; how to account for creative roles (including producers and lyricists, as well as performing artists); the problem of remastered releases; the categorical distinction between a single, an EP, an album, a compilation; and so on. But in doing so it also released a lot of less innocent information about Spotify’s techniques for generating recommendation lists.
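The crude co-purchase punt mentioned above can be sketched in a few lines: count how often items appear in the same basket and surface the most frequent companions. The baskets and items here are invented, and Amazon’s real system is of course far more elaborate:

```python
# Toy 'people who bought this also bought...' recommender:
# count co-occurrences of items across purchase baskets.
from collections import Counter
from itertools import combinations

baskets = [  # invented purchase baskets
    {"guitar strings", "capo", "tuner"},
    {"guitar strings", "tuner"},
    {"capo", "songbook"},
    {"guitar strings", "songbook", "tuner"},
]

co_counts = {}  # item -> Counter of items bought alongside it
for basket in baskets:
    for a, b in combinations(basket, 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def also_bought(item, n=2):
    """Items most often bought alongside `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(n)]

print(also_bought("guitar strings"))  # e.g. ['tuner', 'capo']
```

The sketch makes visible why the results can be ‘hilariously dumb’: co-occurrence knows nothing about why two items were bought together.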

Pandora is Spotify’s best-known antecedent, though there is also Last.fm, and Apple Music is currently seeking a stronger market foothold. Pandora’s curation depends on tagging music by attributes. Its Music Genome Project ‘captures the essence of music’ by reducing music to 450 attributes, or ‘genes’, via an in-house team of musicologists.49 These musicologists listen to 20–40 seconds of a song then attach metadata, a list of relevant attributes, to classify it. Sub-genomes determine the fields to be populated (a folk music song will generate a different set of possibilities to swing or heavy metal). Some attributes capture things that can be measured precisely: beats per minute, use of particular harmonies or instruments, etc. Other traits are less objective, such as ‘musical influence’ or how dominant a rhythm is or the intensity of a track. There is training for this, calibration, peer review. But in the end it is what it appears to be: personal judgment.
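A toy version of this genome-style curation might look like the following: each track is a vector of attribute scores, and ‘similar’ music is whatever sits nearby in that space. The attributes and scores here are invented, not Pandora’s actual genes:

```python
# Genome-style sketch: tracks as vectors of hand-tagged attribute
# scores (0-1); 'similar' tracks are nearest neighbours in that space.
from math import sqrt

# attribute order: [tempo, vocal_intensity, acoustic, minor_key, syncopation]
tracks = {
    "folk_ballad":  [0.3, 0.4, 0.9, 0.6, 0.2],
    "swing_number": [0.7, 0.6, 0.5, 0.2, 0.8],
    "metal_anthem": [0.9, 0.9, 0.1, 0.7, 0.5],
    "quiet_waltz":  [0.2, 0.3, 0.8, 0.5, 0.3],
}

def distance(a, b):
    """Euclidean distance between two attribute vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(seed, n=2):
    """Rank all other tracks by closeness to the seed track."""
    others = [(t, distance(tracks[seed], v)) for t, v in tracks.items() if t != seed]
    return sorted(others, key=lambda pair: pair[1])[:n]

print(nearest("folk_ballad"))  # quiet_waltz comes out closest
```

Note where the subjectivity lives: not in the arithmetic, which is trivial, but in the human judgments that put the numbers in the vectors in the first place.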

This unavoidable subjectivity raises inevitable questions about partiality. Why do women artists appear less frequently than men in the recommendation list? Are the reasons for this systemic or cultural? Is it because fewer women are played on the radio or get recording contracts, so fewer women appear in self-generated playlists, so fewer women appear in recommendation lists? One member of Laboratory Adelaide’s research team tried to alter this by adding only women artists for several weeks in a row, thus expressing a clear musical preference. But it didn’t have much effect on the recommendations arriving in the playlist each Monday morning.

Recommendation engines tend to be opaque for commercial reasons, which means that even though we know the result, we can’t discover what drives the choices.50 The engine in Spotify is a big data project that depends on and deploys our ‘taste profiles’, generated from our listening habits. These are correlated with the more than two billion playlists generated by its 140 million users, of whom 70 million are paying subscribers.51 The Spotify team has made some of its technical information available through a SlideShare presentation, ‘From Idea to Execution: Spotify’s Discover Weekly’.52 According to this inside information, the big data of users’ playlists is processed using collaborative filtering and natural language processing. Spotify treats a playlist as a document, and the songs in a playlist as words, and its team uses commonly available text mining tools to drill deep into the data.
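Following that description, here is a minimal sketch of the ‘playlist as document, song as word’ idea using one commonly available text-mining tool (gensim’s Word2Vec). The playlists and song IDs are invented; this illustrates the general technique, not Spotify’s production pipeline:

```python
# Treat each playlist as a 'document' and each song ID as a 'word',
# then let a standard word-embedding model learn which songs occur in
# similar playlist contexts. Assumes gensim >= 4.0 is installed.
from gensim.models import Word2Vec

playlists = [  # invented playlists of song IDs
    ["bowie_heroes", "talking_heads_once", "eno_1_1", "bowie_ashes"],
    ["bowie_heroes", "bowie_ashes", "roxy_avalon"],
    ["metallica_one", "slayer_angel", "megadeth_peace"],
    ["eno_1_1", "roxy_avalon", "talking_heads_once"],
]

model = Word2Vec(
    sentences=playlists,  # playlists stand in for sentences
    vector_size=16,       # tiny embedding for a toy corpus
    window=3,
    min_count=1,
    seed=1,
)

# Songs that 'keep the same company' across playlists end up nearby:
print(model.wv.most_similar("bowie_heroes", topn=3))
```

On a corpus this small the results are noise; at the scale of two billion playlists, the same trick yields the eerily apt Monday-morning mixtape.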

Like Netflix, Spotify uses curation as a ‘value-add’. Users can both access and discover content through their services and the big data algorithms developed behind the scenes. Value in this context is conditional on the key terms applied to discover it, and the tail of metadata inevitably wags the dog of content. What needs to be much better understood, therefore, is the decisive impact of this hidden curation on our actual cultural experience.

Box 4 The Politics of Metadata, Tully’s Experience

A few years ago, I worked as an indexer for AustLit, the Australian Literature Resource. This is a digital database of literature written in Australia or by Australians. For an online project, it has great longitudinal credibility. It was established in 2000 by combining a number of disparate literature databases around the nation. Between 2008 and 2013, I spent some hours a week contributing to the big task of keeping AustLit up to date by adding newly published works and plugging gaps in its historical content. ‘Adding in’ a new published work meant starting a new record in the database and entering the standard bibliographic attributes: title, author, publisher, place of publication, date of publication, and so on; but then also contributing some subject-content indexing: that is, key terms to indicate what a particular work was about.

This last step is the crucial one. Subject-content indexing means that anyone looking up works about, say, FJ Holdens, or lesbian relationships, or Uluru, can find the range of texts (novels/poems/short stories) that contain the themes and content they are looking for. There is a list of key indexing terms that can be added to a record, provided as a thesaurus. New terms can be used, but only if really needed. Too many key terms in the thesaurus make searching less useful, because the index stops connecting like with like: it becomes too specific. Someone researching pythons in literature may or may not be interested in the broader question of snakes, or the Rainbow Serpent, or lizards. The trick is to create a record for a text so it is as discoverable as possible without appearing too often as a false positive.
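The underlying mechanics can be sketched simply: a thesaurus maps broader terms to narrower ones, so a search can be expanded down the hierarchy. The mini-thesaurus and records below are invented, not AustLit’s actual vocabulary or software:

```python
# Sketch of thesaurus-based subject indexing: records carry key terms,
# and a search expands a term to its narrower terms, so that 'snakes'
# also finds works indexed under 'pythons'.

narrower = {  # broader term -> narrower terms
    "reptiles": ["snakes", "lizards"],
    "snakes": ["pythons", "rainbow serpent"],
}

records = {  # invented catalogue records: title -> key terms
    "Poem A": ["pythons", "growing up"],
    "Story B": ["lizards", "deserts"],
    "Novel C": ["rainbow serpent", "spiritual journeying"],
}

def expand(term):
    """A term plus everything narrower than it, recursively."""
    terms = {term}
    for t in narrower.get(term, []):
        terms |= expand(t)
    return terms

def search(term):
    """Records whose key terms fall anywhere under the search term."""
    wanted = expand(term)
    return [title for title, keys in records.items() if wanted & set(keys)]

print(search("snakes"))    # ['Poem A', 'Novel C']
print(search("reptiles"))  # all three records
```

The politics sits in the `narrower` dictionary: whoever decides that the Rainbow Serpent is a kind of snake has already made an interpretive, contestable claim.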

When I began this indexing work, it was slow going. I would agonise over every poem. What did it mean? What did the author intend? How could it be appropriately situated in the vast field of Australian literature? After a while, my indexing speed increased. I got more efficient, pumping my way through book after book of poetry. I congratulated myself on my skills. But this newfound speed had a darker side. I was reading and interpreting the texts according to the structure of the database and its thesaurus of key terms. Many poems became about ‘conflict in relationships’ or ‘growing up’ or ‘spiritual journeying’. The act of interpretation – the value-add – was outsourced to a series of murky and often contested metadata categories. What does this tell us about the way that we make decisions about content, value and relevance in arts and culture in digital environments every day? Metadata is an informational structure. Even when completely accurate, metadata is political. Informational structures and infrastructures directly influence the work people do inside them.

Think, for example, about the 2015 controversy over Google’s image-recognition algorithm that auto-tagged pictures of black people as ‘gorillas’. Or Microsoft’s 2016 AI chatbot Tay, sent into Twitter to chat with real people and learn from them. Microsoft had to hit the kill switch within 24 hours because Tay began tweeting racist, anti-Semitic, sexist and transphobic comments. It had learned not how to be convincingly human, but how to dehumanise others, spewing back content derived from what people were saying to it. Politicians often swallow the utopian claims of digital technology while only being dimly aware of how it actually works. In a 2014 interview with David Speers on Sky News, George Brandis, then Attorney General as well as Arts Minister, clearly demonstrated that he didn’t understand the concept of metadata, or how it can give information away.53 And yet it was central to the laws on mandatory data retention he was trying to introduce.

Government is a latecomer to algorithmic supervision and control, though, if the Cambridge Analytica scandal is anything to go by, it is a superuser of algorithmic practices for instrumental ends. Other players have been working in the space for a long time. Just as Spotify looks at what you play and what you skip to determine the difference between what you say you like (songs you save to your playlist) and what you really like (songs you play in their entirety or repeatedly), Amazon collects data on what, how fast, and how much you read on your Kindle and sells the data on.54 An author may have a bestselling book, but if the data collected shows that readers don’t finish it, there is unlikely to be a market for a follow-up. This is useful business information for publishers. Amazon sells it to them. Where is the boundary between the optimisation of investment and customer demand on the one hand, and literary judgment and longitudinal value (value that develops over a longer time period) on the other? Inevitably the practice of making publishing decisions based on e-reader data shapes the future literary record. Did Eliot’s publishers survey her readers on their preferred book length before printing Middlemarch? There are privacy concerns inherent in this new situation too. The Amazon Kindle’s ‘Notes and Highlights’ functions have potential for strong positive pay-offs for reading: they may facilitate different and perhaps deeper kinds of reading across social networks. They may motivate reluctant readers, or support readings in educational settings, or enable guided readings through the involvement of authors themselves. Yet social reading on Kindle also raises concerns about the misuse of deeply networked and commercially oriented technology for ‘black-box’ supervision and manipulation of a once private act.55

The FAANG tech giants keep their motivations hidden behind a shroud of marketing blather about consumer choice. But utopian and dystopian potentials are never far apart, and examples in the digital realm are not hard to find.

The Google Books project began, according to its own myth of origins, with the dream of its young college creators to have sources at their fingertips:

In 1996, Google co-founders Sergey Brin and Larry Page were graduate computer science students working on a research project supported by the Stanford Digital Library Technologies Project. Their goal was to make digital libraries work, and their big idea was as follows: in a future world in which vast collections of books are digitized, people would use a ‘web crawler’ to index the books’ content and analyze the connections between them, determining any given book’s relevance and usefulness by tracking the number and quality of citations from other books.56

To make this dream a reality, in 2002 Google partnered with a number of prominent university libraries and began digitising millions of works, shipping some to Mountain View, its California headquarters, and digitising others onsite at libraries by bringing in teams and technology to do so. Despite lawsuits from the Authors’ Guild, class actions by publishers, and calls to stop from eloquent authors such as Ursula Le Guin,57 Google steamrolled ahead.58 Later Microsoft sought to compete, but by then Google Books had too much critical mass. Microsoft terminated a planned project with the British Library, conceding that no-one could take on Google. Why so much time, effort, lawyers’ fees and force of will to create the Google Books project? There is a simple answer: the data collected from people’s use of these resources help Google sell targeted advertising. But there is also a more complicated answer: Google uses the content of books to train artificial intelligence.59 Frankenstein’s monster learned to behave like a human being by listening to and then reading literature in Mary Shelley’s novel published two centuries ago. Now AI is reading works of literature great and small in much the same way.

Google’s handling of the Google Arts and Culture Institute (GACI) is less controversial, but just as instructive in thinking about the value of culture in digital platforms. January 2018 saw a flurry of interest in the Google Arts and Culture app’s selfie feature. There is an acrimonious debate over the relationship between selfies and arts and culture. In 2014, the New Republic’s Chloe Schama demanded that people ‘Stop Taking Selfies in Front of Works of Art!’, complete with exclamation mark to drive home her frustration at the advent of ‘Museum Selfie Day’.60 But by 2017, selfies as ‘engagement activities’ had reached top galleries worldwide. In March 2017, the Saatchi Gallery in London opened its ‘Selfie to Self-Expression’ exhibition and #SaatchiSelfie competition. Naturally, GACI sought to use selfies to drive engagement with its own platform:

The Google Arts & Culture platform hosts millions of artifacts and pieces of art, ranging from prehistory to the contemporary, shared by museums across the world. But the prospect of exploring all that art can be daunting. To make it easier, we dreamt up a fun solution: connect people to art by way of a fundamental artistic pursuit, the search for the self … or, in this case, the selfie.61

GACI hadn’t made much of a splash until January 2018, when Google introduced a function to enable users to filter cultural artefacts not by year, genre, nationality or location, but by visual similarity to the users themselves. That is, through the app it is now possible to find people in art who look like you. You use the camera to take a photo of yourself and the app identifies artworks that resemble it, along with a ‘resemblance percentage’ score. ‘Finding yourself’ in art has never been quite so narcissistic. But it is driving up attendances at visual arts events and participation in culture.
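Google has not published the details, but a common way to build such a feature is to map each face to an embedding vector and rank artworks by cosine similarity, reported as a percentage. A sketch under that assumption, with invented stand-in embeddings:

```python
# A guess at the general shape of the selfie-matching feature: invented
# 4-d vectors stand in for the output of a real face-embedding network;
# artworks are ranked by cosine similarity to the selfie's vector.
import numpy as np

artworks = {  # invented embeddings for illustration
    "Portrait of a Young Woman": np.array([0.9, 0.1, 0.3, 0.2]),
    "Self-Portrait, 1889":       np.array([0.2, 0.8, 0.5, 0.1]),
    "The Milkmaid":              np.array([0.7, 0.3, 0.4, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(selfie_embedding):
    """Highest-scoring artwork and a 'resemblance percentage'."""
    scored = {title: cosine(selfie_embedding, v) for title, v in artworks.items()}
    title, score = max(scored.items(), key=lambda kv: kv[1])
    return title, round(100 * score)

selfie = np.array([0.8, 0.2, 0.35, 0.25])  # stand-in for a user's photo
print(best_match(selfie))  # e.g. ('Portrait of a Young Woman', 99)
```

Everything interesting – whose faces the embedding network was trained on, and therefore who it matches well – is hidden inside the vectors, which is precisely the politics of association discussed below.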

Underneath all this lies the same politics of association as the AustLit thesaurus of key terms. It needs careful analysis rather than glib enthusiasm, as the rise of Critical Algorithm Studies attests.62 Ned Rossiter and Soenke Zehle argue that ‘algorithmic experience is the new terrain of extractive industries within contemporary capitalism whose structural logic is itself algorithmic’.63 Put more bluntly: ‘the rise of algorithmic architectures’ is ‘central to the capture of experience’. People need metadata systems and recommendation engines. But they also need to understand the restrictive intellectual and cultural conditions under which such systems do their work.64 When a headline passes our screens telling us that 6,000 works of children’s literature have just been digitised and made available for free online – an accessibility that is contingent on access to literacy and technology – open-access warriors may leap for joy. But do we open the texts? Do we actually read them? Rarely. Because 6,000 works is too many. How can we find anything of value even to peruse? As author Neil Gaiman has famously said, ‘Google can bring you back 100,000 answers. A librarian can bring you back the right one.’65

Openness

That skill of finding the right answer, the right book, the right piece of information is even more crucial in the move towards open access, open data and big data. These movements are crucial for exposing modern democracy and its governments to public scrutiny. Big data disrupts exponentially. ‘Big Data Means Big Disruption’, wrote Daniel Newman in Forbes in 2014.66

The vast majority of research data is created with public money and so there is a strong argument for public access to it. The data opened up might come from science, health, or government itself – a wide variety of areas. This would allow research collaboration beyond the boundaries of one research team or project, and findings to be available for revision and reuse. So the open data movement is a public good. But it can also be a public relations exercise: the appearance of openness that adheres to few principles of open data in practice. No-one looks at the data, but the (paying) public are comforted by the fact that the research findings are ‘out there’.

Quantity can be the enemy of quality, or even of accessibility. Open data can be a means of obfuscation, for ‘hiding in plain sight’, not by withholding data from public scrutiny but by creating a deluge – by providing more data than can be properly considered, examined, contextualised, or even located. Think of legal dramas, where document deluge in discovery processes is used as a way of overwhelming smaller law firms. ‘Give them everything’, says the bigwig with a malicious sneer.

Open access privileges choice as if it were an innocent and wholly free activity. We all like to think we are choosing. But we often allow ourselves to be herded. Choice is good, but not an absolute good. Some herding occurs through public sentiment, some through metadata (the categories things are put in and the relationships between those categories). Much of it now occurs through algorithms. Search engines appear to be neutral, but the information you are seeking is undiscoverable until a search engine interprets your keyword request and lays down a pathway to the content. Few searchers look past the first screen of hits. Advertisements dominate the opening screen that any search engine returns. Search Engine Optimisation can be applied to get your webpage higher up the list of returns, but there are limits to the effectiveness of this. Safiya Umoja Noble argues that search engines are definitely not ideologically neutral tools but rather systems designed by humans embedded in particular power structures. They reflect the problems, assumptions, perspectives and biases of the contexts from which they come, and in which they are complicit.67

While open access makes some sense for the results of publicly funded research, the case for similarly free ‘creative content’ is predicated on the false notion that all people in society are equitably rewarded for their work. The consumer expectation that digital stuff should be cheap or free exploits the creators of content, who are often artists making a living precariously in the so-called ‘gig economy’. The owners of platforms (the tech companies and the engineers employed by them) have high salaries and secure employment, even as they use the creative outputs of people expected to provide their labour for free or at low prices. They often do, victims of the passion they demonstrate, or are expected to demonstrate, for their work. Thus a life in arts and culture gets harder, even as distribution methods get more efficient.

Open access may make the inequalities in society we have not yet resolved more extreme. There’s no app for that. The age of FAANG brings with it challenges that evaluation strategies must learn to deal with. As Julianne Schultz points out:

we are seeing a massive redistribution of wealth from the cultural sector, where meaning is created, to the technology sector, which has figured out how to market, distribute, reach and make money out of it in ways the cultural industries never imagined possible.68

The role of Google, whose company motto ‘Don’t be evil’ quietly disappeared from its code of conduct in 2018, needs particular vigilance.69 Noble reminds us:

Digital media platforms like Google and Facebook may disavow responsibility for the results of their algorithms, but they can have tremendous – and disturbing – social effects. Racist and sexist bias, misinformation, and profiling are frequently unnoticed by-products of those algorithms. And unlike public institutions (like the library), Google and Facebook have no transparent curation process by which the public can judge the credibility or legitimacy of the information they propagate.70

David Beer calls this ‘the social power of algorithms’.71 If these systems are going to control our experience of culture and our means of communication about it, we have to have better ways of understanding them.

Examined more closely, the notion of digital disruption for arts and culture looks like a sleight of hand. If we leave it to the technology of big data, we will have no meaningful role in curating our stories and creativity. It is hard to see how this would turn out well for a medium-sized Anglophone country with a history of adopting a cargo-cult mentality. For Schultz it’s about how we value culture in an environment where the currency is ‘likes’ or ‘shares’ rather than any kind of deeper engagement:

The purpose of cultural investment in the Age of FANG needs to be restated, funding maintained and opportunities to innovate and export enhanced. Otherwise we will become invisible at best and tribal at worst. If that happens we will be reduced as citizens and countries to passive consumers in a digital marketplace that values us only for our ability to pay.72

Box 5 The My Cultural Organisation Website

In 2014, Laboratory Adelaide gave a presentation on our research to the Australian Major Performing Arts Group. We talked about the problems arts and culture face in respect of language, time, and the balancing of quantitative and qualitative information. The usual. We used this thought experiment to illustrate the suffocating hold quantitative data has over our idea of value.

‘Some of you’, we said, ‘will have children in primary or secondary school. So you know all about the MySchool website and how it exists to make Australia’s 10,000-plus schools transparent. The site provides “statistical and contextual information” to help parents make good decisions, meaningful decisions, about where to send their children to school. It creates standard entries for each school so that data is comparable across sites. You can see the enrolment numbers, the diversity statistics and the “Index of Community Socio-Educational Advantage” (ICSEA – providing the school’s ICSEA value, the Average ICSEA value, and the distribution of students across the index) of the environment in which the school is located.

‘School performance is based on the National Assessment Program – Literacy and Numeracy (NAPLAN) tests that students do in years 3, 5, 7 and 9. It tells you how many students are enrolled in each year level. It presents data going back to 2008. It delineates whether the school is government or private, what years it caters for, whether it is metropolitan, rural or remote. How many teaching staff there are and the full-time equivalency (FTE) of those staff. How many non-teaching staff the school employs. Details about a school’s finances. The site allows schools to add a context statement so they can tell a narrative about their school and the community in which it is embedded.

‘But does anyone really read that? Educators criticise the MySchool site for many reasons. It bases the value of a school on results of a standardised literacy and numeracy test. Teachers say a student’s success on the test depends more on whether they got a good sleep the night before or had breakfast than on the quality of teaching they have received.

‘What if’, we jokingly suggested, ‘the quantitative data that arts and cultural organisations are required to collect were used to generate a MyCulturalOrganisation website that the government and the public used to make (supposedly) informed decisions about which cultural activities to invest time and money in?’

To say it again: there is a mismatch between our drive for quantitative data and the quality of information it provides. These are approaches open to significant political pressure, misuse and caricature. They are expensive exercises that divert resources from meaningful to meaningless evaluation.

44. ‘Disruptive Innovation’, Wikipedia: en.wikipedia.org/wiki/Disruptive_innovation.

45. Audie Cornish, ‘The Distracting Problem with the Term “Disruption”’. Interview with Kevin Roose. NPR’s ‘All Things Considered’ program, 15 December 2014: www.npr.org/2014/12/15/371010839/the-distracting-problem-with-the-term-disruption.

46. Julianne Schultz, ‘Australia Must Act Now to Preserve its Culture in the Face of Global Tech Giants’, The Conversation, 2 May 2016.

47. David Beer, ‘The Social Power of Algorithms’, Information, Communication & Society, 20.1 (2017), 5.

48. Spotify Metadata Style Guide Version 2, September 2015. As leaked on the website DailyRindBlog: www.dailyrindblog.com/newsletter/SpotifyMetadataStyleGuideV1.pdf.

49. This, in contrast to crowd-sourced attribute tagging or folksonomies. See Tim Westergren, ‘The Music Genome Project’: pandora.com/mgp (2007).

50. On the socially retrograde consequences of this ‘black box’ tendency among algorithms, see Cathy O’Neil, Weapons of Math Destruction (New York: Crown, 2016).

51. As of January 2018 (data from the Spotify website).

52. Chris Johnson (Engineering Manager, Recommendations and Personalization, Spotify), ‘From Idea to Execution: Spotify’s Discover Weekly’. Published 16 November 2015: www.slideshare.net/MrChrisJohnson/from-idea-to-execution-spotifys-discover-weekly/5-Discover.

53. See ‘David Speers – PM Agenda’, Sky News. Uploaded 13 October 2014, YouTube: www.youtube.com/watch?v=HGURYRjEiRI.

54. Alison Flood, ‘Big E-reader is Watching You’, The Guardian, 4 July 2012.

55. See Tully Barnett, ‘Social Reading: The Kindle’s Social Highlighting Function and Emerging Reading Practices’, Australian Humanities Review (2014).

56. ‘Google Books History’: www.google.com/intl/en/googlebooks/about/history.html.

57. Alison Flood, ‘Authors Denied Appeal to Stop Google Scanning Books’, The Guardian, 20 April 2016.

58. Tully Barnett, ‘The Human Trace in Google Books’, in Border Crossings, edited by Diana Glenn and Graham Tulloch (Kent Town: Wakefield Press, 2016), 53–71.

59. Richard Lea, ‘Google Swallows 11,000 Novels to Improve AI’s Conversation’, The Guardian, 28 September 2016.

60. Chloe Schama, ‘Stop Taking Selfies in Front of Works of Art!’, The New Republic, 22 January 2014.

61. Google Arts and Culture website: artsandculture.google.com.

62. For more information, see Tarleton Gillespie and Nick Seaver, ‘Critical Algorithm Studies: A Reading List’, Socialmediacollective.org (2016) (socialmediacollective.org/reading-lists/critical-algorithm-studies/) or the special section of Big Data & Society on ‘Algorithms in Culture’: journals.sagepub.com/page/bds/collections/algorithms-in-culture.

63. Ned Rossiter and Soenke Zehle, ‘The Aesthetics of Algorithmic Experience’, in The Routledge Companion to Art and Politics, edited by Randy Martin (London: Routledge, 2015), 214–21.

64. Nick Seaver, ‘Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems’, Big Data & Society, 4.2 (2017).

65. Neil Gaiman, ‘Neil Gaiman on Libraries’. YouTube clip on the Library Stuff website, uploaded 20 April 2010: www.librarystuff.net/2010/04/20/neil-gaiman-on-libraries/.

66. Daniel Newman, ‘Big Data Means Big Disruption’, Forbes, 3 June 2014.

67. Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).

68. Schultz, ‘Australia Must Act Now’.

69. Kate Conger, ‘Google Removes “Don’t Be Evil” Clause From Its Code of Conduct’, Gizmodo, 18 May 2018.

70. Safiya Noble, ‘Google and the Misinformed Public’, Chronicle of Higher Education, 15 January 2017.

71. David Beer, ‘The Social Power of Algorithms’, Information, Communication & Society, 20.1 (2017), 1–13.

72. Schultz, ‘Australia Must Act Now’.
