The signals used to order each user’s algorithmic timeline include how often a user opens Instagram, how many people they follow, and how much time they tend to spend on Instagram each time it is opened (Constine 2018b).
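Instagram has never published the ranking function itself, so any concrete rendering of it is necessarily speculative. Purely as an illustration of how the three usage signals reported by Constine might interact with a ranked feed, consider the following sketch; every name, weight and threshold in it is invented:

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_interest: float  # hypothetical per-user relevance score
    hours_old: float           # time since the post was published

def backlog_window(opens_per_day: float) -> float:
    """Invented heuristic: a user who opens the app rarely has more
    unseen posts, so the feed reaches further back in time."""
    return 24.0 / max(opens_per_day, 0.1)

def rank_timeline(posts: list[Post], opens_per_day: float,
                  following: int, session_minutes: float) -> list[Post]:
    # Heavier usage (more follows, longer sessions) means a larger
    # slice of the candidate pool is actually ranked and shown.
    feed_length = min(len(posts), int(following * session_minutes / 10) + 10)
    candidates = [p for p in posts if p.hours_old <= backlog_window(opens_per_day)]
    # Invented trade-off between predicted interest and recency.
    candidates.sort(key=lambda p: 0.7 * p.predicted_interest - 0.3 * p.hours_old,
                    reverse=True)
    return candidates[:feed_length]
```

Even in so crude a form, the sketch shows why two users following exactly the same accounts can be shown quite different feeds.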
Of course, there are many other algorithms at work on Instagram, from those that determine suggested accounts to follow, to those that flag content for moderation or removal, to those that curate the Explore area by matching content and accounts with the recorded activity of each user. Indeed, even the previous chronological timeline was delivered by an algorithm; the difference is that the operation of that algorithm was transparent to users. While the scope of algorithmic activity is often invisible to users, and difficult to map or even track, it is important to recall that all large platforms using algorithms necessarily build cultural assumptions and social norms of some kind into those algorithms, often perpetuating inequalities of various kinds (Gillespie 2018; Noble 2018).
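To make concrete the point that reverse chronology is itself an algorithm – just one whose logic was fully legible – the entire ‘old’ timeline can be expressed in a line of code (a sketch; the Post object and its timestamp field are invented here):

```python
def chronological_timeline(posts):
    """The pre-2016 ordering: newest first, nothing hidden, no weighting.
    Its transparency, not its simplicity, is what users later missed."""
    return sorted(posts, key=lambda post: post.timestamp, reverse=True)
```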
As social media scholar Taina Bucher has argued, the introduction of an algorithmic timeline, or anything labelled as algorithmic, invites users to imagine and respond to the perceived cultural logic of the algorithm. Many Instagram users took to Twitter and other platforms to decry the upcoming timeline changes as something that would destroy their existing experiences of the platform, since it would codify elements of popularity and ranking that were perceived to be antithetical to their everyday uses. In mapping these responses to the introduction of Instagram’s algorithmic timeline, Bucher found that for users ‘algorithms are seen as powerbrokers that you either play with or play against’ (Bucher 2018, p. 112). The algorithmic timeline was also met with the hashtag #RIPInstagram on Twitter, which users deployed to lament the changes and discuss them in largely negative terms (Skrubbeltrang, Grunnet & Tarp 2017). That said, just as with Facebook’s newsfeed changes, Instagram did not appear to lose many, if any, users as the algorithmic timeline was rolled out for everyone. Indeed, interpreting and gaming the algorithm has become something of a social media dark art, as discussed in chapter 4.
Communities, Boundaries and Content Moderation
The notion of the Instagram Community as a singular group, or even a vaguely meaningful collection of people beyond the basic fact that these are all people who have chosen to download the app and create an account, stretches the idea of community further than is meaningful. The use of the term community is, however, not accidental; rather, it is part of a broader strategy to name and, to some extent, control or constrain the behaviour of Instagram users (Gillespie 2018). Perhaps the most notable place where the Instagram community is officially named into being is in the Community Guidelines, which outline what is, and what is not, permitted on the platform. In the short summary version of these guidelines, Instagram sums up its position thus: ‘We want Instagram to continue to be an authentic and safe place for inspiration and expression. Help us foster this community. Post only your own photos and videos and always follow the law. Respect everyone on Instagram, don’t spam people or post nudity’ (Instagram 2018b). There are far more details after that short summary, but in essence the description reminds users that Instagram is a policed platform, and that there are rules to follow. Internet researcher Tarleton Gillespie describes the Instagram guidelines as positioning the platform as a community keeper, meaning that the Instagram community is spoken about as a ‘fragile community, one that must be guarded so as to survive’ (Gillespie 2018, p. 49). This rationale leads to a number of different rules and forms of content moderation, both algorithmically automated – where algorithms detect and delete the most obvious forms of non-permitted content, such as explicit pornography – and manual, where human moderators view and judge content explicitly reported by other Instagram users.
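The division of labour described here – automated removal of the most obvious violations, human judgement for user-reported content – can be sketched as a simple routing function. The classifier, thresholds and labels below are all invented stand-ins, not Instagram’s actual systems:

```python
from collections import deque

review_queue: deque[str] = deque()   # posts awaiting human moderators
AUTO_REMOVE_THRESHOLD = 0.98         # only near-certain violations auto-delete

def violation_score(caption: str) -> float:
    """Toy stand-in for a trained classifier (e.g. a nudity detector);
    a real system would score the image itself, not its caption."""
    return 1.0 if "explicit" in caption.lower() else 0.0

def moderate(caption: str, reported: bool) -> str:
    if violation_score(caption) >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"       # the algorithmic track
    if reported:
        review_queue.append(caption)         # the manual track
        return "queued for human review"
    return "left up"
```

Note the design choice the sketch makes visible: everything the classifier is not near-certain about is left up unless another user reports it, which is precisely why reporting options and moderator consistency matter so much in the discussion that follows.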
Initially, the Instagram Community Guidelines explicitly banned all nudity, regardless of context, including any display of female nipples. However, after a number of Instagram users publicly reported their accounts being shut down for showing breastfeeding photos, Instagram eventually responded to strong community sentiment that this activity should not be positioned as sexual or worthy of banning (Grossman 2015). The community outcry over the hypocrisy of removing, amongst other images, photos of breastfeeding mothers saw Instagram and Facebook revise their guidelines to create exceptions to this rule (Locatelli 2017). As such, Instagram’s updated Community Guidelines show a more contextually aware approach to the visibility of nipples on the platform:
We know that there are times when people might want to share nude images that are artistic or creative in nature, but for a variety of reasons, we don’t allow nudity on Instagram. This includes photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks. It also includes some photos of female nipples, but photos of post-mastectomy scarring and women actively breastfeeding are allowed. Nudity in photos of paintings and sculptures is OK, too. (Instagram 2018b)
This tempering of the guidelines serves as a reminder that the guidelines are constantly being revised, as are the ways that human moderators judge flagged content, and that this process can often be perceived as a response to potential bad publicity (such as the media outcry around breastfeeding photos being removed) rather than as the product of internally driven reviews of what the singular, imagined Instagram community actually wants (Gillespie 2018). The constant revision of the Community Guidelines also reflects the potential arbitrariness of the moderation process, in terms of the reporting options available to users, the feedback they are given after a report, and questions about the consistency of moderation decisions (Witt, Suzor & Huggins 2019).
Instagram, like many social media platforms, has also had real difficulty in managing content which valorizes and promotes eating disorders, which we refer to collectively as ‘pro-ED’ content, but which is usually referred to by those who post and search for this material as ‘pro-ana’ (promoting anorexia) content (Gerrard 2018). The line between pro-ED material and more socially acceptable depictions of, and aspirations for, thinness is blurred at best. As gender and media researcher Gemma Cobb (2017) argues, on many platforms pro-ED material is deliberately disguised as health motivation posts, aspirational (healthy) weight images or something else that is – in terms of the culture promoted by the platform – socially acceptable. For several years, Instagram’s Community Guidelines explicitly banned eating disorder accounts and content, stating that the platform would remove ‘any account found encouraging or urging users to embrace anorexia, bulimia, or other eating disorders’ (quoted in Cobb 2017). While that absolute ban has been lifted, Instagram now provides warnings and resources rather than erasing all pro-ED material. Building on advice from health professionals, a sub-section of Instagram’s Help Centre called ‘About Eating Disorders’ (Instagram 2018a) provides suggestions on how to engage with people with eating disorders, and points users to specific resources and services that can provide support.
While Instagram employs a mix of algorithmic filtering, hashtag bans (some permanent, some temporary), account removal and the limiting of content in Search and Explore, the policing of hashtags has received the most attention; it is also the easiest to enforce, since hashtags are where content has been explicitly labelled by the user posting it. Yet hashtag banning is far from perfect: when the thin-inspiration tags #thinspo and #thinspiration were blocked, for example, thinly veiled alternative spellings such as #thynspiration or #thinspoooo quickly emerged (Cobb 2017) – a pattern of evasion sketched in the example at the end of this section. Moreover, when images are ambiguous (possibly about eating disorders, possibly not), users will often attach many hashtags, diluting the capacity of any single hashtag to clearly provide context and situate a post. A post might thus include #diet, #healthy, #gymlife and many other tags before also including #thinspo and #bonespo, confounding easy (and automated) banning and classification of images (Cobb 2017, p. 109).

At the time of writing this chapter, for example, #bonespo returned a health warning before directing users to (a) Get Support, (b) See Posts Anyway or (c) cancel the search (see figure 1.2). And while #bonespo returns the health warning screen, the tag #bonespoo (with one extra o), which was suggested when searching for #bonespo, does not have a warning screen. Yet top #bonespoo posts clearly include pro-ED content from accounts whose descriptions carry the request ‘Don’t report, just block’, showing that these users have an active awareness of Instagram’s policing of these images and are trying to circumvent deletion. Similarly, while hashtags for pro-ED are being banned and policed,
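Why a one-character variant like #bonespoo slips past a ban on #bonespo is easy to see in code. The sketch below is not Instagram’s system; it is a minimal illustration, under invented rules, of how a platform might fold spelling variants back onto a banned canonical form, and of why such rules quickly generate false positives:

```python
import re

BANNED = {"thinspo", "thinspiration", "bonespo"}

def normalise(tag: str) -> str:
    """Fold common evasions onto a canonical form: collapse repeated
    letters (#thinspoooo, #bonespoo) and undo the y-for-i swap
    (#thynspiration). Both rules are invented for illustration."""
    tag = tag.lower().lstrip("#")
    tag = re.sub(r"(.)\1+", r"\1", tag)  # thinspoooo -> thinspo
    return tag.replace("y", "i")         # thynspiration -> thinspiration

def is_blocked(tag: str) -> bool:
    return normalise(tag) in BANNED

# An exact-match ban misses the variants; normalisation catches them,
# but at a cost: collapsing doubled letters also mangles legitimate
# tags (e.g. #foodie becomes 'fodie').
assert is_blocked("#thynspiration") and is_blocked("#bonespoo")
assert not is_blocked("#healthy")
```

The moment such a rule ships, of course, users route around it with variants the rule does not anticipate, which is precisely the cat-and-mouse dynamic Cobb (2017) documents.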