Government against the giants: can government effectively regulate social media?
The Christchurch Call and Australia’s subsequent proposal for new social media laws have been criticised for their narrow emphasis on online terror and their failure to address other vexing problems such as online hate speech, cyberbullying and anti-gay rhetoric. The issue of governance and regulation in the social media space raises two questions: whether government has the capacity to introduce and monitor effective regulatory policy, and whether strict laws will infringe on freedom of opinion and expression.
The ‘Christchurch Call’, a voluntary pledge by governments and tech giants to improve the regulation of live, extremist content on social media platforms, launched on 15 May 2019, exactly two months after the terror attacks and first mass shooting in New Zealand in twenty-two years. New Zealand’s Prime Minister Jacinda Ardern, who drove the initiative alongside French President Emmanuel Macron, referred to the attacks at the Christchurch Mosque as “one of […] the darkest days” in the country’s history. The attack was livestreamed on Facebook and viewed several hundred times before police had made an arrest. Large companies such as Amazon, Google, Microsoft and Twitter have pledged support for the initiative, promising greater regulation of content on their platforms, but this has heightened political discussion over the role of government.
The Christchurch massacre prompted Prime Minister Scott Morrison to question the role of government in regulating how social media giants control uploaded content, including violent, graphic and uncensored media. Subsequently, the Coalition Government proposed social media laws through the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019 that would prevent the “weaponisation of social media platforms” and require that “abhorrent violent content” be reported to the Australian Federal Police. Under the proposed Bill, it would be a criminal offence to keep violent material online, punishable by imprisonment and substantial fines. Essentially, these proposed new laws are, in part, a political response reflective of the increasing global populist policy trend of suppressing the use of online platforms as a tool of terrorism. The extent of regulation by government, and the policies subsequently drafted, should not be rushed but carefully considered, as there is a delicate balance between purposeful censorship and hindering expression. Chinese President Xi Jinping has demonstrated to other nations that social media can be brought to heel; however, it is a complicated task to decide where the line should be drawn, and a one-size-fits-all approach is unlikely to be conducive, particularly within Western democratic countries.
It has been over a decade since the ‘age of social media’ first dawned. Users have shared knowledge, written blogs and grown their social networks, often in real time (‘live’). But connecting through cyberspace, with the ability to transcend geographical borders, has had both advantages and disadvantages for local regulators. As social media continues to evolve and expand, important questions are routinely raised by parents, organisations and decision makers concerning the nature of content on these platforms and whether or not it is (or can be) regulated. The combination of autoplay video, sharing and uploading, trending hashtags and the use of algorithms has brewed the perfect storm. Content can spread so rapidly that platforms such as YouTube attempt to remove it as soon as it is uploaded in an effort to reduce widespread damage and controversy. The difficulty of such activity can be seen in the Christchurch example: imagery of the shooting was uploaded to YouTube at a rate of one copy per second, with several thousand accounts created for the sole purpose of spreading the content. Facebook reported that it removed over one million copies in the first 24 hours after the first video went live.
The extent of regulation by government will rely heavily on collaboration with other supportive national governments and with the tech sector, because they too must be held to account. Regulatory issues are compounded by, and dependent on, striking a balance between what is considered harmful and in need of censorship and the need to protect Australia’s expected (and implied) freedom of opinion and expression. Moreover, social media companies need to update and monitor their terms of use regularly to reduce misuse occurring on their platforms, and to review how to more closely monitor the algorithms that enable and drive explicit content. The notion of personal accountability may also need to be reformed, by considering evolving social issues and how social media can better protect both users and the wider community.
This article originally appeared on fpladvisory.com.au