What is the Future of Live Streaming?
By Emmeline Plachecki
Monday, 10 June 2019
Technology is powerful, and few forces match its capacity to improve our lives. Yet it can affect us both positively and negatively. Live streaming is one example with both sides, and it is up to us to decide how we use it.
Using technology to over-exploit resources should always be avoided. Used well, it benefits our lives; used badly, it harms them. Few would oppose technological development in any sector, but that development should be positive and should not harm present or future generations.
On the positive side, live streaming offers instant connection with others, and that connection can be a genuine source of pleasure.
On the negative side, however, there is a dark side to so much immediate information. We were confronted with the stark reality of what live streaming can do recently, through the devastating Christchurch terror attack, in which fifty-one innocent people lost their lives.
The alleged attacker cruelly live streamed his act of violence to reach a larger audience. Regrettably, some viewers may have entered the live stream innocently, not realising the brutality of its content.
Those viewers, through no fault of their own, inadvertently helped this horrific footage go viral on the Internet.
This horrific event leaves us with two questions: why was the video not controlled, and what is being done to prevent live-streamed violence in the future?
Social media platforms struggled to contain the video, prompting Facebook and others to re-examine how live stream videos are flagged.
New Zealand Police called for people not to share the harmful video, which showed the gunman shooting repeatedly at worshippers from close range. Despite Facebook's efforts to eradicate it, individuals attempted to re-upload the video some 1.5 million times, pushing it viral.
Videos of the attack were posted across various social networks, and links to the live stream and a manifesto were posted on 8chan prior to the attack.
Facebook has said that in the first twenty-four hours after the massacre it removed one and a half million videos of the attack. Around two hundred people viewed the seventeen-minute video of the Christchurch shootings while it was live, and the first user report came twelve minutes after the broadcast ended.
Facebook said it might have removed the footage more quickly had it been flagged while still live.
Facebook says it prioritises user reports of a live stream for "accelerated review." "We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground."
Facebook users who saw potentially violent or abusive live stream footage after it aired could alert Facebook moderators quickly. But Facebook said this expedited process covered only videos flagged for suicide; other dangerous events, including the New Zealand attack, were not covered.
Facebook said this may change.
Officials acknowledged, "This particular video did not trigger our automatic detection systems." Specific types of content, such as terrorist propaganda and graphic violence, have been successfully limited on the social network through automated filters, but Facebook said their effectiveness depends on volume and repeated exposure to such content. Platforms would need to be exposed to large volumes of similar data before they could automatically detect horrific imagery like that in the New Zealand video. Facebook also pointed to the additional challenge of mistakenly flagging innocuous content that resembles offending video.
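Facebook has not published the details of these filters, but one widely known technique for catching re-uploads of known footage is perceptual hashing: each banned video frame gets a compact fingerprint, and new uploads whose fingerprints fall within a small distance of a known-bad one are flagged for review. The sketch below is a minimal illustration of that idea, assuming toy frames and an arbitrary threshold; none of it reflects Facebook's actual system.

```python
# Illustrative sketch of perceptual-hash matching for re-uploaded media.
# This is NOT Facebook's actual system; the function names, frame format,
# and threshold are assumptions for demonstration only.

def average_hash(frame):
    """Fingerprint a grayscale frame (2D list of 0-255 ints):
    each pixel becomes 1 if it is brighter than the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming_distance(h1, h2):
    """Number of positions where two fingerprints differ."""
    return sum(a != b for a, b in zip(h1, h2))

def is_reupload(candidate, known_bad_hashes, threshold=2):
    """Flag a frame whose fingerprint is near any known-bad fingerprint.
    A small threshold tolerates re-encoding noise such as compression
    artifacts, which exact byte-matching would miss."""
    h = average_hash(candidate)
    return any(hamming_distance(h, bad) <= threshold for bad in known_bad_hashes)

# A banned "frame" and a slightly re-encoded copy of it.
banned = [[10, 200], [220, 30]]
noisy_copy = [[12, 198], [219, 33]]
unrelated = [[200, 10], [30, 220]]

bad_hashes = [average_hash(banned)]
print(is_reupload(noisy_copy, bad_hashes))   # → True  (the copy is flagged)
print(is_reupload(unrelated, bad_hashes))    # → False
```

This also illustrates the trade-off Facebook described: widen the threshold and the filter catches more altered re-uploads but starts flagging innocuous frames that merely resemble the banned one.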
"This call to action is not just about regulation, but instead about bringing companies to the table and saying, 'You have a role too, and we have expectations of you,'" said Prime Minister Jacinda Ardern.
When escalating a potential case of terrorism in a live stream, moderators are asked to answer a series of questions about the offending content:
- What is happening that indicates the user is committing an act of terrorism?
- When has the user said or indicated that harm will occur?
- Who is being threatened?
- Does the user show weapons in the video?
Facebook has recently introduced new rules under which people who violate its most serious policies are immediately banned from using Facebook Live for a period of time. Under the new policy, the alleged Christchurch shooter could not have live streamed the massacre from his account in March.
Prime Minister Jacinda Ardern expressed frustration that the footage remained online four days after the massacre. The video has been banned in New Zealand, making it a crime for anyone to store or distribute it.
New Zealand, like other countries, is urging social media platforms to do more to prevent extreme content online in the future.
New Zealand's deepest sympathies and thoughts go out to the families and friends of all the victims over the tragic loss of their loved ones.