For the past ten years, the biggest companies in the tech industry have been allowed to grade their own homework. They’ve protected their power through extensive lobbying while hiding behind the infamous tech industry adage, “Move fast and break things.”
Food and beverage companies, the automotive industry, and financial services firms are all subject to regulatory and accountability measures designed to ensure high levels of ethics, fairness, and transparency. Tech companies, by contrast, have often argued that any legislation would limit their ability to trade effectively, turn a profit, and do what has made them powerful. A number of bills and laws around the world now aim to finally limit these powers, such as the UK’s long-awaited Online Safety Bill. The bill will pass in 2023, but its limitations mean it will not be effective.
The Online Safety Bill has been in the works for several years, and it effectively shifts the duty of care for monitoring illegal content onto the platforms themselves, a move that would set a dangerous precedent for freedom of expression and the protection of marginalized groups.
In 2020 and 2021, research by YouGov and BT (along with Glitch, the charity I run) found that 1.8 million people had suffered from threatening behavior online in the past year. Twenty-three percent of respondents were members of the LGBTQIA community, and 25 percent said they had experienced racist attacks online.
In 2023, legislation aimed at addressing some of this harm will come into force in the UK, but it will not go far enough. Activists, think tanks, and experts in the field have raised numerous concerns about the effectiveness of the Online Safety Bill in its current form. The think tank Demos emphasizes that the bill does not name minority groups – such as women and the LGBTQIA community – even though these communities tend to be disproportionately affected by online abuse.
The Carnegie UK Trust has noted that while the bill uses the term ‘significant harm’, it sets out no specific process for defining what that is or how platforms would need to measure it. Academics and other groups have raised alarms over the bill’s proposal to drop the previous Section 11 requirement that Ofcom “should encourage the development and use of technologies and systems to regulate access to [electronic] material.” Others have raised concerns about the removal of clauses related to education and future-proofing, which leaves the legislation reactive and ineffective: it will not be able to account for harm caused by platforms that have not yet risen to prominence.
Platforms need to change, and other countries have passed laws trying to make that happen. Germany enacted the NetzDG in 2017, the first law in Europe to tackle hate speech on social networks: platforms with more than 2 million users have a seven-day window to remove illegal content or face fines of up to 50 million euros. In 2021, EU legislators laid down a set of rules for the big tech giants with the Digital Markets Act, preventing platforms from giving preferential treatment to their own products, and in 2022 we saw progress on the EU AI Act, which included extensive consultations with civil society organizations to adequately address concerns about marginalized groups and technology – a working arrangement activists in the UK have been calling for. In Nigeria, the federal government issued a new online code of conduct to combat misinformation and cyberbullying, which included specific clauses to protect children from harmful content.
In 2023, the UK will pass legislation to combat similar harms and will finally make headway on a regulator for tech companies. Unfortunately, the Online Safety Bill will not provide adequate measures to actually protect vulnerable people online, and more needs to be done.
Source: “An Online Safety Bill Is Coming to the UK—But It’s Not Enough,” WIRED, https://www.wired.com/story/online-harm-moderation/