Hi, it’s Sam. Welcome to the eighth edition of Top10VPN’s Week in Review, I hope the January blues aren’t hitting too hard!
This week there’s been more concerning AI news, with the IMF warning that the technology could soon affect 40% of all jobs worldwide and significantly worsen inequality. Meanwhile, the UK’s Information Commissioner's Office (ICO) has launched a consultation to work out how “data protection law should apply to the development and use of the technology.”
Given AI’s potential to entirely reshape our digital privacy and perhaps even society, it makes sense to look ahead and prepare accordingly. But this week, I want to look at an area where AI has already been used for several years and is actively undermining our rights online: internet censorship.
At the beginning of last year, Russia began using a new AI-enabled internet censorship tool called Oculus. According to reports in Russian media, the tool can scan 200,000 images a day and automatically “detect extremist materials, calls for riots […] and LGBT propaganda.”
The technology represents a significant development in the Russian censorship apparatus. Instead of relying on individuals to trawl through the internet to find “offensive” material, Russia’s censorship agency can now use an AI solution that promises to be both more efficient and more comprehensive.
As with anything related to AI, it’s worth being skeptical of the hype. Last year, for example, while researching the websites blocked in Russia, we found what may be evidence of the tool making errors: sites that appeared to have been blocked simply because they contained images featuring the colors of the Ukrainian flag.
China has also long used AI to supplement the work of human monitors to identify and block access to content online — from specific social media posts to entire websites. Ironically, it’s plausible that AI-enabled tools are being used to help identify and restrict access to generative AI material in the country today.
But this trend is certainly not limited to authoritarian countries. According to Freedom House: “Legal frameworks in at least 21 countries mandate or incentivize digital platforms to deploy machine learning to remove disfavored political, social, and religious speech.”
I spent a few hours looking at some of the biggest firewall manufacturers in the US and found countless examples of how AI could be used to bolster internet filtering efforts. Take this, from California-based Fortinet, as an example:
The FortiGuard cloud-delivered, AI-driven web filtering service provides comprehensive threat protection […] It leverages AI-driven behavioral analysis and threat correlation to immediately block unknown malicious URLs with near-zero false negatives. Also, it provides granular blocking and filtering for web and video categories to allow, log, and block for rapid and comprehensive protection and regulatory compliance.
These products can, of course, be used to increase the security of internet users. However, depending on who is setting the regulations, they can also be used for draconian internet censorship.
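To make the mechanism concrete, here is a minimal sketch of how category-based URL filtering works in principle. The domains, categories, and function names below are hypothetical illustrations, not Fortinet’s actual implementation — real products use large, cloud-updated databases and machine-learning classifiers for unknown URLs.

```python
# Toy sketch of category-based URL filtering: a domain-to-category database
# plus a per-category allow/log/block policy.
from urllib.parse import urlparse

# Hypothetical category database and policy.
CATEGORIES = {
    "news-example.org": "news",
    "social-example.com": "social-media",
    "malware-example.net": "malware",
}
POLICY = {"malware": "block", "social-media": "log", "news": "allow"}

def decide(url: str) -> str:
    """Return the policy action for a URL based on its domain's category."""
    domain = urlparse(url).netloc
    category = CATEGORIES.get(domain, "unknown")
    return POLICY.get(category, "allow")  # default action for unknown categories

print(decide("https://malware-example.net/payload"))  # block
print(decide("https://news-example.org/story"))       # allow
```

Notice that the censorship question is entirely a policy question: swap “malware” for “independent media” in the policy table and the same machinery becomes a censorship tool.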
The use of AI to supplement existing internet censorship systems is particularly worrying because of the sheer volume of content it allows censors to scan and the speed at which that content can then be blocked.
In high-censorship countries, people have often been able to bypass restrictions simply because there was too much material for the censors to keep up with. Once a website is blocked, people simply spin up mirror sites or switch to a new platform altogether.
However, with AI tools, there’s a risk these traditional censorship circumvention tactics will no longer work, with websites being monitored constantly and blocked almost instantly.
The use of AI in internet censorship also dramatically increases the risk of overblocking in cases where the technology is unable to understand nuance, sarcasm and metaphors. As we’ve seen repeatedly, this often impacts society’s most vulnerable the hardest — like the blocking of LGBTQ+ educational resources due to the inclusion of language around sex and gender.
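To see why this kind of overblocking happens, consider a toy keyword filter. The terms and logic below are purely illustrative, not any real vendor’s model, but they show how a classifier with no grasp of context treats an educational resource and genuinely harmful material identically.

```python
# Toy sketch of keyword-based overblocking: a naive filter that triggers on
# flagged terms cannot distinguish educational content from the material
# it is actually meant to target.
FLAGGED_TERMS = {"sex", "gender"}  # hypothetical blocklist

def naive_block(text: str) -> bool:
    """Block if any flagged term appears anywhere in the text."""
    words = {w.strip(".,").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

# An educational health resource is blocked purely for its vocabulary.
print(naive_block("A guide to sex education and gender identity for teens"))  # True
print(naive_block("Weather forecast for the weekend"))                        # False
```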
However, these technological limitations also present an opportunity for those looking to circumvent the blocks, as we’ve seen countless times on Chinese social media.
Moreover, AI can also be used to bolster anti-censorship techniques. Geneva, a new tool built by researchers at the University of Maryland, uses AI to establish a “novel experimental genetic algorithm that evolves packet-manipulation-based censorship evasion.” In other words, it automatically learns how internet censorship is being implemented and deploys tactics to overcome it.
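In spirit, that approach can be sketched as a standard genetic-algorithm loop. The code below is a heavily simplified illustration with a mock fitness function, not Geneva’s actual implementation — in the real tool, evaluating a strategy means sending manipulated packets and measuring whether the connection survives the censor.

```python
# Heavily simplified sketch of a genetic algorithm evolving censorship-evasion
# strategies. A "strategy" here is just a list of packet-manipulation actions.
import random

ACTIONS = ["fragment", "duplicate", "corrupt-checksum", "drop", "tamper-ttl"]

def random_strategy() -> list:
    return random.choices(ACTIONS, k=random.randint(1, 3))

def fitness(strategy: list) -> float:
    # Mock evaluation: pretend the censor is confused by fragmentation
    # combined with checksum corruption. A real evaluation would probe
    # the network and score actual evasion success.
    return float("fragment" in strategy) + float("corrupt-checksum" in strategy)

def evolve(generations: int = 20, pop_size: int = 30) -> list:
    population = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, then mutate survivors to refill the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [s + random.choices(ACTIONS, k=1) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best)  # e.g. a strategy combining "fragment" and "corrupt-checksum"
```

The key idea is that no human has to reverse-engineer the censor: strategies that happen to slip past it reproduce, and the rest die off.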
The dangers and possibilities of AI in relation to internet censorship are still very much in their infancy. However, monitoring the ways AI is already being used in this space is vital to upholding our digital rights globally.
What We’ve Been Reading
The Intercept: OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”
OpenAI recently removed language from its usage policy that explicitly prohibited the use of its technology, including ChatGPT, for military purposes. The original policy included a clear ban on activities like weapons development and military use, but the revised policy omits the specific prohibition on "military and warfare" use, instead focusing on a broader ban against harming others.
TechCrunch: Hackers begin mass-exploiting Ivanti VPN zero-day flaws
Hackers have started mass-exploiting two critical zero-day vulnerabilities in Ivanti’s corporate VPN appliance, affecting over 1,700 devices worldwide in various industries. Despite the widespread exploitation, Ivanti has not yet released patches but plans to do so starting the week of January 22, while advising administrators to apply interim mitigation measures.
Dark Reading: OpenAI's New GPT Store May Carry Data Security Risks
The new app store for ChatGPT, launched by OpenAI, could expose users to security risks due to the potential for malicious bots and legitimate ones that may transfer user data to insecure external locations. While chats with these custom GPT models are largely protected, API-integrated functionalities could share parts of chats with third-party providers, who are not subject to OpenAI's privacy and security commitments, raising concerns about where and how this data might be used.
The Guardian: Google promised to delete location data on abortion clinic visits. It didn’t, study says
Despite Google's pledge 18 months ago to delete location data of users' visits to abortion clinics, a new study reveals that the company still retains this data in 50% of cases. Retaining this sensitive location information, despite Google's promise, raises serious concerns about the potential use of such data by law enforcement in states where abortion is restricted or banned.

Approximately 450 people will be working on the enforcement of the UK's Online Safety Act, a significant undertaking that represents about a quarter to a third of the entire personnel of the UK regulatory body Ofcom. The enforcement of this act will be a challenging and costly endeavor, and will likely lead to extensive legal battles with major tech companies.
Reclaim Your Face: EU AI Act will fail commitment to ban biometric mass surveillance
The EU AI Act, celebrated by EU lawmakers for its commitment to human rights, including a ban on biometric mass surveillance (BMS), will actually fail to ban most dangerous BMS practices. Instead of a complete ban, the Act will introduce conditions for using these systems, with minimal restrictions on live and retrospective facial recognition, allowing many forms of emotion recognition and police categorization based on skin color.
Politico: Inside Biden’s Secret Surveillance Court
The Biden administration has established a secret surveillance court, officially known as the Data Protection Review Court, to address privacy concerns of European citizens under U.S. law, particularly in relation to transatlantic data flows. However, the court operates with a high level of secrecy: its location and decisions are confidential, plaintiffs cannot appear in person, and its rulings are binding on federal agencies without the possibility of appeal, raising concerns about transparency and U.S. intelligence operations.
The Latest from Top10VPN
This week, we updated our 2023 VPN vulnerability research to reveal that VPN vulnerabilities increased by 47% in 2023 compared to the average over the two years prior, with a 43% increase in confidentiality impact and a 40% rise in severity. The research used the US National Vulnerability Database (NVD) and the Common Vulnerabilities and Exposures (CVE) system to examine threats posed to VPN products over the past three years. We also used this data as the basis for our predictions for 2024.
Top10VPN in the News
VOA: Turkey’s Latest VPN Ban is Another Block to Independent Media
Middle East Eye: How do you stop students cheating on their exams
Business Insider: Kenya Lost $27m to Telegram Shutdown in 2023
Somewhere on Earth: Internet shutdowns cost more than $9 billion in 2023