The Struggle to Control AI Rages On
Dear Reader,
Escaping the ubiquity of artificial intelligence (AI) is a near-impossible task in these times, whether it’s the multiple AI initiatives being rolled out by industries, the growing number of artists and creators taking a stand against AI being fed on their work, or the urgency with which policy circles are seeking to come to grips with the issue. This last month alone saw the launch of three new major Large Language Models (LLMs) for investors to thirst over. Meta, in collaboration with Microsoft, just unveiled LLaMA 2. Taking a cue from ChatGPT and Google’s Bard, Meta’s open-source LLM is now available for free research and commercial uses, allowing the model to learn and improve in the wild through iterative public use. Elon Musk, who not long ago warned of AI’s catastrophic potential, announced xAI in July, an AI outfit with the goal of understanding nothing short of ‘reality’ in its totality. Not to be left behind was Chinese e-commerce giant JD, which recently launched ChatRhino for industrial adoption in various sectors, including logistics, finance, and retail.
As LLMs drop from the skies, regulators are trying to enforce guardrails as quickly as they can. China’s ‘Interim Measures for the Management of Generative AI’ seeks to control the flow of information proliferating through GenAI applications, while also attempting to bolster the local industry by exempting it from the more stringent regulations recently enforced on China’s tech sector. The measures also signal an intention to regulate how data is used to train these models, particularly in order to preserve intellectual property claims. Such an emphasis on regulating data was also visible in a very different regulatory effort launched this month: the UNHRC adopted a resolution calling for global oversight over AI and for ensuring it is grounded in human rights. Yet, as the data question raises its head again, important regulatory tensions between protecting civic rights and carving a space for claims of economic justice remain to be fully negotiated.
This seemingly breathless rise of AI faces a growing backlash from creators and users over the non-consensual use of their output and data in model building, with Hollywood having surprisingly turned into a frontline of civic activism around the issue. Technologies as fundamental to our social and economic institutions as AI will invariably push the wider public to contest the shape they are taking. The question is whether these moments can be harnessed for meaningful structural and regulatory interventions.
In an ongoing series at DataSyn, we home in on the political economy of AI and the GenAI conjuncture. This month, we start by examining Big Tech’s capture of GenAI and the business models it is employing to consolidate this power. Our second article explores the many dangers of this technology’s proliferation in education. In a bonus feature, we take stock of India’s digital public infrastructure and its discontents.
Finally, in some non-AI news, let’s talk about ‘Threads’, Meta’s micro-blogging platform, which came and conquered, to say the least. Outpacing even ChatGPT, the application amassed over 100 million users at the outset, adding to Twitter’s woes of dwindling traffic and a dissatisfied user base. In a sea of alternatives seeking to dethrone Twitter as the discourse maker, Threads is well-placed to eat up the competition, as it rides on its millions-strong captive Instagram user base, a move that should already be raising red flags for competition regulators. Its surprising embrace of the Fediverse by joining ActivityPub (an open, decentralized social networking protocol used by smaller apps like Mastodon) has also raised hackles within the alternative tech community, which is unsure of what to make of such a move. We’ll continue engaging with this and more.
The DataSyn Team
THE BIG EXCESS
Who Learns and Who Profits in the Era of Artificial Intelligence?
Cecilia Rikap
Tracking Big Tech’s business models, Cecilia Rikap takes a sharp look at the recent GenAI boom and its roots in the operations of Silicon Valley giants. Comparing the strategies that different Big Tech firms are using to capitalize on this new gold rush, Rikap brings into relief the scale of monopolistic power that is emerging in the AI space.
Read on.
THE NEW DIVERGENCE
Generative AI and Education: Adopting a Critical Approach
Sopio Zhgenti and Wayne Holmes
A prominent anxiety around GenAI has been its potential for abuse by students for assignments. But as Sopio Zhgenti and Wayne Holmes argue, the repercussions for education go far beyond students having ChatGPT do their homework. They’re about the foundational moorings of knowledge.
Read on.
THE POLICY TABLE
What’s Public about India’s Digital Public Infrastructures?
Eshani Vaidya
With the growing hype around digital public infrastructures (DPI), there is a need for vigilance around what is being propagated under this banner. Analyzing recent developments in India’s digital policy scene, Eshani Vaidya raises important concerns about the direction in which DPIs are moving in India, as they portend a shift in critical public services from welfare delivery to a more commodified, government-as-a-platform model.
Read on.
The Sins & Synergies Lounge
What are some of the unique new challenges that GenAI poses with regard to regulating competition and guarding against antitrust concerns? Check out this ‘Computational Power and AI’ policy submission from the AI Now Institute, which delineates the risks of AI monopolization inherent to concentration in the cloud-computing market.
With AI proliferating, the free rein that LLMs have had in feeding off of the internet’s creativity has come under scrutiny. See, for instance, this piece in the New York Times about content creators pushing for greater control over their data, or this recent article from WIRED about how AI looms large in Hollywood’s ongoing labor disputes.
Leading Silicon Valley figures and Big Tech leaders have warned against AI’s dangers and called for urgent regulation, even as they invest in and profit from these initiatives. What may be the calculations underlying this discourse of doomsaying? Tune in to the Cyber podcast, where Lee Vinsel discusses why ‘Big Tech Wants You to Think AI Will Kill Us All’.
Even before the GenAI revolution, debates around artificial intelligence have been steadily brewing for a number of years. To get a taste of some of the key issues, take a peek into Bot Populi’s ‘Inside Intelligence’ track.
Online freelance workers are quickly becoming the first to adopt AI technologies into their work cycles, whilst also being the most at risk of being outright replaced by the same technology. Check out Rest of World’s illuminating coverage of this issue to find out more.
Finally, here’s a throwback to a piece from Logic(s) Magazine on the datafication of the shipping industry, which explores the ways in which digital technologies are reshaping eminently material practices.
Post-script
DataSyn is a free monthly newsletter from IT for Change, featuring content hosted by Bot Populi. DataSyn is supported through the Fair, Green, and Global Alliance.
Liked what you read? To have such concise and relevant analysis on all things Big Tech delivered to your inbox every month, subscribe to DataSyn!