Reflections on Rezgar Akrawi's Article on Artificial Intelligence

K. Kaps
10 May 2026


Rezgar Akrawi's article represents an ambitious and multi-layered contribution to the analysis of artificial intelligence (AI). The author undertakes to deconstruct the prevailing conception of technological "neutrality," arguing that artificial intelligence is organically embedded in the relations of capitalist production and functions as an instrument for reproducing class domination through more complex and less visible means.
The significance of the text is not limited to a general denunciation of the "capitalist use" of artificial intelligence. On the contrary, the article touches upon a broad spectrum of themes: the transformation of labour and the intensification of exploitation through automation, the role of data as a new form of capital, the algorithmic shaping of consciousness and social behaviour, as well as the concentration of technological power in the hands of states and monopolistic enterprises. Alongside these, the article raises questions of ideological hegemony in the digital environment, new forms of control and surveillance, and the extension of exploitation into the "digital space" itself as a field of value production.
The article does not rest at diagnosis alone, but proposes the necessity of an alternative, "leftist" direction in the development of artificial intelligence, grounded in democratic oversight, collective ownership, and the social orientation of technology.
A critical engagement with the text must therefore take into account not only its theoretical coherence and ideological premises, but also the breadth of the themes it opens up. Nevertheless, the present critical attempt does not seek to cover all these aspects comprehensively. It focuses selectively on certain points of the article, offering some preliminary thoughts while inevitably leaving aside other equally important issues that deserve separate analysis.
________________________________________
Akrawi's article begins from an astute starting point: artificial intelligence is not a neutral technology, but develops within specific social relations, and in particular within capitalism. This observation is significant, yet it needs to be formulated more precisely. Artificial intelligence does not operate within an abstract "capitalism," but within different forms of capital and amid the rivalries between them.
In industry, it is deployed to organise production more efficiently and to control labour. In commerce and finance, it accelerates the circulation of commodities and assists in risk management. At the same time, digital platforms accumulate particular power, as they control data, infrastructure, and access to markets. Thus, artificial intelligence is not merely a tool, but an integral part of contemporary strategies of capital accumulation.
________________________________________
1. Artificial Intelligence through the Lens of Value Theory
From the standpoint of Marxist theory, the fundamental point is that value is produced by living labour. Machines, however advanced, do not create value on their own,¹ but are incorporated into a process in which labour remains decisive. This does not negate the importance of technology, but helps us understand its role.
The matter becomes more complex with the emergence of new forms of labour. Digital and cognitive labour² do not refute the theory of value, but render it more demanding: it is not sufficient to recognise that these constitute labour, but necessary to determine whether and how they are directly incorporated into the process of surplus value production, or whether they function in a supportive capacity in its formation and realisation.³
This becomes clear when we examine how artificial intelligence is actually deployed. In production, for example, systems that direct workers in warehouses or distribution platforms do not produce value on their own. They organise and accelerate labour, making it more intensive and more controlled.
In circulation, similarly, systems that recommend products or adjust prices do not generate new value, but contribute to the faster sale of commodities and increase the probability of profit. In this sense, artificial intelligence intervenes at different phases of the same cycle.
At the same time, automation does not simply lead to a reduction of labour. In some areas it abolishes it; in others it transforms and displaces it. New forms of labour appear where they are not immediately visible — in data labelling, content moderation, and the training of the systems themselves. Thus labour is simultaneously curtailed in certain sectors, redistributed in others, and rendered altogether more fragmented.⁴
A further critical element is the role of data. Although data does not constitute an autonomous source of value, it is decisive for the manner in which production is organised and profit realised. Through data, enterprises can accelerate the circulation of commodities, reduce uncertainty, and intensify control over labour.⁵
Yet the exploitation of data does not occur under conditions of equal access. On the contrary, it is concentrated in the hands of a small number of large corporations that control the basic infrastructure and artificial intelligence systems. Thus this technology is directly connected to forms of monopolistic power.⁶
Viewed as a whole, artificial intelligence does not merely affect discrete functions, but is embedded in the total movement of capital, from production through circulation to consumption.
In this context, a fundamental contradiction emerges: on the one hand, technology increases productivity; on the other, it intensifies the pressures associated with the very process of value production itself.⁷ Capital attempts to address this tension through familiar mechanisms: the intensification of labour, cost reduction through the relocation of production to countries with cheap labour, expansion into new markets, and the commodification of data.
Akrawi's article acknowledges many of these developments, yet does not connect them systematically to the theory of value.
Without such analysis, the article's argumentation risks remaining superficial. "Exploitation" then appears as a general, almost moral term describing inequalities or injustices without explaining the specific economic mechanisms through which profit is produced. In this way, the principal advantage of the Marxist approach is lost: the capacity to connect social experiences with the structure of value production.
Perhaps this is also the most interesting point. Artificial intelligence is not only a means of reinforcing capitalism, but also a terrain on which its contradictions become more visible: between labour and automation, between the collective production of data and its private appropriation, between the diffuse contribution of many and the concentration of profit in the hands of a few.
________________________________________
2. The Limits of Conceiving Artificial Intelligence as a Unified Mechanism of Domination
A second issue in the article's analysis concerns the manner in which artificial intelligence is presented as a unified and almost homogeneous system of domination. The picture that emerges is that whether in factories, hospitals, public administration, or labour platforms, artificial intelligence operates in essentially the same way as a mechanism of surveillance and control.
This approach is not arbitrary. Indeed, the algorithmic organisation of labour and information is extending into an ever-growing range of fields. But if we remain at this level of generalisation, something crucial is lost: artificial intelligence does not constitute a unified mechanism, but an ensemble of different technological forms embedded in unequal and specific social relations.
From a Marxist standpoint, with its requirement for the concrete analysis of concrete conditions, there is no such thing as "artificial intelligence" as an abstract totality. There are particular applications embedded in different fields of production and power. An algorithm that organises labour in a warehouse is one thing; a system that supports medical diagnoses in a public hospital is quite another; and a system that regulates the visibility of content on social networking platforms is something else again. If all of these are treated as expressions of a single "algorithmic domination," the differences that determine both the mode of exercising power and the potential points of rupture are overlooked.
Every social phenomenon contains multiple contradictions, but in any specific conjuncture one of them acquires a decisive role that shapes the overall direction of development. In the case of artificial intelligence under contemporary capitalism, one such principal contradiction is that between the increasingly social character of the production of knowledge and data and its private capitalist appropriation.
This is particularly visible in large language models. They are trained on vast quantities of data produced collectively by users, workers, scientific communities, writers, and the general digital activity of millions of people. Yet the results of this collective production are appropriated by a small number of large corporations. Here the fundamental contradiction is concentrated: social production on the one hand, private appropriation on the other. This contradiction does not manifest uniformly, but is mediated by the particular social relations of each field, assuming different forms — from unpaid digital activity and precarious data annotation to the privatisation of scientific knowledge and the intensification of workplace control.⁸
This differentiation is not secondary. Technology is directly connected to the concentration and centralisation of capital. The large digital platforms and artificial intelligence corporations function as monopolistic nodes that control data, infrastructure, and algorithms. Thus the form that each application of artificial intelligence takes is not neutral, but depends on its position within this structure of power.
This becomes evident from concrete examples. A worker on a delivery platform does not confront "artificial intelligence in general," but a specific algorithm that regulates orders, time, and ultimately income. By contrast, a physician in a public health system may use decision-support systems embedded in a different institutional framework, involving contradictions of a different kind. The generalisation that "artificial intelligence everywhere surveils in the same way" does not help us understand these differences.
Artificial intelligence does not develop uniformly, either technically or socially. In certain fields such as labour platforms, it is directly linked to the intensification of exploitation. In others such as medical research or certain public services, more complex forms appear in which social needs and capitalist constraints coexist. This unevenness is itself an expression of the very structure of the system.
This also has immediate political significance. If domination is presented as absolutely homogeneous and omnipresent, it becomes unclear precisely where power is concentrated and where fissures might emerge. Yet political analysis cannot be confined to general denunciations; it requires the identification of the nodal points of power concentration.
For example, control over data in a large social media platform carries a different weight from the use of a local hospital management system. In the first case, questions arise concerning the commodification of attention and the shaping of public discourse. In the second, the primary concerns are those of service organisation, with different limits and possibilities for intervention.
From this perspective, contradictions do not have the same intensity or the same weight everywhere. This means that political intervention cannot be uniform and abstract. It must identify where the contradiction between social production and private appropriation becomes most acute, and where it can be transformed into a terrain of struggle.
The fundamental weakness of the article is not that it recognises artificial intelligence as an instrument of domination, but that it tends to present it as a unified mechanism devoid of internal differentiations. A more rigorous Marxist approach reveals that artificial intelligence constitutes a terrain on which domination assumes multiple forms.
In this way, the analysis becomes more concrete and more politically useful: not only because it reveals the existence of domination, but because it helps identify where and how it is constituted, and therefore where forms of resistance and confrontation can develop.
________________________________________
3. Artificial Intelligence as a Field of Contradiction: From Capitalist Function to Social Transformation
The discussion of artificial intelligence in Akrawi's article culminates in an apparent contradiction: on the one hand, artificial intelligence is presented as deeply embedded in capitalist relations of production; on the other, the possibility that it might serve as an instrument of social emancipation is left open. The problem is not that this dual picture is "wrong." The problem is that it is not adequately analysed as a contradiction requiring explanation.
If we begin from a fundamental Marxist principle — that the means of production are not neutral tools but are shaped within specific social relations and bear the imprint of those relations in their operation — then artificial intelligence is no exception. It is developed with investment capital that demands returns, trained on data collected through commodified activities, and applied in environments where the basic criterion is the increase of productivity and profit. When an algorithm organises labour in a warehouse or determines a platform driver's routes, it does not operate on neutral terms. It embodies capital's need to reduce costs, accelerate rhythms, control living labour, and intensify exploitation.
From this standpoint, artificial intelligence is indeed "structurally" capitalist — not in the sense that it is eternal and immutable, but because its present form is already embedded and shaped within these capitalist relations of production. The problem arises, however, when it is simultaneously implied that the same technology can be relatively easily reoriented towards social ends. If artificial intelligence is so deeply embedded in the logic of accumulation, it is by no means self-evident that it can simply "change hands" and function differently.
Every social phenomenon contains contradictory aspects. In the case of artificial intelligence, one aspect is its capitalist function: its use for the intensification of labour, for control, for the increase of surplus value. This is the dominant aspect in the present conjuncture. But it is not the only one. The same technology is based on collective knowledge, scientific labour, and data produced by millions of people. In this sense, it embodies a form of social cooperation that transcends the narrow boundaries of the enterprise.
This second aspect does not negate the first. It exists within it, in tension with it. The crucial question is not whether artificial intelligence "is" capitalist or "can become" social. The question is which side of the contradiction is dominant, and under what conditions this dominance can be altered.
In advanced capitalism, the basic means of production are concentrated in large monopolies. Today, artificial intelligence is not dispersed among small, independent units; it is controlled by powerful corporations that possess the data, the infrastructure, and the computing power. This means that the question of "re-appropriating" artificial intelligence is not technical but deeply political — it is a question of confronting concentrated economic power. It does not merely concern how a technology is used, but who controls its material preconditions.
Let us consider a simple example. An artificial intelligence system used in a corporation organises labour on the basis of speed, precision, and cost reduction. Routes, times, and even workers' breaks are calculated to increase productivity. If this system passes into collective control, it is not sufficient for it to remain unchanged. The very criterion of "optimisation" must change. Instead of profit maximisation, the goal might be the reduction of labour intensity, a better distribution of time, and the meeting of social needs. This, however, means altering the data collected, the objectives set, and the constraints built into the algorithm — that is, it means transforming the technical structure itself.
Here the dialectical relationship between productive forces and productive relations comes into relief. Artificial intelligence, as a productive force, does not develop in a vacuum, but is constituted within specific social relations that determine the objectives, criteria, and measures by which its operation is evaluated and directed.⁹ At the same time, its very development tends to reshape these relations, reinforcing or transforming existing forms of labour organisation and control.¹⁰
Thus the relationship is not external but internal: social relations are inscribed in the design of technology, while technology in turn reproduces and modifies those relations. Consequently, even under conditions of political power change or institutional transformation, existing technologies do not remain neutral; they carry embedded within them the preceding social relations and require active transformation in order to function differently.
This allows us to avoid two simplifications that frequently appear in this discussion. On the one hand, the idea that technology is neutral and it suffices to "use it correctly." On the other, the idea that it is so deeply capitalist that it cannot change. The reality is more complex: artificial intelligence is shaped by capitalism, but also contains possibilities that are not exhausted by it. The outcome of this contradiction is not predetermined.
The critical point, then, is the transition. And here the article leaves a gap. The idea that the Left must "reclaim" artificial intelligence is reasonable as a direction, but remains abstract if not accompanied by a concrete theory of how this can occur. Such a theory does not simply concern the change of ownership or control, but the transformation of the material conditions within which technology is designed and operates: the criteria of evaluation, the forms of labour it organises, and the ways in which it incorporates and utilises collective knowledge. Who will change the design criteria? Who will control the data? How will the technical infrastructure itself be transformed? These are not secondary questions; they are the core of the problem.
A consistent Marxist approach is obliged to treat artificial intelligence as a historically determined form of capital's development, which embodies specific relations of exploitation and is simultaneously embedded in the contradictions of the capitalist mode of production itself. These contradictions — and not some external "use" of technology — constitute the ground upon which questions of social transformation can be posed.
Ultimately, the contradiction identified in the article reflects a real contradiction in contemporary capitalist development. Artificial intelligence is simultaneously an instrument for intensifying exploitation and a condensation of collective knowledge. Whether it remains the former or is transformed into something different does not depend on its "good use," but on the overall movement of class struggle — on which social force will impose its own criteria upon the very organisation of production.
In this sense, the real stake is not technology itself, but social control over its material and technical preconditions. And this is a question that is not merely theoretical, but profoundly political.
________________________________________
4. Artificial Intelligence, Exploitation, and Ideological Domination: Their Unity
The article connects artificial intelligence both to the intensification of labour and to the shaping of consciousness through the control of information.
Analysing artificial intelligence as simultaneously a mechanism of exploitation and a mechanism of ideological domination presents no inherent problem. On the contrary, it is entirely consistent with the Marxist tradition, which does not limit the domination of capital to the field of production alone, but recognises it also at the level of the reproduction of social relations. The issue, however, is not the mere coexistence of these two levels, but the manner in which they are connected to one another.
In historical materialism, ideology does not constitute an autonomous force that acts upon society from without. It is produced within the very material relations of production and reproduces those relations at the level of consciousness. This means that the analysis of the ideological function of artificial intelligence must begin from the manner in which it is incorporated into the process of value production and capital reproduction.
If we examine more concretely how contemporary digital platforms operate, it becomes clear that algorithms are not designed primarily to "control consciousness" in an abstract manner. They are designed to maximise measurable economic quantities: time on site, interaction, and advertising revenue. An algorithm that recommends videos or news does not have as its immediate goal the shaping of political convictions. Its goal is to keep the user on the platform for as long as possible. Yet it is precisely through this process that ideological effects arise: certain information is amplified, other information disappears, and specific forms of discourse become dominant.
Here lies the crucial mediating mechanism: the ideological effect is not independent of the economic function, but is its product. The commodification of attention does not entail that experience and consciousness directly produce value. Rather, they constitute a field of potential valorisation activated only through specific forms of human labour: algorithm development, content production, data analysis, and advertising targeting. Value does not derive from attention itself, but from the socially organised labour that renders it exploitable. In this sense, artificial intelligence does not "add" ideological domination on top of exploitation; it incorporates it into the very process of value production and circulation.
If this relationship is not clarified, artificial intelligence risks appearing as an almost autonomous force for the control of consciousness. And here we pass from materialism to a form of technological idealism, in which domination appears to spring from information systems rather than from the social relations that produce them.
Lenin insisted that the ideological domination of the ruling class does not arise spontaneously, but is organised through specific mechanisms: the state, the media, and institutions. Today, it is also shaped through the operation of digital platforms. Large corporations determine what becomes visible, while algorithms promote particular conceptions as self-evident. This process is not automatic, but rests on human labour and is embedded in the everyday practices of use, shaping the way we think and perceive the world.
The crucial issue is not merely the existence of ideological domination, but how it connects to class struggle. If the analysis of artificial intelligence gives the impression that domination is exercised everywhere in the same way — in production, in consumption, in culture, in everyday life — without hierarchy or prioritisation, a serious strategic problem arises. Political struggle cannot be waged "everywhere in the same way." It requires the identification of the nodal points where the material power of capital is concentrated.
From this standpoint, production and the exploitation of labour remain the central terrain. Platforms, data, and algorithms do not negate this foundation; they restructure it. A driver on a delivery platform does not first experience "ideological manipulation" and then exploitation. He experiences exploitation through an algorithmic mechanism that simultaneously organises his behaviour. Ideology and economy are not two separate levels, but moments within the same process.
Let us advance one step further, with an emphasis on the dialectical relationship between base and superstructure. Mao stressed that, although the economic base is decisive "in the last analysis," in specific conditions the superstructure — politics, ideology, culture — can play a decisive role. This is of particular significance for understanding artificial intelligence.
Today, the architecture of algorithms — what is measured, what is displayed, what is concealed — is not neutral. It may reinforce certain forms of social behaviour and weaken others. In periods of social tension, this can genuinely affect the outcome of conflicts. For example, the visibility or concealment of strike mobilisations in digital media is not merely a technical matter; it can influence the very development of class struggle.
Yet the emphasis on the role of consciousness does not mean that it is severed from its material foundation. On the contrary, it means that the struggle is waged simultaneously on multiple levels, with different principal and secondary elements in each conjuncture. The error would be to invert this relationship and consider control over information as the primary terrain of conflict independently of production.
If this occurs, the subject of analysis itself shifts. The working class, as the class that produces value, retreats to the background and is replaced by a general aggregate of "users" or "consumers of information." This shift is not innocent. It weakens the concept of exploitation and converts critique into a matter of "access to information" or "control of data," distancing it from the relations of production.
A rigorously Marxist approach is obliged to avoid this shift. Artificial intelligence must be analysed primarily as a form of the organisation of labour¹¹ and the valorisation of value,¹² which simultaneously extends into the field of the reproduction of social relations.¹³ Its ideological function is not independent, but embedded within this process.
From here a clearer strategic picture emerges. The conflict is not simply located "everywhere," but is concentrated where artificial intelligence organises and intensifies exploitation: in workplaces, on platforms, and in data infrastructure. At the same time, it extends to the level of ideology, where the conditions of perception and action are formed. The unity of these two levels is not a theoretical luxury; it is a condition for understanding where and how effective intervention can occur.
In this sense, the fundamental requirement is not merely to denounce artificial intelligence as an instrument of domination, but to map the specific mechanism connecting the production of value with the production of consciousness. Only then can the point be identified at which class struggle can become genuinely effective.
________________________________________
5. Open Source, Regulation, and Democratic Control: Possibilities and Limits
The article proposes solutions such as open source code, transparency, "neutral" and democratically managed systems, and international regulation of artificial intelligence in the service of society.
These proposals appear, at first glance, realistic and technically feasible. Open source artificial intelligence models already exist that permit greater transparency in the operation of algorithms.¹⁴ Yet from a Marxist standpoint, a deeper question arises: can technology become "neutral" within a system that is structurally non-neutral?
The experience of digital platforms shows that even when the code is open, real power continues to reside in the infrastructure: in the data, servers, distribution networks, and ownership of the platforms themselves.¹⁵ In other words, "transparency" does not negate the class relation. Forms of organisation, even technically "neutral" ones, always embody specific social relations. The problem, therefore, is not merely access to the code, but who controls the direction of production and by what criteria.
The idea of international regulation likewise raises a genuine issue: artificial intelligence does not develop nationally but within global networks of monopolistic competition. The development of large AI models requires enormous computing resources concentrated in a small number of countries and corporations.
If we take Akrawi's proposals seriously — open source, transparency, democratic control, and international regulation — the fundamental question is not whether all of this is good or desirable, but whether it can genuinely alter the power relations within which technology develops. And here reality proves far more intractable than theory would suggest.
A first characteristic example is the history of "open source software" itself. Platforms such as Linux began as collective, non-commodified endeavours with a strong element of cooperation and community. Today, however, they form the basic infrastructure for the largest capitalist enterprises in the world — Amazon, Google, and Microsoft. An "open" system not only failed to impede the concentration of capital, but was fully absorbed into it. From a Marxist standpoint, this is not a paradox: capital is not interested in the form of ownership at the level of code, but in the control of the conditions of valorisation. So long as the infrastructure, data centres, markets, and networks belong to a few, "openness" does not negate exploitation.
A second example concerns the very promise of "transparency." In many cases, corporations make portions of their algorithms public or provide tools that partly explain how an artificial intelligence system arrives at a decision. Yet these practices offer only limited understanding, without revealing the full mechanism of operation.
The crucial point is that power is not located exclusively in the code, but primarily in the data and the computing infrastructure. Thus, even when a model appears "transparent," if it depends on inaccessible datasets and concentrated resources, effective power remains restricted to a small number of actors. The large language models developed by companies such as OpenAI or Google are not merely code, but complex systems that incorporate data, infrastructure, and large-scale investment.
In this sense, "transparency" frequently operates as a partial and controlled process: it creates the impression of openness without altering the concentration of power, contributing more to the legitimation of existing structures than to genuine democratisation.
At the level of state regulation, the examples are equally revealing. The EU has sought to regulate artificial intelligence through frameworks such as the AI Act.¹⁶-;- Yet even the most advanced regulations frequently end up -function-ing as "rules of the game" that stabilise the market rather than mechanisms for overturning it. The experience of regulations such as GDPR and the AI Act shows that large corporations possess the resources to comply, influence legislation, and pass on costs. Smaller players´-or-collective enterprises, by contrast, frequently struggle to meet compliance requirements that presuppose resources and organisational structures they often lack. Thus, regulation can reinforce the concentration of capital.¹⁷-;-
The international dimension is even more revealing. The idea of uniform international rules for artificial intelligence founders on the competitive character of contemporary capitalism. Examples such as US restrictions on chip exports to China, the differing regulatory models of the US, EU, and China,¹⁸-;- and the conflicts surrounding data sovereignty¹⁹-;- demonstrate that "regulation" also -function-s as an instrument of geopolitical power. This is not accidental: in the stage of imperialism, technology constitutes a weapon of competition both among monopolies and among states, not a neutral field of cooperation.
There are also more "everyday" examples of the failure of the idea of democratic control. Social media platforms have introduced mechanisms of "community governance" or "user feedback."²⁰ Nevertheless, the fundamental decisions about how algorithms operate are made on the basis of profitability. What is displayed and what remains marginal is not decided democratically, but according to how much attention it attracts and how much advertising revenue it generates. User "participation" is thus incorporated into the very process of value production; it does not overturn it.
These examples reveal something still deeper: even when the "form" changes — open source code, transparency, regulation — the very technological structures continue to carry the old social relations. A system designed to maximise efficiency, prediction, and control does not automatically become an instrument of collective emancipation simply because it has become "open" or "regulated." This requires a redesign of its very objectives and operating criteria, something that cannot occur without a change in the relations of production.
On the whole, these negative examples do not mean that proposals for transparency, open source code, or regulation are useless. They mean, however, that within capitalist relations of production such proposals tend to be absorbed and curtailed. In plainer terms: they may improve certain aspects of the system, but they are unlikely to change its core. And that core, from a Marxist standpoint, is none other than the production and appropriation of surplus value.
If this question is not posed, the discussion of "democratic artificial intelligence" risks remaining at the level of managerial forms, without touching the material relations that determine how and for whom technology operates.
________________________________________
6. From Use to Design: Technology as a Terrain of Social and Class Conflict
The article maintains that the Left does not need to reject technology, but to understand it in depth and endeavour to direct it in the interests of workers. This understanding is not a matter of "modernisation," but a basic precondition for substantive political action.
If technology is rejected wholesale as capitalist, it becomes more difficult to understand how labour, communication, and power are organised today. By contrast, when we know how these systems operate, we can identify their logics and the points at which they can be contested.
In workplaces, for example, algorithms determine rhythms, evaluations, and conditions. Without an understanding of these mechanisms, workers struggle to respond to them, reject them, or change them. Thus the lack of knowledge ultimately reinforces inequalities.
Knowledge of technology alone does not resolve problems, but it is necessary. Without it, every political effort remains superficial, because it fails to touch the fundamental ways in which power is produced and maintained.
This position is connected to a conception of technology as a terrain of class conflict, where the same technology can be used in different ways. Worker surveillance systems, for example, can also be used to record and organise working conditions with a view to collective action.
Nevertheless, it remains crucial to distinguish between the use and the design of technology. Use concerns the manner of deploying a system in specific conditions, while design determines its possibilities from the outset. A technology may appear neutral in its use while embodying specific social relations in its very structure. An artificial intelligence system in a warehouse may be presented as a tool for organising labour, but if it has been designed to measure time continuously, minimise idle time, and maximise the pace of production, then it inherently embodies logics of discipline and intensification.²¹
The distinction between "use" and "structure" does not exclude the significance of use. It does, however, establish that use is never entirely free. Structure operates as a kind of "silent politics": it directs, constrains, and frequently reproduces inequalities even when no explicit intention to do so exists.
From this perspective, class conflict concerns not only the regulation or use of technology, but the very manner of its production: who designs it, for what objectives, with what data, and according to what criteria of evaluation. Thus, even systems that target "productivity," "interaction," or "user engagement" embody specific social logics such as employer control or the commodification of attention. Correspondingly, open source projects may depend on large cloud computing infrastructure such as Amazon Web Services, Microsoft Azure, or Google Cloud, thereby reproducing relations of dependence on concentrated capital.
In the final analysis, technology is not simply a cognitive object that can be "mastered" through understanding. Technologies are not fully malleable, as they develop within specific social and economic conditions and embody the corresponding relations of production. For this reason, their transformation cannot be achieved solely at the level of knowledge or use, but presupposes political control and power over the data, the infrastructure, and the fundamental mechanisms of production.
Even after changes at the political level, technologies continue to carry within them the social relations in which they were created. They therefore do not transform themselves automatically, but require continuous reshaping through practical and social intervention. This process is neither linear nor immediate, but unfolds through conflicts, as different social forces seek to reshape the manner in which technical systems operate.
In this process, multiple resistances arise: from existing institutional structures, from the very technical design of the systems, and from the interests that are in conflict or have already been incorporated into those systems. Thus the social use of technology does not constitute a simple application of political decisions, but a terrain of permanent negotiation and conflict.
In this sense, the "conquest" of technology is not an instantaneous event, but a protracted process of transformation. Even when the exercise of power changes hands, technology does not adapt automatically, but requires redesign and re-signification in practice, through continuous social and political struggle.
________________________________________
Taken as a whole, Rezgar Akrawi's article offers a broad and multi-dimensional perspective on the role of artificial intelligence in contemporary society, connecting technology with questions of labour, power, organisation, class struggle, inequality, political agency, and consciousness.
It implies, if only indirectly, that productive relations are reproduced so long as the power relations sustaining them remain unchanged.
Its value, however, lies primarily in the fact that it opens a wide-ranging discussion and raises questions that have no easy or immediate answers. The present critical engagement has dwelt on certain aspects of the article only, leaving aside others that could serve as the subject of further discussion. In this sense, Akrawi's text functions more as a starting point for reflection than as something exhausted in a single reading. This may well be its most essential quality.
________________________________________
7. Notes
* This text is a critical engagement with Rezgar Akrawi's article "Artificial intelligence, reproducing capitalist class domination by more sophisticated means," published on 7 January 2026 in Radical Politics, at: https://radicalpolitics.org/2026/01/07/artificial-intelligence-reproducing-capitalist-class-domination-by-more-sophisticated-means/
________________________________________
¹ Machines merely transfer to the product the value they already contain. In this sense, artificial intelligence, like every other technology, belongs to what Marx called "constant capital."
² Cognitive labour: labour in which the principal instrument is the mind rather than the body, such as programming, software design, data analysis, research, and writing. Digital labour: labour directly embedded in digital infrastructure and performed through digital platforms, such as data labelling and content moderation.
³ If every digital trace, every interaction, social communication, cultural expression, content consumption, or simple use of a platform is designated as productive labour, then the concept of labour is severed from the wage relation, commodity production, socially necessary labour time, and the production of surplus value. The distinction between productive and unproductive labour loses its meaning. A worker who labels images for an AI system is paid by a company, contributes to a product that will be sold, and produces surplus value. A "mere user" scrolls and clicks but is not paid; he produces data, which assists in circulation and increases profits, but this is not directly productive labour in Marxist terms.
⁴ Some examples. In healthcare, AI used for appointment management appears to automate administrative work, but in reality creates new demands: staff must verify data, correct errors, and manage cases the system cannot handle. In industry, "smart" factories reshape rather than simply eliminate labour: workers take on roles of supervision and maintenance, while a whole layer of labour arises outside the factory in data analysis and algorithm development. In tourism, automated bookings conceal invisible labour in content management, price updates, and customer problem handling. In education, automatic marking systems change rather than reduce the teacher's role, calling for the interpretation of platform data and pedagogical intervention, while invisible labour behind the systems continues in exercise design and model training.
⁵ Data does not constitute a homogeneous category. Some data appears as digital traces of user activity not produced within wage labour and thus lacking a value-form from the outset. However, a significant portion is produced as a secondary product of the productive process itself, generated within wage labour time and therefore already embodying labour. The full economic valorisation of data occurs when capital collects it, encloses it, and incorporates it into regimes of ownership and control, transforming it into an immaterial raw material for the production of digital products and services.
⁶ The view that large technology companies possess an unassailable advantage due to data volume is contested. Despite the enormous accumulation of companies such as Google or Meta, competition remains intense, as smaller companies can excel through innovative algorithms or specialisation. OpenAI with ChatGPT reshaped the market, while Midjourney achieved a leading position without access to massive user datasets. Technological developments such as synthetic data are also reducing the relative importance of data volume.
⁷-;- On the one hand, enterprises have every incentive to replace workers with machines to reduce costs and increase productivity. On the other hand, if living labour is the sole source of value, its reduction undermines the basis of profit. This is a central idea of Marxist theory: as the weight of machines relative to labour increases — what is termed the rising organic composition of capital — greater pressure is exerted on the rate of profit.
⁸ Large models are trained on books, articles, and works of art representing historically and socially accumulated production, yet the valorisation is carried out by corporations without direct compensation to creators. Similarly, when AI is used for surveillance and administrative decisions, the data originates from society but the use may reinforce state control or serve private contractors.
⁹ In a business environment, what is "measurable" is typically associated with profit, efficiency, or increased usage rather than social benefit, justice, or worker wellbeing.
¹⁰ Artificial intelligence reshapes social relations because it makes control more continuous and precise, transfers power to algorithmic systems, alters the criteria for evaluating labour, and generates new forms of labour organisation. Algorithms assign tasks and regulate work rhythms in real time, making power more impersonal and difficult to contest. Detailed monitoring intensifies existing forms of control. Performance evaluations based on numerical indicators affect wages, promotions, and dismissals. On delivery and mobility platforms, algorithms determine who works, when, and on what terms, generating forms of labour that are more flexible yet more precarious.
¹¹ Artificial intelligence changes the manner in which labour is organised and controlled: automation of tasks, worker monitoring, algorithmic allocation of labour, intensification of production, and reduction of labour costs.
¹² This refers to the form of organisation of the process through which capital seeks to expand, producing more profit and surplus value. Artificial intelligence is used to raise productivity, reduce labour costs, achieve better control of markets and data, and enhance profitability.
¹³ Artificial intelligence does not affect only labour within the factory or enterprise, but also everyday social life and power relations: the shaping of behaviours through algorithms, surveillance, the production of ideology, the reproduction of social inequalities, and influence over education, information, and consumption.
¹⁴ Open or semi-open source models such as LLaMA, Mistral, and Stable Diffusion permit users and researchers to download, modify, and examine their operation. However, this transparency remains partial, as it often does not include the training data or the full production context, confirming that technology does not automatically become neutral or emancipatory through access alone.
¹⁵ Examples such as Android, open source AI models, and decentralised networks such as Mastodon show that open source code does not in itself entail a decentralisation of power. Control remains concentrated in those who possess the data, the computing infrastructure, and the distribution networks.
¹⁶ Artificial Intelligence Act. https://artificialintelligenceact.eu/
¹⁷ GDPR set strict rules for personal data. Large corporations possessed the resources to adapt, while smaller enterprises faced significant costs and in some cases withdrew from the European market. Research has shown that following GDPR, entry by new players in advertising and data markets declined and dominant platforms were reinforced. The AI Act similarly increases development and operational costs in ways that large corporations can absorb but that create high barriers to entry for smaller players.
¹⁸ The EU, with the AI Act, categorises systems according to risk level. The US relies more on market forces and innovation with softer and more fragmented regulation. China applies strict state control with an emphasis on political stability and surveillance. There is no common framework, but competing models of AI governance.
¹⁹ The EU promotes policies of digital sovereignty and restrictions on data transfers to reduce dependence on American corporations. The US relies on the power of its own corporations, while China maintains strict national control over data. Data, which is critical for artificial intelligence, thus becomes an object of geopolitical competition among states.
²⁰ These are modes of user participation in the operation of a platform, primarily through reports, evaluations, and usage data. However, this participation is limited, as it is not accompanied by real control over the rules, algorithms, or infrastructure. It functions more as a form of incorporating users into the system's operation than as substantive democratic control.
²¹ A well-known example is artificial intelligence systems used for recruitment in large corporations, where training data based on historical hiring patterns already contained gender, racial, and class inequalities, causing the system to learn to reproduce those inequalities. In social networks, systems designed to optimise time on platform and advertising revenue empirically lead to the amplification of extreme content, the formation of informational echo chambers, and the spread of misinformation. In predictive policing systems, reliance on historical data that reflects intensive surveillance of poorer neighbourhoods creates a self-reinforcing cycle in which existing inequalities are not merely preserved but reproduced and intensified.


Source:
https://antigeitonies3.blogspot.com/2026/05/rezgar-akrawi.html
https://antigeitonies3.blogspot.com/search/label/%CE%9A.%CE%9A%CE%B1%CF%88.