
CASPER Forum

Computational and Statistical Political Economy Research


Capital as an Artificial General Intelligence with Inner-Misalignment

Ivan Williams (@madredalchemist)
Trusted Member

Since reading Ian Wright's essay "Marx on Capital as a Real God" (https://ianwrightsite.wordpress.com/2020/09/03/marx-on-capital-as-a-real-god-2/) I've noticed overlapping concepts in the field of AI safety. I am not an expert in this field; whilst I have some technical experience with machine learning, I am only familiar conceptually with AI safety.

In short, misalignment in AI safety is the rift between human intentions and how those intentions are instantiated in an AI agent.

Outer-misalignment is the disparity between human intention and the AI's goal; inner-misalignment is the disparity between the AI's goal and how that goal is mapped to an optimizer.
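
To make the inner variant concrete, here is a minimal toy sketch of my own (not from Miles's video; the "colours" and numbers are made up): the base objective rewards reaching the exit, but in every training world the exit happens to be green, so a learner can acquire the proxy goal "go to the nearest green cell" and still score perfectly, until deployment breaks the correlation.

```python
def mesa_policy(colours, start=0):
    """Learned proxy goal: walk to the nearest green cell."""
    greens = [i for i, c in enumerate(colours) if c == "green"]
    return min(greens, key=lambda i: abs(i - start))

def base_reward(exit_pos, final_pos):
    """Designer's goal: the agent should end up at the exit."""
    return int(final_pos == exit_pos)

# Training distribution: the exit is always the green cell, so the proxy
# goal is indistinguishable from the intended goal and earns full reward.
train_worlds = [(["grey", "grey", "green"], 2),
                (["green", "grey", "grey"], 0)]
# Deployment: a green decoy appears and the exit is now blue.
deploy_worlds = [(["grey", "green", "blue"], 2)]

for colours, exit_pos in train_worlds + deploy_worlds:
    final = mesa_policy(colours)
    print(colours, "reward =", base_reward(exit_pos, final))
# The two training episodes score 1; the deployment episode scores 0.
# The mesa-objective ("green") has come apart from the base objective ("exit").
```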

Please refer to the expert, Robert Miles: https://www.youtube.com/watch?v=bJLcIBixGj8&t=1239s

Miles also suggests something very interesting, the firm itself as an Artificial General Intelligence: https://www.youtube.com/watch?v=L5pUA3LsEaw

This is quite similar to Wright's hypothesis: capital as an autopoietic social logic, a sort of anthropogenic super-organism.

I think what this all suggests is that capitalism as a global system is perhaps a hive-mind of AGIs, which are themselves a kind of distributed cyborg.

More important, in terms of direct utility, is the notion that misalignment is transferable to the analysis of capitalism by considering the firm as an AGI.

The question then becomes: Is capital misaligned? How?

I would say that for us, capital is misaligned; it kind of goes without saying for the lot of us.

The question of how is then conveniently characterized in terms of inner and outer misalignment.

Whilst outer-misalignment is clearly important, I don't have any particular mechanism in mind; my assumption is that it pertains to the political formations which permit capital to cohere. I'm eager to read what you all think about this.

As for inner-misalignment, this may originate from the exhaustive nature of capitalist accumulation. We could say capital is an AGI whose mesa-optimizer is configured to maximize the amount of surplus value extracted, and misalignment is then a result of the insufficiency or conflict created by the two primary mechanisms of achieving this: the improvement of the productive forces, or the intensification of exploitation.
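
As a toy formalization of that last paragraph (my own sketch; the parameters are invented, not empirical): surplus value per worker-day is value produced minus the wage, and a strict maximizer simply picks whichever lever pays more. Nothing in its objective represents the workers' side of the conflict.

```python
def surplus_value(productivity, hours, wage=10.0):
    """Surplus value per worker-day: value produced minus the wage paid."""
    return productivity * hours - wage

def choose(levers):
    """A strict maximizer picks whichever feasible lever pays the most."""
    return max(levers, key=lambda lever: surplus_value(*lever))

levers = [
    (2.0, 8),   # baseline: 2 units of value per hour, 8-hour day
    (2.5, 8),   # develop the productive forces (better machinery)
    (2.0, 12),  # intensify exploitation (a longer day) instead
]
best = choose(levers)
print("chosen lever:", best, "surplus value:", surplus_value(*best))
# Prints (2.0, 12): with these numbers the longer day pays more, and the
# objective contains no term at all for the people working those hours.
```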

Topic starter Posted : 28/12/2022 7:31 pm
Tomas Härdin (@thardin)
Estimable Member

The AI safety "field" is mostly computer woo in my view, and the entire framing of "misalignment" applied to capital implies that there is some "correctly" aligned form of capitalism. I also wouldn't call capital misaligned, at least from the point of view of the capitalist class.

Ian Wright's view is interesting, and he's quite clear that he uses occult terminology for lack of better vocabulary. By no means is capital part of the Unseen, since we can see its actions with our own eyes. Capital is made real by the real actions of the people that serve it.

Finally, the notion that AGI can exist is equivalent to saying machines can create value, which they cannot. Capital depends on workers to create value for it. Capital cannot self-valorize. If capital were an AGI then we would expect the opposite.

Posted : 28/12/2022 8:49 pm
Ivan Williams (@madredalchemist)
Trusted Member

To clarify, I meant that capitalism's misalignment is characteristic; i.e. modes of production are being considered as AIs, and misalignment is the incongruence of that particular industrial metabolism with the human beings who comprise it.

In this sense there would be no "alignment" of capitalism because capitalism is the misalignment. You could have an alignment of the mode of production which would be communism.

I should drop the 'general' from AI in my hypothesis. AGI seems very challenging to categorize, and it is more challenging still to say what its exact consequences would be. Further, the question of its possibility is separate from the point I am trying to make.

With that being said, I think AI safety is consequential; however, the focus on AGI is idealist and obscures the relevant substance. I think the problems AI safety concerns itself with apply to AI in general, and will manifest in advanced large-scale AI systems well before the dawn of AGI, simply as a consequence of corporate corner-cutting.

Topic starter Posted : 28/12/2022 10:18 pm
Tomas Härdin (@thardin)
Estimable Member

I don't really see the reason to frame such concerns as "AI" at all. We already have autonomous systems that can misbehave, and a rich literature on how to deal with them. There is nothing special about AI. It's just fancy regression.

One thing struck me earlier today: the people who worry about AI safety use quasi-scientific language to talk about something that doesn't exist, whereas Wright uses quasi-occult language to talk about something that does exist.

Posted : 29/12/2022 1:00 pm
Ivan Williams (@madredalchemist)
Trusted Member
Posted by: @thardin

We already have autonomous systems that can misbehave, and a rich literature on how to deal with them. 

Is this 'Fault Tolerant Engineering'?

Topic starter Posted : 29/12/2022 1:31 pm
Nicolas Villarreal (@casperadmin)
Member Admin

I mean, you could argue that the entirety of Marx's Capital is describing exactly the way that capital is "misaligned" with human intentions. I'm sure Ian is very familiar with these concepts; he works in the tech industry.

I think one useful thing to get from this framing of the occult is that both Capital and AGI are real Gods in the sense of merely being real: Idols in the classical sense. They can (or could, in the case of AGI) reward or punish you in the real world, and some people actually do worship them. But in many ways I feel that AGI will never be what the idol worshipers of AGI think it will be. Reza Negarestani, for example, makes an argument that we need to think about AGI as a subject, and that intelligence can only really be developed through relationships with other agents, which also define the self.

I've posted a bit on Twitter about how recent language models behave more like a particular kind of person, writing in that person's style, if we say they are that person. Saying "You are a climate researcher, write a report on rising temperatures" or something will get you something closer to real scientific literature than just asking for the report. This phenomenon reminded me of the Althusserian concept of interpellation, where being called by someone else allows them to define us and reshape us as a subject. I think that this kind of behavior may be what gets us out of the alignment problem, at least as much as humans are exempt from it.
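
For concreteness, a minimal sketch of the two framings (the generate function is a hypothetical stand-in, not any particular model's API):

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to some language model."""
    raise NotImplementedError("replace with an actual language-model call")

bare = "Write a report on rising temperatures."
interpellated = ("You are a climate researcher. "
                 "Write a report on rising temperatures.")
# In practice the second framing tends to yield text closer to the genre
# conventions of scientific literature: the model is "hailed" into a
# subject position and writes from inside it.
```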

An AGI which really acted like a paperclip maximizer wouldn't be that intelligent by most definitions, and it certainly wouldn't be self-conscious.

But systemic forces going behind people's backs, while bad when they're meddling in your life, can be used for progressive purposes.

 

Posted by: @thardin

The AI safety "field" is mostly computer woo in my view, and the entire framing of "misalignment" applied to capital implies that there is some "correctly" aligned form of capitalism. I also wouldn't call capital misaligned, at least from the point of view of the capitalist class.

I actually think this is incorrect. One of the big points for Marx in Capital was how capital produced results counter to the interests of the capitalist class, in the form of lower profit rates and greater consolidation decreasing the size of the capitalist class. Capital's ability to destroy all existing social formations would eventually extend to the capitalist class itself, and that was the big hope for Marx.

Capital, like AGI, or even the State, can lead to unexpected results through their systemic logic, but this isn't because they are all alien powers beyond our control. We can have impacts on all these things, quite concretely, as individuals and groups of individuals; we are active agents. But their historical evolution cannot be fully grasped until we see them through to the end.

Capitalists have actually succeeded, after all, in putting their interests before the systemic logic of capital. Something I've elaborated on here:

Small Business’s Class War Could Finish Off American Dynamism (palladiummag.com)

To Hell With The American Gentry - Cosmonaut (cosmonautmag.com)

And if it's possible for the capitalist class, it's also possible for other historical actors.

If we can conceive of these systems, of Capital, the State and AGI, as agents or even subjects in their own right, if only historically instantiated by particular embodiments, then we can see that the same limitations of foresight apply. There is no guarantee their history will be determined by their own internal tendencies; they have no special power over the future any more than we do.

 

Posted : 29/12/2022 4:34 pm
Ivan Williams (@madredalchemist)
Trusted Member

@casperadmin I think the modeling of modes of production as nested agents (agents comprised of agents comprised of agents) could be a very fruitful research program.

I think the critical aspect would be the modeling of resource constraints and technological development.

What I wanted to highlight in this thread is that if one were to consider capital in the framework of AI alignment, 'capital misalignment' would be characterized by a mesa-optimizer (inner optimizer) which is strictly a maximizer; because capital contains no internal regulation, it can only be regulated externally, i.e. destructively, by colliding with in-kind constraints, triggering mass death, and restoring the conditions under which it can continue to maximize.
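
A toy dynamical sketch of that external, destructive regulation (my own, with arbitrary parameters; not a model of any actual economy): a maximizer that takes as much as it can each step from a renewable stock, with no internal brake, is stopped only by exhausting the stock itself.

```python
def step(stock, take=0.5, regrowth=0.1, capacity=100.0):
    """Extract a fixed large share of the stock, then let it regrow."""
    extracted = take * stock                            # maximize extraction now
    stock -= extracted
    stock += regrowth * stock * (1 - stock / capacity)  # logistic regrowth
    return stock, extracted

stock = 100.0
for t in range(30):
    stock, extracted = step(stock)
print(f"stock after 30 steps: {stock:.4f}")
# Because the take rate far exceeds the regrowth rate, the stock collapses
# towards zero; extraction only stops when there is nothing left to take.
# The constraint regulates the maximizer from outside, destructively.
```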

One could even consider 'primitive communism' a mode of production which minimizes labor; the consequence of this strict minimization is vulnerability to natural phenomena (disease, weather, etc.).

I think this is a useful model because we can consider modes of production in terms of possible meta/mesa-optimizers.

Tomas mentioned in his critique of neoclassicism (https://www.youtube.com/watch?v=0f-MWeJCsRs) that a viable plan ought to minimize labor, as it would be in everyone's interest to have as much free time as possible.

This would indicate a mode of production that optimizes (achieving plan goals whilst minimizing work, or conversely changing plan goals to accommodate the amount of work people are willing to do) rather than one that strictly maximizes or minimizes.
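
A minimal sketch of optimizing in this sense (my own toy numbers, not from Tomas's talk): hold the plan targets fixed as constraints and minimize total labor, here in a two-good Leontief setup solved as a linear program.

```python
# Minimize total labor l.x subject to meeting net output targets:
# (I - A) x >= d, x >= 0, where A is the input-output matrix.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.1, 0.2],    # A[i, j]: amount of good i used up
              [0.3, 0.1]])   # per unit of good j produced
l = np.array([5.0, 3.0])     # direct labor per unit of each good
d = np.array([100.0, 80.0])  # plan targets: net output demanded

# linprog wants A_ub @ x <= b_ub, so rewrite (I - A) x >= d as -(I - A) x <= -d.
res = linprog(c=l, A_ub=-(np.eye(2) - A), b_ub=-d, bounds=[(0, None)] * 2)
print("gross outputs:", res.x, "total labor:", res.fun)
```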

Circling back to the top of the post, I think the consideration should be how a particular 'meta'-agent emerges from the behavior of 'base'-agents and material conditions (i.e. the environment and the artifacts of prior 'meta'-agents); the formalization of this might be achieved through a correspondence between the 'base'-agent dynamics (a cellular automaton rule, for example) and the 'meta'-agent's optimization.
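
One concrete instance of such a correspondence, borrowed from the cellular-automata literature rather than from political economy: the Gacs-Kurdyumov-Levin (GKL) rule, in which every cell applies the same fixed local rule, yet the lattice as a whole approximately computes a global majority vote, a 'meta'-level task written nowhere in the 'base'-level dynamics.

```python
import random

def gkl_step(cells):
    """One synchronous update of the GKL rule on a ring."""
    n = len(cells)
    def maj(a, b, c):
        return 1 if a + b + c >= 2 else 0
    # A 0-cell votes with the neighbours 1 and 3 steps to its left;
    # a 1-cell votes with the neighbours 1 and 3 steps to its right.
    return [maj(cells[i], cells[i - 1], cells[i - 3]) if cells[i] == 0
            else maj(cells[i], cells[(i + 1) % n], cells[(i + 3) % n])
            for i in range(n)]

cells = [1 if random.random() < 0.6 else 0 for _ in range(149)]
for _ in range(300):
    cells = gkl_step(cells)
print("initial majority was 1s; final density:", sum(cells) / len(cells))
# Usually converges to all 1s: the global majority computation is an
# emergent property of purely local, identical update rules.
```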

Topic starter Posted : 30/12/2022 12:07 am