Limited compute has been giving me a headache lately, so I've been looking into more svelte HPC paradigms and came across grid computing. I'd been reading up on it, rereading Tomas' blog: https://www.xn--hrdin-gra.se/blog/2022/02/04/towards-large-scale-linear-planning/ and wondering whether it's possible to compute linear plans at scale on distributed networks; can you connect enough iPhones and PlayStations to plan an economy?
I had also been thinking about possible synergy with blockchain: a network of devices organized into a virtual supercomputer, controlled democratically via blockchain protocols. I think this could be revolutionary in that it could mediate a social relation in which participation is voluntary and easy, and in which there is a mutually beneficial incentive to cooperate.
The critical obstacle in this paradigm is the actual deployment of computational resources in a manner that reproduces physical resources within the commons. Compute has value; however, if it can only ever be rented out to capital, then the network is reduced to a progressive mutual fund. GIU has a good discussion of 'Yeoman Coders' and how capital has facilitated the creation of a computational commons (open source) under neoliberal auspices: https://www.youtube.com/watch?v=cOmM8o9xCkY
https://en.wikipedia.org/wiki/Blockchain
https://en.wikipedia.org/wiki/Grid_computing
(related) https://en.wikipedia.org/wiki/Heterogeneous_computing
The limiting factor for large-scale LP solvers is memory, or more specifically bandwidth between memory and CPU. Grid computing does not solve that issue. What you could do, however, is have a grid of nodes for collecting statistics and distributing plan solutions.
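A rough back-of-envelope (my numbers, not anything precise) for why the bottleneck is bandwidth rather than raw compute: sparse matrix-vector multiplication, the kernel that dominates iterative LP solvers, does very little arithmetic per byte it has to pull from memory.

```python
# Rough arithmetic-intensity estimate for sparse matrix-vector multiply (SpMV).
# Assumed numbers: double-precision values (8 bytes), 32-bit column indices (4 bytes).
bytes_per_nonzero = 8 + 4      # matrix value + column index (CSR-style storage)
flops_per_nonzero = 2          # one multiply + one add
intensity = flops_per_nonzero / bytes_per_nonzero   # ~0.17 flop/byte

# A machine with, say, 100 GB/s of memory bandwidth can therefore only sustain
# about 100e9 * intensity ~= 17 GFLOP/s in SpMV, far below its peak FLOP rate:
# the solver is bandwidth-bound, not compute-bound.
print(f"arithmetic intensity ~ {intensity:.2f} flop/byte")
print(f"SpMV ceiling at 100 GB/s ~ {100e9 * intensity / 1e9:.0f} GFLOP/s")
```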
It might be tempting to think LP can be "broken up", that multiple nodes can solve their own little separate part of the system. That is equivalent to using a block preconditioner, which is certainly better than no preconditioner, but on its own it can't guarantee that you converge on a solution.
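To make the block preconditioner analogy concrete, here's a minimal sketch (Python/SciPy, purely illustrative) of what "each node solves its own little part" amounts to: a block Jacobi preconditioner that factors only the diagonal blocks and simply drops the coupling between them. You would still have to wrap it in a global iterative method that touches the whole vector.

```python
# Block Jacobi (block-diagonal) preconditioner sketch: each "node" factors only
# its own diagonal block of A and ignores the off-diagonal coupling blocks.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_jacobi_preconditioner(A, block_size):
    """Return a LinearOperator that applies the inverse of A's diagonal blocks."""
    n = A.shape[0]
    factorizations = []
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        block = A[start:stop, start:stop].tocsc()
        factorizations.append((start, stop, spla.splu(block)))

    def apply(x):
        y = np.empty_like(x)
        for start, stop, lu in factorizations:
            y[start:stop] = lu.solve(x[start:stop])
        return y

    return spla.LinearOperator(A.shape, matvec=apply)
```

Used as the `M` argument to something like `spla.gmres`, this speeds up convergence, but the global iteration is still needed precisely because the dropped coupling terms are real.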
You don't need "blockchain" to do internet-based democracy. Just use a regular database and a web frontend. You could use something like git with signed commits to, for example, distribute laws and be able to blame said laws. That would be a kind of blockchain. Public accounting might also benefit from something like that, to make sure it can't be tampered with. That's a useful property of Merkle trees.
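The tamper-evidence property is simple enough to show in a few lines (a toy sketch, not any real git or ledger format): hash the leaves, hash the pairs, and any change to any entry changes the root.

```python
# Minimal Merkle-root sketch: changing any leaf (a law, a ledger entry) changes
# the root hash, so anyone holding the old root can detect the alteration.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

laws = [b"law 1: ...", b"law 2: ...", b"law 3: ..."]
root = merkle_root(laws)
tampered = merkle_root([b"law 1: ...", b"law 2: quietly amended", b"law 3: ..."])
assert root != tampered    # any edit shows up in the root
```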
As a free software guy I am more optimistic compared to GIU, but they are correct that free software, especially in its defanged "open source" incarnation, is very limited in scope. The freedom it provides assumes a certain level of computer literacy in its users.
Thanks! It's good to know these details; I wasn't even thinking about memory constraints. Is there anything grid computing could be useful for in this circumstance?
Maybe? Right tool for the job etc.
I spoke with a colleague; he said these kinds of paradigms are well suited to algorithms that are embarrassingly parallel
https://en.wikipedia.org/wiki/Embarrassingly_parallel
I wonder if there's enough there to jump start something like Ian Wright's "Venture Commune"
If we're talking of LP then your colleague is wrong. It is easy to see that sparse matrix-vector multiplication, the fundamental operation in solving LP, is not embarrassingly parallel. Splitting the solver up onto separate nodes is equivalent to using a block preconditioner. The only case where this can work perfectly, thus achieving embarrassingly parallel speedup, is if your economy consists of entirely independent subunits. No real-world economy behaves that way, not even the DPRK, and things like climate constraints ensure this cannot even be allowed. An economy built like that will behave worse than capitalism.
In simpler terms, you must have communication between nodes for them to be able to converge on a common solution. Else you must almost certainly rely on exchange, which begets business cycles and so on. I do not really see a reason to even bother splitting things up when we already know that a single solver will do.
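To illustrate (a toy SciPy sketch with a made-up random matrix): if node 0 owns the first half of the rows, it still cannot compute its half of y = A x without the x-entries owned by node 1, because its rows have nonzeros in node 1's columns. That is the communication that has to happen every iteration.

```python
# Why a row-partitioned SpMV is not embarrassingly parallel: node 0 owns rows
# 0..3 but those rows reference columns owned by node 1, so node 0 cannot
# compute its part of y = A @ x without receiving x[4:] from node 1.
import numpy as np
import scipy.sparse as sp

A = sp.random(8, 8, density=0.4, format="csr", random_state=0)
x = np.arange(8, dtype=float)

rows_node0 = A[:4, :]              # node 0's partition (rows 0..3)
local_cols = rows_node0[:, :4]     # columns node 0 owns itself
y0_local_only = local_cols @ x[:4] # what node 0 can compute alone
y0_full = rows_node0 @ x           # requires x[4:] from the other node

# The difference is exactly the coupling that forces per-iteration communication.
print(np.linalg.norm(y0_full - y0_local_only))  # nonzero unless the blocks decouple
```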
This said, when it comes to data collection, some kind of distributed system is perfectly reasonable. You'll want to collect and process data that can then be fed into the one (potentially redundant) solver. We could imagine something like git distributed via IPFS, where the solver(s) run on sets of data with known hashes, reusing previous solutions to speed the computations up. We could then double-check that everyone arrives at the same solution. If computations are performed using say fixed point arithmetic then every cluster should arrive at the same solution and it should be enough to just compare hashes. Performing the same computation in two or more places ensures there's nothing up anyone's sleeve.
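Something like the following sketch shows the hash-comparison idea (the fixed-point scale and plan format are arbitrary choices for illustration, not a proposal): integer arithmetic makes the result bit-for-bit reproducible, so agreement between clusters reduces to comparing one digest.

```python
# Sketch of "compare hashes": if every cluster computes the plan in fixed point
# (here: plain integers at a chosen scale), results are exactly reproducible,
# so agreement can be checked by comparing a single digest.
import hashlib

SCALE = 10**6   # fixed-point scale, arbitrary for illustration

def to_fixed(values):
    return [round(v * SCALE) for v in values]

def plan_digest(fixed_plan) -> str:
    payload = b",".join(str(q).encode() for q in fixed_plan)
    return hashlib.sha256(payload).hexdigest()

# Two clusters working from the same input data (same data hashes) should
# produce identical digests; any divergence is immediately visible.
plan_cluster_a = to_fixed([12.345678, 0.5, 7.0])
plan_cluster_b = to_fixed([12.345678, 0.5, 7.0])
assert plan_digest(plan_cluster_a) == plan_digest(plan_cluster_b)
```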
It just struck me that this kind of redundant computation isn't even necessary. The only thing participants need to do to see that there's no funny business going on in the solver is download the computed plan and check that the norm of the gradient is close to zero, assuming we aim for staying in the middle of the system as I suggest here.
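As a sketch of what that local check could look like (assuming "staying in the middle" means a logarithmic barrier over inequality constraints A x <= b; the exact objective depends on the solver), a participant only needs the constraint data and the published plan:

```python
# Verification sketch: download the plan x and check optimality locally,
# without redoing the solve. Assuming a logarithmic barrier over A x <= b,
# the check is that the barrier gradient A^T (1/s) is (near) zero at x.
import numpy as np

def verify_plan(A, b, x, tol=1e-6):
    s = b - A @ x                  # slacks; must be strictly positive
    if np.any(s <= 0):
        return False               # plan violates a constraint outright
    grad = A.T @ (1.0 / s)         # gradient of -sum(log(s)) with respect to x
    return np.linalg.norm(grad) < tol
```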
If we're talking of LP then your colleague is wrong.
Not LP, just trying to think of different applications of virtual supercomputing.
Presuming there are enough significant applications one could imagine a cooperative computational trust that could eventually acquire the kind of clusters necessary for large-scale planning.
i.e. use distributed resources to create a surplus which can be used to acquire other resources for the commons.
I imagine the computational trust could service other cooperatives.
It seems like a good way to jump start the venture commune discussed by Ian Wright: https://www.youtube.com/watch?v=C-cViPD1-Jo&t=1627s
The computers necessary for solving even rather large LPs aren't impossibly expensive. 100,000€ likely gets you something that can deal with tens of billions of variables. Even just a laptop gets you quite far, at least on the order of millions of variables. I would worry more about protocols, both digital and human ones.
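A rough sanity check of those figures, with assumed sparsity (my numbers, not a measurement): at roughly ten nonzeros per variable in double precision, the constraint matrix plus a handful of solver work vectors needs on the order of a hundred bytes per variable.

```python
# Back-of-envelope memory estimate for a sparse LP, with assumed parameters:
# ~10 nonzeros per variable, 8-byte values, 4-byte indices, 8 work vectors.
def lp_memory_gib(n_variables, nonzeros_per_var=10, work_vectors=8):
    matrix_bytes = n_variables * nonzeros_per_var * (8 + 4)   # value + index
    vector_bytes = n_variables * 8 * work_vectors
    return (matrix_bytes + vector_bytes) / 2**30

print(f"{lp_memory_gib(10**6):.2f} GiB for a million variables")      # ~0.17 GiB, laptop territory
print(f"{lp_memory_gib(10**10):.0f} GiB for ten billion variables")   # ~1700 GiB, a large server
```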
Venture communism is an interesting idea and has been on my radar for quite a while. I'm going to have to read up on it again (or just watch that video, heh)
Having watched the video, and having read more since I first heard of venture communism, it appears to be a Proudhonian scheme. Some thoughts:
- Members need to be able to requisition means of subsistence from the system in-natura, without having to resort to money. This is important for tax purposes
- Exchange is used to coordinate production, which begets anarchic production. Value form etc etc
- It appears to be vulnerable to attacks from within. The reliance on "principled leadership" is a weakness of the proposal. It is not enough that the leadership is "ideologically opposed" to exploitation. Things must be so arranged that revisionism cannot find fertile soil
I do think there are some ideas that can be taken, for example people being able to buy into the system.
I do think there are some ideas that can be taken, for example people being able to buy into the system.
The highlighted part is what I'm getting at with the idea of people being able to network their devices into a supercomputer; most people, even some homeless people, have at least a smartphone, so in effect anyone can allocate their compute to the 'computational commune' (I'll just call my idea this from here on out) and receive some payout. Most importantly, the resource becomes more valuable the more devices are networked into it, so there is a structural incentive towards collaboration rather than hoarding.
In short:
- everyone can buy into the computational commune
- everyone has an incentive to buy into the computational commune
- everyone has an incentive for others to buy into the computational commune as well
The above three points seem to be what "Venture Communism" is trying to achieve; however, exactly to your point, ideological dedication is a paltry foundation for such a system.
But what if that system was backed by something material and (practically) universally accessible like distributed compute?
I don't really see what you're trying to achieve. It's not the computation itself that is the issue, it's everything else. Getting data in and out, getting people to actually want to use the system, detecting incorrect data etc. For distributed computing there is BOINC.
BOINC is kind of what I'm talking about, but one difference would be that it would be on the scale of millions of devices as opposed to hundreds of thousands.