sadly, there is a lack of finely detailed data we can use to test economic planning algos and methods, since the most granular data we can seemingly get is broken down by sector. but we should be able to simulate economic data, right?

Here are some of my initial thoughts:

1. we would want to simulate an IO table. the IO table could cover individual commodities or, and I am heavily inclined toward this, commodity-locations. so, a loaf of whole grain bread in seattle's 40th grocery store is different from that same loaf in small-town kentucky.

2. there are two ways things can be interconnected. the first is the raw number of inputs a given output uses, and the other is how related all its inputs are to each other. you could imagine that in an economy there are commodities which use almost identical inputs, or inputs from sectors similar to their own.
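that second notion of interconnectedness can at least be given a number. one option, sketched here in plain Python (the metric choice and the name `input_overlap` are my own, not from any existing codebase): the average Jaccard overlap between the input sets of every pair of outputs, which is 0 when all commodities use disjoint inputs and 1 when they all use identical ones:

```python
def input_overlap(table):
    """Mean pairwise Jaccard overlap between the input sets of all outputs.

    table is a 0/1 matrix with rows = inputs and columns = outputs.
    Returns 0.0 for fully disjoint input sets, 1.0 for identical ones.
    """
    n = len(table[0]) if table else 0
    # input set of each output column
    sets = [{i for i, row in enumerate(table) if row[j]} for j in range(n)]
    total, pairs = 0.0, 0
    for a in range(n):
        for b in range(a + 1, n):
            union = sets[a] | sets[b]
            if union:
                total += len(sets[a] & sets[b]) / len(union)
            pairs += 1
    return total / pairs if pairs else 0.0
```

running this on a simulated IO table would give a single scalar you could dial a "relatedness" parameter against.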

so, to generate a fake IO table, you would have 3 parameters:

1. the number of commodities

2. the distribution of the number of unique inputs each commodity uses

3. how related each commodity's inputs are to other commodities' (i.e. how correlated or non-random the data is)

I can see each of these affecting algo performance, but if we come up with a way of mathematically defining them, we can generate many different potential economies and test algos against them without needing real data.
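to make the three parameters concrete, here's a rough generator sketch (plain Python; every name here, `generate_io_table`, `mean_inputs`, `relatedness`, is my own placeholder, and parameter 3 is modeled crudely as a weight favoring inputs whose index is near the output's own, a stand-in for "same sector"):

```python
import math
import random

def generate_io_table(n_commodities, mean_inputs, relatedness, seed=0):
    """Random 0/1 IO table: table[i][j] == 1 iff commodity j uses input i.

    n_commodities -- parameter 1: size of the economy
    mean_inputs   -- parameter 2: mean number of unique inputs per output
    relatedness   -- parameter 3: 0 = inputs picked uniformly at random,
                     larger values cluster inputs near the output's own index
    """
    rng = random.Random(seed)
    table = [[0] * n_commodities for _ in range(n_commodities)]
    for out in range(n_commodities):
        # parameter 2: draw the input count, clamped to a sane range
        k = min(max(1, round(rng.expovariate(1.0 / mean_inputs))),
                n_commodities - 1)
        # parameter 3: weight inputs by circular distance to the output
        def weight(i):
            d = min(abs(i - out), n_commodities - abs(i - out))
            return math.exp(-relatedness * d) + 1e-12  # floor avoids 1/0
        candidates = [i for i in range(n_commodities) if i != out]
        # weighted sampling without replacement (Efraimidis-Spirakis keys)
        candidates.sort(key=lambda i: rng.random() ** (1.0 / weight(i)),
                        reverse=True)
        for inp in candidates[:k]:
            table[inp][out] = 1
    return table
```

same three knobs as above; sweeping `relatedness` from 0 upward should move a spy plot of the table from uniform speckle toward a band around the diagonal.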

my biggest conceptual hurdle right now is how we'd quantify parameter 3, and whether it is even observable. it may be the distribution of the number of times a given input is used; I'd expect that to be zipfian, since that seems to be the way large-scale things like this shake out.
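if the zipf hunch is checkable anywhere, the row sums of the IO table (how many different outputs each input feeds into) seem like the natural place to look. a quick sketch (plain Python, function names mine) that pulls out that distribution and checks zipf-ness via the slope of log(count) against log(rank), which should sit near -1 for a zipfian tail:

```python
import math

def usage_counts(table):
    """How many outputs each input feeds (row sums), sorted descending."""
    return sorted((sum(row) for row in table), reverse=True)

def loglog_slope(counts):
    """Least-squares slope of log(count) vs log(rank); ~ -1 suggests Zipf."""
    pts = [(math.log(r), math.log(c))
           for r, c in enumerate(counts, start=1) if c > 0]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den
```

a slope near -1 on simulated (or, eventually, real) data would support the zipf guess; a slope near 0 would mean input usage is roughly uniform instead.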

I did something similar to this for the video I did with Paul: semi-randomly placed points on a globe, connected based on an inverse-square law. parameter 1) I call m and parameter 2) I call q.

@joe Here's the Octave code I used. File upload is limited to 2 MB so I can't upload Blue_Marble_2002-thumb.png, but it's just this image from wikipedia, scaled to 4096x2048 and saved as a png.

ffs, stupid upload limitations. I'll just paste the code:

% phi range: -pi to pi
% theta range: -pi/2 to pi/2
n = 1024;
nref = 1024;
m = 2*n;
ex = 2; % exponent in distribution

if exist('watermap') == 0
  water = reshape([10,10,51], [1,1,3]); % RGB value of water
  im = imread('Blue_Marble_2002-thumb.png');
  b = im == water;
  watermap = b(:,:,1) & b(:,:,2) & b(:,:,3);
endif

function [x,y,z] = cart(phi, theta)
  r = 6371.0e3; % meters
  x = r.*cos(phi).*sin(theta);
  y = r.*sin(phi).*sin(theta);
  z = r.*cos(theta);
endfunction

function d = dist(p1, p2)
  phi1 = p1(1); theta1 = p1(2);
  phi2 = p2(1); theta2 = p2(2);
  [x1,y1,z1] = cart(phi1, theta1);
  [x2,y2,z2] = cart(phi2, theta2);
  dx = x2-x1; dy = y2-y1; dz = z2-z1;
  d = sqrt(dx^2+dy^2+dz^2);
endfunction

points = [];
for i = 1:n
  while true
    phi = rand()*2*pi-pi;
    % bias heavily toward equator
    r = (2*rand()-1);
    r = abs(r)^ex*sign(r);
    theta = asin(r);
    % avoid the Antarctic
    if theta < -pi/3
      continue
    endif
    % index via watermap: b only exists on the first run
    x = floor((phi+pi)/2/pi*(size(watermap,2)-1) + 1);
    y = floor((pi/2-theta)/pi*(size(watermap,1)-1) + 1);
    if watermap(y,x) == 0
      break
    endif
  endwhile
  points(end+1,:) = [phi, theta];
endfor

function out = kd(points, dim)
  [s,i] = sort(points(:,dim));
  points = points(i,:);
  if size(points,1) > 64
    sz = floor(size(points,1)/2);
    a = kd(points(1:sz,:), mod(dim,2)+1);
    a(:,3) = a(:,3)*2;
    b = kd(points((sz+1):end,:), mod(dim,2)+1);
    b(:,3) = b(:,3)*2 + 1;
    out = [a;b];
  else
    out = [points,zeros(size(points,1),1)];
  endif
endfunction

#[s,i] = sort(points(:,2));
#points = points(i,:);
points = kd(points,1);

S = sparse(m,n);
% compute all pairwise distances in one go,
% then apply the randomness also in one go via rand(n,n) < (300e3./d).^2
randtime = cputime;
[x,y,z] = cart(points(:,1),points(:,2));
dx = x - x';
dy = y - y';
dz = z - z';
d2 = dx.^2 + dy.^2 + dz.^2;
d = sparse(rand(n,n) < (300e3/ex^ex)^2./d2);
for i = 1:n
  S(2*i-1,:) = d(i,:);
  S(2*i,i) = 1; % non-negativity constraint
  %for j=1:n
  %  d = dist(points(i,:),points(j,:));
  %  if rand() < (300e3/d)^2
  %    S(2*j-1,i) = 1;
  %  endif
  %endfor
endfor
randtime = cputime - randtime

nnzS = nnz(S)
q = nnzS/n
H = S'*S;
nnzH = nnz(H)

% cut down the plots a bit
k = ceil(n/nref);

figure(1);
colormap('hsv');
scatter(points(:,1), points(:,2), [], points(:,3), '*')
axis([-pi,pi,-pi/3,pi/2])
title('World with k-d tree coloring');
print('world.png')

figure(2);
spy(S(1:k:end,1:k:end),1)
title('spy(S) with world model');
print('kd-spy.png')

S2 = sparse(m,n);
S2(1:2:m,:) = speye(n);
S2(2:2:m,:) = sprand(n,n,(nnz(S)-n)/n/n);
nnzS2 = nnz(S2)

figure(3)
spy(S2(1:k:end,1:k:end),1)
title('spy(S) with random sparse matrix')
print('random-spy.png')

I know Paul has done some recent work on this, which I believe is what Tomas is talking about. Googling it, I found this blog post by someone who's run a simulation like this:

Simulating Socialism (2): Optimising Production Plans - Designing History (designing-history.world)

has anybody here ever played with this? https://github.com/DREAM-DK/MAKRO