CASPER Forum

Computational and Statistical Political Economy Research

Ways of compressing technical coefficients, posits, combinations etc.

Tomas Härdin (@thardin), Estimable Member

One limit to large-scale planning is memory. A way to improve this situation is with more compact storage formats for system matrices. This thread is for collecting such ideas.

Currently I estimate 8 bytes per non-zero, broken down as 32 bits for the value and 32 bits for the index in the column. With q=160 non-zeroes per column this comes to 1.25 KiB. Can we do better?

One idea is to normalize each column, then store the logarithm of the absolute value of each coefficient, grouped into one set of positive and another set of negative coefficients. 16 bits should be sufficient without losing much precision over 32-bit floats, possibly 12 bits. Another possibility is to use posits, which are probably computationally cheaper to convert to floats in the calculations. I also think they're neat. Let's say that 16 bits is possible for the values.
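
As a rough sketch of the log-coding idea (Python with numpy; the 16-bit fixed-point step, the separate sign bitmap and the lognormal test column are just assumptions for illustration, not a fixed format):

import numpy as np

def quantize_column(col, bits=16):
    # Normalize the column, then store sign bits plus a fixed-point code
    # of log2(|value| / scale) in `bits` bits.
    col = np.asarray(col, dtype=np.float64)
    scale = np.max(np.abs(col))
    logs = np.log2(np.abs(col) / scale)      # all <= 0 after normalization
    lo = logs.min()
    step = (0.0 - lo) / (2**bits - 1) or 1.0
    codes = np.round((logs - lo) / step).astype(np.uint16)
    signs = col < 0                          # stored separately, 1 bit per non-zero
    return scale, lo, step, signs, codes

def dequantize_column(scale, lo, step, signs, codes):
    vals = scale * np.exp2(codes.astype(np.float64) * step + lo)
    return np.where(signs, -vals, vals)

col = np.random.lognormal(-3.0, 2.0, 160) * np.random.choice([-1.0, 1.0], 160)
rec = dequantize_column(*quantize_column(col))
print(np.max(np.abs(rec - col) / np.abs(col)))   # max relative error, tiny at 16 bits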

What about indices? Let's assume we have no more than 2^32 rows for now. We could code these indices as enumerations of combinations instead. log2(2^32 choose 160) ≈ 4174.3 bits, or about 26 bits per non-zero. This brings the total from 64 to 42 bits per non-zero. Can we do even better?
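
For reference, a small Python sketch of the combination-enumeration idea (the combinatorial number system; the toy index list at the end is just an example):

from math import comb, log2

def rank_combination(indices):
    # Combinatorial number system: a strictly increasing index list c_1 < ... < c_k
    # maps to the sum over i of comb(c_i, i) (1-based i), enumerating all k-subsets.
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(indices)))

def unrank_combination(rank, k):
    # Inverse: greedily peel off the largest c with comb(c, i) <= remaining rank.
    out = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= rank:
            c += 1
        out.append(c)
        rank -= comb(c, i)
    return sorted(out)

print(log2(comb(2**32, 160)))         # ~4174.3 bits for a whole column
print(log2(comb(2**32, 160)) / 160)   # ~26.1 bits per non-zero

idx = [3, 17, 42, 100, 999]
assert unrank_combination(rank_combination(idx), len(idx)) == idx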

We are free to reorder rows in the system matrix however we want. If non-zeroes tend to fall into small ranges as a result of this then we can code the offset and size of each range. The number of bits needed for coding the positions in this smaller range is then obviously less than 26 bits on average. How much less is hard to say in advance.
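
A back-of-the-envelope sketch of the range idea (the gap threshold and the clustered toy indices are made-up assumptions; the real savings depend entirely on how well reordering clusters the non-zeroes):

from math import ceil, log2

def range_coded_bits(indices, gap=1000, index_bits=32):
    # Split sorted indices into ranges wherever the gap exceeds `gap`, then
    # charge one (offset, size) header per range plus ceil(log2(size)) bits
    # for each position inside it.
    indices = sorted(indices)
    ranges = [[indices[0], indices[0], 1]]        # [low, high, count]
    for i in indices[1:]:
        if i - ranges[-1][1] > gap:
            ranges.append([i, i, 1])
        else:
            ranges[-1][1] = i
            ranges[-1][2] += 1
    total = 0
    for low, high, count in ranges:
        size = high - low + 1
        total += 2 * index_bits                   # offset + size header
        total += count * (ceil(log2(size)) if size > 1 else 0)
    return total / len(indices)

idx = list(range(5_000, 5_160)) + list(range(2_000_000, 2_000_040))
print(range_coded_bits(idx))                      # ~8 bits per non-zero for this toy case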

Finally, one can use general-purpose fast compressors like LZO. If one segregates indices and values into separate files, then each one probably compresses better than having indices and values interleaved.
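
A toy comparison of interleaved versus segregated streams (zlib standing in for LZO here, since it's what Python ships with; the random test data is obviously not a real I/O table):

import zlib, struct, random

random.seed(1)
indices = sorted(random.sample(range(2**20), 160))          # fake sorted row indices
values = [random.lognormvariate(-3, 2) for _ in indices]    # fake coefficients

interleaved = b"".join(struct.pack("<If", i, v) for i, v in zip(indices, values))
segregated = (struct.pack(f"<{len(indices)}I", *indices)
              + struct.pack(f"<{len(values)}f", *values))

print(len(zlib.compress(interleaved, 9)), "bytes interleaved")
print(len(zlib.compress(segregated, 9)), "bytes segregated")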

Topic starter Posted : 12/08/2022 2:51 pm
Topic Tags: planning, math
Ivan Williams (@madredalchemist), Trusted Member

Do you plan to exploit sparsity, or are you assuming a dense matrix representation? I remember Cockshott saying that the more disaggregated the I/O table, the sparser it gets.

Posted : 12/08/2022 3:07 pm
D Z (@dz1789), Active Member

Given the scaling laws of sparse economic matrices, Cockshott has suggested using a sparse list representation.

Posted : 12/08/2022 3:23 pm
Ivan Williams (@madredalchemist), Trusted Member

I think he starts talking about sparsity around 33:00

https://www.youtube.com/watch?v=U76472dXNVg

Posted : 12/08/2022 3:48 pm
Joseph (@joe), Estimable Member

Your mention of not losing much precision got me thinking:

What exactly is the level of precision we need to strive for? If we're measuring things in terms of labor time, surely being more precise than the minute isn't worthwhile?

Posted : 12/08/2022 4:06 pm
Joseph (@joe), Estimable Member

Another thought would be doing something very similar to UTF-8 or UTF-16 or some other variable-length encoding.

One possibility: we take our coefficients and multiply them by some power of 2 that we set as our level of precision, so we can store them as ints.

Most of those ints will be far smaller than the maximum int, so an encoding similar to UTF-8 or UTF-16 would save bytes.

This would save space storing indices as well, since smaller indices need fewer bytes.

For the coefficients themselves we would need real data. The best-case scenario I can see is a Zipfian distribution with the most frequent ints being close to 0.
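
For concreteness, a minimal LEB128-style varint sketch in Python (7 payload bits per byte, high bit meaning "more bytes follow"; the test values are arbitrary):

def varint_encode(n):
    # Emit 7 bits at a time, setting the high bit while more bits remain.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def varint_decode(data):
    n = shift = 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return n

for x in (17, 300, 2**20, 2**32 - 1):
    enc = varint_encode(x)
    assert varint_decode(enc) == x
    print(x, "->", len(enc), "bytes")   # 1, 2, 3 and 5 bytes respectively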

Posted : 12/08/2022 4:31 pm
Tomas Härdin (@thardin), Estimable Member
Posted by: @madredalchemist

Do you plan to exploit sparsity, or are you assuming a dense matrix representation? I remember Cockshott saying that the more disaggregated the I/O table, the sparser it gets.

Yes, this is for a sparse format, which is why I mention indices. For each column you store how many non-zeroes there are, where they are, and finally what values they have.
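
As a sketch of that per-column layout (illustrative only, not a real format):

from dataclasses import dataclass

@dataclass
class SparseColumn:
    rows: list      # sorted row indices of the non-zeroes ("where they are")
    values: list    # matching coefficients ("what value they have")

    def nnz(self):
        return len(self.rows)   # "how many non-zeroes there are"

col = SparseColumn(rows=[12, 40_007, 1_234_567], values=[0.8, 0.02, 1.3])
print(col.nnz(), col.rows, col.values)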

Topic starter Posted : 12/08/2022 4:32 pm
Joseph (@joe), Estimable Member

To give an example:
the character "a" is number 97 in Unicode
the character "Ე" is number 7316 in Unicode
the character "𐏉" is number 66505 in Unicode

To store "a", UTF-8 uses 1 byte: 01100001
To store "Ე", UTF-8 uses 3 bytes: 11100001 10110010 10010100
To store "𐏉", UTF-8 uses 4 bytes: 11110000 10010000 10001111 10001001

Even though the code space runs to 1,114,112 code points, UTF-8 averages far fewer than the 21 bits per character a fixed-length encoding would need.

Posted : 12/08/2022 4:44 pm
Tomas Härdin (@thardin), Estimable Member

@joe I think that's overkill for coefficients. That said, posits do something similar with the way regimes and exponents are encoded, but at the bit level, and constrained to a fixed total length.

Another thing that strikes me is that we can also normalize rows. Ideally we'd find scaling factors that make all coefficients close to 1. That might not always be possible, but if we can, then the number of bits shrinks to something like 8 for 99% accuracy.

We could also split the matrix up into parts that cover different dynamic ranges. Let the system matrix S = A + B, where A holds all the coefficients that are between, say, 0.5 and 2. We can get away with coding those using far fewer bits. Hopefully the number of non-zeroes in B is much smaller than in A, so that we can afford to spend more bits on them.

What I'm sketching here is a system that looks something like D(A+B)Ex >= Db, x >= 0, where D and E are diagonal matrices that are cheap to store.
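
A rough numpy sketch of that kind of two-sided scaling (simple alternating max-normalization; the random test matrix and the number of passes are just assumptions, and this is only one possible way to pick D and E):

import numpy as np

def equilibrate(S, passes=3):
    # Find diagonal scalings d, e so that diag(d) @ S @ diag(e) has most of its
    # non-zero entries pushed towards magnitude 1.
    S = S.copy()
    d = np.ones(S.shape[0])
    e = np.ones(S.shape[1])
    for _ in range(passes):
        r = np.abs(S).max(axis=1); r[r == 0] = 1.0
        S /= r[:, None]; d /= r
        c = np.abs(S).max(axis=0); c[c == 0] = 1.0
        S /= c[None, :]; e /= c
    return d, e

rng = np.random.default_rng(0)
S = rng.lognormal(-3, 3, (50, 50)) * (rng.random((50, 50)) < 0.1)
d, e = equilibrate(S)
scaled = np.diag(d) @ S @ np.diag(e)
before = S[S != 0]; after = scaled[scaled != 0]
print(before.max() / before.min(), after.max() / after.min())   # spread of non-zero magnitudes, before vs after (typically shrinks)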

Topic starter Posted : 12/08/2022 5:00 pm
Joseph (@joe), Estimable Member

@thardin 

I think I'm a bit lost as to why we need to work so hard on saving space?

If we have a 20 million commodity economy with an average of 160 inputs per commodity, then storing the data in COO format would take around

160 × 20 million × 32-bit floats = 12.8 GB for the coefficients and
2 × 32-bit ints × 20 million = 160 MB for the indices,

just under 13 GB in total.

Doing some quick R code for an economy 1/20th of the size:

a <- runif(1000000*160, 1.175494*10**-38, 3.4028237*10**38)  # 160 million coefficients drawn across the float32 range
b <- 1:1000000*160  # parsed as (1:1000000)*160, i.e. one million index values

returns higher values (R stores these as 64-bit doubles):
coefficients take 1.3 GB
a single index vector takes 8 MB
total estimate is around 26 GB

Many desktop gaming computers have 32 GB of RAM and some beefier rigs have 64 GB.

Is this effort meant to account for even larger planning tasks, like locations, etc.?

Posted : 12/08/2022 6:25 pm
Tomas Härdin (@thardin), Estimable Member

@joe The idea is to try and suss out the limits of linear planning with currently available technology. This partly serves as ammunition against the Austrians. If storage space per column is halved then the number of variables can double, and arguments that planning can't be done are likewise halved as a result, informally speaking.

Your COO calculations are off 🙂 Coordinates are stored per non-zero, not per variable. It should be 38.4 GB. Grouping values by column reduces that to 25.6 GB (32+32 bits per non-zero), which is what I started the thread with.
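
Spelling the corrected arithmetic out in Python (sizes in bytes):

nnz = 160 * 20_000_000                 # non-zeroes in the whole matrix

coo = nnz * (4 + 4 + 4)                # float32 value + 32-bit row + 32-bit column per non-zero
grouped = nnz * (4 + 4)                # float32 value + 32-bit row index, grouped by column

print(coo / 1e9, "GB as COO")              # 38.4
print(grouped / 1e9, "GB grouped by column")   # 25.6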

In the other thread I mentioned that current technology seems to top out at 188 TiB per node, assuming M.2 drives can stream data fast enough to be useful as "RAM expansions". Beyond that you have to start running your solver across multiple machines. It's useful to know what this means in terms of variables, which in turn tells you how many production methods × planning periods you can have. The more variables, the more disaggregated the planning we can do. This has political implications.

Topic starter Posted : 12/08/2022 6:38 pm
Joseph (@joe), Estimable Member

@thardin Thank you for the explanation! I know how annoying it can be when a total newb is in a convo, so I want to say I appreciate you taking the time to hear me out and explain things.

Posted : 12/08/2022 7:32 pm
Joseph (@joe), Estimable Member

@thardin

Just read up on posits and watched Gustafson's talk. Wow!

Hopefully it won't be a decade before we get hardware support for posits because this is incredible!

Posted : 13/08/2022 12:41 pm
Paul Cockshott (@wpc), New Member

I am a great fan of data compression, but I don't think that altering the numeric form of the coefficients is worthwhile. There is only a moderate saving of space and a big cost in terms of performance converting from standard IEEE 32-bit format.

It may be worth compressing the product codes, since any given column is unlikely to have many distinct inputs - almost certainly fewer than 2^12.

This might be worth it in terms of savings.

One would have to look at the rate of cache faulting with compressed and uncompressed indices and see whether that offsets the extra indirection instructions involved.

Posted : 14/08/2022 1:15 pm
Tomas Härdin (@thardin), Estimable Member

@wpc True, and it may be premature optimization either way. But the HPC crowd tends to do precisely this kind of thing. Things like normalizing rows and columns are worthwhile regardless, for numerical reasons.

As far as product codes go, I expect they would be stripped before stuff is fed to the solver. I doubt lp_solve for instance has metadata associated with each non-zero. It's probably enough to use database row indices as the basis for variable names.

Topic starter Posted : 14/08/2022 7:12 pm