The Scale of Labor-Time Numbers
I've been trying to work out what the range of living labor values for commodities would be.
1. The most labor-intensive thing I can think of that can't be broken down into identical subunits (so not a rail line, a road, etc.) and is regularly built would be the U.S. Gerald R. Ford-class aircraft carrier.
2. The least labor-intensive thing I can think of that wouldn't be used/bought in bulk quantities is an aluminum beverage can.
These two choices are open to debate; this is just me brainstorming. There could be other low-labor, individually bought commodities I'm not thinking of (maybe produce sold by the unit?). I'd love to hear from anyone who can think of a case where values for inputs would be larger or smaller than this range.
So what labor time numbers are we talking about here?
The living labor added to build the Gerald R. Ford aircraft carrier was 49 million hours, according to the Government Accountability Office. Very helpful that they report that number directly in labor hours.
As for aluminum cans, a single plant in Colorado, owned and operated by Ball, produces 6 million cans per day and employs 320 people. The plant runs 24/7, so each can comes to roughly 320 * 24 / 6,000,000 = 0.00128 hours.
So the order of magnitude difference we're looking at is 10^10.
If one were to model an economy, then without bundling commodities (dozens of eggs, grosses of aluminum cans, etc.) you'd likely need enough storage, uncompressed in any way, for values spanning 10^10, or around 34 bits per labor time.
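As a quick sanity check on the 34-bit figure, a Python one-liner:

```python
# Bits needed to hold the full ~10^10 dynamic range as a plain integer
print((10**10).bit_length())  # -> 34
```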
But thardin posted earlier suggesting that 12 bits would be enough to store this. I decided to crudely test that.
1. I take every 1000th number from 1 to 49,000,000,000 (the range rescaled to thousandths of an hour so everything is an integer)
2. I take the ln() of these to get a much smaller number
3. To see how many digits we need, I round the ln() to 2 decimal places.
4. I define error as abs( ( e^rounded_log - labor_time ) / labor_time ) * 100
5. the max error is 0.5% and the largest ln() value is 24.62
6. to store 2462 as an int takes 12 bits
7. therefore, storing half the ln(), multiplied by 100 and rounded to an int, would take only 11 bits and would keep the error below about 1%
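The steps above can be sketched in Python (a rough re-implementation, not the original script: I assume the scaled range runs to 49 billion, and the coarse sampling stride is mine, just to keep the loop fast):

```python
import math

MAX_T = 49_000_000_000  # 49 million hours in thousandths of an hour (assumption)

max_code = 0
max_err = 0.0
for t in range(1, MAX_T + 1, 1_000_000):  # coarse sample of the range
    code = round(math.log(t) / 2 * 100)   # half the ln, kept to 0.01 precision as an int
    decoded = math.exp(code * 2 / 100)
    err = abs(decoded - t) / t * 100
    max_code = max(max_code, code)
    max_err = max(max_err, err)

print(max_code, max_code.bit_length(), round(max_err, 2))
# the largest code, 1231, fits in 11 bits; the worst-case error stays near 1%
```

(Rounding half the ln() to 0.01 means the full ln() is off by at most 0.01, so the worst relative error is e^0.01 - 1 ≈ 1.005%, which is where the "about 1%" bound comes from.)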
Conclusion: I think thardin is correct, though it might take only 11 bits to store a labor time with less than ~1% error.
Shortcomings: I did not do actual bit manipulation to get these results; I used 32-bit floating-point numbers.
I've also attached a screenshot of the error function plotted with Wolfram Alpha for my approximately 11 bits of precision (I take the ln(), divide by 2, and round to 0.01 precision).
This is only slightly related but I didn't want to make another topic.
I think we can get a more accurate estimate of the number of commodities in a large economy like the United States.
Excluding books, Amazon sells 17.9 million products (per ScrapeHero's count). But this is likely an overestimate of the true number of commodities a plan would need: many products are different brand names for the same thing, different amounts or concentrations of the same thing, etc.
Capitalism (as we know) loves to duplicate efforts. Oftentimes it's obvious that two products for sale are actually the same product made in the same factory but under different brand names.
I've done some very basic searches for a few products I could think of that would be most rife with duplication of this sort:
acetaminophen - 431 results
ibuprofen - 311 results
naproxen - 178 results
cetirizine - 296 results
diphenhydramine - 272 results
dextromethorphan (cough syrup) - 214 results
vitamin C - 4,000 results
vitamin A - 10,000 results
vitamin B12 - 1,000 results
vitamin K - 2,000 results
vitamin D3 - 2,000 results
vitamin B1 - 1,000 results
vitamin B6 - 2,000 results
claw hammer, 16 oz - 335 results
15" adjustable wrench - 629 results
9.5" double edge pull saw - 109 results
canned green beans - 160 results
canned kidney beans - 302 results
canned beets, sliced - 326 results
printer paper, 92 GE bright white, 8.5x11, 20 lb - 95 results
rubbing alcohol - 494 results
Now, these numbers are probably also drastic overestimates. But even the low-ball assumption that only the first page of results for each search is indistinguishable duplicates would mean that, for many products, the true count is roughly 1/50th of the raw ScrapeHero estimate.
Now, this isn't true for all categories, or even for all products within a category, but it does mean we can safely reduce our estimate of the number of commodities in an economy. I remember Paul Cockshott mentioning a researcher who estimated the late Soviet economy had around 3 million commodities. That's perhaps a little too small for the modern United States, but maybe only by a factor of 3.
My argument: We should be using estimates for the number of commodities that we would need to plan for *in a planned economy*. The market economy of the United States is not something we'd be planning.
I'd think a reasonable estimate is around 10 million commodities.
@joe You are missing that columns in the matrix would likely be normalized, so it's only the relative magnitudes that matter. Your calculation is also off because those 320 people probably work 8-hour shifts, definitely not 24. By my calculations this comes to 0.65 cans per labour second, +/- vacation.
Let's look at Carlsberg Group as a potential consumer of aluminium cans. From their 2021 report we can find the following numbers:
119.6 million hl (12 Mm³) beer produced
39,375 employees of which 31% are in production
Denmark has five weeks of vacation. Assume all beer goes into 33 cl cans. Then we have 36 billion cans produced per year, or 2.9 million cans per production employee, which comes to 0.44 cans per labour second.
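Both throughput figures check out arithmetically. A quick sketch (the 8-hour shifts at Ball and the 40-hour, 47-week working year at Carlsberg are assumptions from the posts above, not reported figures):

```python
# Ball plant: 6M cans/day, 320 workers, assumed 8-hour shifts
ball = 6_000_000 / (320 * 8 * 3600)

# Carlsberg: 119.6 million hl into assumed 33 cl cans, 31% of 39,375 staff
cans = 119.6e6 * 100 / 0.33            # hl -> litres -> 33 cl cans
staff = 39_375 * 0.31                  # employees in production
seconds = (52 - 5) * 40 * 3600         # working year per employee, 5 weeks vacation
carlsberg = cans / staff / seconds

print(round(ball, 2), round(carlsberg, 2))  # -> 0.65 0.44
```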
We have two coefficients that are close in absolute magnitude: +0.65 and -0.44, and the absolute ratio between them is just 1.48. I suspect the story is similar in many other industries, just that the numbers are differently scaled. Scaling the columns can be seen as just changing units, say from hl to km³ in this case.
@thardin Once again I fail simple arithmetic; thanks for catching my off-by-three error.
I'm a bit confused, though, as to why we can normalize within a single commodity?
@joe My apologies, I meant rows, not columns. This works because A*x >= b and D*A*x >= D*b are equivalent, where D is diagonal. D can be initialized with the 2-norm of the rows of A and A then scaled accordingly. The result is that the dynamic range of coefficients in A is reduced and moved to D. This means A can be switched to say half-precision (16-bit) floats which are implemented in hardware (GPUs and the upcoming AVX-512) while D remains 32-bit.
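A minimal NumPy sketch of the row scaling described above (the matrix here is random, just to illustrate that scaling by a positive diagonal D preserves A*x >= b while shrinking the dynamic range of A):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.uniform(0.001, 1000.0, size=(5, 4))  # coefficients with a wide dynamic range
x = rng.uniform(0.0, 1.0, size=4)
b = A @ x - 0.1                              # chosen so that A @ x >= b holds

d = 1.0 / np.linalg.norm(A, axis=1)          # inverse 2-norms of the rows of A
A_scaled = d[:, None] * A                    # D @ A, with D = diag(d)
b_scaled = d * b                             # D @ b

# D has a positive diagonal, so the scaled system is equivalent...
assert np.all(A_scaled @ x >= b_scaled)
# ...and every scaled row now has unit 2-norm, so A_scaled could be
# stored in half precision while d stays 32-bit
assert np.allclose(np.linalg.norm(A_scaled, axis=1), 1.0)
```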