Save Gas with Data Packing

xtremetom
5 min read · Sep 2, 2022

Storage writes are expensive, write less

Chances are you have interacted with a smart contract and thought — geez that was expensive. I know I have.

The culprit is usually the number of writes to storage, or just plain bad logic. I can't help too much with bad logic; that usually comes with experience and trial and error. I can help by showing you a cool way to potentially reduce how many writes to storage your contract needs.

What is data packing?

We are used to seeing uint256 as the type definition for a variable:

uint256 number = 1;

However, that uint256 is simply a data type that houses 256 bits. Clearly our little 1 is not using up all 256 bits, so how do we use up the other bits?

uint256 is not the only uint available. If you know a variable will not exceed certain values, you can start using smaller types. If you know a value won't exceed 2⁶⁴ you could use a uint64, and this is where things get interesting.

256 / 64 = 4

A uint64 uses 64 bits, which means you can pack four uint64s into a single uint256.

gwei is a built-in unit in Solidity:
https://docs.soliditylang.org/en/v0.8.15/units-and-global-variables.html
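
A minimal sketch of a packing function along these lines (the pack() name and the 1000–4000 gwei values are assumptions, chosen to match the numbers that follow):

function pack() public pure returns (uint256 packed) {
    uint64 eth1k = 1000 gwei; // each value fits comfortably in 64 bits
    uint64 eth2k = 2000 gwei;
    uint64 eth3k = 3000 gwei;
    uint64 eth4k = 4000 gwei;

    packed = eth1k;                  // bits 0-63
    packed |= uint256(eth2k) << 64;  // bits 64-127
    packed |= uint256(eth3k) << 128; // bits 128-191
    packed |= uint256(eth4k) << 192; // bits 192-255
}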

That function outputs the following number:

25108406941546723056364004793593481054836439088298861789185000000000000

If we feed that number into this function, it will output our original gwei:
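
A matching unpack sketch (again, the name and layout are assumptions):

function unpack(uint256 packed) public pure returns (uint64 eth1k, uint64 eth2k, uint64 eth3k, uint64 eth4k) {
    eth1k = uint64(packed);        // lowest 64 bits
    eth2k = uint64(packed >> 64);  // next 64 bits
    eth3k = uint64(packed >> 128);
    eth4k = uint64(packed >> 192); // highest 64 bits
}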

Pretty cool, right?
With data packing we can store multiple variables of smaller value types in a bigger value type.

Let's look at what is actually happening.

How does data packing work?

Packing the data

Lines 2–5: We define our data as uint64 variables. I use gwei because I'm lazy and wanted some big values without typing a lot. It's safe to use these values as I know they do not exceed 2⁶⁴.

Line 7: packed is already defined as a uint256 in the returns of the function. Here we set its value to eth1k, which occupies the first 64 bits (reading right to left) of the uint256.

192 bits | eth1k (64 bits)
// packed = 1000000000000

Line 8: We use the bitwise OR assignment operator (|=), which performs a bitwise OR on the binary representation of the operands and assigns the result to packed.

In our case we cast eth2k to uint256 and shift it left (<<) by 64 bits, because we know the first 64 bits are occupied by eth1k. That results in packed looking like this:

128 bits | eth2k (64 bits) | eth1k (64 bits)
// packed = 36893488147419103233000000000000

Line 9: Again we use the OR operator, and shift eth3k left by 128 bits, because we know eth1k occupies the first 64 bits and eth2k occupies the second 64 bits. That leaves packed looking like this:

64 bits | eth3k (64 bits) | eth2k (64 bits) | eth1k (64 bits)
// packed = 1020847100762815390427017310442723737601000000000000

Line 10: Same as line 9, but shifting eth4k into the last available 64 bits:

eth4k (64 bits) | eth3k (64 bits) | eth2k (64 bits) | eth1k (64 bits)
// packed = 25108406941546723056364004793593481054836439088298861789185000000000000

Unpacking the data

Line 2: We know that eth1k occupies the first 64 bits, so simply casting packed (a uint256) down to uint64 isolates those bits and gives us eth1k.

Line 3: eth2k sits in the next 64 bits. We can use the same trick as we did with eth1k; we just have to move eth1k out of the way first. We do that by shifting packed to the right (>>) by 64 bits:

64 bits | eth4k (64 bits) | eth3k (64 bits) | eth2k (64 bits)
// eth1k has been pushed off the end :)

Line 4: eth3k sits 128 bits to the left, so if we shift packed 128 bits to the right (>>) we can use the trick from line 2:

128 bits | eth4k (64 bits) | eth3k (64 bits)
// eth2k + eth1k have been pushed off the end

Line 5: eth4k sits 192 bits to the left, so we repeat the trick from line 4, this time shifting by 192 bits, and we have our data:

192 bits | eth4k (64 bits)
// eth3k + eth2k + eth1k have all been pushed off the end

How does data packing save gas?

Amazing, we can pack and unpack data, but why bother?

Writing to storage less often saves gas. In the example above, writing the packed data to the mapping instead of writing the four values separately saves a massive 66,470 gas, because we do a single storage write instead of four.
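
As one possible illustration of the difference (the mapping names and pragma below are hypothetical, and exact gas numbers depend on compiler settings and whether the slots are cold or warm):

pragma solidity ^0.8.0;

contract GasComparison {
    // one mapping per value: four separate slots per id
    mapping(uint256 => uint64) public eth1kOf;
    mapping(uint256 => uint64) public eth2kOf;
    mapping(uint256 => uint64) public eth3kOf;
    mapping(uint256 => uint64) public eth4kOf;

    // one mapping for the packed value: a single slot per id
    mapping(uint256 => uint256) public packedOf;

    function storeSeparately(uint256 id, uint64 a, uint64 b, uint64 c, uint64 d) external {
        eth1kOf[id] = a; // four SSTOREs,
        eth2kOf[id] = b; // each cold write to a fresh slot costs roughly 22,100 gas
        eth3kOf[id] = c;
        eth4kOf[id] = d;
    }

    function storePacked(uint256 id, uint64 a, uint64 b, uint64 c, uint64 d) external {
        uint256 packed = a;
        packed |= uint256(b) << 64;
        packed |= uint256(c) << 128;
        packed |= uint256(d) << 192;
        packedOf[id] = packed; // one slot, one SSTORE
    }
}

Three avoided cold writes at roughly 22,100 gas each account for most of that ~66,470 figure.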

Structs or data packing?

Data packing really shines when you use it in place of structs, like this:
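
A sketch of what that comparison can look like (the struct, names and pragma are assumptions; the struct version uses uint256 members, so each member gets its own storage slot):

pragma solidity ^0.8.0;

contract WithStruct {
    struct Values {
        uint256 eth1k;
        uint256 eth2k;
        uint256 eth3k;
        uint256 eth4k;
    }

    mapping(address => Values) public values; // four slots per entry

    function set(address user, uint256 a, uint256 b, uint256 c, uint256 d) external {
        values[user] = Values(a, b, c, d); // four SSTOREs
    }
}

contract WithPacking {
    mapping(address => uint256) public values; // one slot per entry

    function set(address user, uint64 a, uint64 b, uint64 c, uint64 d) external {
        uint256 packed = a;
        packed |= uint256(b) << 64;
        packed |= uint256(c) << 128;
        packed |= uint256(d) << 192;
        values[user] = packed; // one SSTORE
    }

    function get(address user) external view returns (uint64 a, uint64 b, uint64 c, uint64 d) {
        uint256 packed = values[user];
        a = uint64(packed);
        b = uint64(packed >> 64);
        c = uint64(packed >> 128);
        d = uint64(packed >> 192);
    }
}

(The compiler will itself pack four consecutive uint64 struct members into one slot; manual packing is about taking explicit control of that layout.)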

wow, that is so much more code

Yes, there is more code. You have to pack the data to store it and unpack it to use it. However, if the data being packed is updated and stored regularly, the gas savings really add up and your users will thank you for looking out for their wallets.

Packing addresses: Ethereum addresses are 20 bytes long. In order to pack one into a uint256 we need to cast it to a uint160 — see line 37.

Unpacking addresses: To get an address back out, you simply apply the address cast to your uint160 value — see line 49.
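
A minimal sketch of both casts (the function names and the 96-bit companion value are made up for illustration):

pragma solidity ^0.8.0;

contract AddressPacking {
    function packAddress(address user, uint96 extra) public pure returns (uint256 packed) {
        // an address is 20 bytes (160 bits), so cast it to uint160 before widening to uint256
        packed = uint256(uint160(user)); // bits 0-159
        packed |= uint256(extra) << 160; // bits 160-255
    }

    function unpackAddress(uint256 packed) public pure returns (address user, uint96 extra) {
        user = address(uint160(packed)); // low 160 bits back to an address
        extra = uint96(packed >> 160);   // remaining 96 bits
    }
}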

Nested packed data: Yep, you can nest packed data. Above we have taken an array of six uint16s and packed it into a uint96. We can then pack that into our uint256 — see lines 29–38.

Unpacking nested packed data: First you need to obtain your nested packed uint and then unpack as you would any other data — see lines 52–59.
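
A sketch of the nesting idea (the names, the six-trait layout and the pragma are assumptions):

pragma solidity ^0.8.0;

contract NestedPacking {
    // six uint16s (6 x 16 = 96 bits) fit exactly into a uint96
    function packTraits(uint16[6] memory traits) public pure returns (uint96 packedTraits) {
        for (uint256 i = 0; i < 6; i++) {
            packedTraits |= uint96(traits[i]) << (i * 16);
        }
    }

    function unpackTraits(uint96 packedTraits) public pure returns (uint16[6] memory traits) {
        for (uint256 i = 0; i < 6; i++) {
            traits[i] = uint16(packedTraits >> (i * 16));
        }
    }

    // nest the uint96 next to an address: 160 + 96 = 256 bits
    function packAll(address user, uint16[6] memory traits) public pure returns (uint256 packed) {
        packed = uint256(uint160(user));
        packed |= uint256(packTraits(traits)) << 160;
    }

    function unpackAll(uint256 packed) public pure returns (address user, uint16[6] memory traits) {
        user = address(uint160(packed));
        traits = unpackTraits(uint96(packed >> 160));
    }
}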

Cool, what else can I do?

Anywhere you have data you can employ data packing, and it's not restricted to uint data types.

One really nice use case is storing data in the id of an NFT. This would allow you to serve up deterministic data without having to write to storage.
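
For example (a purely hypothetical layout), a token id could carry a trait seed and a mint timestamp that can be read back at any time without touching storage:

pragma solidity ^0.8.0;

contract PackedTokenId {
    // hypothetical layout: low 192 bits = trait seed, high 64 bits = mint timestamp
    function buildTokenId(uint192 seed, uint64 mintedAt) public pure returns (uint256 tokenId) {
        tokenId = uint256(seed);
        tokenId |= uint256(mintedAt) << 192;
    }

    function readTokenId(uint256 tokenId) public pure returns (uint192 seed, uint64 mintedAt) {
        seed = uint192(tokenId);
        mintedAt = uint64(tokenId >> 192);
    }
}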

More, I need more information

Bit manipulation is fun and this is just the tip of the iceberg. If you would like to dive deeper I highly suggest checking these articles out:

Where to find me

I can normally be found in the Cool Cats discord channel
https://discord.gg/WhBAAHnSz4

Or on Twitter
https://twitter.com/xtremetom

https://twitter.com/coolcatsnft


xtremetom

I'm a founder of Cool Cats and a general Web3 consultant with 20 years of experience as a builder, marketer and company owner in Web2.