This repo contains instructions to reproduce the list of Runestone fairdrop addresses according to the rules defined in LeonidasNFT's tweet. Those rules are:
- Held at least 3 inscriptions
- At snapshot block height 826,600 (the state of the Bitcoin network after block 826,600 [0000000000000000000262aed91368b42764a507fd68a3b0bcf32791f85dd9eb] was mined but before block 826,601 [00000000000000000000b0621174c1354a8ec55f16115f4a103727f171f34191] was mined)
- Including cursed inscriptions that are indexed by ord and have a negative inscription number post-Jubilee
- Excluding inscriptions whose file type starts with either "text/plain" or "application/json" (for example, "text/plain;charset=utf-8" inscriptions would be excluded)
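The rules above can be sketched as a small Python predicate. This is only an illustration of the rules as stated; the field names and function names here are assumptions, not the repo's actual schema or code:

```python
# Hypothetical sketch of the fairdrop eligibility rules; the names here
# (is_counted, is_eligible, content_type) are illustrative assumptions.

# Inscriptions with these content-type prefixes do not count.
EXCLUDED_PREFIXES = ("text/plain", "application/json")

def is_counted(content_type: str) -> bool:
    """An inscription counts unless its content type starts with an
    excluded prefix (so "text/plain;charset=utf-8" is excluded)."""
    return not content_type.startswith(EXCLUDED_PREFIXES)

def is_eligible(content_types: list[str]) -> bool:
    """An address qualifies if it held at least 3 counted inscriptions
    at the snapshot height."""
    return sum(1 for ct in content_types if is_counted(ct)) >= 3
```

Note that `str.startswith` accepts a tuple of prefixes, which matches the "starts with either" wording of the rule.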
The resulting output files have been provided in CSV and JSON format in this repo. However, anyone should be able to follow the steps below to produce the exact same files and validate that they are correct and unmodified.
Start the ord server with the JSON API enabled and let it sync up to block 826600 by setting the `--height-limit` flag. Note that this flag is off by 1, so to sync to 826600 it needs to be set to 826601:

```shell
RUST_LOG=info ./ord --height-limit 826601 server --http --http-port 8080 --enable-json-api
```
The `extract_inscriptions.py` script will call the ord API for each inscription and extract details about that inscription, including its address location and the content type. These details will be written out into JSON Lines (JSONL) files in the `inscriptions` directory.
This script requires the `beautifulsoup4` package, so install that first:

```shell
pip install beautifulsoup4
```
Then run the extractor script:

```shell
python extract_inscriptions.py
```
Note that this will take a long time to run. Be patient.
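The per-inscription extraction step amounts to turning one API response into one JSONL line. A rough sketch of that shape follows; the field names (`number`, `address`, `content_type`) are assumptions for illustration, not necessarily the exact schema ord returns or the script emits:

```python
import json

# Hypothetical sketch: convert one inscription API response (a dict)
# into a single JSONL line. Field names are illustrative assumptions.

def to_jsonl_line(api_response: dict) -> str:
    record = {
        "inscription_number": api_response.get("number"),
        "address": api_response.get("address"),
        "content_type": api_response.get("content_type"),
    }
    # One compact JSON object per line is the JSONL convention.
    return json.dumps(record)
```

Missing fields come through as `null` rather than raising, which keeps a long extraction run from dying on one odd inscription.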
If you want to play around with the next step while you are waiting, an xz-compressed archive of the inscription snapshot files has been provided for download at the following link:
http://45.61.136.53/inscriptions.tar.xz
The extractor script should produce the exact same files as contained in this archive.
The `fairdrop_addresses.py` program reads in all of the inscription metadata from the JSONL files and processes the data to produce an address list CSV file based on the rules defined for the fairdrop.
This program uses the `pyspark` and `pandas` packages to process the data, so you must first install them:

```shell
pip install pyspark
pip install pandas
```
Then run the program:

```shell
python fairdrop_addresses.py
```
This will output a CSV file named `fairdrop_addresses.csv`, which is the list of addresses that match the fairdrop rules, as well as a corresponding `fairdrop_addresses.json` JSON file.
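The core of that processing — exclude the disallowed content types, count inscriptions per address, keep addresses with at least 3 — can be sketched in plain Python. The actual program uses pyspark and pandas; this is a minimal stdlib illustration of the logic, and the record field names are assumptions:

```python
from collections import Counter

# Pure-Python sketch of the aggregation fairdrop_addresses.py performs.
# Record keys ("address", "content_type") are illustrative assumptions.

EXCLUDED_PREFIXES = ("text/plain", "application/json")

def eligible_addresses(records):
    """records: iterable of dicts with 'address' and 'content_type' keys.
    Returns the sorted addresses holding >= 3 non-excluded inscriptions."""
    counts = Counter(
        r["address"]
        for r in records
        if not r["content_type"].startswith(EXCLUDED_PREFIXES)
    )
    return sorted(addr for addr, n in counts.items() if n >= 3)
```

For example, an address holding three images qualifies, while an address holding five `text/plain;charset=utf-8` inscriptions does not.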